As another idea: if we place two or more Kinects in each room of the house, facing each other from opposite walls, they can scan and see almost everything in the room. We then connect them to a PC, Arduino, or Raspberry Pi, and from there to a single server, the cloud, or an AI for the whole house, so we can give a blind person much more information about what is in their entire home.
Take kitchen tools as an example: a blind person asks by voice for the sugar jar, and the Kinect, using Cortana, Siri, or Google Now, responds with spoken directions on how to reach it, monitoring their walk until they get to it.
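The request-and-guide loop above could be sketched roughly like this. Everything here is an assumption for illustration: the `Position` class, the 0.3 m "arm's reach" threshold, and the `guidance_step` function are all hypothetical; a real system would get the user's position from Kinect skeleton tracking and the object's position from the shared database, then hand the resulting sentence to the voice assistant.

```python
from dataclasses import dataclass
from math import atan2, degrees, hypot

@dataclass
class Position:
    """2D floor coordinates in metres (hypothetical frame of reference)."""
    x: float
    y: float

def guidance_step(name: str, user: Position, target: Position) -> str:
    """Turn the user's and the object's positions into one spoken instruction.

    In a real setup, `user` would come from Kinect skeleton tracking and
    `target` from the house-wide object database; here they are plain numbers.
    """
    dx, dy = target.x - user.x, target.y - user.y
    dist = hypot(dx, dy)
    if dist < 0.3:  # assumed "within arm's reach" threshold
        return f"The {name} is right in front of you."
    heading = degrees(atan2(dy, dx))  # 0° = straight ahead of the user
    if -45 <= heading <= 45:
        direction = "ahead"
    elif 45 < heading < 135:
        direction = "to your left"
    elif -135 < heading < -45:
        direction = "to your right"
    else:
        direction = "behind you"
    return f"Walk about {dist:.1f} metres {direction}."
```

Calling this repeatedly as the person moves would give the step-by-step monitoring described above, with each new Kinect frame updating `user`.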
We can show each object to the Kinect to scan it, then save it in a database under the name we give it, so the system can easily search for it in the house whenever we need it.
And for objects that can't be scanned easily, we can attach a special sticker printed with ink that is visible to the Kinect's IR camera, so the system can search for that object by its new label.
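The name-to-object database from the two points above could start as something as simple as the sketch below. The registry layout, the `register`/`locate` helpers, and the `"scanned"` vs `"ir-sticker"` distinction are all hypothetical; a real system would store point-cloud signatures or sticker IDs and update each object's room as the Kinects track it.

```python
# Toy in-memory registry: object name -> (last known room, how it is labelled).
# A real system would persist this on the house server or in the cloud.
OBJECT_DB: dict[str, tuple[str, str]] = {
    "sugar jar": ("kitchen", "scanned"),
    "tv remote": ("living room", "ir-sticker"),
}

def register(name: str, room: str, label_type: str = "scanned") -> None:
    """Save an object after showing it to the Kinect (or sticking a label on it)."""
    OBJECT_DB[name.lower()] = (room, label_type)

def locate(name: str) -> str:
    """Answer a voice query like 'where is my sugar jar?'."""
    entry = OBJECT_DB.get(name.lower())
    if entry is None:
        return f"I don't know an object called '{name}' yet."
    room, label_type = entry
    return f"The {name} was last seen in the {room} ({label_type})."
```

A voice assistant would call `locate()` with the recognized object name and read the returned sentence aloud.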
The funny part is that this could help everyone, not just the blind: we all tend to forget or lose things, and a system like this could keep an eye on everything and tell us where it is.
OpenNI: For programming Kinect on any device
Kinect for blind walk
Microsoft’s new app helps blind people
Top 10 Voice Recognition Services
List of artificial intelligence projects