The device comprises a 3D camera, a belt with five vibrational motors, and an electronically reconfigurable Braille interface to give users more information about their immediate environments.
“In a nutshell, our system scans the world and finds the walkable space and obstacles in front of the user with visual impairment,” Robert Katzschmann, a graduate student in mechanical engineering at MIT, told Digital Trends. “The user does not need to explore the space by contacting each part with a white cane. What makes the system especially exciting is that it can detect objects of use, such as chairs and tables. All the information is presented to the user through the use of vibrations around his or her abdomen, and through the use of an electronic Braille character display.”
Using technology similar to that which (literally and figuratively) drives self-driving cars, the device relies on a system able to interpret 3D camera data. Smart image-recognition algorithms allow it, for instance, to recognize whether a chair is empty or occupied, rather than just writing it off as an obstacle to be avoided. Information is conveyed to users discreetly: a particular motor vibrates when an obstacle comes within two meters of the wearer. Users also receive further information, such as whether it is a table or a chair that has been detected, through the reconfigurable Braille pads.
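To make the vibration scheme concrete, here is a minimal illustrative sketch, not the MIT team's actual code: it assumes depth readings in meters across the camera's horizontal field of view, one motor per horizontal sector of the belt, and the two-meter alert threshold described above. The function name, sector division, and data layout are all assumptions for illustration.

```python
ALERT_DISTANCE_M = 2.0  # vibrate when an obstacle is closer than this (per the article)
NUM_MOTORS = 5          # one vibration motor per horizontal sector of the belt

def motors_to_vibrate(depth_row, num_motors=NUM_MOTORS, threshold=ALERT_DISTANCE_M):
    """Return indices of motors whose sector contains an obstacle nearer
    than `threshold` meters. `depth_row` is a list of depth readings
    (in meters) across the camera's horizontal field of view."""
    sector_width = len(depth_row) // num_motors
    active = []
    for i in range(num_motors):
        # Slice out this motor's horizontal sector of the depth frame
        sector = depth_row[i * sector_width:(i + 1) * sector_width]
        if sector and min(sector) < threshold:
            active.append(i)
    return active

# Example: an obstacle 1.2 m away in the center of the field of view
frame = [4.0] * 20 + [1.2] * 10 + [4.0] * 20  # 50 readings across the FOV
print(motors_to_vibrate(frame))  # → [2] (the middle motor triggers)
```

A real implementation would work on full 2D depth frames and filter out the floor, but the principle, nearest obstacle per direction mapped to one motor, is the same.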
“Primarily, the real-world applications are day-to-day scenarios [in which a] user with visual impairment is confronted with navigating a cafeteria, finding his or her way around in a hotel lobby, or finding an empty chair in the bus or train,” Dr. Hsueh-Cheng Wang, a former postdoctoral researcher at MIT and now an assistant professor of electrical and computer engineering at National Chiao Tung University in Taiwan, told us.
In tests, the researchers found that the chair-finding system reduced subjects’ collisions with non-chair objects by 80 percent, while the separate navigation system reduced the number of cane collisions with people in a hallway by 86 percent.
“We plan [next] to extend this work from indoor to outdoor environments, and detect more objects a blind user wishes to interact with,” Katzschmann continued. Long term, the hope is to commercialize the technology, so as to bring it to whoever needs it.