As convenient as voice assistants have proven themselves to be, these A.I.-powered tools have lagged in their ability to serve the deaf community. But one enterprising computer scientist has developed a solution that will allow people with hearing impairments to experience the highs and lows of Amazon Alexa, Google Assistant, and Siri along with everyone else. Abhishek Singh, who first rose to prominence when he built Super Mario Bros in augmented reality, has created a web application that uses a camera to read sign language, then translates those signs into spoken words for an Amazon Echo. When the Echo speaks its response, the whole process plays out in reverse, producing a typed reply for deaf or hard-of-hearing users to read.
This app goes beyond the partial solution offered by the Amazon Echo Show, which added a screen to the smart home hub so that members of the deaf community could interact with the assistant. Even with the screen, however, the Show did not let Alexa and its user truly hold a conversation. Singh's offering fixes that.
“The project was a thought experiment inspired by observing a trend among companies of pushing voice-based assistants as a way to create instant, seamless interactions,” he told Fast Company. “If these devices are to become a central way we interact with our homes or perform tasks, then some thought needs to be given to those who cannot hear or speak. Seamless design needs to be inclusive in nature.”
In building the system, Singh trained a model on the machine learning platform TensorFlow, repeatedly signing words in front of his webcam to "teach" the system each sign. He then added Google's text-to-speech capabilities to turn the recognized signs into spoken words the Echo could hear.
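The core "teach by example" idea can be illustrated with a small sketch: show the system several labeled examples of each sign, store a feature vector per example, and label new gestures by nearest-neighbor lookup. This is a simplified, hypothetical illustration, not Singh's actual code (his project reportedly ran TensorFlow in the browser); the feature vectors below are made-up stand-ins for what a webcam model would emit.

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class SignClassifier:
    """k-nearest-neighbor classifier over taught example vectors."""

    def __init__(self, k=1):
        self.k = k
        self.examples = []  # list of (feature_vector, label) pairs

    def teach(self, features, label):
        # Each webcam demonstration of a sign adds one labeled example.
        self.examples.append((features, label))

    def predict(self, features):
        # Rank stored examples by similarity and vote among the top k.
        ranked = sorted(self.examples,
                        key=lambda ex: cosine_similarity(features, ex[0]),
                        reverse=True)
        votes = Counter(label for _, label in ranked[:self.k])
        return votes.most_common(1)[0][0]

# "Teach" two signs with a few illustrative examples each.
clf = SignClassifier(k=1)
clf.teach([1.0, 0.1, 0.0], "hello")
clf.teach([0.9, 0.2, 0.1], "hello")
clf.teach([0.0, 1.0, 0.8], "weather")
clf.teach([0.1, 0.9, 0.9], "weather")

print(clf.predict([0.95, 0.15, 0.05]))  # hello
print(clf.predict([0.05, 0.95, 0.85]))  # weather
```

In the full pipeline, the predicted label would be handed to a text-to-speech engine so the Echo can hear it, and a speech-to-text pass would transcribe Alexa's spoken reply back into on-screen text.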
While Singh’s solution is an elegant one, the coder still hopes that Amazon will ultimately recognize sign language on its own. “That’s where I hope this heads. And if this project leads to a push in that direction in any small way, then mission accomplished,” he said. “In an ideal world, I would have built this on the Show directly, but the devices aren’t that hackable yet, [I] wasn’t able to find a way to do it.”