
Amazing new glove can translate sign language into spoken words in real time


Researchers at the University of California, Los Angeles, have developed a wearable device, reminiscent of the Nintendo Power Glove, that can translate American Sign Language into speech in real time using a smartphone app. While it's still a prototype, it could one day help people who rely on sign language communicate more easily with non-signers, as well as assist novices who are learning to sign.

“Analog triboelectrification and electrostatic induction-based signals generated by sign language components — including hand configurations and motions, and facial expressions — are converted to the digital domain by the wearable sign-to-speech translation system to implement sign-to-speech translation,” Jun Chen, assistant professor of bioengineering at the UCLA Samueli School of Engineering, told Digital Trends. “Our system offers good mechanical and chemical durability, high sensitivity, quick response times, and excellent stretchability.”

The gloves contain thin, stretchable sensors, made of electrically conductive yarn, that run along the length of all five fingers. These sensors relay the wearer's finger movements to a small, coin-sized circuit board worn on the wrist, which in turn transmits the data to a connected smartphone. Because American Sign Language relies on facial expressions in addition to hand movements, the system also uses sensors adhered to users' eyebrows and the sides of their mouths. Built around machine learning algorithms, the wearable can currently recognize 660 signs, including every letter of the alphabet and the numbers zero through nine.
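The UCLA team's actual system uses machine learning trained on the yarn sensors' triboelectric signals; purely as a loose illustration of the recognition step, here is a toy nearest-template classifier over hypothetical, normalized finger-flexion features (all sign names, feature values, and templates here are invented for the sketch, not taken from the paper):

```python
# Illustrative sketch only: digitized per-finger stretch readings are reduced
# to a feature vector and matched against stored sign templates. The real
# device uses a trained machine-learning model, not this nearest-neighbor toy.

from math import dist  # Euclidean distance (Python 3.8+)

# Hypothetical "trained" templates: one mean flexion vector per sign,
# ordered index, middle, ring, pinky, thumb (0.0 = straight, 1.0 = fully bent).
SIGN_TEMPLATES = {
    "A": [0.9, 0.9, 0.9, 0.9, 0.2],  # fist with thumb alongside
    "B": [0.1, 0.1, 0.1, 0.1, 0.8],  # flat hand, thumb tucked
    "5": [0.1, 0.1, 0.1, 0.1, 0.1],  # open hand, fingers spread
}

def classify_sign(finger_flexion):
    """Return the sign whose template is nearest the measured flexion vector."""
    return min(SIGN_TEMPLATES, key=lambda s: dist(SIGN_TEMPLATES[s], finger_flexion))

# One frame of digitized readings: four fingers curled, thumb extended.
reading = [0.85, 0.92, 0.88, 0.90, 0.15]
print(classify_sign(reading))  # prints "A"
```

A production system would classify whole time series of sensor frames (signs are motions, not static poses) and would fuse the facial-sensor channels as well, but the template-matching idea above captures the basic shape of the recognition problem.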

[Image: ASL reading system. University of California, Los Angeles]

Chen said that previous sign language translation devices have been based on a wide range of techniques, including electromyography, the piezoresistive effect, ionic conduction, the capacitive effect, and photography with image processing. But the inherent complexity and bulk of these approaches has largely confined them to proof-of-concept lab experiments.

“For example, vision-based sign language translation systems have high requirements for optimal lighting,” Chen said. “If the available lighting is poor, this compromises the visual quality of signing motion captured by the camera and consequently affects the recognition results. Alternatively, sign language translation systems based on surface electromyography have strict requirements for the position of the worn sensors, which can impact translation accuracy and reliability.”

The hope is that this wearable sign-to-speech translation system could be more realistically used in real-world settings. In addition to not being affected by external variables like light, the UCLA sign language wearable could be produced inexpensively. “We are still working to polish the system,” Chen said. “It may take three to five years to get it commercialized.”

A paper describing the work was recently published in the journal Nature Electronics.
