
Amazing new glove can translate sign language into spoken words in real time

Wearable Sign-to-Speech Translation

Researchers at the University of California, Los Angeles, have developed a wearable device, somewhat reminiscent of the Nintendo Power Glove, that can translate American Sign Language into speech in real time using a smartphone app. While it’s still in the prototype phase, it could one day help those who rely on sign language to communicate more easily with non-signers, as well as assist novices who are learning sign language.

“Analog triboelectrification and electrostatic induction-based signals generated by sign language components — including hand configurations and motions, and facial expressions — are converted to the digital domain by the wearable sign-to-speech translation system to implement sign-to-speech translation,” Jun Chen, assistant professor of bioengineering at the UCLA Samueli School of Engineering, told Digital Trends. “Our system offers good mechanical and chemical durability, high sensitivity, quick response times, and excellent stretchability.”

The gloves contain thin, stretchable sensors made of electrically conductive yarn that run along the length of all five fingers. They relay the wearer’s finger movements to a small, coin-sized circuit board worn on the wrist, which in turn transmits the data to a connected smartphone. Because American Sign Language relies on facial expressions in addition to hand movements, the system also includes sensors adhered to users’ eyebrows and the sides of their mouths. Built around machine learning algorithms, the wearable can currently recognize 660 signs, including every letter of the alphabet and the numbers zero through nine.
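To make that pipeline concrete, here is a minimal Python sketch of how a single frame of glove readings might be matched against stored sign templates and then spoken aloud. The sensor layout, template values, and function names below are illustrative assumptions for this sketch, not details drawn from the UCLA system or the Nature Electronics paper.

```python
# Minimal sketch (not the UCLA implementation): classify one frame of
# stretch-sensor readings into a sign label by nearest-neighbor matching
# against calibration templates, then hand the label to a text-to-speech
# stand-in, mimicking the glove -> wrist board -> smartphone flow.

import math

# Hypothetical sensor layout: five finger strain sensors plus two facial
# sensors (eyebrow, mouth corner), each normalized to the range 0..1.
SENSOR_NAMES = ["thumb", "index", "middle", "ring", "pinky", "brow", "mouth"]

# Toy calibration templates; a real system would learn these from many
# recorded examples of each sign rather than hard-coding them.
TEMPLATES = {
    "A": [0.2, 0.9, 0.9, 0.9, 0.9, 0.1, 0.1],
    "B": [0.8, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
    "5": [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
}

def classify(frame):
    """Return the template label closest to the incoming sensor frame."""
    best_label, best_dist = None, math.inf
    for label, template in TEMPLATES.items():
        dist = math.dist(frame, template)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

def speak(label):
    """Stand-in for the smartphone app's text-to-speech output."""
    print(f"Speaking: {label}")

if __name__ == "__main__":
    # One simulated frame arriving from the wrist-worn circuit board.
    incoming = [0.25, 0.85, 0.88, 0.92, 0.87, 0.12, 0.08]
    speak(classify(incoming))
```

In practice, the classification step would be a trained model covering all 660 signs and would also account for motion over time, but the overall shape of the loop, read sensors, classify, speak, is the same.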

ASL reading system (Image: University of California, Los Angeles)

Chen said that previous sign language translation devices have been based on a wide range of techniques, including electromyography, the piezoresistive effect, ionic conduction, the capacitive effect, and photography and image processing. But the inherent complexity and bulk of these tools have kept them little more than proof-of-concept lab experiments.

“For example, vision-based sign language translation systems have high requirements for optimal lighting,” Chen said. “If the available lighting is poor, this compromises the visual quality of signing motion captured by the camera and consequently affects the recognition results. Alternatively, sign language translation systems based on surface electromyography have strict requirements for the position of the worn sensors, which can impact translation accuracy and reliability.”

The hope is that this wearable sign-to-speech translation system could realistically be used in everyday settings. In addition to being unaffected by external variables like lighting, the UCLA sign language wearable could be produced inexpensively. “We are still working to polish the system,” Chen said. “It may take three to five years to get it commercialized.”

A paper describing the work was recently published in the journal Nature Electronics.
