Cognitive hearing aid uses AI and brain waves to enhance voices

Whether it’s Apple’s smart cochlear implant collaboration or tools designed to simplify sign language communication, there is no shortage of cutting-edge gadgetry aimed at making life easier for people who are deaf or hard of hearing. A new piece of technology from the Columbia University School of Engineering and Applied Science could improve things further: a hearing aid designed to read the wearer’s brain activity, determine which voice they most want to hear, and then focus on it. The resulting “cognitive hearing aid” could be transformative in settings like crowded rooms where multiple people are speaking at the same time.

“My research has been focused on understanding how speech is processed in the brain, and to create models of it that can be used in automatic speech-recognition technologies,” Nima Mesgarani, an associate professor of electrical engineering, told Digital Trends. “Working at the intersection of brain science and engineering, I saw a unique opportunity to combine the latest advances from both fields, to create a solution for decoding the attention of a listener to a specific speaker in a crowded scene which can be used to amplify that speaker relative to others.”

Mesgarani says that, until now, no hearing aid on the market has addressed this specific problem. While the latest hearing aids include technology designed to suppress background noise, they have no way of knowing which voices a wearer wants to listen to and which are distractions.

To solve this problem, the device Mesgarani and his team developed constantly monitors the wearer’s brain activity. It uses a deep neural network to automatically separate each speaker from the background hubbub, then compares each separated voice with the neural data recorded from the user’s brain. The speaker who best matches that neural data is then amplified to assist the user.
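The article does not publish the team’s actual algorithm, but the selection step it describes can be sketched in a simplified form. In this hypothetical Python example (all function names and the 9 dB gain are illustrative assumptions, not from the paper), speakers are assumed to be already separated by a speech-separation network, and the listener’s neural data is assumed to have been decoded into an estimated envelope of the attended speech; the sketch then picks the separated voice whose envelope correlates best with that neural estimate and boosts it in the remix.

```python
import numpy as np

def amplitude_envelope(signal, frame=160):
    """Coarse amplitude envelope: RMS over non-overlapping frames."""
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def select_attended(separated_speakers, neural_envelope, frame=160):
    """Return the index of the separated speaker whose envelope best
    matches the envelope decoded from the listener's brain activity."""
    scores = []
    for speech in separated_speakers:
        env = amplitude_envelope(speech, frame)
        m = min(len(env), len(neural_envelope))
        # Pearson correlation between the candidate's envelope and the
        # neurally decoded envelope serves as the match score.
        scores.append(np.corrcoef(env[:m], neural_envelope[:m])[0, 1])
    return int(np.argmax(scores))

def remix(separated_speakers, attended_idx, gain_db=9.0):
    """Amplify the attended speaker relative to the others (gain is an
    arbitrary illustrative value) and renormalize to avoid clipping."""
    gain = 10 ** (gain_db / 20)
    out = np.zeros_like(separated_speakers[0], dtype=float)
    for i, speech in enumerate(separated_speakers):
        out += speech * (gain if i == attended_idx else 1.0)
    return out / max(1e-9, np.abs(out).max())
```

In a real system the hard parts are the two inputs this sketch takes for granted: separating unknown speakers from a single microphone mixture and decoding an attended-speech envelope from noisy neural recordings.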

It’s a great concept, although it may be a while before a finished product is available to wearers. Next, the team hopes to develop better algorithms that can perform the task under all possible conditions, and to find a way to make the neural recording process less intrusive.

“Many researchers have been developing techniques for measuring the brain signal from inside the ear,” Mesgarani continued. “Imagine an earbud with electrodes placed around it. [Another solution might include] C-shape grids placed around the ear, similar to a [regular] hearing aid.”

A paper describing this work was recently published in the Journal of Neural Engineering.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…