Groundbreaking A.I. brain implant translates thoughts into spoken words

Researchers from the University of California, San Francisco, have developed a brain implant that uses deep-learning artificial intelligence to transform thoughts into complete sentences. The technology could one day be used to help restore speech in patients who are unable to speak due to paralysis.

“The algorithm is a special kind of artificial neural network, inspired by work in machine translation,” Joseph Makin, one of the researchers involved in the project, told Digital Trends. “Their problem, like ours, is to transform a sequence of arbitrary length into a sequence of arbitrary length.”

The neural net, Makin explained, consists of two stages. In the first, the neural data gathered from brain signals, captured using electrodes, is transformed into a list of numbers. This abstract representation of the data is then decoded, word by word, into an English-language sentence. The two stages are trained together, not separately, to achieve this task. The words are finally output as text, although it would be equally possible to output them as speech using a text-to-speech converter.
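To make the two-stage idea concrete, here is a minimal sketch in PyTorch of the encoder-decoder pattern Makin describes: one network compresses a variable-length sequence of brain-signal features into an abstract representation, and a second network unrolls that representation into words. The class name, layer sizes, and channel counts below are illustrative assumptions, not the architecture from the paper.

```python
# Hypothetical encoder-decoder sketch (PyTorch); the study's actual network differs.
import torch
import torch.nn as nn

class Seq2SeqSpeechDecoder(nn.Module):
    def __init__(self, n_channels=256, hidden=512, vocab_size=250):
        super().__init__()
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)  # stage 1
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)      # stage 2
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, signals, target_words):
        # signals: (batch, time, n_channels) electrode features
        # target_words: (batch, n_words) word indices, used for teacher forcing
        _, state = self.encoder(signals)        # sequence -> abstract representation
        emb = self.embed(target_words)
        dec_out, _ = self.decoder(emb, state)   # representation -> word sequence
        return self.out(dec_out)                # logits over the ~250-word vocabulary

# Both stages are trained jointly on (brain signal, spoken sentence) pairs:
model = Seq2SeqSpeechDecoder()
logits = model(torch.randn(4, 1000, 256), torch.randint(0, 250, (4, 12)))
```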

For the study, four women with epilepsy, who had previously had electrodes attached to their brains to monitor for seizures, tested out the mind-reading tech. Each participant was asked to repeat sentences, allowing the A.I. to learn and then demonstrate its ability to decode thoughts into speech. The best performance had an average translation error rate of only 3%.
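Error rates like that 3% figure are conventionally computed as word error rate: the word-level edit distance between the decoded sentence and the sentence the participant actually spoke, divided by the length of the true sentence. The sketch below is my illustration of that standard metric, not code from the study.

```python
# Word error rate via word-level edit distance (Levenshtein), as an illustration.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitute = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitute, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(ref)

# One wrong word out of four -> 25% error for this sentence:
print(word_error_rate("those musicians harmonize marvelously",
                      "the musicians harmonize marvelously"))  # 0.25
```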

Currently, the A.I. has a vocabulary of around 250 words. By comparison, the average American adult native English speaker has a vocabulary of somewhere between 20,000 and 35,000 words. So if the researchers are going to make this tool as valuable as it could be, they will need to vastly scale up the number of words it can identify and verbalize.

“The algorithms for natural-language processing, including machine translation, have advanced quite a bit since I conceived the idea for this decoder in 2016,” Makin continued. “We’re investigating some of these now. [In order to] achieve high-quality decoding over a broader swath of English, we need to collect more data from a single subject — or somehow get even bigger boosts from our transfer learning.”

A paper describing the work was recently published in the journal Nature Neuroscience.
