Researchers from the University of California, San Francisco, have developed a brain implant which uses deep-learning artificial intelligence to transform thoughts into complete sentences. The technology could one day be used to help restore speech in patients who are unable to speak due to paralysis.
“The algorithm is a special kind of artificial neural network, inspired by work in machine translation,” Joseph Makin, one of the researchers involved in the project, told Digital Trends. “Their problem, like ours, is to transform a sequence of arbitrary length into a sequence of arbitrary length.”
The neural net, Makin explained, consists of two stages. In the first, neural data gathered from brain signals, captured using electrodes, is transformed into a list of numbers. This abstract representation of the data is then decoded, word by word, into an English-language sentence. The two stages are trained together, not separately, to achieve this task. The words are finally output as text, although it would be equally possible to output them as speech using a text-to-speech converter.
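The two-stage idea can be sketched in code. The toy model below is an illustrative assumption, not the researchers' implementation: a plain numpy recurrent encoder folds a variable-length sequence of neural feature vectors into one state vector, and a decoder then unrolls that state word by word. The channel count, hidden size, and five-word vocabulary are all invented for the sketch, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["<eos>", "the", "patient", "spoke", "clearly"]  # toy vocabulary (assumed)
N_FEAT, N_HID = 8, 16  # signal channels and hidden units (assumed sizes)

# Random weights stand in for the jointly trained parameters of both stages.
W_in = rng.normal(scale=0.1, size=(N_HID, N_FEAT))
W_hh = rng.normal(scale=0.1, size=(N_HID, N_HID))
W_dec = rng.normal(scale=0.1, size=(N_HID, N_HID))
W_emb = rng.normal(scale=0.1, size=(N_HID, len(VOCAB)))
W_out = rng.normal(scale=0.1, size=(len(VOCAB), N_HID))

def encode(signals):
    """Stage 1: compress a sequence of neural feature vectors
    into a single abstract state vector (the 'list of numbers')."""
    h = np.zeros(N_HID)
    for x in signals:
        h = np.tanh(W_in @ x + W_hh @ h)
    return h

def decode(state, max_words=10):
    """Stage 2: unroll the state word by word until <eos>."""
    words, prev, h = [], np.zeros(len(VOCAB)), state
    for _ in range(max_words):
        h = np.tanh(W_dec @ h + W_emb @ prev)
        idx = int(np.argmax(W_out @ h))
        if VOCAB[idx] == "<eos>":
            break
        words.append(VOCAB[idx])
        prev = np.eye(len(VOCAB))[idx]  # feed the chosen word back in
    return words

# A fake 20-step, 8-channel recording standing in for electrode features.
recording = rng.normal(size=(20, N_FEAT))
sentence = decode(encode(recording))
print(sentence)  # untrained weights, so the words are arbitrary
```

Because the encoder and decoder share one loss during training, gradients flow through both stages at once, which is what "trained together, not separately" means in practice.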
For the study, four women with epilepsy, who had previously had electrodes attached to their brains to monitor for seizures, tested out the mind-reading tech. Each participant was asked to repeat sentences, allowing the A.I. to learn and then demonstrate its ability to decode thoughts into speech. The best performance had an average word error rate of only 3%.
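Word error rate, the standard metric for this kind of decoding, is the word-level edit distance between the decoded sentence and the reference, divided by the reference length. A minimal sketch of that calculation (my own illustrative implementation, not the study's code):

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four -> 25% word error rate.
print(word_error_rate("the quick brown fox", "the quick red fox"))  # 0.25
```

A 3% rate therefore means roughly one wrong word in every 33 decoded, comparable to professional human transcription of ordinary speech.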
Currently the A.I. has a vocabulary of around 250 words. By comparison, the average American adult native English speaker has a vocabulary of somewhere between 20,000 and 35,000 words. So if the researchers are going to make this tool as valuable as it could be, they will need to vastly scale up the number of words it can identify and verbalize.
“The algorithms for natural-language processing, including machine translation, have advanced quite a bit since I conceived the idea for this decoder in 2016,” Makin continued. “We’re investigating some of these now. [In order to] achieve high-quality decoding over a broader swath of English, we need to collect more data from a single subject — or somehow get even bigger boosts from our transfer learning.”
A paper describing the work was recently published in the journal Nature Neuroscience.