Locked-in syndrome (LIS) refers to a condition in which patients retain cognitive function but are unable to move or communicate due to near-total paralysis.
Thanks to an exciting advance, however, some help may be on the way. As part of an international research project, doctors have been able to use brain-reading technology to communicate with such patients for the first time, by asking a series of questions with “yes” or “no” answers and then using computer algorithms to decode their thought patterns.
“In some cases, eye movement can be used for communication,” researcher Ujwal Chaudhary told Digital Trends. “However, once eye movement is gone for a person suffering from locked-in syndrome, there is no means of communication. That’s where we come in. We’ve developed a non-invasive functional near-infrared spectroscopy technique for communication.”
In a small-scale study carried out at Germany’s University of Tübingen, four LIS patients were fitted with non-invasive fNIRS brain caps. The caps use infrared light to measure variations in blood flow to different regions of the brain.
To start with, the researchers asked the study’s participants questions like, “Is Berlin the capital of Germany?” This process lasted about an hour and covered between 100 and 150 questions to which the questioners already knew the answers. This allowed them to train their computer algorithm to recognize when a patient was answering in the affirmative or negative. According to the investigators, the accuracy of the computer analysis is around 70 percent.
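The calibration step described above can be sketched in miniature. The code below is not the team’s actual pipeline; it uses a hypothetical one-dimensional “blood flow change” feature, random synthetic trials, and a simple nearest-centroid rule to show how known-answer questions let an algorithm learn to separate “yes” from “no” responses, with noise keeping accuracy well below 100 percent.

```python
import random

random.seed(0)

def simulate_trial(answer):
    """Hypothetical fNIRS feature: a 'yes' shifts mean blood-flow change up."""
    base = 1.0 if answer == "yes" else -1.0
    return base + random.gauss(0, 1.2)  # noise makes decoding imperfect

# Calibration: questions whose answers the experimenters already know.
truth = [random.choice(["yes", "no"]) for _ in range(120)]
features = [simulate_trial(t) for t in truth]

# Learn one centroid (average feature value) per class.
yes_mean = sum(f for f, t in zip(features, truth) if t == "yes") / truth.count("yes")
no_mean = sum(f for f, t in zip(features, truth) if t == "no") / truth.count("no")

def decode(feature):
    """Classify a new trial by whichever centroid it sits closer to."""
    return "yes" if abs(feature - yes_mean) < abs(feature - no_mean) else "no"

# Evaluate on fresh simulated trials.
test_truth = [random.choice(["yes", "no"]) for _ in range(200)]
correct = sum(decode(simulate_trial(t)) == t for t in test_truth)
print(f"decoding accuracy: {correct / len(test_truth):.0%}")
```

With noisier features the accuracy drops, which is why a calibration figure like 70 percent, rather than perfection, is the realistic benchmark.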
After they were satisfied with this number, the researchers then moved on to asking open questions, such as, “Would you like [a particular friend] to visit you today?” or, “Are you in pain at the moment?”
This is where the real value of the work comes into play — although the researchers aren’t stopping here. According to Chaudhary, the team next hopes to extend the work to allow LIS patients to form their own sentences.
This is something which is already possible for LIS sufferers who still have eye movement, since various eye-tracking tools available make it possible for them to select words or letters. But such tools cannot be used if patients are unable to move their eyes. The question, then, is how best to achieve a similar goal — and how to do so when you only have access to binary positive or negative brain signals.
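One classic way to select a letter from nothing but binary answers — offered here as an illustrative sketch, not a method the researchers have confirmed — is repeated halving of the alphabet: each “yes” or “no” narrows the candidates by half, so any of 26 letters is reachable in at most five answers.

```python
import string

def select_letter(answer_fn, letters=string.ascii_lowercase):
    """Narrow the alphabet by bisection using a yes/no oracle.

    answer_fn(half) -> True if the patient's intended letter is in `half`,
    standing in for one decoded brain response per question.
    """
    candidates = list(letters)
    while len(candidates) > 1:
        half = candidates[:len(candidates) // 2]
        # Question posed to the patient: "Is your letter in this group?"
        if answer_fn(half):
            candidates = half
        else:
            candidates = candidates[len(candidates) // 2:]
    return candidates[0]

# Simulated patient who intends the letter "m".
target = "m"
picked = select_letter(lambda half: target in half)
print(picked)  # -> m
```

The catch, as the article notes, is that each answer is only ~70 percent reliable, so a practical speller would need repeated questions or error correction on top of this scheme.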
“What we’re working on right now is a way of categorizing topics such as health, family, and food,” Chaudhary continued. “Each of these can be asked as ‘yes’ or ‘no’ questions. If the patient answers ‘yes’ to talking about food, they could then answer questions like ‘are you hungry?’”
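The topic-first dialogue Chaudhary describes can be sketched as a two-level tree of yes/no questions. The topics and follow-up questions below are hypothetical examples (drawn loosely from the quotes in this article), not the team’s actual question set: the first round of answers picks a topic, and later rounds drill into specifics.

```python
# Hypothetical topic tree: top-level categories, each with follow-up questions.
TOPICS = {
    "food": ["Are you hungry?", "Are you thirsty?"],
    "health": ["Are you in pain at the moment?", "Did you sleep well?"],
    "family": ["Would you like a visit today?"],
}

def run_dialogue(answer_fn):
    """answer_fn(question) -> True for a decoded 'yes', False for 'no'."""
    log = []
    for topic, questions in TOPICS.items():
        q = f"Would you like to talk about {topic}?"
        if answer_fn(q):
            log.append((q, "yes"))
            for follow_up in questions:
                log.append((follow_up, "yes" if answer_fn(follow_up) else "no"))
            break  # stop once a topic has been explored
        log.append((q, "no"))
    return log

# Simulated patient who wants to talk about food and is hungry.
yes_set = {"Would you like to talk about food?", "Are you hungry?"}
for question, answer in run_dialogue(lambda q: q in yes_set):
    print(f"{question} -> {answer}")
```

The appeal of this structure is that a handful of binary answers yields a meaningful exchange, without requiring the patient to spell anything out letter by letter.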
However, this will take longer to develop. In a separate interview, Niels Birbaumer, the neuroscientist who led the research, told Digital Trends he thinks it will require “another few years” of research to achieve. It may also require more invasive brain-reading methods.
“I think we will need to implant electrodes in the brain to be able to do this, because it is very hard for people to have the concentration to do this,” he said. “Right now, when we ask people to choose a particular word or letter they are not able to do so.”