Maybe not, but thanks to the good folks at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University, mind-controlled robots aren’t entirely a dream any more.
“We’ve developed a system that uses EEG brain-activity data to correct a robot’s mistakes in real time,” CSAIL research scientist Stephanie Gil told Digital Trends. “For several years researchers have tried to develop robots that can be controlled by brain signals. The problem is that to do this, most of the time humans have to ‘think’ in a specific way that computers can recognize, like looking at a flashing light that corresponds to a particular task. Obviously, this is a pretty unnatural experience for us, and can be very mentally taxing. And it’s especially problematic if we are overseeing a robot to do a dangerous task in manufacturing or construction. In our case the users do not need to modulate their brain activity, they just have to evaluate the actions of the robot: a task that is very natural to humans.”
For the study, the human participant wears an electroencephalography (EEG) cap that records neural activity. They then watch as a robot performs an object-sorting task and, using nothing more than their thoughts, correct it when it makes a mistake. This works because a machine-learning algorithm detects the error-related signals the brain naturally produces when a person notices a mistake, classifying the brain waves in just 10 to 30 milliseconds.
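Stripped to its essentials, a pipeline like this is a very fast binary classifier over short windows of EEG data. The sketch below is purely illustrative, not the researchers' actual model: it uses synthetic feature vectors, a made-up "error bump" template, and a simple nearest-centroid rule to show why per-window classification can be fast enough for real-time correction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for preprocessed EEG feature vectors (hypothetical):
# "error" windows carry a small positive deflection resembling an
# error-related signal; "no-error" windows are pure noise.
N_FEATURES = 48
errp_template = rng.normal(0.0, 0.2, N_FEATURES) + 1.0

def make_window(is_error: bool) -> np.ndarray:
    """Generate one synthetic EEG feature window."""
    noise = rng.normal(0.0, 1.0, N_FEATURES)
    return noise + errp_template if is_error else noise

# "Train" a nearest-centroid classifier on labeled example windows.
train_err = np.stack([make_window(True) for _ in range(200)])
train_ok = np.stack([make_window(False) for _ in range(200)])
c_err, c_ok = train_err.mean(axis=0), train_ok.mean(axis=0)

def classify(window: np.ndarray) -> str:
    """Label a single window 'error' or 'ok' by the nearer centroid."""
    d_err = np.linalg.norm(window - c_err)
    d_ok = np.linalg.norm(window - c_ok)
    return "error" if d_err < d_ok else "ok"

# Classifying one window is just two distance computations, so it runs
# in well under a millisecond -- the kind of budget that makes
# correcting a robot mid-action plausible.
print(classify(make_window(True)))
print(classify(make_window(False)))
```

The point of the toy example is the latency argument: once the model is trained offline, the per-window decision is a handful of arithmetic operations, which is why a real system can react within tens of milliseconds.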
“I could imagine a system like this being used to let a person monitor robots as they perform tasks that are deemed too dirty or dangerous for humans, such as on a manufacturing floor or even underwater and in space,” CSAIL PhD candidate Joseph DelPreto told Digital Trends. “Alternatively, such a system could allow an autonomous car to drive itself while still being kept in check at all times by a human driver in case it makes a mistake.”
There’s still work to be done before it can reach that point, of course. The system as it currently exists handles only a simple binary-choice task, but it suggests that one day there may be far more intuitive ways for us to control robots.
As Boston University PhD candidate Andres F. Salazar-Gomez put it to us, “This is important because it means that people don’t have to train themselves to think in a certain way. The machine is the one that adapts to you, instead of the other way around.”