Brain-reading headphones are here to give you telekinetic control


For the past 45 years, SIGGRAPH, the renowned annual conference for all things computer graphics, has been a great place to look if you want a sneak peek at the future. In the 1980s, it was where animation enthusiasts Ed Catmull and John Lasseter crossed paths early in their careers; a decade later, they had created Toy Story, the first feature-length computer-animated movie. In the 1990s, it was home to a spectacular demo in which thousands of attendees used color-coded paddles to play a giant collaborative game of Pong. Today, online games played by millions are everywhere.

And, in 2017, it was where an early-stage startup called Neurable and VR graphics company Estudiofuture demonstrated a game called The Awakening. Players donned a VR headset along with head-mounted electrodes designed to read their brain waves. Using machine learning to decode these messy brain signals, Neurable was able to turn thoughts into game actions. Players could select, pick up, and throw objects simply by thinking about it. No gamepad, controller, or body movement necessary. Could this be the future of computer interaction?


Now, in the closing days of 2019, Neurable has moved beyond gaming. But it’s lost none of its enthusiasm for building technology that can be operated using nothing more than brain activity. Having just announced a new $6 million funding round, the company is preparing to create its first mass-market product: what its creators claim will be the world’s first true “everyday brain-computer interface.” Coming soon to a head near you.


Reading your mind

According to Neurable CEO Dr. Ramses Alcaide, it all started when he was a kid. Eight years old, to be precise. “My uncle got into a trucking accident and lost both his legs,” he told Digital Trends. “[Since then], the idea of developing technology for people who are differently abled has been my big, never-ending quest.”


It’s a quest that took Alcaide first to the University of Washington for an undergraduate degree in electrical engineering, during which he developed control systems for prosthetics. After that, it was on to the University of Michigan for a master’s degree and then a Ph.D. in neuroscience, focusing on brain activity communication systems. While there, Alcaide played a key role in one of the big tech breakthroughs that led to Neurable.

“We came up with a significant breakthrough that let us increase the signal-to-noise [ratio] dramatically for EEG and other types of signal processing systems for brain activity,” he said.

EEG, short for electroencephalography, is a noninvasive method for monitoring the brain’s electrical activity. Using electrodes placed along the scalp, it measures the voltage fluctuations produced by ionic currents in the brain’s neurons. While the resulting signal contains far less information than an fMRI (functional magnetic resonance imaging) scan, the technology is much more portable, since it doesn’t require the large, expensive equipment usually found only in clinics. That makes it more practical for real-world brain-computer interfaces.
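For a rough sense of what working with EEG data involves, here is a minimal sketch of one common preprocessing step: band-pass filtering a scalp signal to isolate a frequency band of interest. The signal below is synthetic, and none of this reflects Neurable’s actual pipeline.

```python
# A minimal sketch of basic EEG preprocessing: band-pass filtering a raw
# scalp signal to isolate the 8-12 Hz alpha band. The signal here is
# synthetic; a real system would read from electrode hardware.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250  # sampling rate in Hz, typical for consumer EEG hardware
t = np.arange(0, 2.0, 1 / fs)

# Synthetic "recording": a 10 Hz alpha rhythm buried in broadband noise.
raw = np.sin(2 * np.pi * 10 * t) + 2.0 * np.random.randn(t.size)

# 4th-order Butterworth band-pass filter for the alpha band (8-12 Hz).
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, raw)  # zero-phase filtering avoids time shifts

print(f"raw RMS: {np.sqrt(np.mean(raw**2)):.2f}, "
      f"alpha-band RMS: {np.sqrt(np.mean(alpha**2)):.2f}")
```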

Using cutting-edge A.I. algorithms to probe the EEG signals, Alcaide became convinced that the performance metrics were “significant enough” to make a commercial product possible. This could help people without the use of their limbs to easily control the machines around them. Its appeal could extend beyond that group, too. The question was what form the final brain-reading product would take.
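Neurable hasn’t published the details of its algorithms, but a common pattern in the EEG literature is to reduce each short window of signal to a feature vector and feed it to a lightweight classifier. The toy sketch below uses random placeholder data and a standard linear discriminant classifier purely to illustrate the shape of such a decoder; it is not Neurable’s method.

```python
# A toy sketch of EEG intent decoding: classify feature vectors extracted
# from short signal epochs into discrete commands. The features and labels
# below are random placeholders, not real brain data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
COMMANDS = ["select", "grab", "throw"]

# Pretend each epoch has already been reduced to a 16-dimensional feature vector.
X_train = rng.normal(size=(300, 16))
y_train = rng.integers(0, len(COMMANDS), size=300)

clf = LinearDiscriminantAnalysis()
clf.fit(X_train, y_train)

new_epoch = rng.normal(size=(1, 16))
probs = clf.predict_proba(new_epoch)[0]
best = int(np.argmax(probs))
# A real system would only act when the decoder is confident enough.
if probs[best] > 0.5:
    print(f"decoded intent: {COMMANDS[best]} (p={probs[best]:.2f})")
else:
    print("no confident intent decoded")
```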

The perfect form factor

Neurable’s first attempt at such a device was the DK1, an EEG-reading cap and software development kit. The company showed it off with demonstrations such as having users “think” a toy race car around a track or fly drones with their minds. The DK1 used six dry electrodes (look, Ma, no gel!) and took just two minutes to calibrate. That was a major step up from the previous proof-of-concept, which required 32 wet electrodes and a calibration time of 30 minutes. Still, Alcaide knew that this was not the Platonic ideal of what he was trying to create.


That ideal is what Neurable is now working to bring to market. The device hasn’t been shown publicly yet, but according to Alcaide it will be a set of headphones with the company’s electrode technology built in. Instead of looking like, frankly, the mad-scientist rig of earlier EEG devices, it will resemble something all of us are already used to seeing on a daily basis. The headphones will appear entirely unremarkable — and that’s exactly the point.

“Because they’re headphones, you can just put them on and nobody will know that you’re wearing a brain-computer interface,” Alcaide said. “It just looks like an average pair of headphones, but [will give users] the ability to do hands-free control. It can also track your cognitive states throughout the day.”


These two main use cases — one active, the other passive — are what Alcaide is excited about. So far, Neurable has shown that its technology works. But, much as with the earliest personal computers, it hasn’t yet been possible to do much more than demonstrate that it works. An everyday wearable headset could change that.

The “hearables” market, made up of ear-worn devices with smart features like A.I. assistants or health tracking, is one of the fastest-growing in tech. A recent report by International Data Corporation (IDC) projected that smart earwear shipments will hit 273.7 million units per year by 2023, roughly double today’s figure of 139.4 million. If Neurable can be the first to build brain-reading tech into headphones, that would represent a unique market opportunity. If anything’s going to make Apple’s AirPods Pro look like yesterday’s news, this could well be it!


With Neurable planning to unveil the product sometime in 2020, Alcaide was not yet able to say exactly what the headphones will do. “On a high level, it’s going to be what you standardly want to do with a pair of headphones,” he said. “I don’t know if I’m allowed to go into deeper detail than that, but it’s pretty self-evident [what that might be].”

In some ways, that is. It doesn’t take a Ph.D. in neuroscience to figure out that a brain-computer interface built into a pair of headphones could be great for letting you start, stop, or skip tracks without having to use your hands or speak commands out loud. But it could go far deeper than that. For instance, the cognitive-state tracking that Alcaide referred to could be used to cue up the perfect soundtrack for the moment, depending on how you are feeling.
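None of this is confirmed functionality, but wiring a decoder like that to playback controls is structurally simple. A hypothetical dispatch layer (the Player class and intent strings here are invented for illustration; Neurable has published no such API) might look like this:

```python
# A hypothetical sketch of the "hands-free playback" idea: route decoded
# mental commands to media-player actions. The decoder output and player
# object are stand-ins invented for illustration.
from typing import Callable, Dict

class Player:
    def play(self) -> None: print("playing")
    def pause(self) -> None: print("paused")
    def skip(self) -> None: print("next track")

def route_command(intent: str, player: Player) -> None:
    """Map a decoded intent string to a player action, ignoring unknowns."""
    actions: Dict[str, Callable[[], None]] = {
        "play": player.play,
        "pause": player.pause,
        "skip": player.skip,
    }
    action = actions.get(intent)
    if action is not None:
        action()

player = Player()
for decoded in ["play", "skip", "unknown"]:  # e.g. outputs of an EEG decoder
    route_command(decoded, player)
```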

It could go even further down the rabbit hole: rather than simply matching a playlist to your current mood, the technology could play you the right songs to bring you to a desired emotional state. A person could theoretically enter the emotion they want to feel and have a customized playlist generated to provoke that response.
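As a thought experiment, that steering problem can be framed as a search in the valence/arousal space that affective-computing research often uses to describe emotion. In this minimal sketch, every track coordinate is invented and the selection rule is deliberately naive:

```python
# A thought-experiment sketch of "play music to reach a desired emotion":
# pick the track whose mood coordinates lie closest to a target point in a
# two-dimensional valence/arousal space. All numbers are invented.
import math

# (valence, arousal), each in [-1, 1]: invented per-track mood coordinates.
TRACKS = {
    "rainy_day_piano": (-0.3, -0.6),
    "summer_anthem": (0.8, 0.7),
    "lofi_focus": (0.2, -0.4),
    "arena_rock": (0.5, 0.9),
}

def next_track(target: tuple) -> str:
    """Choose the track whose mood point is nearest the desired state."""
    return min(TRACKS, key=lambda name: math.dist(TRACKS[name], target))

# The listener wants to feel upbeat and energized.
print(next_track(target=(0.7, 0.8)))  # -> "summer_anthem"
```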

The next big interface

Alcaide doesn’t think his team is just building a cool gimmick. He believes that brain-computer interfaces represent the next big computing paradigm shift: the logical evolution in a string of popular mass-market technologies that started with the personal computer and moved on to devices like the smartphone.

“Computing is becoming more and more mobile,” he said. “It’s also [going to become] more spatial. As it continues to go down that path, we need forms of interaction that enable us to more seamlessly interact with our technology.”


New ways of interacting with technology aren’t just about speed. The mouse, for instance, wasn’t just a faster keyboard. It changed the way we interacted with computers. Before the mouse, there was no way to carry over to a computer the universal shorthand of pointing to indicate an area of interest. As Steve Jobs told Playboy magazine (of all places) in the 1980s, “If I want to tell you that there is a spot on your shirt, I’m not going to do it linguistically: ‘There’s a spot on your shirt 14 centimeters down from the collar and three centimeters to the left of your button.’”

Instead, we do it by taking our finger (or our mouse cursor) and indicating the precise spot. The mouse empowered average users at a time when using a computer was scary. It ushered in the world of the graphical user interface, transforming computer operating systems into a recognizable, real-world-inspired environment the user could move through.

A quarter-century later, multi-touch did the same thing for mobile touchscreens. Touching a device to navigate it removed the middleman of the mouse, making interactions even more intuitive. It also opened up more possibilities for real-world gestures such as “pinch to zoom,” while sensors such as the accelerometer gave your smartphone a sense of its own orientation.

Alcaide is confident that brain-computer interfaces will change how we interact with machines to a similar degree. If augmented reality becomes as big as many believe it will be, we’ll need a way to interact with it. Interfacing on the move with AR glasses can’t easily be done with a mouse or by touching a mobile display. Nor is A.I. assistant-style voice interaction an obvious solution in every scenario.

But a computer system which knows what you’re thinking very well could be.

Do the rewards outweigh the risks?

Alcaide said that brain-reading technology will help create richer interactions. Take messaging, for instance. “Right now, the way we communicate is so limited,” he said. “If I was to send you a message via text, versus speaking to you in real life, we could interpret those messages differently depending on the media. Brain-computer interfaces allow us to add further context to what we are sending to one another. Imagine that you were able to send a text message and the color bubble can tell you how the person intends to say something: whether they’re angry or upset or being sarcastic.”
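That bubble-color idea is speculative, but structurally it amounts to attaching inferred-affect metadata to a message. A hypothetical sketch, with the affect label hard-coded where a brain-computer interface would supply it:

```python
# A hypothetical sketch of the "color bubble" idea: attach an inferred
# affect label to an outgoing message and map it to a bubble color on the
# receiving end. The affect label would come from a BCI; here it is
# hard-coded, and the color choices are invented.
from dataclasses import dataclass

BUBBLE_COLORS = {
    "neutral": "#d0d0d0",
    "angry": "#e74c3c",
    "upset": "#5b7bd5",
    "sarcastic": "#b76edf",
}

@dataclass
class Message:
    text: str
    affect: str = "neutral"  # inferred sender state, carried as metadata

    def bubble_color(self) -> str:
        return BUBBLE_COLORS.get(self.affect, BUBBLE_COLORS["neutral"])

msg = Message("Great, thanks a lot.", affect="sarcastic")
print(msg.text, "->", msg.bubble_color())
```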

Or what if the technology, constantly monitoring your background cognitive state, knew when to present you with certain information? In a world in which users are bombarded with constant notifications, media, ads, and other distractions, Alcaide thinks such technology could prevent data overload. “Your brain is able to determine which information is not important,” he said.
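A filtering layer like the one Alcaide hints at could be as simple as gating notifications on an estimated cognitive-load score. This toy sketch assumes such a score exists (the values here are made up) and defers low-priority alerts until it drops:

```python
# A toy sketch of cognitive-state notification gating: hold back low-priority
# notifications while an (assumed) EEG-derived load estimate is high, then
# release them once the user has spare attention. The load values are made up.
import heapq

class NotificationGate:
    def __init__(self, load_threshold: float = 0.7):
        self.load_threshold = load_threshold
        self._queue = []  # min-heap ordered by priority (0 = most urgent)

    def submit(self, priority: int, text: str, current_load: float) -> None:
        if priority == 0 or current_load < self.load_threshold:
            print("DELIVER:", text)
        else:
            heapq.heappush(self._queue, (priority, text))

    def on_load_drop(self) -> None:
        """Flush deferred notifications once cognitive load falls."""
        while self._queue:
            _, text = heapq.heappop(self._queue)
            print("DELIVER (deferred):", text)

gate = NotificationGate()
gate.submit(2, "Someone liked your photo", current_load=0.9)       # deferred
gate.submit(0, "Calendar: meeting in 5 minutes", current_load=0.9)  # urgent
gate.on_load_drop()
```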


Right now, we’re still at the start of this particular journey. Neurable is far from the only group working on brain-computer interfaces; other interested parties range from cutting-edge research labs at top universities to (who else?) Elon Musk’s Neuralink. But Neurable’s founder and employees are convinced that they are onto a good thing.

Should we be worried about the growing number of tech companies looking to peer into our brains to power next-gen computer systems? After all, it’s hard to look at some of the more egregious abuses of sensitive user data in recent years and think that Silicon Valley is entirely a force to be trusted. Things are bad enough when algorithms simply comb through our data to predict what we’re interested in. Would it get worse if they could actually read our brains?

“[People certainly] have the right to be concerned,” Alcaide said. However, he also insists that Neurable is working hard to protect users’ brain data. He additionally thinks part of his job at Neurable is to educate the public about what exactly he’s doing — and what brain-computer interfaces will mean for us.

“We’re not reading thoughts,” he said. “We are reading high-level brain electrical changes, and then interpreting those to make assumptions as to what are a person’s thoughts.” Besides, he notes, it’s very easy for users to protect themselves against brain-reading tech such as this. “You just don’t put [the headsets] on, right?”

If Neurable has done its job properly, Alcaide hopes that the rewards will far, far outweigh the perceived risks. The next few years will shed some light on whether he’s correct.
