
Wild new ‘brainsourcing’ technique trains A.I. directly with human brainwaves

Picture a room with more than two dozen identical desks. At each one, a person sits in front of a computer playing a simple identification game. The game asks the user to complete an assortment of basic recognition tasks, such as choosing which photo in a series shows someone smiling, or which depicts a person with dark hair or wearing glasses. The player must make their decision before moving on to the next picture.

Only they don’t do it by clicking with their mouse or tapping a touchscreen. Instead, they select the right answer simply by thinking it.

Each person in the room is fitted with an electroencephalogram (EEG) skull cap, a trail of wires leading from it to a nearby recording device that monitors the electrical activity on their scalp. The scene looks like an open-plan office in which everyone is jacked into The Matrix.


“The participants [in our study] had the simple task of just recognizing [what they were asked to look for],” Tuukka Ruotsalo, a research fellow at the University of Helsinki, which led the recently published research, told Digital Trends. “They were not asked to do anything else. They just looked at the images they were shown. We then built a classifier to see if we could identify the correct face with the target features, solely based on the brain signal. Nothing else was used, apart from the EEG signal at the moment when the participants saw the picture.”

In the experiment, a total of 30 volunteers were shown images of synthesized human faces (synthesized to avoid the chance that a participant might recognize a real person and skew the results). Participants were asked to mentally note whether each face matched the feature they had been told to look for. Using only that brain activity data, an artificial intelligence algorithm learned to recognize matching images, such as when a blonde person appeared on-screen.
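The study’s exact pipeline isn’t reproduced here, but the general recipe for this kind of single-trial EEG classification is well established. Below is a minimal sketch, assuming pre-cut EEG epochs time-locked to each image onset; the synthetic data, array shapes, and choice of a shrinkage LDA classifier are illustrative assumptions, not the Helsinki group’s exact method.

```python
# Minimal sketch of a single-trial EEG classifier. The synthetic data,
# array shapes, and shrinkage-LDA model are illustrative assumptions,
# not the Helsinki team's exact pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for real recordings: 300 epochs x 32 electrodes x 100 time
# samples, one epoch per face image shown to a participant.
epochs = rng.normal(size=(300, 32, 100))
# 1 = the face matched the target feature (e.g. "blonde"), 0 = it did not.
labels = rng.integers(0, 2, size=300)

# Flatten each epoch into one feature vector of scalp voltages.
X = epochs.reshape(len(epochs), -1)

# Shrinkage LDA is a common baseline for event-related EEG responses.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("Cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

With random placeholder data this hovers around chance, of course; the point is the shape of the pipeline: an epoch of scalp voltages in, a matched/not-matched label out.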

A fresh spin on an old idea

This is impressive stuff, but it’s not especially new. For at least the past decade, researchers have used brain activity data, gathered via EEG or fMRI, to carry out an assortment of increasingly impressive thought-reading demonstrations. In some cases, that means identifying a particular image or video, as in a recent study in which researchers at the Neurorobotics Lab in Moscow showed it’s possible to figure out which video clips people are watching by monitoring their brain activity.

In other cases, these insights can be used to trigger certain responses. For example, in 2011 researchers at Washington University in St. Louis placed temporary electrodes over the speech center of a person’s brain and demonstrated that the person could move a computer cursor on screen simply by thinking about where they wanted it to go. Still other studies have shown that brain data can be used to move robotic limbs or pilot drones.

What makes the University of Helsinki’s recent study novel and interesting is that it focuses on how the brain activity of a group of people, rather than a single person, can be used to draw conclusions such as classifying images. Not only have the researchers shown that the approach works, but also that, at least up to a point, the more people you add to the group, the more accurate the classification becomes.


“When we add more people into the brain-sourcing pool, so that brain data is recorded from a group of people, we achieve performance of well over 90% accuracy,” Ruotsalo said. “[That is] almost at the level of [asking a group to manually tag answers.]”

This might initially sound counterintuitive. If brain data is noisy, wouldn’t adding more people make it even noisier? After all, if you want to pick out a particularly hard-to-hear sound in a room, it’s easier if you’ve only got one person talking over it rather than 10. Or 30. But as the history of the big data revolution, and many of the most notable demonstrations of machine learning in action, has made clear, the more data you have to throw at a problem, the more accurate your systems become.

“The signal is noisy in general from EEG or any other brain imaging, and participants or humans are not always attending 100%,” Ruotsalo explained. “Think about looking at pictures yourself. Sometimes, after looking [at] many, your mind could be wandering. Even with single participants, researchers often use tricks, such as repeating the same stimulus all over again to be able to average the noise out. Here, we use signals from many participants.”

The chance that at least some individuals are paying attention at any given moment is far greater for a group than for a single person. Add in the wisdom of crowds (more on that later) and you’ve got one heck of a powerful combination.
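Conceptually, the aggregation step can be as simple as a vote. The sketch below uses made-up numbers (the noise level, the always-a-match ground truth, and the confidence-averaging rule are assumptions for illustration, not the paper’s actual method) to show how individually weak signals pool into group accuracy well above 90%:

```python
import numpy as np

def brainsource(per_participant_probs):
    """Pool one image's per-participant confidences into a group decision."""
    return np.mean(per_participant_probs) > 0.5

rng = np.random.default_rng(1)
n_images, n_participants = 1000, 30

# Simulated single-trial confidences that each image matches the target.
# Every image really is a match here, so values should land above 0.5,
# but each individual estimate is weak (centered at 0.6) and very noisy.
probs = np.clip(0.6 + rng.normal(0, 0.25, size=(n_images, n_participants)), 0, 1)

single = np.mean(probs[:, 0] > 0.5)                # trust one participant
pooled = np.mean([brainsource(p) for p in probs])  # pool all 30
print(f"one participant: {single:.0%}, pool of 30: {pooled:.0%}")
```

Averaging works here because the noise in each participant’s signal is roughly independent, while the underlying response to the target feature is shared; it’s the same reason repeating a stimulus for a single person helps.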

Enter the world of brainsourcing

Tuukka Ruotsalo and his team call this group-based brain-reading “brainsourcing.” It’s a play on the term crowdsourcing, referring to a way of breaking up one big task into smaller tasks that can be distributed to large groups of people to help solve. Here in 2020, crowdsourcing might be most synonymous with money-raising platforms such as Kickstarter, where the “big task” is the startup capital needed to launch a product and the distributed crowd-based element involves asking people to chip in smaller sums of money.

However, crowdsourcing can lend itself to other applications as well. Amazon’s Mechanical Turk platform and Apple’s ResearchKit are crowdsourcing tools that harness the power of the crowd for tasks ranging from answering surveys to carrying out important academic research. Meanwhile, companies like TaskRabbit and 99designs use the crowd to match customers with the right person for jobs that range from yard work and grocery shopping to designing the perfect logo or masthead for a website.


A.I. can also benefit from crowdsourcing. Consider, for instance, Google’s reCAPTCHA technology. Most of us think of reCAPTCHA as a way for websites to check that we’re not bots before letting us perform a particular task. Completing a reCAPTCHA might involve reading a wiggly line of text or clicking every image in a selection that includes a cat. But reCAPTCHAs aren’t just about testing whether we’re human; they’re also a very clever way of gathering data that can be used to make Google’s image recognition A.I. smarter. Each time you read a fragment of text from a roadside sign in a reCAPTCHA image, you could be helping to make, say, Google’s self-driving cars slightly better at recognizing the real world. Once Google has collected enough answers for a given image, it can be reasonably confident that it has the correct one.

It’s too early to say how brainsourcing could practically build on these ideas. “We’ve been trying to think about this ourselves,” Ruotsalo said. “I don’t think we even have the ideas yet. It’s just a proof-of-concept that we can do this. Now it’s open for other people to explore how well, and what kinds of tasks, and what types of groups of people we could use this for.”

The future is coming

But the potential is certainly there. Wearable EEG monitors are starting to become commercially available, in forms that range from brain-reading headphones to smart tattoos. At present, EEG demonstrations like the one in this study measure only a tiny percentage of a person’s total brain activity. Over time, that could increase, allowing less binary information to be gathered. Rather than just getting a “yes” or “no” answer to questions, this technology could observe people’s responses to more complex questions, or monitor their responses to media like a TV show or movie and feed aggregate crowd data back to its makers.

“Instead of using conventional ratings or like buttons, you could simply listen to a song or watch a show, and your brain activity alone would be enough to determine your response to it,” Keith Davis, a student and research assistant on the project, said in a press release accompanying the work.

Imagine if millions of people wore EEG-tracking wearables, and a percentage of them were offered a micropayment 10 times a day in exchange for taking a few seconds to help solve a particular task. Fanciful? Maybe right now, but many of today’s crowdsourcing technologies seemed just as fanciful only a few years ago.

On the game show Who Wants To Be A Millionaire, one of the “lifelines” available to contestants is the option to ask the audience. When this one-off lifeline is triggered, audience members use voting pads attached to their seats to pick the multiple-choice answer they believe is correct. The computer then tallies the results and shows them to the contestant as percentages. According to James Surowiecki’s book The Wisdom of Crowds, asking the audience yields the correct answer more than 90% of the time. That is significantly better than the show’s 50/50 option, which eliminates two incorrect answers, and the option to phone a friend, which produces the right answer around two-thirds of the time.
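That 90%-plus figure is the same arithmetic at work. In a simplified binary-choice simulation (the 60% individual accuracy and the assumption of independent voters are illustrative, not figures from the show), a majority vote is dramatically more reliable than any single voter:

```python
import numpy as np

rng = np.random.default_rng(2)
n_questions, audience_size, p_correct = 10_000, 100, 0.6

# Each audience member independently answers correctly 60% of the time.
votes = rng.random((n_questions, audience_size)) < p_correct

# "Ask the audience" is right whenever a majority of votes are correct.
majority_right = np.mean(votes.mean(axis=1) > 0.5)
print(f"individual: {p_correct:.0%}, audience majority: {majority_right:.0%}")
```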

Could brainsourcing be tech’s next great idea, helping to do everything from improving entertainment to training better A.I. to answering all manner of questions? It’s admittedly too early to say. But it’s definitely a term you’re going to hear a lot more about in the months, years, and decades to come.
