
‘Mind-reading’ A.I. produces a description of what you’re thinking about


Think that Google’s search algorithms are good at reading your mind? That’s nothing compared to a new artificial intelligence research project coming out of Japan, which can analyze a person’s brain scans and provide a written description of what they have been looking at.

To generate its captions, the artificial intelligence is given an fMRI brain scan taken while a person is looking at a picture. It then produces a written description of what it thinks the person was viewing. Examples of the level of detail it can offer include: “A dog is sitting on the floor in front of an open door” and “a group of people standing on the beach.” Both of those turned out to be absolutely accurate.

“We aim to understand how the brain represents information about the real world,” Ichiro Kobayashi, one of the researchers from Japan’s Ochanomizu University, told Digital Trends. “Toward such a goal, we demonstrated that our algorithm can model and read out perceptual contents in the form of sentences from human brain activity. To do this, we modified an existing network model that could generate sentences from images using a deep neural network, a model of the visual system, followed by an RNN (recurrent neural network), a model that can generate sentences. Specifically, using our dataset of movies and movie-evoked brain activity, we trained a new model that could infer activation patterns of the DNN from brain activity.”
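Kobayashi is describing a two-stage pipeline: first map measured brain activity onto the feature space of a deep neural network trained on images, then hand those features to a recurrent network that decodes them into a sentence. The sketch below is purely illustrative and is not the team’s code; the PyTorch modules, layer sizes, and vocabulary size are assumptions chosen only to show the shape of such a pipeline, and an untrained model like this would emit nonsense captions.

```python
import torch
import torch.nn as nn

# Illustrative stand-in dimensions (not taken from the study).
N_VOXELS = 5000      # fMRI voxels in a visual-cortex region of interest
FEAT_DIM = 512       # dimensionality of the DNN image-feature vector
VOCAB_SIZE = 1000    # size of the caption vocabulary
HIDDEN_DIM = 256     # RNN hidden-state size
MAX_WORDS = 15       # maximum caption length


class BrainToFeatures(nn.Module):
    """Maps an fMRI activity pattern to a DNN image-feature vector
    (a simple linear decoder, of the kind one could fit on
    movie-evoked brain activity)."""
    def __init__(self):
        super().__init__()
        self.decode = nn.Linear(N_VOXELS, FEAT_DIM)

    def forward(self, fmri):
        return self.decode(fmri)


class FeatureCaptioner(nn.Module):
    """Generates a word sequence from an image-feature vector with an RNN,
    in the spirit of standard image-captioning models."""
    def __init__(self):
        super().__init__()
        self.init_hidden = nn.Linear(FEAT_DIM, HIDDEN_DIM)
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN_DIM)
        self.rnn = nn.GRUCell(HIDDEN_DIM, HIDDEN_DIM)
        self.to_vocab = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, features, start_token=0):
        # Initialize the RNN state from the (decoded) image features.
        h = torch.tanh(self.init_hidden(features))
        word = torch.full((features.size(0),), start_token, dtype=torch.long)
        caption = []
        for _ in range(MAX_WORDS):
            h = self.rnn(self.embed(word), h)
            word = self.to_vocab(h).argmax(dim=-1)  # greedy word choice
            caption.append(word)
        return torch.stack(caption, dim=1)


# Untrained example: one synthetic "brain scan" in, a sequence of word IDs out.
fmri_scan = torch.randn(1, N_VOXELS)
features = BrainToFeatures()(fmri_scan)
print(FeatureCaptioner()(features))
```

In a trained version of this kind of setup, the brain-to-feature mapping would be fit by regressing recorded brain activity against the DNN’s responses to the same stimuli, while the captioning half could be trained separately on ordinary image-caption pairs.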

Before you get worried about some dystopian future in which this technology is used as a supercharged lie detector, though, Kobayashi points out that it is still a long way from real-world deployment. “So far, there are not any real-world applications for this,” Kobayashi continued. “However, in the future, this technology might be a quantitative basis of a brain-machine interface.”

As a next step, Shinji Nishimoto, another researcher on the project, told Digital Trends that the team wants to use it to better understand how the brain processes information.

“We would like to understand how the brain works under naturalistic conditions,” Nishimoto said. “Toward such a goal, we are planning to investigate how various forms of information — vision, semantics, languages, impressions, etcetera — are encoded in the brain by modeling the relationship between our experiences and brain activity. We also aim to investigate how multimodal information is related to achieve semantic activities in the brain. In particular, we will work on generating descriptions about what a person thinks.”
