Freaky new A.I. scans your brain, then generates faces you’ll find attractive

Brain-computer interface for generating personally attractive images

Imagine if some not-too-distant future version of Tinder was able to crawl inside your brain and extract the features you find most attractive in a potential mate, then scan the romance-seeking search space to seek out whichever partner possessed the highest number of these physical attributes.

We’re not just talking qualities like height and hair color, either, but a far more complex equation based on a dataset of everyone you’ve ever found attractive before. In the same way that the Spotify recommendation system learns the songs you enjoy and then suggests others that conform to a similar profile — based on features like danceability, energy, tempo, loudness, and speechiness — this hypothetical algorithm would do the same for matters of the heart. Or, at least, the loins. Call it physical attractiveness matchmaking by way of A.I.

To be clear, Tinder isn’t — as far as I’m aware — working on anything remotely like this. But researchers from the University of Helsinki and the University of Copenhagen are. And while that description might smack somewhat of a dystopian shallowness pitched midway between Black Mirror and Love Island, in reality their brain-reading research is pretty darn fascinating.

Searching the face space

In their recent experiment, the researchers used a generative adversarial network (GAN), trained on a database of 200,000 celebrity images, to dream up hundreds of fake faces. These were faces with some of the hallmarks of certain celebrities — a strong jawline here, a piercing set of azure eyes there — but which were not instantly recognizable as the celebrities in question.
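To picture the search space involved: a GAN’s generator maps points in a latent space to images, so “hundreds of fake faces” are really hundreds of sampled latent vectors. Below is a minimal NumPy sketch of that sampling step. The generator itself is omitted, and everything here except the 512-dimensional latent space the researchers describe is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 512  # size of the GAN's latent "face-space"

def sample_latents(n):
    """Draw n random latent vectors. In the real system, a trained
    generator network (omitted here) maps each vector to a face image."""
    return rng.standard_normal((n, LATENT_DIM))

def blend(z_a, z_b, t):
    """Linearly interpolate between two latent points; nearby points
    decode to faces with smoothly blended features."""
    return (1.0 - t) * z_a + t * z_b

candidates = sample_latents(240)  # a few hundred candidate faces
```

The interpolation helper hints at why this representation is useful: moving smoothly through the latent space produces smooth changes in facial features, which is what makes a targeted search possible.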

The images were then gathered into a slideshow shown to 30 participants, who were kitted out with electroencephalography (EEG) caps that read their brain activity via electrodes on the scalp. Each participant was asked to concentrate on whether they found the face on the screen good-looking. Each face appeared briefly before the next one took its place. Participants didn’t have to mark anything down on paper, press a button, or swipe right to indicate their approval. Just focusing on what they found attractive was enough.

“We showed a large selection of these faces to participants, and asked them to selectively concentrate on faces they found attractive,” Michiel Spapé, a postdoctoral researcher at the University of Helsinki, told Digital Trends. “By capturing the brain waves by EEG that occurred just after seeing a face, we estimated whether a face was seen as attractive or not. This information was then used to drive a search within the neural network model — a 512-dimensional ‘face-space’ — and triangulate a point that would match an individual participant’s point of attractivity.”

Finding the hidden data patterns that revealed preferences for certain features was achieved by using machine learning to probe the electrical brain activity each face provoked. Broadly speaking, the more of a certain kind of brain activity spotted (more on that in a second), the greater the level of attraction. Participants didn’t have to single out particular features as attractive. To return to the Spotify analogy: just as we might unconsciously gravitate toward songs with a particular time signature, the A.I. can measure brain activity across large numbers of images, let an algorithm figure out what the appealing ones have in common, and so single out parts of a face we might not even realize we’re drawn to. Machine learning is, in this context, a detective whose job is to connect the dots.
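As a rough illustration of that decoding step, the sketch below scores each post-stimulus EEG epoch by its mean amplitude in a fixed time window and thresholds it. This is a toy stand-in for the study’s actual machine-learning classifier; the sampling rate, window, and threshold are all assumed values:

```python
import numpy as np

FS = 250               # assumed EEG sampling rate, Hz
WINDOW = (0.25, 0.50)  # seconds after stimulus onset

def epoch_score(epoch):
    """Mean amplitude of one single-channel epoch (1-D array of samples)
    inside the post-stimulus window -- a crude stand-in for a trained
    classifier."""
    lo, hi = int(WINDOW[0] * FS), int(WINDOW[1] * FS)
    return epoch[lo:hi].mean()

def label_epochs(epochs, threshold=0.5):
    """True where an epoch's score exceeds the threshold, i.e. where the
    brain response suggests the face was found attractive."""
    return np.array([epoch_score(e) > threshold for e in epochs])
```

A real pipeline would learn the discriminating pattern from labeled training data rather than use a hand-picked window, but the shape of the problem — turning a burst of post-stimulus voltage into a yes/no attractiveness label — is the same.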

Swipe right brain

“It is not necessarily ‘increased brain activity,’ but rather that certain images resynchronize neural activity,” Spapé clarified. “That is, the living brain is always active. EEG is quite unlike [functional magnetic resonance imaging] in that we are not very sure where activity comes from, but only when it comes from something. Only because many neurons fire at the same time, in the same direction, are [we] able to pick up their [electrical] signature. So synchronization and desynchronization is what we pick up rather than ‘activity’ as such.”

He stressed that what the team has not done is to find a way to look at random EEG brain data and tell, immediately, if a person is looking at someone they find attractive. “Attraction is a very complex subject,” he said. Elsewhere, he noted that “we cannot do thought control.”

So how exactly have the researchers managed to carry out this experiment if they cannot guarantee that what they are measuring is attraction? The answer is that, in this scenario at least, they are measuring attraction. What the researchers see in this experimental setup is that, roughly 300 milliseconds after a participant sees an attractive image, their brain lights up with a particular electrical signal called a P300 wave. A P300 wave doesn’t always signify attraction; rather, it signals recognition of a relevant stimulus. What that stimulus is depends on what the person has been asked to look for. In other scenarios, where a person is asked to focus on different features, it might indicate something entirely different. (Case in point: the P300 response is used as a measure in lie detectors — and not to tell whether a person is telling the truth about their attraction to someone.)
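Because the P300 is small relative to the ongoing EEG, it is typically revealed by averaging many stimulus-locked trials: the random background activity cancels out while the time-locked deflection survives. Here is a toy simulation of that averaging; all amplitudes, rates, and trial counts are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
FS = 250       # assumed sampling rate, Hz
N_TRIALS = 60  # trials per condition

def simulate_trial(has_p300):
    """One second of noisy single-channel EEG; target trials carry a
    small positive deflection peaking around 300 ms (the P300)."""
    t = np.arange(FS) / FS
    noise = rng.standard_normal(FS)
    p300 = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2)) if has_p300 else 0.0
    return noise + p300

def grand_average(trials):
    """Averaging across trials cancels random noise and leaves the
    time-locked component."""
    return np.mean(trials, axis=0)

targets = grand_average([simulate_trial(True) for _ in range(N_TRIALS)])
nontargets = grand_average([simulate_trial(False) for _ in range(N_TRIALS)])
```

Plotting `targets` against `nontargets` would show the averaged target waveform rising clearly around the 300-millisecond mark while the non-target average stays near zero.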

NeuroTinder and beyond

In this study, the researchers then used this attraction data to have the generative adversarial network generate new customized faces combining the most brain-sparking traits — a Frankenstein assembly of facial features participants’ brain data had indicated they find personally attractive.
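One simple way to picture that final step: average the latent vectors of the faces a participant’s brain responses flagged as attractive, then decode that mean point back into an image. This is an illustrative sketch, not the paper’s exact search procedure, and the labels below are random stand-ins for the EEG-derived ones:

```python
import numpy as np

rng = np.random.default_rng(2)
LATENT_DIM = 512  # the researchers' 512-dimensional "face-space"

latents = rng.standard_normal((200, LATENT_DIM))  # candidate faces shown
attractive = rng.random(200) > 0.7                # stand-in EEG-derived labels

def attraction_point(latents, labels):
    """Average the latent vectors of attractive-labelled faces to estimate
    one participant's point of maximum appeal; decoding this point with
    the generator would yield a new, personalised face."""
    return latents[labels].mean(axis=0)

z_star = attraction_point(latents, attractive)
```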

“While there may be some facial features that seem to be generally preferred across participants, as some generated faces in our experiments look similar to each other, the model really captures personal features,” Tuukka Ruotsalo, an associate professor at the University of Helsinki, told Digital Trends. “There are differences in all generated images. In the most trivial aspect, participants with different gender preferences get faces matching that preference.”

Generating attractive people who have never existed is certainly a headline-grabbing use of this technology. However, it could have other, more meaningful applications, too. The interaction between a generative artificial neural network and human brain responses could also be used to test out human responses to different phenomena present in data.

“This could help us to understand the kind of features and their combinations that respond to cognitive functions, such as biases, stereotypes, but also preferences and individual differences,” said Ruotsalo.

A paper describing the work was recently published in the journal IEEE Transactions on Affective Computing.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…