
Machines are getting freakishly good at recognizing human emotions

Until very recently, we’ve had to interact with computers on their own terms. To use them, humans had to learn inputs designed to be understood by the computer, whether that meant typing commands or clicking icons with a mouse. But things are changing. The rise of A.I. voice assistants like Siri and Alexa makes it possible for machines to understand humans the way we ordinarily communicate in the real world. Now researchers are reaching for the next Holy Grail: computers that can understand emotions.

Whether it’s Arnold Schwarzenegger’s T-800 robot in Terminator 2 or Data, the android character in Star Trek: The Next Generation, the inability of machines to understand and properly respond to human emotions has long been a common sci-fi trope. However, real-world research shows that machine learning algorithms are getting impressively good at recognizing the bodily cues we use to hint at how we’re feeling inside. And it could lead to a whole new frontier of human-machine interactions.


Don’t get us wrong: Machines aren’t yet as astute as your average human when it comes to recognizing the various ways we express emotions. But they’re getting a whole lot better. In a recent test carried out by researchers at Dublin City University, University College London, the University of Bremen, and Queen’s University Belfast, both people and machine learning algorithms were tasked with recognizing an assortment of emotions from human facial expressions.

The emotions included happiness, sadness, anger, surprise, fear, and disgust. While humans still outperformed machines overall (with an average accuracy of 73%, compared to 49% to 62% depending on the algorithm), the scores racked up by the various algorithms showed how far they have come. Most impressively, happiness and sadness were two emotions the machines identified more accurately than the humans did, simply by looking at faces. That’s a significant milestone.

Emotions matter

Researchers have long been interested in whether machines can identify emotion from still images or video footage. But it is only relatively recently that a number of startups have sprung up to take this technology mainstream. The recent study tested commercial machine learning classifiers for facial emotion recognition developed by Affectiva, CrowdEmotion, FaceVideo, Emotient, Microsoft, MorphCast, Neurodatalab, VicarVision, and VisageTechnologies. All are leaders in the growing field of affective computing, a.k.a. teaching computers to recognize emotions.

The test was carried out on 938 videos, including both posed and spontaneous emotional displays. With six emotion types to choose from, a random guess by an algorithm would be correct around 17% of the time (one chance in six).
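For context, that baseline is just the arithmetic of guessing uniformly among six labels. Here is a minimal sketch (plain Python, not code from the study; the accuracy figures are the rounded ones quoted above) comparing chance with the reported human and machine scores:

```python
# Chance baseline for picking one of six emotion labels at random,
# compared with the rounded accuracies reported in the article.
emotions = ["happiness", "sadness", "anger", "surprise", "fear", "disgust"]

chance = 1 / len(emotions)            # ~0.167, i.e. roughly 17%
human_accuracy = 0.73                 # average human score in the study
machine_accuracy = (0.49, 0.62)       # range across the tested classifiers

print(f"Chance baseline: {chance:.0%}")
print(f"Humans beat chance by {human_accuracy - chance:.0%} points")
print(f"Machines beat chance by {machine_accuracy[0] - chance:.0%} "
      f"to {machine_accuracy[1] - chance:.0%} points")
```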

Damien Dupré, an assistant professor at Dublin City University’s DCU Business School, told Digital Trends that the work is important because it comes at a time when emotion recognition technology is becoming increasingly relied upon.

“Since machine learning systems are becoming easier to develop, a lot of companies are now providing systems for other companies: mainly marketing and automotive companies,” Dupré said. “Whereas [making] a mistake in emotion recognition for academic research is, most of the time, harmless, stakes are different when implanting an emotion recognition system in a self-driving car, for example. Therefore we wanted to compare the results of different systems.”


The idea of controlling a car using emotion-driven facial recognition sounds, frankly, terrifying, especially if you’re the kind of person prone to emotional outbursts on the road. Fortunately, that’s not exactly how it’s being used. For instance, emotion recognition company Affectiva has explored the use of in-car cameras to identify emotion in drivers. The technology could one day be used to spot things like drowsiness or road rage, which could prompt a semi-autonomous car to take the wheel if a driver is deemed unfit to drive.

Researchers at the University of Texas at Austin, meanwhile, have developed technology that curates an “ultra-personal” music playlist that adapts to each user’s changing moods. A paper describing the work, titled “The Right Music at the Right Time: Adaptive Personalized Playlists Based on Sequence Modeling,” was published this month in the journal MIS Quarterly. It describes using emotion analysis that predicts not just which songs will appeal to users based on their mood, but the best order in which to play them, too.


There are other potential applications for emotion recognition technology, too. Amazon, for instance, has recently begun to incorporate emotion-tracking of voices for its Alexa assistant, allowing the A.I. to recognize when a user is showing frustration. Further down the line, there’s the possibility this could even lead to full-on emotionally responsive artificial agents, like the one in Spike Jonze’s 2013 movie Her.

The recent comparison study focused on emotion recognition from images. However, as some of these examples show, there are other ways that machines can “sniff out” the right emotion at the right time.


“People are generating a lot of non-verbal and physiological data at any given moment,” said George Pliev, founder and managing partner at Neurodata Lab, one of the companies whose algorithms were tested for the facial recognition study. “Apart from the facial expressions, there are voice, speech, body movements, heart rate, and respiration rate. A multimodal approach states that behavioral data should be extracted from different channels and analyzed simultaneously. The data coming from one channel will verify and balance the data received from the other ones. For example, when facial information is for some reason unavailable, we can analyze the vocal intonations or look at the gestures.”
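Pliev’s description maps onto what is often called late fusion: each channel produces its own emotion estimate, and the estimates are combined, with unavailable or unreliable channels weighted down. The sketch below is purely illustrative; the channel names, scores, and weights are invented and this is not Neurodata Lab’s pipeline.

```python
# Illustrative late-fusion sketch: combine per-channel emotion scores,
# down-weighting channels that are missing (e.g. the face is occluded,
# so voice and gesture carry more of the decision).
from typing import Dict, Optional

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "disgust"]

def fuse(channels: Dict[str, Optional[Dict[str, float]]],
         weights: Dict[str, float]) -> Dict[str, float]:
    """Weighted average of per-channel probability scores.

    channels: channel name -> {emotion: probability}, or None if unavailable.
    weights:  channel name -> relative trust in that channel.
    """
    fused = {e: 0.0 for e in EMOTIONS}
    total = 0.0
    for name, scores in channels.items():
        if scores is None:          # channel unavailable (e.g. face hidden)
            continue
        w = weights.get(name, 1.0)
        total += w
        for e in EMOTIONS:
            fused[e] += w * scores.get(e, 0.0)
    return {e: v / total for e, v in fused.items()} if total else fused

# Example: the face channel is unavailable, so voice and gesture decide.
channels = {
    "face": None,
    "voice": {"anger": 0.6, "sadness": 0.2, "happiness": 0.1},
    "gesture": {"anger": 0.5, "surprise": 0.3},
}
weights = {"face": 0.5, "voice": 0.3, "gesture": 0.2}
print(max(fuse(channels, weights).items(), key=lambda kv: kv[1]))  # ('anger', 0.56)
```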

Challenges ahead?

However, there are challenges, as all involved agree. Emotions are not always easy to identify, even for the people experiencing them.

“If you wish to teach A.I. how to detect cars, faces or emotions, you should first ask people what do these objects look like,” Pliev continued. “Their responses will represent the ground truth. When it comes to identifying cars or faces, almost 100% of people asked would be consistent in their replies. But when it comes to emotions, things are not that simple. Emotional expressions have many nuances and depend on context: cultural background, individual differences, the particular situations where emotions are expressed. For one person, a particular facial expression would mean one thing, while another person may consider it differently.”
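One way to see Pliev’s point concretely is to compare how often annotators agree when labeling objects versus emotions. The toy sketch below uses invented labels (not data from the study or any of the companies) and simply computes the share of items on which two annotators pick the same label.

```python
# Toy illustration of inter-annotator agreement: near-total agreement on
# object labels, much lower agreement on emotion labels for the same clips.
def agreement(labels_a, labels_b):
    """Fraction of items on which two annotators chose the same label."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Labeling objects: annotators rarely disagree about what a car looks like.
objects_a = ["car", "face", "car", "face", "car"]
objects_b = ["car", "face", "car", "face", "car"]

# Labeling emotions: the same expression reads differently to different people.
emotions_a = ["anger", "sadness", "surprise", "fear", "disgust"]
emotions_b = ["disgust", "sadness", "fear", "surprise", "disgust"]

print(f"Object agreement:  {agreement(objects_a, objects_b):.0%}")   # 100%
print(f"Emotion agreement: {agreement(emotions_a, emotions_b):.0%}")  # 40%
```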

Dupré agrees with the sentiment. “Can these systems [be guaranteed] to recognize the emotion actually felt by someone?” he said. “The answer is not at all, and they will never be! They are only recognizing the emotion that people are deciding to express — and most of the time that doesn’t correspond to the emotion felt. So the take-away message is that [machines] will never read … your own emotion.”

Still, that doesn’t mean the technology isn’t going to be useful, or that it won’t become a big part of our lives in the years to come. And even Dupré leaves slight wiggle room on his own prediction that machines will never read our true emotions: “Well, never say never,” he noted.

The research paper, “Emotion recognition in humans and machine using posed and spontaneous facial expression,” is available to read online here.
