“The eyes … they never lie,” said noted philosopher Tony Montana in the gangster movie Scarface. Montana chose the drug-dealing and murdering route, but had he been born 30 years later, he could probably have had a promising career as a computer interface designer. At least, that’s the message we’re choosing to take away from a new project created by researchers in Australia and Germany. They developed an artificial intelligence that can predict a person’s personality type by looking into their eyes.
“Several previous works suggested that the way in which we move our eyes is modulated by who we are — by our personality,” Andreas Bulling, a professor from Germany’s Max Planck Institute for Informatics, told Digital Trends. “For example, studies reporting relationships between personality traits and eye movements suggest that people with similar traits tend to move their eyes in similar ways. Optimists, for example, spend less time inspecting negative emotional stimuli — [such as] skin cancer images — than pessimists. Individuals high in openness spend a longer time fixating and dwelling on locations when watching abstract animations.”
These insights are interesting, but the challenge for the researchers was figuring out how to turn such observations into an artificial intelligence system. To do so, they turned to deep learning for help.
The researchers asked 42 students to wear an off-the-shelf head-mounted eye tracker as they ran errands. They also had the students’ personality types assessed using established self-report questionnaires. With both the input (the eye data) and the output (personality types) gathered, the A.I. could then learn the correlations linking the two.
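To make the setup concrete, here is a deliberately simplified sketch of that kind of supervised pipeline. The feature names, the numbers, and the nearest-neighbour rule are all invented for illustration; the actual study used its own machine-learning model and far richer eye-movement features.

```python
import math

# Hypothetical sketch: each participant is summarized by a couple of
# eye-movement features (e.g. mean fixation duration in ms, mean dwell
# time in ms), paired with a questionnaire-derived trait label. A simple
# nearest-neighbour rule stands in for the study's real model.

def predict_trait(train_data, features):
    """Return the trait label of the closest training participant."""
    nearest = min(train_data, key=lambda row: math.dist(row[0], features))
    return nearest[1]

# Invented example data: (features, trait label) per participant.
train = [
    ((180.0, 350.0), "high_openness"),  # longer fixations and dwelling
    ((190.0, 340.0), "high_openness"),
    ((110.0, 200.0), "low_openness"),   # shorter fixations and dwelling
    ((105.0, 210.0), "low_openness"),
]

# A new participant with long fixations lands near the "high_openness" group.
print(predict_trait(train, (175.0, 330.0)))  # → high_openness
```

The point of the sketch is only the shape of the problem: measurable eye behaviour on one side, questionnaire-derived traits on the other, and a model that maps between them.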
“We found that we were able to reliably predict four of the big five personality traits — neuroticism, extraversion, agreeableness, conscientiousness — as well as perceptual curiosity only from eye movements,” Bulling continued.
While there are definitely potential ethical dilemmas involved (imagine what companies like the now-defunct Cambridge Analytica might have been able to do with this information), Bulling noted that there are plenty of positive applications, too.
“Robots and computers are currently socially ignorant and don’t adapt to the person’s non-verbal signals,” Bulling said. “When we talk, we see and react if the other person looks confused, angry, disinterested, distracted, and so on. Interactions with robots and computers will become more natural and efficacious if they were to adapt their interactions based on a person’s non-verbal signals.”
A paper describing the work was recently published in the journal Frontiers in Human Neuroscience.