Fascinating new study explores the complex psychology of why humans feel empathy toward robots


Would you lie to a robot to avoid hurting its feelings?

If you’re like one participant in a recent study, carried out by researchers from the University of Bristol and University College London, the answer is a resounding “yes.” In the experiment, a humanoid robot called BERT attempted to help a human make an omelette by passing them the necessary ingredients, such as eggs, oil and salt.

Over a series of tests, the human participants contended with a non-communicative version of the bot, as well as a more accident-prone version which made plenty of mistakes, but tried to make up for it by profusely apologizing — and even changed its facial expression to look happy, sad or shocked depending on events. At the end of the study, participants were asked whether or not they would give each robot the full-time role of kitchen assistant.

The result? Most people would rather work with a less efficient but more expressive robot than with one that performs perfectly but appears to have no social skills. Oh, and we’re suckers for a robot apology and a sad face.

“I wasn’t ready for how much people wanted to interact with BERT,” lead author Adriana Hamacher, who carried out the work as part of her MSc in Human Computer Interaction at UCL, told Digital Trends. “People were really, really keen. When the robot was dropping eggs, they were trying to help it. There were certain participants who couldn’t stop giving him advice or encouragement. What was really disconcerting, though — and something I really wasn’t prepared for — was when it asked people whether it had got the job. That really, really made people uncomfortable because they couldn’t qualify their answer. They had to say ‘yes’ or ‘no.’”

The fact that humans are susceptible to communicative machines is no great shock. In the 1960s, MIT computer scientist Joseph Weizenbaum programmed a basic “chatbot” parody of a psychotherapist. Although ELIZA simply repeated users’ statements back to them in the form of questions, Weizenbaum was troubled by how much it prompted its subjects to reveal about themselves. More recently, we’ve seen other instances of how humans can become attached to communicative machines, particularly when recognizable human features are thrown into the mix.
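The reflection trick at the heart of ELIZA is simple enough to sketch in a few lines. This is a hypothetical toy version, not Weizenbaum’s original DOCTOR script: it matches a first-person statement against a small rule list, swaps first-person words for second-person ones, and echoes the result back as a question.

```python
import re

# First-person words to swap for second-person ones when echoing back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Illustrative pattern/response rules (not ELIZA's actual script).
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap pronouns so the echoed fragment reads naturally as a question.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    # Return the first rule's response whose pattern matches the statement.
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # generic fallback, another classic ELIZA move

print(respond("I feel anxious about my job"))
# -> Why do you feel anxious about your job?
```

There is no understanding anywhere in this loop, which is exactly what unsettled Weizenbaum: pattern-matching and pronoun-swapping alone were enough to make people open up.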

“There has to be a lot of care taken with the design of expressive machines. There is a big danger of causing distress among people”

For example, in the late 1990s and early 2000s, Furbies experienced an enormous boom in popularity. Furry, owl-like robots, Furbies could play games and “speak” to their owners in the fictitious language of “Furbish.” Over time, however, the toys began to “learn” English by replacing some of their gibberish Furbish words with English ones they had supposedly picked up from their owners. In reality, they were doing no such thing, but the effect was striking.
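The trick, as widely reported, was a fixed unlock schedule rather than any real learning: English words simply replaced their Furbish counterparts once the toy had been played with enough. A minimal sketch of that idea (the vocabulary and thresholds here are made up for illustration, not taken from the actual firmware):

```python
# Each entry: (Furbish word, English replacement, interactions needed
# before the English word is "learned"). Values are illustrative only.
VOCAB = [
    ("kah", "me", 10),
    ("u-nye", "you", 25),
    ("may-may", "love", 50),
]

def speak(interaction_count: int) -> list:
    # A word flips from Furbish to English once the play counter passes
    # its threshold -- the owner's actual speech is never consulted.
    return [eng if interaction_count >= threshold else fur
            for fur, eng, threshold in VOCAB]

print(speak(5))   # fresh out of the box: all Furbish
print(speak(30))  # after 30 interactions: two words "learned"
```

The owner sees a toy that appears to pick up their language; the code sees only a counter.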

The question of why we model robots after ourselves doesn’t seem a particularly complex one. On both an emotional and intellectual level, we have always attempted to build robots modeled after ourselves in the same way that logical, chess-loving AI pioneers assumed artificial intelligence would involve logical chess-playing. It’s a way to mirror ourselves in a machine.

The lure of building artificial people dates back at least as far as Ancient Greece, and the myth of Pygmalion: a Cypriot sculptor who carves a woman out of ivory, falls in love with her, and eventually sees her come to life. In ancient China, meanwhile, there is an account of a mechanical engineer known as Yan Shi, who presents ruler King Mu of Zhou with a life-size, humanoid mechanical figure.

That same impulse has imbued various robotics projects over the years, such as SRI International’s faintly anthropomorphized “Shakey the Robot” in the late 1960s, the first general-purpose mobile robot able to reason about its own actions.

In the same way that AI has changed, though, so too has robotics. Many of today’s most promising robots don’t attempt to emulate any form of human locomotion, while popular industrial robots also stray far from approximating humanoid forms. But this doesn’t mean humanoid robots are a thing of the past. Far from it, in fact.


The point is that, increasingly, the decision to build human-like robots exists as a clear choice, rather than simply the logical way to attack particular problems of movement or fine-motor skills. For instance, robots designed to act as therapists would do well to look like people in order to provoke openness and convey empathy. In terms of another type of, err, emotional connection, earlier this year Californian researchers reported that touching a robot’s buttocks or groin area produces arousal in human test subjects — suggesting that possible “sexbots” will likely remain recognizably humanoid.

But don’t take these decisions lightly.

As Adriana Hamacher told us: “There has to be a lot of care taken with the design of expressive, communicative machines. There is a big danger of causing distress among people.” Problems like the “uncanny valley” effect (where robots appear disarmingly lifelike, yet still not quite real enough) will likely stop designers from approximating humans too closely. So too will findings like the one from the recent International Journal of Social Robotics paper, “Blurring Human–Machine Distinctions: Anthropomorphic Appearance in Social Robots as a Threat to Human Distinctiveness.”

However, studies like Hamacher’s show that — no matter our fears concerning this area — there is inherent value in a robot that behaves in a way we can at least recognize.

“The most exciting thing for me is that it drives home how well people respond to robots that react and behave as human beings do,” Hamacher said. “Because we’re still a long way from having robots which work efficiently it’s a fantastic method of easing teething problems along the way. Robots with personalities could also be used for other tasks, such as teaching, reminding people to take their medication, and improving working conditions.”

In other words, humanoid robots — like the ones which have populated our sci-fi dreams for decades — aren’t going away any time soon…
