
Why human-robot relationships are totally a good thing

Make no mistake about it: here in 2017, relationships between humans and robots are a very real thing. Given the amount of time we spend with our devices, that’s no great surprise, either.

While the most attention-grabbing may be the plethora of stories about sex robots, there are plenty of other ways we’re interacting with machines that would have been unthinkable at any other point in history. Far from being a weird tech niche (here’s looking at you, Aibo), developing relationships with our robots is going to be essential not only to leading longer, happier lives, but also to building smarter machines that draw the best out of both us and our electronics.

And it’s a challenge that robotics and AI researchers are already busy solving.

The lovability of robots

For starters, robots are made to be loved. That might sound trite, but there’s something fundamentally lovable about robots, and about software that attempts to approximate human intelligence, which is, after all, what much of AI sets out to do.

If you saw the footage of Boston Dynamics’ Spot robot being kicked to test its balancing capabilities, there’s a good chance you felt sympathy for it. That’s because we tend to anthropomorphize and project emotions onto the more lifelike machines around us — in a way we’d never do with a regular desktop computer.

Watch robot dog 'Spot' run, walk...and get kicked

This effect was widely observed in the 1990s, when Tamagotchis and Furbies became the “must have” toys of the season. In one notable anecdote, an airplane passenger disembarked vowing never to fly that airline again: a flight attendant had told her to turn off her Tamagotchi for takeoff, and she knew that doing so would reset the device and thus “kill” her creature.

The effect isn’t just found in people who get strangely attached to toys, either. “U.S. soldiers have been known to form close bonds with their bomb-disposal robots, even though these robots are not particularly human-like in their features,” John Danaher, co-editor of the new MIT Press book Robot Sex: Social and Ethical Implications, told Digital Trends.

New ways to build robots

This opens up entirely new opportunities when it comes to deploying machines: applications that simply weren’t available at any other point in history. One example of how the emotional component of a robot relationship can be harnessed is caregiving.


While robots are never going to be an adequate replacement for emotional human relationships or contact, there are scenarios where they can play an invaluable role. For example, with a limited number of caregivers available and a growing elderly population, robots or chatbots with natural language processing (NLP) abilities could prove excellent companions for older people.

They can remind folks to take their medication, help people with degenerative neurological disorders like Alzheimer’s by playing memory games, and even provide other forms of reassuring comfort.

The most famous therapeutic robot, more a pet than an approximation of human company, is Paro, a robotic seal designed with the elderly market in mind. Paro can make eye contact by sensing the direction of a user’s voice, has a limited vocabulary of words for “understanding” people, and fine-tunes its behavior depending on how it is handled: stroke it softly or more forcefully and its responses shift to mirror the user’s touch. By appearing to empathize with its users in this way, it provides them with comfort.
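
To make the mirroring idea concrete, here is a minimal, purely illustrative sketch of how a companion robot might adapt to touch. Paro’s actual control software is proprietary, so the thresholds and behaviors below are assumptions, not a description of the real device.

```python
# Illustrative sketch only; Paro's real software is proprietary.
# A hypothetical companion robot smooths its recent touch readings and
# mirrors them back as calmer or more agitated behavior.
from collections import deque

class TouchMirror:
    def __init__(self, window: int = 10):
        # Recent stroke intensities, from 0.0 (very soft) to 1.0 (forceful).
        self.recent = deque(maxlen=window)

    def register_stroke(self, intensity: float) -> None:
        self.recent.append(max(0.0, min(1.0, intensity)))

    def behavior(self) -> str:
        if not self.recent:
            return "idle"
        avg = sum(self.recent) / len(self.recent)
        # Mirror the user's handling: gentle strokes get gentle responses,
        # rough handling gets a more agitated one.
        if avg < 0.3:
            return "purr softly and close eyes"
        if avg < 0.7:
            return "turn head toward the voice and blink"
        return "squirm and cry out"

robot = TouchMirror()
for reading in (0.2, 0.25, 0.1):
    robot.register_stroke(reading)
print(robot.behavior())  # -> "purr softly and close eyes"
```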

Intuition Robotics’ ElliQ, meanwhile, is another robot designed with older users in mind. A bit like a cross between an Amazon Echo and a robotic version of Pixar’s animated desk-lamp mascot, ElliQ uses AI and machine learning to help its users look after their physical and mental health, as well as offering companionship.

ELLIQ - The active aging companion

“Given the general suspicions held by older members of the population surrounding AI, ElliQ is proactive rather than reactive,” Joe Lobo, a robot expert at AI and natural language processing company Inbenta, told Digital Trends. “It will learn the preferences and personality of the person to recommend relevant activities such as going for a walk, playing games or even calling members of the family. It will also remember important daily routines such as when to take medication or when there are upcoming hospital appointments.”
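
Lobo’s description boils down to two familiar ingredients: a preference model updated from feedback, and a schedule of daily routines. The sketch below is not Intuition Robotics’ code, just a stripped-down, assumed illustration of how a proactive companion could combine the two.

```python
# Hypothetical illustration, not ElliQ's actual implementation.
# A proactive companion: scores activities by how often suggestions are
# accepted, and keeps a simple schedule of daily routines.
import datetime

class Companion:
    def __init__(self, activities):
        self.scores = {a: 0.0 for a in activities}  # learned preference per activity
        self.reminders = []                          # (time, message) pairs

    def suggest(self) -> str:
        # Proactively suggest the activity this user has liked most so far.
        return max(self.scores, key=self.scores.get)

    def feedback(self, activity: str, accepted: bool) -> None:
        # Nudge the preference up or down based on the response.
        self.scores[activity] += 1.0 if accepted else -0.5

    def add_reminder(self, at: datetime.time, message: str) -> None:
        self.reminders.append((at, message))

    def due_reminders(self, now: datetime.time):
        return [msg for at, msg in self.reminders if at <= now]

elliq = Companion(["go for a walk", "play a game", "call family"])
elliq.feedback("call family", accepted=True)
elliq.add_reminder(datetime.time(9, 0), "Take morning medication")
print(elliq.suggest())                            # -> "call family"
print(elliq.due_reminders(datetime.time(9, 30)))  # -> ["Take morning medication"]
```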

Helping teach social skills

In addition to the elderly population, robots could also prove invaluable communicative tools for people with autism. It’s easy to be cynical about the reductive relationship that a robot or AI can have with a person, but sometimes that is exactly what is needed.

Robots could also prove invaluable communicative tools for people with autism.

In 2014, three years after Apple’s AI assistant Siri debuted on the iPhone 4s, the New York Times published a story titled “To Siri, With Love,” describing how journalist Judith Newman’s 13-year-old autistic son had forged a relationship with Siri, which helped him develop his communication skills in the real world. Siri, Newman writes, is “wonderful for someone who doesn’t pick up on social cues: [the] responses are not entirely predictable, but they are predictably kind.” Is an AI that answers everything in a “predictably kind” way a realistic portrayal of human relationships? No way. Is it better in this case? You bet!

A few years on from 2014, there are now a number of companies focused on building AI tools that help people with autism develop real-world communication skills. One company leading the charge is the Boston-based emotion-tracking firm Affectiva. Unlike Siri, which recognizes only words, Affectiva’s tools can read facial expressions as well, sometimes to charming effect: its technology has powered a video display that dispenses free chocolate samples when you smile at the screen.

Recently, Affectiva’s technology has also been used in a smart-glasses “life coach” that gamifies social interaction: the glasses help identify the emotions of the people the wearer is talking to and prompt conversations with autistic users about what those emotions might mean.

The Hershey Smile Sampler
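
Affectiva’s commercial SDK relies on trained deep-learning models, but the basic loop behind a display like that one, detect a face, check for a smile, trigger an action, can be approximated with OpenCV’s bundled Haar cascades. The snippet below is a rough stand-in under that assumption, not Affectiva’s or Hershey’s actual code.

```python
# Rough stand-in for a smile-triggered display, using OpenCV's bundled Haar
# cascades rather than Affectiva's proprietary deep-learning SDK.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

def dispense_chocolate():
    print("Smile detected: dispensing a sample!")  # stand-in for driving real hardware

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        # A high minNeighbors value keeps the smile detector from firing on noise.
        if len(smile_cascade.detectMultiScale(roi, 1.7, 22)) > 0:
            dispense_chocolate()
    if cv2.waitKey(100) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
```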

As chatbots and AI systems become smarter and more deeply embedded in our lives, the number of use cases will explode. By forming bonds with these robots and AI systems, we can carry out tasks more effectively while getting the same boost we get from speaking with a like-minded person. Just as Paro mirrors its users, AI assistants could learn our speech patterns, work out which phrasings prompt the best responses from us, and adopt them.
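
One way such a system might work, and this is an assumption rather than a description of any shipping assistant, is as a simple bandit problem: try different phrasing styles, track which ones a given user responds well to, and gradually favor those.

```python
# Hypothetical sketch: an assistant picks among phrasing styles and learns
# which one a particular user responds to best (an epsilon-greedy bandit).
import random

class PhrasingLearner:
    def __init__(self, styles, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.rewards = {s: [] for s in styles}  # feedback history per style

    def pick(self) -> str:
        if random.random() < self.epsilon or not any(self.rewards.values()):
            return random.choice(list(self.rewards))  # explore a style at random
        # Exploit: choose the style with the best average feedback so far.
        return max(self.rewards,
                   key=lambda s: sum(self.rewards[s]) / max(len(self.rewards[s]), 1))

    def feedback(self, style: str, score: float) -> None:
        self.rewards[style].append(score)

learner = PhrasingLearner(["brief and direct", "warm and chatty", "question-led"])
style = learner.pick()
learner.feedback(style, score=1.0)  # e.g. the user replied quickly and positively
```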

With AI increasingly used as a teaching aid, it may even be possible to introduce an AI assistant early in a person’s life and have it grow and evolve with them — while transferring its personality from machine to machine, much as Siri is available on the iPhone, Mac, or Apple TV. The possibilities are endless.

The importance of reciprocity

So here’s one more billion-dollar question: do robots need to be able to reciprocate for us to consider the relationships real? It’s a difficult one. If we found out that our partner was a Truman Show-style actor, paid to behave as though they were in love with us, it would be deeply traumatizing. The same would be true of our friends.

But does it have to be true with robots? And does it matter?

Do robots have to act like they love us for us to consider the relationship real?

It’s a question that has been examined by people working in artificial intelligence for decades — and not just within the confines of science fiction stories. In the 1960s, researchers at MIT developed a computer psychotherapist named ELIZA, which was designed to carry out seemingly intelligent text-based conversations with users. By echoing fragments of a user’s language in a way that either seemed to support or question their statements, ELIZA was able to act like, well, a real psychotherapist.
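
ELIZA’s trick was largely pattern matching and pronoun reflection, and a few lines of Python capture the flavor. Weizenbaum’s original was written in MAD-SLIP and was considerably more elaborate, so treat this only as a taste of the technique.

```python
# A taste of the ELIZA technique: match a pattern, reflect the pronouns,
# and hand the user's own words back as a question.
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

RULES = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i am (.*)", "How long have you been {}?"),
    (r"my (.*)", "Tell me more about your {}."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, text.strip(), re.IGNORECASE)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."

print(respond("I feel ignored by my family"))
# -> "Why do you feel ignored by your family?"
```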

So far, so good. But Joseph Weizenbaum, the researcher behind ELIZA, had an experience that’s not usually shared by people developing new technology: ELIZA worked too well. Although it had no actual “understanding” of what users were discussing, Weizenbaum was disturbed by the fact that it prompted people to reveal intimate details of their lives, including relationship difficulties or complex life issues. For Weizenbaum, there was something ethically troubling about it.

Not everyone thinks this way. The Turing Test posits that, if a machine acts in an intelligent way, we should attribute intelligence to it. Is the same thing true with a robot relationship?


“I’m not entirely sure what reciprocity or mutuality really is and whether it is off-limits for machines,” said John Danaher. “I’m guessing people think of it as some inner mental state. But that’s problematic when it comes to human relationships. We never really know what another human being thinks of us. We only have their behavior to go on. If they consistently act as if they love us, we think there is mutual affection. I don’t see why robots couldn’t be behaviorally indistinguishable from human partners. It would require some pretty sophisticated technology – far more sophisticated than we currently have – but I don’t think it is impossible.”

On a technical level, it would be fascinating to see whether we could ever develop a machine that cares for us in the same way we might care for it. But that isn’t necessary in order to create a meaningful relationship that can enrich our lives.

But will it ever be quite the same as a human relationship?
