I, Alexa: Should we give artificial intelligence human rights?

(Image: Sophia the robot. Credit: Hanson Robotics)

A few years ago, the subject of AI personhood and legal rights for artificial intelligence would have been something straight out of science fiction. In fact, it was.

Douglas Adams’ second Hitchhiker’s Guide to the Galaxy book, The Restaurant at the End of the Universe, tells the story of a futuristic smart elevator called the Sirius Cybernetics Corporation Happy Vertical People Transporter. This artificially intelligent elevator works by predicting the future, so it can appear on the right floor to pick you up even before you know you want to get on — thereby “eliminating all the tedious chatting, relaxing, and making friends that people were previously forced to do whilst waiting for elevators.”

The ethics question, Adams explains, arises when the intelligent elevator becomes bored with going up and down all day and instead decides to experiment with moving from side to side as a “sort of existential protest.”

We don’t yet have smart elevators, although judging by the kind of lavish headquarters tech giants like Google and Apple build for themselves, that may just be because they’ve not bothered sharing them with us yet. As we’ve documented time and again at Digital Trends, the field of AI is currently making possible things we never thought realistic in the past — such as self-driving cars or Star Trek-style universal translators.

Have we also reached the point where we need to think about rights for AIs?

You’ve gotta fight for your right to AI

It’s pretty clear to everyone that artificial intelligence is getting closer to replicating the human brain inside a machine. At a crude level of resolution, we currently have artificial neural networks with more neurons than creatures like honey bees and cockroaches — and they’re getting bigger all the time.


Higher up the food chain are large-scale projects aimed at creating more biofidelic algorithms — ones designed to replicate the workings of the human brain, rather than simply being inspired by the way we lay down memories. Then there are projects designed to upload consciousness into machine form, or efforts like the “OpenWorm” project, which sets out to recreate the connectome — the wiring diagram of the central nervous system — of the tiny hermaphroditic roundworm Caenorhabditis elegans, the only living creature whose connectome humanity has fully mapped.

In a 2016 survey of 175 industry experts, the median expert expected human-level artificial intelligence by 2040, and 90 percent expected it by 2075.

Before we reach that goal, as AI surpasses animal intelligence, we’ll have to consider whether AIs deserve the kinds of “rights” we afford animals through ethical treatment. Thinking it cruel to force a smart elevator to move up and down may not turn out to be so far-fetched; a few years back, English technology writer Bill Thompson wrote that any attempt to develop AI coded not to hurt us “reflects our belief that an artificial intelligence is and always must be at the service of humanity rather than being an autonomous mind.”


The most immediate question we face, however, concerns the legal rights of an AI agent. Simply put, should we consider granting them some form of personhood?

This is not as ridiculous as it sounds, nor does it suggest that AIs have “graduated” to a particular status in our society. Instead, it reflects the complex reality of the role that they play — and will continue to play — in our lives.

Smart tools in an age of non-smart laws

At present, our legal system largely assumes that we are dealing with a world full of non-smart tools. We may talk about the importance of gun control, but we still hold a person who shoots someone with a gun responsible for the crime, rather than the gun itself. If the gun explodes on its own as the result of a faulty part, we blame the company which made the gun for the damage caused.

So far, this thinking has largely been extrapolated to cover the world of artificial intelligence and robotics. In 1984, the owners of a U.S. company called Athlone Industries wound up in court after their robotic pitching machines for batting practice turned out to be a little too vicious. The case is memorable chiefly because of the judge’s proclamation that the suit be brought against Athlone rather than the batting bot, because “robots cannot be sued.”

This argument held up in 2009, when a U.K. driver was directed by his GPS system to drive along a narrow cliffside path, resulting in him being trapped and having to be towed back to the main road by police. While he blamed the technology, a court found him guilty of careless driving.

(Image: Sean Ryan / Rapid City Journal)

There are multiple differences between the AI technologies of today (and certainly those of the future) and yesterday’s tech, however. Smart devices like self-driving cars or robots won’t just be used by humans, but deployed by them — after which they act independently of our instructions. Equipped with machine learning algorithms, these devices gather and analyze information by themselves and then make their own decisions. That may make it difficult to blame the creators of the technology, too.


As David Vladeck, a law professor at Georgetown University in Washington, D.C., has pointed out in one of the few in-depth case studies on this subject, the sheer number of individuals and firms that participate in the design, modification, and incorporation of an AI’s components can make it tough to identify the responsible party. That counts double when you’re talking about “black box” AI systems that are inscrutable to outsiders.

Vladeck has written: “Some components may have been designed years before the AI project had even been conceived, and the components’ designers may never have envisioned, much less intended, that their designs would be incorporated into any AI system, much less the specific AI system that caused harm. In such circumstances, it may seem unfair to assign blame to the designer of a component whose work was far removed in both time and geographic location from the completion and operation of the AI system. Courts may hesitate to say that the designer of such a component could have foreseen the harm that occurred.”

It’s the corporations, man!

Awarding an AI the status of a legal entity wouldn’t be unprecedented. Corporations have long held this status, which is why a corporation can own property or be sued, rather than this having to be done in the name of its CEO or executive board.

Although it hasn’t been tested, Shawn Bayern, a law professor at Florida State University, has pointed out that AI may technically already have this status thanks to a loophole: an AI can be put in charge of a limited liability company, thereby making it a legal person. Personhood might also arise for tax reasons, should a proposal like Bill Gates’ “robot tax” ever be taken seriously on a legal level.

This status is not without controversy, however. Granting it would stop creators from being held responsible when an AI carries out an action they did not explicitly sanction, but it could also encourage companies to be less diligent with their AI tools — since they could fall back on the excuse that those tools acted outside their wishes.

There is also no way to punish an AI, since punishments like imprisonment or death mean nothing to it.

“I’m not convinced that this is a good thing, certainly not right now,” Dr. John Danaher, a law professor at NUI Galway in Ireland, told Digital Trends about legal personhood for AI. “My guess is that for the foreseeable future this will largely be done to provide a liability shield for humans and to mask anti-social activities.”

It is a compelling area of examination, however, because it doesn’t rely on hitting any benchmarks for the ever-subjective matter of consciousness.

“Today, corporations have legal rights and are considered legal persons, whereas most animals are not,” Yuval Noah Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, told us. “Even though corporations clearly have no consciousness, no personality and no capacity to experience happiness and suffering, whereas animals are conscious entities.”

“Irrespective of whether AI develops consciousness, there might be economic, political and legal reasons to grant it personhood and rights in the same way that corporations are granted personhood and rights. Indeed, AI might come to dominate certain corporations, organizations and even countries. This is a path only seldom discussed in science fiction, but I think it is far more likely to happen than the kind of Westworld and Ex Machina scenarios that dominate the silver screen.”

Not science fiction for long

At present, these topics still smack of science fiction but, as Harari points out, they may not stay that way for long. Given AI’s growing role in the real world, and the very real attachments people form with these systems, questions such as who is responsible if an AI causes a person’s death, or whether a human can marry his or her AI assistant, will surely be grappled with during our lifetimes.

(Image: Ex Machina. Credit: Universal Pictures)

“The decision to grant personhood to any entity largely breaks down into two sub-questions,” Danaher said. “Should that entity be treated as a moral agent, and therefore be held responsible for what it does? And should that entity be treated as a moral patient, and therefore be protected against certain interferences and violations of its integrity? My view is that AIs shouldn’t be treated as moral agents, at least not for the time being. But I think there may be cases where they should be treated as moral patients. I think people can form significant attachments to artificial companions and that consequently, in many instances, it would be wrong to reprogram or destroy those entities. This means we may owe duties to AIs not to damage or violate their integrity.”

In other words, we shouldn’t necessarily allow companies to sidestep the question of responsibility when it comes to the AI tools they create. As AI systems are rolled out into the real world in everything from self-driving cars to financial traders to autonomous drones and robots in combat situations, it’s vital that someone is held accountable for what they do.

At the same time, it’s a mistake to think of AI as having the same relationship with us that we had with previous, non-smart technologies. There’s a learning curve here, and even if we’re not yet at the technological point where we need to worry about cruelty to AIs, that doesn’t mean it’s the wrong question to ask.

So stop yelling at Siri when it mishears you and asks whether you want it to search the web, alright?

