DT Debates: Should robots be held to a human moral compass?


The rise of A.I. is in our midst. Google Glass, self-driving cars, and our continued attempts at robotic life are not to be ignored. And a recent study found that we’re beginning to think about the morals and ethics we humans will hold our droid friends to. So we pitted staff writers Andrew Couts and Amir Iliaifar against each other to ask…

This week’s question: Should robots be held to a human moral compass?
 

Andrew

 

This is a highly complex question to which I do not claim to know all the answers. But I can definitively say that we must hold robots to at least the same moral and ethical standards that we hold ourselves. Obviously, we cannot let robots do things to humans that we do not allow amongst one another — that’s something science fiction god Isaac Asimov wisely concluded all the way back in 1942.

The big question here, I think, is whether we have a moral imperative to treat robots in the same way we treat people. Fortunately, some have already begun to answer this question. In 2007, the South Korean government drafted a code of ethics to prevent humans from “abusing” robots (and vice versa). This too follows in line with Asimov’s “Three Laws of Robotics,” the third rule of which states that “a robot must protect its own existence,” unless that means injuring a human or disobeying the orders of a human.

Of course, some might say that a robot, no matter how complex or life-like, is really nothing more than a fancy computer — which is technically true — and that there is no moral code that prohibits smashing your computer, or tossing it off a building, so why should there be a rule against damaging or destroying robots? That view is short-sighted. Once we reach a point where robots closely mimic the physical attributes and/or “mental” wherewithal of humans, it will become increasingly difficult to distinguish the two, so I believe it is important that we grant the same respect to these machines as we do our fellow man — if only to hold back the most savage instincts of human nature.

 

Amir

 

We meet again, Mr. Couts. Last time you proved a more than worthy adversary, and I imagine I can only expect more of the same this go-around. That said, while I agree with your take on instilling a modicum of morality toward our eventual robotic overlords, I think you are missing the point: any question of ethics and morality should be the sole enterprise of the human beings behind the machines, not the other way around. We may one day inhabit a world where robots exist autonomously, but the reality is that no matter how advanced a robot becomes, it will never be considered “real” (and no, I’m not going to get into a metaphysical argument over this), and it should always be the responsibility of its “creators.” Therefore, any legal or ethical standards you are suggesting are moot as applied to robots themselves; they need to apply to whoever develops, builds, and operates them.

As for there being a “moral imperative to treat robots in the same way we treat people,” again, I don’t think that really matters. Sure, we can pass laws and whatnot; it’s certainly doable. But I question whether that would be adequate given the dire circumstances of people all over the world. Not to be glib, but we need to focus on the legal and ethical standards we place upon ourselves before we try to codify or promote any sort of “robotic equal rights.” There are plenty of living, breathing humans out there who don’t enjoy even the most basic human rights, so I simply suggest we concentrate our attention there. On a side note: Asimov was a brilliant man, and I wholeheartedly agree with his Three Laws.

 

Andrew

 

While I completely agree with you that, at the moment, the problem of people treating each other badly is far more pressing than robot morality, we must accept that the day when robots inhabit every nook and cranny of our lives is quickly approaching. And there must be at least a minimum code of ethics that guides how we treat and interact with these mechanical “beings.”

I agree with you that the scientists, engineers, and corporations that create robots should be held responsible for the actions of their contraptions. Just because robot-makers must follow certain ethical guidelines, however, doesn’t mean that those of us who interact with robots cannot also follow a code. Of course, I don’t believe this code should be, or even can be, the same as the moral code that guides our interactions with fellow humans. But I do believe that it is possible for humans to act immorally toward robots, even if the robot can never be truly conscious of the actions in the same way a human (or even a dog, ape, or alpaca) is aware.

Imagine this scenario: Say you purchase a robot butler. Your robot butler serves you well, day after day. Then, one cloudy May afternoon, it accidentally trips on the carpet, and spills a giant glass of grape Kool-Aid all over your sheepskin R2-D2 rug. You lash out, and chop the head off your robot butler — let’s call him “Chris” — rendering him, well, headless, and completely useless.

Now, Chris hasn’t the faintest clue about what just happened. But you do. You know you let your negative emotions get the better of you, and you acted out with violence. In my mind, that is morally incorrect simply because you had a violent reaction, and let the evil, wicked part of your soul get the better of your actions. That may not be the same as setting your step-brother on fire because he put snot in your comic books (yeah, I know all about that Amir, don’t try to hide it), but it is at least ever so slightly wrong. So, you know, there should be rules against that.

 

Amir

 

There is only one rule: There are no rules! Now that that’s out of my system… I see what you’re getting at, and I just don’t agree. Why would we need to treat these “mechanical beings” as anything other than property? Yes, they might be intricate and infinitely cool, but beyond the mechanized components that make up their rusty innards, there is nothing about a robot that makes it real or intrinsically human. Unless it’s a living, breathing organism, I don’t really see the need to advocate for any sort of laws or code of ethics toward machines.

I’m glad you agree with me that the ultimate responsibility for a robot’s actions lies with its creators and owners, but that’s the only moral imperative I see here. A machine is a machine. It has no emotions or feelings. If I want to rough up my machine, then so be it; I don’t see the problem. If I break it, I will need to buy another one, and if I can’t afford to, well, then I’m up chocolate creek without a popsicle stick, aren’t I? Now, that doesn’t mean I wish to go around decapitating my robotic manservant (whom I would totally dress up to resemble you, glasses and all, FYI), but who the heck cares? It’s a robot! If I let my emotions get the best of me, I’m out an expensive robot, and that is going to be more of an impetus not to mistreat my property than any sort of moral code shoved upon me. We have plenty of machines right now that perform crazy awesome tasks; would you exclude them from your robo-crusade just because they don’t resemble a human? The fact is, there is no distinction: a machine is a machine. Now come with me if you want to live.

 

Andrew

 

I completely understand your logic, but I feel as though your argument is woefully short-sighted. At some point in the future, there will be machines capable of acting more human than some actual humans. Yes, it will still be a machine, technically, but that’s like saying humans are still just animals. Which, incidentally, we are.

Robots will not just be “machines” in the way that my smartphone is a machine, or my lawnmower is a machine. These will be fully functioning members of society, some of them capable of amazing feats, both physical and mental. We will be able to talk to them, and confide in them, and perhaps even hang out and watch movies together. They will become companions, confidants, even friends. Just as Google is now capable of “learning” what types of information each of us is looking for in our searches, so too will these artificially intelligent beings be able to learn our wants, our needs, and our emotions — or, at the very least, our most likely response to a piece of data or stimuli. I can all but guarantee that many of us will not view these next-generation robots as “just machines.” And when that happens, we will not be able to justify damaging them, or violently knocking them out of existence, without feeling as though we’re doing something wrong. Which is precisely why we need to decide on an ethical code of conduct now, before things get messy.

 

Amir

 

I empathize with your position, Andrew, really I do, but while my viewpoint might be “woefully short-sighted,” I happen to think you too are missing the bigger picture here. If you start passing laws protecting robots’ existence, then you’re going to need to recognize them as full-fledged members of society, cognizant of their rights and privileges within our social fabric. Of course, that leads to the very tricky endeavor of actually getting people to recognize robots as legitimate beings in the first place, which I don’t think will ever happen, especially among religious folk.

But let’s say, for argument’s sake, we’ve made that technological leap into the future and robots are barely distinguishable from humans. What happens if I accidentally run over a robot? Should I be charged with second-degree manslaughter? No, of course not; that’s absurd. And if we are going to treat robots as living beings, what happens when they start demanding rights, or worse, taking them?

No matter how much technological wizardry you put into a robot, it’s going to be a robot. Everything else is veneer. If anything, I think the more humanity we instill upon a machine, the more we detach ourselves from our own.

I’ll leave you with this: Right now, the U.S. military uses drones – highly sophisticated unmanned machines – to kill military targets. We happily program these machines to do our dirty work. Why? Because we can. They can’t feel and have no moral or ethical code to live by; they just obey. So much for Asimov’s laws…

[Image courtesy of Dvpodt/Shutterstock]

The views expressed here are solely those of the author and do not reflect the beliefs of Digital Trends.
