How do we teach robots right from wrong? Soon the problem won’t be hypothetical

Editor’s note: Digital Trends has partnered with WebVisions, the internationally recognized design, technology and user-experience conference, to help bring luminary Douglas Rushkoff to this year’s event in Portland, Oregon. As part of our partnership, we’re also pleased to feature select content from WebVisions’ insightful blog, Word. This week, contributor Mark Wyner wonders how we go about teaching artificial intelligence right from wrong. Enjoy!

Twitter has admitted that as many as 23 million (8.5 percent) of its user accounts are autonomous Twitterbots. Many are there to increase productivity, conduct research, or even have some fun. Yet many have been created with harmful intentions. In both cases, the bots have been known to behave with questionable ethics – perhaps because they're merely rudimentary specimens of artificial intelligence (AI).

Humans are now building far more sophisticated machines that will face ethical questions on a monumental scale – up to and including matters of human life and death. So how do we make sure they make the right choices when the time comes?

Build it in or teach it

The key factor in successfully building autonomous machines that coexist symbiotically with human beings is ethics. And there are basically two ways to program ethics into machines:

First, you can hard-code ethics into a machine's operating system. The concern here is that ethics are subjective. The ethics in the machine are contingent upon the ethics of its creator. But we humans do not always align in our morality; we fight wars over ethical differences. So as we build autonomous machines to be ethical, we're building within the confines of our existing disparities.
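The hard-coded approach can be caricatured in a few lines: the machine's morality is nothing more than a fixed list written by its creator. This is a deliberately simplistic sketch; the rule names are invented for illustration and come from no real system.

```python
# A minimal sketch of the first approach: ethics hard-coded by the creator.
# The rule names here are illustrative, not from any real system.

FORBIDDEN_ACTIONS = {"harm_human", "deceive_user", "destroy_property"}

def is_permitted(action: str) -> bool:
    """An action is 'ethical' only if the creator's fixed rules allow it."""
    return action not in FORBIDDEN_ACTIONS

print(is_permitted("fetch_coffee"))  # permitted: not on the creator's list
print(is_permitted("harm_human"))    # forbidden: the creator said so
```

Note what the sketch makes obvious: a different creator would write a different list, and the machine has no way to question it.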


Second, you can provide some guidelines, then allow the machine to learn its own ethics based on its own experiences. This passive approach leaves plenty of room for misinterpretations of morality, contingent upon which behaviors are observed. Consider the recent meltdown of Microsoft's Twitter AI, Tay, which was tricked into tweeting racist slurs and promoting genocide based on a false inference of accepted ethical normalcy.

A team at Georgia Tech is working on the latter, teaching cognitive systems to learn how to behave in socially acceptable ways by reading stories. A reward system called Quixote is supposed to help cognitive systems identify protagonists in stories, which the machines use to align their own values with those of human beings. It's unclear what methods Microsoft used with Tay. But if its techniques were preemptive, as with Georgia Tech's learning system, we're a long way from solidifying ethics in artificial intelligence.
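The reward-driven idea behind a system like Quixote can be reduced to a toy: the machine starts with only a learning rule, and its values emerge from feedback on observed behavior. The sketch below is an invented simplification, not the actual Quixote system; the story data, reward values, and update rule are all assumptions for illustration.

```python
# Toy sketch of the second approach: values learned from rewarded examples,
# loosely analogous to Quixote rewarding protagonist-like behavior.
# All data and the update rule are invented for illustration.

from collections import defaultdict

def learn_values(observations, lr=0.5):
    """Nudge a per-action value estimate toward each observed reward."""
    values = defaultdict(float)
    for action, reward in observations:
        values[action] += lr * (reward - values[action])
    return values

# "Story" feedback: the protagonist waits in line and pays; stealing is punished.
story = [("wait_in_line", 1.0), ("pay_for_goods", 1.0), ("steal_goods", -1.0)]
values = learn_values(story * 10)
```

The sketch also shows the approach's fragility: feed the learner a different set of observations – say, Tay's trolls – and the same update rule converges on very different "values."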

Ethical paralysis

Now, all of this is based on the idea that a computer can even comprehend ethics. As Alan Winfield shows in his study Towards an Ethical Robot, when a computer encounters an ethical paradox, the result is unpredictable, often paralyzing. In his study, a cognitive robot (A-robot) was asked to save a "human" robot (H-robot) from peril. When the A-robot could save only one of two H-robots, it dithered in its own confusion and saved neither.

There is an ancient philosophical debate about whether ethics is a matter of reason or emotion. Among modern psychologists, there is a consensus that ethical decision making requires both rational and emotional judgments. As Professor Paul Thagard notes in a piece about this topic, “ethical judgments are often highly emotional, when people express their strong approval or disapproval of various acts. Whether they are also rational depends on whether the cognitive appraisal that is part of emotion is done well or badly.”

Decisions with consequences

So, if cognitive machines don’t have the capacity for ethics, who is responsible when they break the law? Currently, no one seems to know. Ryan Calo of the University of Washington School of Law notes, “robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm; robotic systems accomplish tasks in ways that cannot be anticipated in advance; and robots increasingly blur the line between person and instrument.”

The crimes can be quite serious, too. Dutch developer Jeffry van der Goot had to defend himself — and his Twitterbot — when police knocked on his door, inquiring about a death threat sent from his Twitter account. Then there's Random Darknet Shopper, a shopping bot with a weekly allowance of $100 in Bitcoin to make purchases on the darknet for an art exhibition. Swiss officials weren't amused when it purchased ecstasy, which the artist put on display. (Though, in support of artistic expression, they didn't confiscate the drugs until the exhibition ended.)

In both of these cases, authorities did what they could within the law, but ultimately declined to charge the human proprietors, because they hadn't explicitly or directly committed crimes. But how does that translate when a human being unleashes an AI with malicious intent?

The irony is that even as we explore ways to govern our autonomous machines, we are often baffled by their behavior alone. When discussing the methods behind their neural networks, Google software engineer Alexander Mordvintsev revealed, "… even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don't."

Can we keep up?

All things considered, the process for legislation is arduously slow, while technology, on the other hand, makes exponential haste. As Vivek Wadhwa of Singularity University explains, “the laws can’t keep up because … laws are essentially codified ethics. That we develop a consensus as a society about what’s good and what’s bad and then it becomes what’s right and what’s wrong, and then it becomes what’s legal and what’s illegal. That’s the way the progression goes. On most of those technologies we haven’t decided what’s good or bad.”

If the law does catch up, we may be writing our own doom. All that talk about robots taking over the world? Maybe they just jaywalk en masse until we imprison so many of our own kind that we become the minority among autonomous beings. Checkmate.

The views expressed here are solely those of the author and do not reflect the beliefs of Digital Trends.
