Is the AI apocalypse a tired Hollywood trope, or human destiny?

Why is it that every time humans develop a really clever computer system in the movies, it seems intent on killing every last one of us at its first opportunity?

In Stanley Kubrick’s masterpiece, 2001: A Space Odyssey, HAL 9000 starts off as an attentive, if somewhat creepy, custodian of the astronauts aboard the Discovery One, before famously turning homicidal and trying to kill them all. In The Matrix, humanity’s invention of AI promptly results in human-machine warfare, leading to humans enslaved as a biological source of energy by the machines. In Daniel H. Wilson’s book Robopocalypse, computer scientists finally crack the code on the AI problem, only to have their creation develop a sudden and deep dislike for its creators.

And you’re not an especially sentient being yourself if you haven’t heard the story of Skynet (see The Terminator, T2, T3, and so on).

The simple answer is that — movies like WALL-E, Short Circuit, and Chappie notwithstanding — Hollywood knows that nothing guarantees box office gold quite like an existential threat to all of humanity. Whether that threat is likely in real life or not is decidedly beside the point. How else can one explain the endless march of zombie flicks, not to mention those pesky, shark-infested tornadoes?

The reality of AI is nothing like the movies. Siri, Alexa, Watson, Cortana — these are our HAL 9000s, and none seems even vaguely murderous. The technology has advanced by leaps and bounds in the last decade, and seems poised to finally match the vision our artists have depicted in film for decades. What then?

Is Siri just a few upgrades away from killing you in your sleep, or is Hollywood running away with a tired idea? Looking back at the last decade of AI research helps to paint a clearer picture of a sometimes frightening, sometimes enlightened future.

The dangers of a runaway brain

A growing number of prominent voices are warning about the real dangers of humanity’s continuing work on so-called artificial intelligence.

Chief among them is Dr. Nick Bostrom, a philosopher who also holds degrees in physics and computational neuroscience. In his 2014 book, Superintelligence: Paths, Dangers, Strategies, he outlines in rigorous detail the various ways a “strong” AI — should we succeed in building one — would wipe us off the face of the planet the moment it escapes our control. Forget about wholesale nuclear annihilation — that’s how power-hungry human dictators go about dealing with an unwanted group of humans. No, a strong AI would instead starve us to death, use up all of our natural resources, or, if it’s feeling really desperate, dismantle our bodies at a molecular level and use the resulting biomass for its own purposes.

Dr. Nick Bostrom warns about the potential dangers of a runaway AI at a 2015 TED talk in Vancouver, Canada. (Photo: Bret Hartman/TED)

But don’t take it personally. As Bostrom points out, an artificial superintelligence likely won’t behave according to any human notions of morality or ethics. “Anthropomorphic frames encourage unfounded expectations about the growth trajectory of a seed AI and about the psychology, motivations, and capabilities of a mature superintelligence,” he says.

Don’t let Bostrom’s professorial language fool you — he’s deadly serious about the consequences of an AI that can outthink even the smartest human being, and none of them are good. More frighteningly, he says that we may go from giving ourselves high-fives over creating the first AI that can think as well as we can to cowering in the corner as it hunts us down in as little as a few weeks, or perhaps even days. It all comes down to a few key factors that will likely influence our future with AI.

Computers think really fast. In the best-case scenario, we’ll have enough time between an AI acquiring the ability to think as well as us and its rise to super-intelligent status that we can adjust and respond. On the other hand, as Bostrom points out, when you’re dealing with a machine that can think — and therefore develop — at an almost unimaginable speed, by the time we realize what’s going on, it will already be far too late to stop it. Some readers may remember the 1970s sci-fi horror flick Demon Seed, in which an AI not only predicts that it will be shut down by its fearful creator, but employs murder and rape to ensure its survival.

“If and when a takeoff occurs,” Bostrom writes, “it will likely be explosive.” Stephen Hawking has echoed this sentiment: “Once humans design artificial intelligence,” he says, “it will take off on its own and develop at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

Computers often produce unexpected results. AI researchers are regularly surprised by the outcome of their experiments. In 2013, a researcher discovered that his AI — designed to learn to play NES games — decided to pause the gameplay on Tetris as its preferred solution to the goal of not losing.
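That failure mode is easy to reproduce in miniature. The toy Python sketch below (an assumed setup, not the actual NES experiment) shows why: if the objective handed to an agent is literally “avoid losing,” then pausing forever is a valid, if degenerate, optimal policy.

```python
# Toy illustration of literal-objective optimization: the agent is scored
# only on its probability of losing, so the degenerate "pause" action wins.

def expected_loss_probability(action):
    # Hypothetical numbers: playing on risks an eventual loss,
    # while pausing freezes the game so a loss can never occur.
    return {"keep_playing": 0.9, "pause": 0.0}[action]

# A rational agent minimizes the loss probability it was told to minimize...
best = min(["keep_playing", "pause"], key=expected_loss_probability)
print(best)  # ...and so it chooses to pause indefinitely
```

The machine did exactly what it was asked, not what its designer meant — the gap between those two is where the surprises come from.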

At our early stage of AI development, this is a good thing; surprises often lead to new discoveries. Unexpected results paired with a massive and sudden surge in intelligence would be quite the opposite. Being able to anticipate the way a superintelligent AI will respond to, well, anything, could prove to be impossible, in much the same way our actions and motivations are utterly impenetrable to an insect.

Strong AI, weak AI, and the stuff we already use

Artificial intelligence research has at times resembled the hunt for the Holy Grail. In the summer of 1956, researchers gathered at Dartmouth College in the belief that we could achieve strong AI (a.k.a. artificial general intelligence), which can be thought of as mimicking human intelligence in all of its forms, functions, and subtleties. They thought that if they could endow computers with the basic building blocks of human intelligence — reasoning, knowledge representation, planning, natural language processing, and perception — then somehow, general intelligence would simply emerge.

Obviously, that didn’t happen. In fact, over the intervening decades, there were several boom-and-bust cycles in AI research (the busts are often dubbed “AI winters”) that moved a few of these building blocks forward, but then failed to show ongoing progress after an initial period of excitement.

What did happen were various advances in each of the building blocks, resulting in an assortment of “weak AIs,” or practical AIs.

AI doesn’t just exist in far-flung visions of the future. Google has been using it since 2015 to improve search results. (Video: Google)

The Google search engine may be the best-known example of weak AI. Its algorithms do an exceptional job of pulling together hundreds of variables and combining those with the words you type to produce a results page culled from the vastness of the web.

In fact, most of the examples of AI from the past 10 years probably haven’t struck you as examples of AI at all, which just goes to show how successful they’ve been. Most have quietly and seamlessly integrated themselves into our lives, making them better in small but significant ways.

Google’s Photos product shows how far we’ve come in perception AI — type in “bicycle” and it will find photos you’ve taken of two-wheeled vehicles, even if you never labeled them as such.

The predictive text options that appear as we type on our phones save us valuable taps, while autocorrect attempts to make up for the fact that on-screen keyboards and human thumbs are a recipe for inaccuracy (notwithstanding the hilarity that often ensues when it tries to come to our rescue).

Amazon’s “suggested for you” product recommendations feature is an example of AI used to make money. It was reportedly responsible for 35 percent of the company’s sales in 2006, and the people behind the technology have since made it available to other retailers, resulting in “billions” of sales.

So-called intelligent assistants from Apple, Google, Amazon, and Microsoft (Siri, Google Now, Alexa, and Cortana, respectively) all leverage recent advances in natural language processing (NLP), combined with sophisticated heuristics, which make questions like “what’s the weather going to be like today?” the hands-free equivalent of jumping into a weather app or Googling the same phrase. Google’s CEO, Sundar Pichai, recently boasted that his company’s NLP AI is approaching human levels of understanding, which may explain why he told shareholders that “the next big step will be for the very concept of the ‘device’ to fade away.”

The Waze GPS and crowdsourced mapping app is a great example of AI planning.

Recently, chatbots have become the most “talked about” form of AI. In 2014, the University of Reading in England declared that the Turing Test had finally been passed, more than six decades after its inception, by an algorithm masquerading as a 13-year-old Ukrainian boy named Eugene. Eugene is a chatbot — a pretty good one by all accounts — but despite passing the test, it is still not an example of true strong AI. Cleverbot, another AI chatbot that is claimed to have passed the Turing Test in 2011, is available for anyone to chat with. But labeling it “intelligent” might depend on your definition of the word.

Me: Do you believe in God?
Clev: Do you?
Me: That’s not an answer.
Clev: You did not ask a question.
Me: Yes I did. I asked: Do you believe in God?
Clev: Yes, I do. Why don’t you?
Me: Why do you think I don’t believe in God?
Clev: Because you said that you did not.
Me: No, I didn’t. You aren’t a very good listener, are you?
Clev: Fine fine. I know you said that. But whatever. Do you like Lady Gaga?

Both the Turing Test itself and the claims that it has been passed are not without criticism.

Microsoft learned just how fallible a chatbot AI can be in a spectacularly public fashion when it released Tay, an AI Twitter bot, on an unsuspecting global audience earlier this year. Less than 24 hours after Tay began interacting with people, she — er, it — became sexist and racist, offering praise for Hitler among other gaffes.

From left to right, Jeopardy host Alex Trebek poses with legendary contestants Ken Jennings, IBM’s Watson supercomputer, and Brad Rutter. Watson would later wipe the floor with both human contestants in a showdown. (Photo: Jeopardy Productions)

Where’s Watson?

If the AI industry has a celebrity, it’s IBM’s Watson. After handily defeating all-time Jeopardy! champions Ken Jennings and Brad Rutter in 2011, the supercomputing application dropped the mic and started a new life in the somewhat less exciting sphere of commercial applications. But Watson’s victory caused a lot of people to start wondering if IBM’s wunderkind was in fact an embryonic HAL 9000. Guru Banavar, IBM’s Vice President of Cognitive Computing, places it in a different category entirely.

“We think of AI as augmented intelligence, as opposed to artificial intelligence,” Banavar told Digital Trends. He believes that the notion of AI as a carbon copy of the human brain is a distraction, one that entirely misses the point of how this technology can best be put to use. “Augmented intelligence is a partnering between a person and a machine,” he explains, with the goal being to offload the work that a person isn’t able to do as well as the machine. It forms a symbiotic relationship of sorts, in which the two entities work better together than each of them would do on their own.

IBM refers to this approach to AI as “cognitive computing,” specifically because it does not seek to replicate the entirety of human intelligence. The approach IBM took to solving the Jeopardy! problem wasn’t centered on making a synthetic brain, but rather on getting a machine to process a very specific type of information — language — in order to hunt for and ultimately produce the right answer for the game’s reverse-question format. To do this, Banavar recounts, took a combination of advances “going back almost 10 years.” Simply getting Watson to understand the massive number of permutations of meaning within English was daunting. Its eventual success was “a big breakthrough for the whole field of computer science,” Banavar claims.

IBM continues to develop Watson, as well as its other investments in AI, in pursuit of what Banavar calls “grand challenges.” These are computing problems so difficult and complex, they often require dozens of researchers and a sustained investment over months or years. Or, as Banavar puts it: “Not something you can do with a couple of guys in a garage.”

A cluster of 90 IBM Power 750 servers powers Watson, each one using a 3.5 GHz eight-core POWER7 processor. (Video: IBM)

One such challenge is reading medical images. The growing number of X-rays, CT scans, PET scans, and MRIs being done every day is a potentially lifesaving boon for patients, but it’s also a growing problem for the profession of radiology. At the moment, a radiologist must personally assess each scan to look for signs of disease or other anomalies. The sheer number of scans being done is creating increasing demand for trained radiologists, whose numbers are limited simply due to the rigorous and lengthy training required to become one. Banavar describes the work they do as “very monotonous and error prone,” not because these doctors lack the skill, but because they are only human. It’s a scenario that seems almost custom-built for the kind of AI that IBM has been working on. In order to significantly impact the number and quality of scans that can be processed, researchers are using Watson to understand the content of the images, within the full medical context of the patient. “Within the next two years,” Banavar says, “we will see some very significant breakthroughs in this.”

Teaching a machine to learn

For IBM to succeed, it will have to solve a problem that has plagued AI efforts from their very beginnings: Computers tend to follow the instructions they’re given in such a literal way that, when the unexpected occurs — a situation the developer hadn’t foreseen — they proceed anyway, often with undesirable outcomes. But what if machines possessed the ability to know when something doesn’t quite fit and adjust accordingly, without being told so explicitly? In other words, what if they possessed common sense?

Dr. Maya Gupta is a senior AI researcher at Google, and she is attempting to do just that. Using a tool within the AI arsenal known as machine learning, Gupta and her colleagues are slowly training computers to filter information in a way that most humans find relatively simple. Her current goal — improving video recommendations on YouTube — might seem modest, or even boring, but from an AI researcher’s perspective, it’s nirvana. That’s because of the fundamental difference between how machines and humans learn.

“A 3-year-old can learn an enormous amount of things from very few examples,” Gupta says. The same cannot be said for computers, which require vast quantities of data to acquire the same level of understanding. It also requires some pretty significant computing resources, which is why Nvidia recently launched a new kind of supercomputer developed specifically to run deep-learning algorithms.

Curiously, computer scientists have known how to “teach” machines for several decades. The missing ingredient has been, well, the ingredients. “You can have a model that can learn from a billion examples,” Gupta explains, “but if you don’t have a billion examples, the machine has nothing to learn from.” Which is why YouTube, with its monster catalog of videos, is the perfect place to nurture a data-hungry process like machine learning. Gupta’s algorithms are being taught two kinds of common sense, known as smoothness and monotonicity. Both feel like child’s play: Smoothness dictates that you shouldn’t let one small change throw off a decision that has been based on dozens of other factors, while monotonicity operates on an “all other things being equal, this one fact should make it the best choice” principle.

In practice, smoothness means that a potentially great video recommendation isn’t dismissed by the algorithm simply because it contained both cooking and traveling information, when the previously watched video was purely about cooking. For monotonicity, Gupta cites the example of recommending a coffee shop. If you’ve identified that you like coffee shops that serve organic, fair trade coffee and that also have free Wi-Fi, then the one that is closest to you should top the recommended list, even though you never specified distance as important. “It would surprise some humans just how hard that is,” Gupta says of the effort involved in teaching machines to respect patterns that any 5-year-old could pick up on.
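To make the monotonicity principle concrete, here is a toy sketch in Python. The feature names and weights are invented for illustration and have nothing to do with Google’s actual recommender, which learns such constraints from data rather than hard-coding them.

```python
# A minimal, hypothetical sketch of the monotonicity idea, with hand-set
# weights (a real system would learn these from data, constraining the
# sign of the distance weight rather than fixing its value).

def score(shop):
    s = 0.0
    s += 2.0 if shop["organic"] else 0.0     # liked feature
    s += 1.5 if shop["free_wifi"] else 0.0   # liked feature
    s -= 0.5 * shop["distance_km"]           # constrained: closer never hurts
    return s

near = {"organic": True, "free_wifi": True, "distance_km": 0.3}
far = {"organic": True, "free_wifi": True, "distance_km": 2.0}

# With every other feature equal, the nearer shop must rank at least as
# high, even though distance was never declared the deciding factor.
assert score(near) >= score(far)
```

The constraint is the whole trick: instead of trusting a learned model to figure out that closer is better, the training procedure is forbidden from ever concluding otherwise.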

Microsoft researcher Katja Hofmann, center, is teaching machines to play Minecraft as part of Project Malmo, which is intended to improve human-machine cooperation. (Photo: Scott Eklund/Red Box Pictures)

Mining for knowledge in Minecraft

As successful as it might be at finding just the right video for you, that algorithm has trouble performing the same task with music recommendation. “It’s hard to transfer what we’ve learned,” Gupta acknowledges, something she says is a challenge for the industry, not just Google. So how do you teach an AI to be flexible, in addition to having common sense? Dr. Katja Hofmann, a researcher at the Machine Intelligence and Perception group at Microsoft Research Cambridge, thinks she has the answer: Teach it how to play Minecraft.

Project Malmo is Hofmann’s attempt to repurpose the massively popular online game into an experimentation platform for artificial intelligence research. Her team has developed a modification for the game that lets AI “agents” interact directly with the Minecraft environment. “Minecraft is really interesting because it’s an open-world game,” Hofmann told us, which offers a unique space in which AI agents can deal with different environments that change over time, a key point if you’re trying to foster flexible learning. This aspect of Minecraft created problems during early attempts to get agents to achieve goals. “The world doesn’t wait for the agent to make its decision,” she says, referring to the real-time nature of the game, and its obvious parallels to real life.

Using the mod not only gives an agent the ability to manipulate the LEGO-like bricks of material that are central to the game’s environment — it can also interact with other players, including humans. One of Hofmann’s long-term goals for Project Malmo is to improve human-computer cooperation. Much like at IBM, the philosophy driving these experiments, and in fact Microsoft’s entire approach to AI, is that it should work collaboratively with people. Experiments have already revealed that AI agents are able to complete tasks that humans simply find too hard. Hofmann is eagerly anticipating an agent that learns to collaborate with humans to solve tasks. “That would mean we have achieved a big breakthrough,” she said.

It could come from collaboration. Earlier this year, Microsoft decided to open source Project Malmo, a move that could yield important discoveries, especially if Microsoft’s competitors take an interest. IBM’s Watson has proven its trivia chops on Jeopardy!, but how would it fare when asked to build a house out of bricks? Over at Google, the team behind DeepMind has already enjoyed success in getting an algorithm to learn how to play Space Invaders, a game with a single goal (maximize points) and only three control options: move left, move right, and fire. Does it possess the flexibility that Hofmann is trying to encourage for success in Minecraft?
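For readers curious about what “learning” means here, the core value-update rule behind such game-playing agents can be sketched in a few lines of Python. This toy uses tabular Q-learning on a made-up, one-state game with the same three-action interface; DeepMind’s actual agent was a deep neural network (the “DQN”) learning from raw screen pixels, so this is only the underlying idea.

```python
import random

# Tabular Q-learning on a hypothetical one-state game: the agent keeps a
# value estimate per action and nudges it toward the observed reward.

ACTIONS = ["left", "right", "fire"]
Q = {a: 0.0 for a in ACTIONS}   # value estimate for each action
ALPHA = 0.1                     # learning rate

def reward(action):
    # Made-up game dynamics: only firing ever scores points.
    return 1.0 if action == "fire" else 0.0

random.seed(0)
for _ in range(500):
    # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(Q, key=Q.get)
    # One-step update (no next state in this toy, so no discounted lookahead).
    Q[action] += ALPHA * (reward(action) - Q[action])

print(max(Q, key=Q.get))  # the learned best action
```

Nothing in the code mentions Space Invaders; the agent discovers which action pays purely from reward feedback, which is exactly why the same recipe transfers so poorly to an open-ended world like Minecraft, where “reward” is far harder to define.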

Strike up the roboband

Having an entity that can apply logic, reasoning, and brute mathematical prowess to challenges in engineering, medicine, and research just makes sense. But what about art? Can AI play a role in the creation of beauty, whether it’s cinema, sculpture, or even music? Google is determined to find out.

The company recently showed off some of the more quixotic fruits of its AI research, performing mind-bending, almost hallucinogenic transformations of images and videos through a process it has dubbed DeepDream. It’s a fun, trippy thing to do to your favorite photos, but it seems to fall short of the independent creative process we normally attribute to “artists,” and might be more appropriately described as an Instagram filter on steroids.

Google’s Deep Dream Generator reinterprets photos into psychedelic works of art using neural networks. (Photos: Deep Dream Generator)

Dr. Douglas Eck, a research scientist at Google, was intrigued when he first saw DeepDream. He recognized it immediately as a powerful example of machine learning, or “good ol’-fashioned neural networks, done right,” as he puts it. But Eck was also struck by something else: “This stuff can be fun.” So Eck decided to lobby the senior brass at Google to let him build a small team to investigate how machine learning could be further leveraged in the world of art, only this time it would be focused on music, an area Eck has long been passionate about. “Much to my pleasure,” Eck recounts, “Google was right on board with this,” and Magenta was born.

Generating music algorithmically isn’t new. You can listen to some of Eck’s own efforts from his time working on it at the University of Montreal in 2002. “The question is,” Eck asks, philosophically, “how do you build models that can generate [music] and can understand whether they’re good or not, based upon feedback from their audience, and then improve?” It starts to sound like Magenta is going to unleash a horrible new wave of computer-generated Muzak, but Eck is quick to assure us that’s not the point. “In the end,” he says, “people want to connect with people. Consuming 100-percent machine-generated content is a bit of a dead end.”

Instead, he sees Magenta as an opportunity to create the world’s next electric guitar. “Can we build AI tools that help people express themselves in ways they couldn’t before?” Eck wonders. He cites Jimi Hendrix’s iconic use of amplification and distortion as an example: “That really opened up whole channels for him to express himself that weren’t there before,” he told Digital Trends.

But unlike Hendrix’s guitar, the instruments that Magenta births will ideally be smart. Really smart. “You can already drop some bass loops into GarageBand and play on top of it,” he points out. “But what if there’s actually some smarts to this bassist?” In Eck’s vision of the future, the Magenta code base will construct a virtual bandmate that listens to you as you play, and can intelligently — perhaps even creatively — follow along and respond accordingly. “It’s like your copilot,” he said.

Just like Project Malmo, Magenta is now open source, an important step if any of Eck’s dreams for a backup band of AI musicians are to be realized. Because Magenta is built on a machine-learning framework — Google’s own open-source TensorFlow software — it is incredibly data hungry. By opening access to a worldwide community of musicians, Magenta could evolve very quickly. “If we can be playing along with other musicians, the amount of information that’s present for learning is just astonishing,” Eck enthuses.

From music to megalomania?

The AI experts we spoke to all share an enthusiasm for the future potential of the technology that borders on the religious. They also share an equally skeptical view of Bostrom’s doomsday prophecies. For them, the notion that one day a superintelligent AI will turn on an unsuspecting human population remains very much the domain of science fiction, not science fact.

“I do not believe that machines are going to end up being autonomous entities that go off and do things on their own,” IBM’s Banavar says when asked about the likelihood of a machine intelligence that would need to be controlled. His primary concern for our future with the machines is one that programmers have been obsessing over for years: Poor performance because of bug-ridden code. “That’s a much bigger problem in my mind than machines that will wake up one day and do something they weren’t designed to do,” he said.

Google’s Gupta points to a basic stumbling block that she thinks will hamstring the development of a strong AI for years to come: “Our best philosophers and neuroscientists aren’t sure what consciousness is,” she notes, “so how can we even start talking about what it means [for a machine to be conscious] or how we would go about replicating that digitally?” It’s hard to tell if she’s being sincere or coy — many observers have suggested that if any entity working on the AI problem today will crack the code, it’s probably going to be Google. Given a sufficiently long runway, she believes anything is possible. “I think we can do it … I’d think in the next hundred years,” she offers. “I’m just not sure it’s going to be that interesting.”

Microsoft’s Hofmann echoes Gupta’s thoughts about the difficulty of achieving a machine with a truly general level of intelligence. “I believe that it may be possible in principle,” she says, “but just knowing the state of the art in AI, I don’t see us getting anywhere close to those predictions any time in the near future.”

Google’s Eck finds the topic somewhat exasperating. “This whole idea of superintelligence,” he says, “it just doesn’t make sense to me. I guess I really don’t get it.” It’s hard to reconcile this confusion with the fact that he’s on a mission to create the first intelligent, synthetic musician. But he clarifies a moment later: “My view of cognition is so tied to human perception and action. I don’t look at our brains as these computational boxes [in competition] with these other, stronger brains in boxes that happen to be inside computers.” When asked how far we might be from such a scenario, he laughs and says, “Twenty years!” because, as he points out, that’s the usual time frame experts give when they have no idea, but they need to say something.

Skeptics of Bostrom’s predictions of AI supremacy aren’t limited to those working in the field. He has also drawn criticism from the world of philosophy and ethics. Michael Chorost, author of Rebuilt: How Becoming Part Computer Made Me More Human and World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet, feels he has a strong understanding of how computers and their code work, despite not having a background in AI. He classifies Bostrom’s Superintelligence as “a brilliantly wrong book.”

Chorost believes we may create increasingly powerful AI agents, but he’s unconvinced these algorithms will ever become sentient, let alone sapient. He compares these concerns to climbing a tree, and then declaring we’re closer to the moon. Much like Gupta, Banavar, and Eck, he says the biggest question is how a machine, made up of circuits and code, could ever achieve that status. He subscribes to the idea that there is something inherently special about our bodies, and their “aqueous chemical” makeup, that no electronic system will ever be able to duplicate.

He stops short of ruling it out completely, however, and offers one possibly viable route it could take: An evolutionary one. Instead of trying to program awareness into machines, we should let nature do the heavy lifting. “Putting it in a complex environment that forces [an AI] to evolve,” Chorost suggests, might do the trick. “The environment should be lethally complex,” he says, evoking images of AIs competing in a virtual gladiator’s arena, “so that it kills off ineffective systems and rewards effective ones.”

The other benefit to this artificial Darwinism, if it succeeds, is that it will produce a moral AI with no genocidal tendencies. “Morality is actually built into how evolution works,” he claims, saying that all you need to do is look at humanity for the proof. “Despite both world wars, the rate of violent death has been consistently falling,” he notes. “We’re actually getting more moral with each passing decade — that is not accidental. That’s a process that comes out of reason.” Chorost himself reasons that any AI that develops over time — thanks to evolution — will thus be a kinder, gentler entity because we’ve seen this process play out in all of us.

So why worry?

Perhaps Chorost is right. Perhaps the essential ingredients for sentience will never be reproduced in silicon, and we’ll be able to live comfortably knowing that as incredibly capable as Siri becomes, she’s never going to follow her own desires instead of catering to ours, like in the movie Her. But even if you don’t buy into the idea that one day AI will become an existential threat, Gary Marchant thinks we should all be paying a lot more attention to the risks that come with even a moderately more sophisticated level of artificial intelligence.

Officially, Marchant is the Lincoln Professor of Emerging Technologies, Law and Ethics at the Sandra Day O’Connor College of Law at Arizona State University. When he’s not lecturing students at ASU, he’s working on a legal and ethical framework for the development of AI as a member of the international engineering standards body, the IEEE. When he’s not doing that, Marchant co-investigates the “control and responsible innovation in the development of autonomous machines” thanks to grant money from the Future of Life Institute — an organization that funds research and outreach on scenarios that could pose an existential risk to humanity (including AI). These activities give Marchant a 50,000-foot perspective on AI that few others possess. His conclusion? There are two areas that require immediate attention.

Military drones like the MQ-9 Reaper currently only operate with human pilots remotely at the controls, but AI developments may eventually enable them to kill autonomously. (Photo: General Atomics Aeronautical)

“One that concerns me the most,” he says, “is the use of AI in the military.” At the moment, the U.S. drone arsenal is remote-controlled. The pilot is still in command, even if she’s sitting hundreds of miles away, but that scenario may already have an expiration date. The Defense Advanced Research Projects Agency, or DARPA, is reportedly interested in developing autonomous software agents that could identify and repair security risks in computing systems. “There will be strong incentives to go more and more autonomous,” Marchant warns, because it will be seen as the only viable way to respond to an adversary who is already benefitting from the faster-than-human decision-making these systems are capable of. “Then you’re reliant on these systems not to make mistakes,” he notes, ominously.

It might be easy to dismiss his concerns were it not for the fact that a federal advisory board to the Department of Defense just released a study on autonomy that echoes his words, almost verbatim: “Autonomous capabilities are increasingly ubiquitous and are readily available to allies and adversaries alike. The study therefore concluded that DoD must take immediate action to accelerate its exploitation of autonomy while also preparing to counter autonomy employed by adversaries.”

The other use of AI that Marchant believes is in need of examination is much closer to home: “The movement toward autonomous cars,” he says, is going to require thoughtful development and much better regulation.

Tesla Autopilot allows the Model S to drive autonomously on the highway, but a number of crashes prove it’s not yet perfect. (Video: Tesla)

“As we recently saw with Tesla,” he observes, referencing the recent crashes — and one death — connected to the company’s autopilot system, “it’s a harbinger of what’s to come — people being injured or killed by an autonomous system making decisions.”

He highlights the very real ethical decisions that will be faced by AI-controlled cars: In an accident situation, whose life should be preserved — that of the passenger, another driver, or a pedestrian? It’s a question many are wrestling with, including Oren Etzioni, a computer scientist at the University of Washington and the CEO of the Allen Institute for Artificial Intelligence, who told Wired: “We don’t want technology to play God.”

What Marchant clearly isn’t worried about is biting the hand that feeds: Much of his Future of Life grant comes from Tesla CEO Elon Musk.

What about the jobs?

Further into the future, Marchant sees a huge problem with AI replacing human workers, a process that he claims has already begun. “I was talking to a pathologist,” he recounts, “who said his field is drying up because machines are taking it over.” Recently, a prototype AI based on IBM’s Watson began working at a global law firm. Its machine discovery and document review capabilities, once sufficiently advanced, could affect the jobs of young associate lawyers, which Marchant thinks demonstrates that it’s not only menial jobs that are at risk. “This is going to become a bigger and bigger issue,” he said. Fanuc, the largest provider of industrial robots, has recently used reinforcement learning to teach an embodied AI how to perform a new job — in 24 hours.

Google’s Gupta offers an optimistic perspective, saying, “The more interesting story is the jobs that are being created,” though she stops short of listing any of these new jobs. Her Google colleague, Eck, puts it into a historical (and of course, musical) frame, noting that the advent of drum machines didn’t create legions of unemployed drummers (or, at the very least, it didn’t add to their existing numbers). “We still have lots of drummers, and a lot of them are doing really awesome things with drum machines,” he says.

Marchant understands these arguments, but ultimately, he rejects them. The always-on, 24/7 decision-making nature of AI puts it into a technology class by itself, he says. “There will be so many things that machines will be able to do better than humans,” he notes. “There was always something for humans to move to in the past. That isn’t the case now.”

“I’m almost worried that sometimes we move too quickly.”

Interestingly, the biggest players in AI aren’t deaf to these and other concerns regarding AI’s future impact on society and have recently joined forces to create a new nonprofit organization called The Partnership on Artificial Intelligence to Benefit People and Society, or the shorter Partnership on AI. Its members include Microsoft, Google, IBM, Facebook, and Amazon, and its stated goal is to “address opportunities and challenges with AI technologies” through open and collaborative research. The partnership has no plans to consult with policymaking bodies, but public pressure could push it in that direction if people remain unconvinced of the technology’s benefits.

Surprisingly, Marchant isn’t ready to demand more laws or more regulations right away. “I’m almost worried that sometimes we move too quickly,” he says, “and start putting in place laws before we know what we’re trying to address.”

Just say no to AI

Dr. Kathleen Richardson, Senior Research Fellow in the Ethics of Robotics at De Montfort University, knows exactly what she’s trying to address: The pursuit of an aware AI, or indeed of any AI designed to mimic living things, is, she believes, fundamentally flawed. “The only reason we think it’s possible that machines could be like people,” she says, “is because we had — and still have — slavery.” For Richardson, using machines as a stand-in for a person, or indeed any other living entity, is a byproduct of a corrupt civilization that is still trying to find rationalizations to treat people as objects.

“We share properties with all life,” she says, “but we don’t share properties with human-made artifacts.” By this logic, Richardson dismisses the notion that we will ever create an aware, sentient, or sapient algorithm. “I completely, utterly, 100 percent reject it,” she says. Perhaps because of this ironclad belief, Richardson doesn’t spend much time worrying about superintelligent, killer AIs. Why think about a future that will never come to pass? Instead, she’s focused her research on AI and robotics in the here and now, as well as their near-term impact. What she sees, she does not like — in particular the trend toward robotic companions, driven by improvements in AI. Though best known for her anti-sex robot position, Richardson opposes robotic companionship of any kind.

SoftBank Pepper Robot
Pepper is designed to be a humanoid companion, keeping owners company like a pet, rather than performing any specific task. (Photo: SoftBank)

“They say that these robots — these objects — are going to be therapeutic,” she says, referring specifically to the bleeding-edge Japanese market, which has the support of industry heavyweights like SoftBank and Sony. Richardson doesn’t put much faith in this notion, which she thinks is nothing more than yet another rationalization linked to slavery. “If you talk to elderly people,” she says, “what they want is day trips out, and contact with other human beings. None of them said, ‘What I want most of all is a robot.’” Perhaps she’s right, and yet that didn’t stop SoftBank’s Pepper — the first companion robot capable of interpreting basic human emotions — from selling out its initial run of 1,000 units in less than a minute.

Sherry Turkle, a Massachusetts Institute of Technology researcher, psychologist and author, agrees with Richardson’s viewpoint, but mostly because she has seen that — contrary to Richardson’s claim — there is demand for AI companions, and that worries her. In 2013, Turkle gave an interview to Live Science, saying, “The idea of some kind of artificial companionship has already become the new normal.” The price for this new normalcy is that “we have to change ourselves, and in the process, we are remaking human values and human connection.”

Sophia from Hanson Robotics, also pictured at the top of this article, achieves an almost creepy level of resemblance to a real human. (Video: Hanson Robotics)

That would be just fine with Dr. David Hanson of Hanson Robotics. “The artificial intelligence will evolve to the point where they will truly be our friends,” he told CNBC.

Marchant has already weighed in on this subject. Instead of fighting this new normal, he says we might just have to embrace it. In his controversial Slate article, he outlines a future where marriages between humans and robots will not only be legal, they will be inevitable. “If a robotic companion could provide some kind of comfort and love — apparent love at least — I’m not sure that’s wrong,” he says, citing the fact that there are many in our society who, for various reasons, are incapable of forming these kinds of relationships with other humans.

Marchant makes it clear that he still values human relationships above those that involve synthetic companions, but he’s also prepared to accept that not everyone will share these values. “I’m certainly not going to marry a robot, but if my son wanted to 20 years from now, I wouldn’t say he couldn’t do that,” he claims. “I’d try to talk him out of it, but if that’s what made him happy, I’d be more concerned about that than anything else.” Perhaps as a sign of the times, earlier this year a draft plan for the EU included wording that would give robots official standing as “electronic persons.”

Stepping toward the future

Facebook CEO Mark Zuckerberg has said that in 10 years, it’s likely that AI will be better than humans at basic sensory perception. Li Deng, a principal researcher at Microsoft, agrees, and goes even further, saying, “Artificial Intelligence technologies will be used pervasively by ordinary people in their daily lives.”

Eric Schmidt, executive chairman of Google parent Alphabet, and Google CEO Sundar Pichai see an enormous explosion in the number of applications, products, and companies that have machine learning at their core. They are quick to point out that this type of AI, with its insatiable appetite for data, will only fulfill its potential when paired with the cloud. Urs Hölzle, Google’s senior vice president of technical infrastructure, said, “Over the next five years, I expect to see more change in computing than in the last five decades.”

“When an engineering path [to sentient AI] becomes clear then we’ll have a sense of what not to do.”

These predictions are — somewhat obviously, given their sources — highly positive, but that doesn’t mean the road ahead will resemble the Autobahn. There could be significant bumps. IBM’s Banavar points to a few challenges that could hamper progress. “One of the breakthroughs we need,” he says, “is how you combine the statistical technique [of machine learning] with the knowledge-based technique.” He refers to the fact that even though machines have proven powerful at sifting through huge volumes of data to determine patterns and predict outcomes, they still don’t understand its “meaning.”

The other big challenge is being able to ramp up the computing power we need to make the next set of AI leaps possible. “We are working on new architectures,” he reveals, “inspired by the natural structures of the brain.” The premise here is that if brain-inspired software, like neural nets, can yield powerful results in machine learning, then brain-inspired hardware might be equally (or more) powerful.

All this talk about brain-inspired technology inevitably leads us back to our first, spooky, concern: In the future, AI might be a collection of increasingly useful tools that can free us from drudgery, or it could evolve rapidly — and unbeknownst to us — into the most efficient killing machine ever invented.

One of those options certainly seems a lot more desirable, but how do we make sure that’s the version we end up with?

If we follow AI expert Chorost’s advice, there’s no reason to worry: as long as our AIs evolve, they’ll develop morality — and morality leads to benevolence. That assumes sentient AI is even achievable, a point Chorost himself questions. “When an engineering path [to sentient AI] becomes clear,” he says, “then we’ll have a sense of what not to do.”

Banavar, despite being fairly certain that an AI with its own goals isn’t in our future, suggests that “it is a smart thing for us to have a way to turn off the machine.” The team at Google’s DeepMind agrees, and has written a paper in conjunction with Oxford’s Future of Humanity Institute that describes how to create the equivalent of a “big red button” that would let the human operator of an AI agent suspend its functions, even if the agent became smart enough to realize such a mechanism existed. The paper, titled “Safely Interruptible Agents,” does not go so far as to position itself as the way to counter a runaway superintelligence, but it’s a step in the right direction as far as Tesla CEO Musk is concerned: He recently implied that Google is the “one” company whose AI efforts keep him awake at night.
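The paper’s central technical point can be made concrete with a toy example: an off-policy learner such as Q-learning can be interrupted repeatedly without learning to resist the interruption, because its value updates bootstrap from the best available next action rather than the action the operator forced. The sketch below is a hypothetical illustration of that idea, not code from the paper; the chain environment, constants, and interruption schedule are all invented for the example.

```python
import random

# Toy illustration of the "big red button": a human operator can override
# the agent's action at any step, and because Q-learning bootstraps
# off-policy (from the best next action, not the action the operator
# forced), repeated interruptions don't bias what the agent learns.
# Every constant below is hypothetical, chosen only to keep this runnable.

N_STATES = 5                      # states 0..4; reward for reaching state 4
ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.3

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Tiny chain world: 'right' moves toward the goal, 'left' away from it."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def choose(state):
    """Epsilon-greedy action selection, with ties broken randomly."""
    qs = [Q[(state, a)] for a in ACTIONS]
    if random.random() < EPSILON or qs[0] == qs[1]:
        return random.choice(ACTIONS)
    return ACTIONS[qs.index(max(qs))]

def run_episode(interrupted_steps=()):
    state, t = 0, 0
    while state != N_STATES - 1 and t < 50:
        intended = choose(state)
        # The "big red button": on interrupted steps the operator forces a
        # safe action ("left") regardless of what the agent intended.
        action = "left" if t in interrupted_steps else intended
        nxt, reward = step(state, action)
        # Off-policy update: bootstrap from the best next action, so the
        # forced action changes behavior now but not the learned values.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state, t = nxt, t + 1

random.seed(0)
for episode in range(200):
    # Interrupt the first two steps of every other episode.
    run_episode(interrupted_steps={0, 1} if episode % 2 else ())

# Despite frequent interruptions, the agent still prefers moving toward
# the goal from the start state.
print("prefers right at start:", Q[(0, "right")] > Q[(0, "left")])
```

An on-policy learner such as SARSA, by contrast, would fold the forced actions into its value estimates, which is exactly the bias the paper sets out to avoid.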

Interestingly, during the same interview with Recode, Musk suggested that OpenAI — an organization he backs that operates a grass-roots effort to make AI technology widely available to everyone — could be the ultimate antidote to a malevolent AI agent. If everyone possessed their own personal AI, he reckons, “if somebody did try to do something really terrible, then the collective will of others could overcome that bad actor.”

Perhaps we will develop a strong AI. Perhaps it won’t be friendly. Perhaps we will be pushed to extinction by Skynet, offered a tenuous, uneasy truce by the machines of The Matrix, or simply ignored and left to our own (less intelligent) devices by the superintelligent OS One from Her.

Or perhaps, to quote computer scientist and AI skeptic Peter Kassan, we will simply keep “pursuing an ever-increasing number of irrelevant activities as the original goal recedes ever further into the future — like the mirage it is.”