New research paper from Google reveals what the company fears most about AI

Image: a still from 2001: A Space Odyssey
It’s hard to think of a company more infatuated with AI than Google. With multi-billion dollar investments in deep learning startups like DeepMind, and responsible for some of the biggest advances involving neural networks, Google is the greatest cheerleader artificial intelligence could possibly hope for.

But that doesn’t mean there aren’t things about AI that scare the search giant.

In a new paper, entitled “Concrete Problems in AI Safety,” Google researchers — alongside experts from UC Berkeley and Stanford University — lay out some of the possible “negative side effects” that may arise from AI systems over the coming years. Rather than focusing on the distant threat of superintelligence, the 29-page paper examines “unintended and harmful behavior that may emerge from poor design.” Two big themes emerge: the idea of a machine purposely misleading its creators in order to complete an objective, and that of a machine causing injury or damage to gain “a tiny advantage for [its] task at hand.”

“This is a great paper that achieves a much-needed systematic classification of safety issues relating to autonomous AI systems,” George Zarkadakis, author of the book In Our Own Image: Will Artificial Intelligence Save or Destroy Us?, tells Digital Trends.

As to whether fears about AI are justified, Zarkadakis says that Google’s warnings — while potentially alarming — are a far cry from some of the other AI warnings we’ve heard in recent months from the likes of Stephen Hawking and Elon Musk. “The Google paper is a matter-of-fact engineering approach to identifying the areas for introducing safety in the design of autonomous AI systems, and suggesting design approaches to build in safety mechanisms,” he notes.

Indeed, despite the issues it raises, Google’s paper ends by considering the “question of how to think most productively about the safety of forward-looking applications of AI,” complete with handy suggestions. In all, whether you think working to achieve artificial intelligence will be a net positive or a potentially disastrous negative for humanity, the newly published paper is well worth a read.

Editors' Recommendations

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…
Crew Dragon astronaut reveals what he loves most about spacewalks

One of the International Space Station’s latest arrivals, Bob Behnken, this week shared with earthlings what he loves most about spacewalks.

The NASA astronaut, who arrived at the space station with Doug Hurley at the end of May on the maiden flight of SpaceX’s Crew Dragon capsule, is a spacewalk veteran, having done a total of six during two Space Shuttle missions in 2008 and 2010.

This Google robot taught itself to walk, with no help whatsoever, in two hours

Do you remember that scene in Walt Disney’s Bambi where the titular fawn learns to stand up and walk under its own power? It’s a charming vignette in the movie, showcasing a skill that plenty of baby animals -- from pigs to giraffes to, yes, deer -- pick up within minutes of their birth. Over the first few hours of life, these animals rapidly refine their motor skills until they have full control over their own locomotion. Humans, who learn to stand while holding onto things at around seven months and who begin walking at 15 months, are hopelessly sluggish by comparison.

Can you guess the latest task robots have beaten us at? In a new study carried out by researchers at Google, engineers have taught a quadruped Minitaur robot to walk by, well, not really having to teach it much at all. Rather, they used a type of goal-oriented artificial intelligence to make a four-legged robot learn how to walk forward, backward, and turn left and right entirely on its own. It was able to successfully teach itself to do this on three different terrains: flat ground, a soft mattress, and a doormat with crevices.
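The “goal-oriented” learning described above is, at heart, trial and error guided by a reward signal — here, forward progress. The sketch below is a deliberately toy illustration of that idea, not Google’s actual method or robot: a made-up two-weight “policy” drives a made-up one-dimensional walker, and random perturbations to the policy are kept only when they increase the distance covered.

```python
import random

# Toy stand-in for reward-driven locomotion learning. The "robot" is a
# two-weight linear policy and the "environment" rewards forward progress.
# All names, dynamics, and numbers here are illustrative assumptions.

def rollout(weights, steps=50):
    """Run one episode and return total forward progress (the reward)."""
    position, velocity = 0.0, 0.0
    for _ in range(steps):
        # Policy: choose an action from a simple sensor reading (velocity).
        action = weights[0] * velocity + weights[1]
        action = max(-1.0, min(1.0, action))      # actuator limits
        velocity = 0.9 * velocity + 0.1 * action  # crude dynamics
        position += velocity
    return position

def train(iterations=200, noise=0.1, seed=0):
    """Random-search policy improvement: keep perturbations that raise reward."""
    rng = random.Random(seed)
    weights = [0.0, 0.0]
    best = rollout(weights)
    for _ in range(iterations):
        candidate = [w + rng.gauss(0, noise) for w in weights]
        score = rollout(candidate)
        if score > best:  # goal-oriented: only forward progress matters
            weights, best = candidate, score
    return weights, best

weights, reward = train()
print(f"learned reward (distance walked): {reward:.2f}")
```

No one demonstrates a gait to the policy; it discovers one simply because moving forward scores better than standing still — the same shape of incentive, at a vastly smaller scale, that the real system exploits.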

Researchers use artificial intelligence to develop powerful new antibiotic
MIT researchers used a machine-learning algorithm to identify a drug called halicin that kills many strains of bacteria. Halicin (top row) prevented the development of antibiotic resistance in E. coli, while ciprofloxacin (bottom row) did not.

Researchers at MIT have used artificial intelligence to develop a new antibiotic compound that can kill even some antibiotic-resistant strains of bacteria. They built a computer model of millions of chemical compounds and used a machine-learning algorithm to pick out those that could be effective antibiotics, then selected one particular compound for testing and found it to be effective against E. coli and other bacteria in mouse models.
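In outline, that screening process is: train a model on compounds with known antibacterial activity, then rank an untested library by predicted activity and send the top hit to the lab. The sketch below illustrates only the ranking step, with invented two-number “descriptors” and a plain logistic-regression model standing in for MIT’s deep network over molecular structures — every compound name and feature here is a fabricated placeholder.

```python
import math

# Hypothetical screening sketch: fit a simple classifier to compounds with
# known activity, then score an unscreened "virtual library" and pick the
# most promising candidate for lab testing.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain logistic regression trained by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def score(w, b, x):
    """Predicted probability that a compound is an effective antibiotic."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Toy training set: 2-feature descriptors of known actives (1) / inactives (0).
X = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.2, 0.8], [0.1, 0.9], [0.3, 0.7]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)

# "Virtual library" to screen: rank candidates by predicted activity.
library = {"cand_A": [0.85, 0.15], "cand_B": [0.25, 0.75], "cand_C": [0.6, 0.4]}
ranked = sorted(library, key=lambda k: score(w, b, library[k]), reverse=True)
print("top candidate for lab testing:", ranked[0])
```

The payoff of this approach is scale: once trained, the model can score millions of compounds in the time a wet lab would need to test a handful.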

Most new antibiotics developed today are variations on existing drugs, using the same mechanisms. The new antibiotic uses a different mechanism than these existing drugs, meaning it can treat infections that current drugs cannot.
