
All hail our robot overlords: Google’s new AI will beat you at Atari

Researchers at Google have created an artificial intelligence that teaches itself how to play video games. Representatives called it “the first significant rung of the ladder” to developing broad, flexible, intelligent systems, Bloomberg reports.

Google acquired the London-based AI startup DeepMind, which led the project, in early 2014. The program was set to play 49 games on the Atari 2600 console with no instructions. Left to its own devices, the AI was able to best expert human players in 29 of the games, and outperformed the best known algorithms in 43 of them.

Rather than programming in particular strategies for each game, the researchers paired a general AI with a memory and a reward system. After completing a level or achieving a high score the system is rewarded, encouraging it to replicate what worked. With clear, measurable goals and the ability to refer to its own memory and adjust behavior based on what happens, Google’s system is able to train itself without human supervision. That is a major leap ahead of other machine learning projects that generally require people to provide feedback. You can dig a little deeper into the tech and see the improvement in action for Breakout on the Google Research Blog.
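The reward-and-memory loop described above is the core idea of reinforcement learning. As a rough illustration only (DeepMind's actual system pairs this idea with a deep neural network and an "experience replay" memory, not a lookup table), here is a minimal tabular Q-learning sketch on a hypothetical toy environment: a five-cell corridor where moving right eventually earns a reward.

```python
import random

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate
N_STATES, ACTIONS = 5, (0, 1)          # action 0 = move left, action 1 = move right

# The agent's "memory": a value for every (state, action) pair, all zero to start.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Return (next_state, reward); the reward arrives only at the rightmost cell."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                   # 500 episodes of unsupervised self-play
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit remembered values, occasionally explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        # The reward signal nudges the stored value toward
        # (immediate reward + discounted best future value).
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy prefers "right" (1) in every non-terminal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # -> [1, 1, 1, 1]
```

Nothing game-specific is coded in: the same loop works for any environment that exposes states, actions, and a score, which is why the approach generalizes across 49 different Atari games.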

Demis Hassabis, a DeepMind co-founder and now VP of engineering at Google, said that this is the “first time anyone has built a single learning system that can learn directly from experience and manage a wide range of challenging tasks.” Creating intelligent systems that can navigate unexpected circumstances, rather than simply performing prescribed tasks, is a holy grail for AI research, and this is a major step toward that goal.

Games provide a perfect framework for training these types of flexible AIs for the same reasons that they can serve as a useful pedagogical tool for humans. As simulations, games can recreate the noisy and dynamic environments that an intelligence may have to deal with, while structuring that complexity into manageable systems and discrete goals. The researchers described games’ utility to the journal Nature as a way to “derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations.”

Having mastered the 2D worlds of ’80s Atari games, Hassabis’ plan is to move up into the 3D games of the ’90s. He is particularly excited about driving games, because of Google’s vested interest in self-driving cars. “If this can drive the car in a racing game, then potentially, with a few real tweaks, it should be able to drive a real car. That’s the ultimate aim.”

Google is one of the leading hubs of artificial intelligence research in the world today. In addition to acquiring DeepMind, over the last few years the internet search giant has invested millions in other AI startups and a partnership with Oxford University. Hiring inventor, futurist, and chief Singularity proselytizer Ray Kurzweil as director of engineering in 2012 sent a clear signal about the company’s blue sky ambitions for the disruptive future of artificial intelligence.

The concept for DeepMind’s gaming program is remarkably similar to the 1983 film WarGames, in which an AI called WOPR (War Operation Plan Response), designed to provide strategic oversight to the American nuclear missile defense, was trained by playing games like chess and tic-tac-toe. WOPR almost turned the Cold War into World War III due to a misunderstanding, so let’s hope that Google and the government never decide to weaponize the project, or at least teach it about no-win scenarios first.

In any case, it is only a matter of time now before the King of Kong is dethroned by a machine in a Kasparov/Deep Blue kind of situation. Is nothing sacred?


Will Fulton
Former Digital Trends Contributor
Will Fulton is a New York-based writer and theater-maker. In 2011 he co-founded mythic theater company AntiMatter Collective…