
Chess. Jeopardy. Go. Why do we use games as a benchmark for A.I.?

Growing up, many of us were told that playing computer or video games was a waste of time. However, when it comes to training artificial intelligence, not only is it not a waste of time, but it could be key to developing the intelligent agents of tomorrow.

Several years ago, researchers from Google DeepMind impressively demonstrated that cutting-edge A.I. could be used to master several classic Atari games, without being explicitly taught how to play them. Since then, multiple researchers have tried their hand at reinforcement learning systems that use trial and error to learn how to master games.

"First return, then explore”: Go-Explore solves all previously unsolved Atari games

Now, researchers from Uber AI Labs and OpenAI have come up with a way to refine these tools even further, enabling them to perform at a high level in more complex games that gameplaying A.I. agents have previously struggled with.

“The new algorithm that we developed, Go-Explore, outperforms previous machine learning algorithms on many Atari games, including the notoriously hard exploration games Montezuma’s Revenge and Pitfall,” Joost Huizinga, a research scientist at OpenAI, told Digital Trends.

Not only did it “outperform” previous systems, but Go-Explore is the first algorithm to beat all levels of the fiendishly difficult Montezuma’s Revenge and rack up near-perfect scores on Pitfall.

Montezuma pyramid game
Uber AI Labs/OpenAI

It does this by remembering the previously successful approaches it has tried and returning to these high-scoring moments, rather than always starting from the beginning of the game. Since Atari games didn’t usually allow this (people who grew up with games that allow regular save points don’t realize how lucky they are!), the researchers used an emulator that allowed them to save states and reload them at any time.
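Here is a minimal sketch of that “first return, then explore” idea in Python, assuming a toy one-dimensional environment in place of an Atari emulator. The environment, its save_state/restore_state methods, and the cell representation are simplified stand-ins invented for illustration, not the actual implementation described in the paper.

```python
import random

class ToyEnv:
    """Stand-in for an emulator with save/restore, a simplifying assumption of this sketch."""
    GOAL = 10

    def __init__(self):
        self.position = 0

    def save_state(self):
        return self.position

    def restore_state(self, state):
        self.position = state

    def step(self, action):
        # action: -1 (left) or +1 (right); reaching the goal yields the only reward
        self.position = max(0, self.position + action)
        reward = 1.0 if self.position == self.GOAL else 0.0
        done = self.position == self.GOAL
        return self.position, reward, done


def cell_of(observation):
    # Go-Explore groups similar states into "cells"; here the raw position serves as the cell.
    return observation


def go_explore(iterations=200, explore_steps=5, seed=0):
    rng = random.Random(seed)
    env = ToyEnv()
    # Archive maps cell -> (saved emulator state, best score seen when reaching it)
    archive = {cell_of(env.position): (env.save_state(), 0.0)}

    for _ in range(iterations):
        # 1) Select a previously visited cell and *return* to it via the saved state
        #    (the real algorithm weights this selection toward promising cells)
        cell = rng.choice(list(archive.keys()))
        state, score = archive[cell]
        env.restore_state(state)

        # 2) *Explore* from there with a handful of random actions
        for _ in range(explore_steps):
            obs, reward, done = env.step(rng.choice([-1, 1]))
            score += reward
            new_cell = cell_of(obs)
            # Keep any newly discovered cell, or a higher-scoring route to a known one
            if new_cell not in archive or score > archive[new_cell][1]:
                archive[new_cell] = (env.save_state(), score)
            if done:
                return archive
    return archive


if __name__ == "__main__":
    result = go_explore()
    print(f"Cells discovered: {sorted(result)}")
```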

Playing through a collection of 55 Atari games — representing an increasingly standard benchmark for reinforcement learning algorithms — Go-Explore was able to beat other state-of-the-art systems at these titles 85.5% of the time.

It’s an impressive demonstration of artificial intelligence in action. And, despite being honed on games, the approach could have exciting real-world applications.

Checkmates and robot overlords

Right from the start — before “artificial intelligence” had even been coined as the name of the discipline — researchers in the field were interested in games. In 1949, Claude Shannon, one of A.I.’s founding figures, laid out his argument for why making a computer play chess would be a worthy endeavor.

Games like chess, Shannon wrote, present a sharply defined problem, featuring both allowable operations and an ultimate goal. They weren’t too difficult a challenge to feasibly solve, and yet still required intelligence to excel at, while also possessing a discrete (meaning non-continuous) structure in keeping with the step-by-step manner in which computers solve problems.

While the technology driving these systems has changed immeasurably in the intervening 70-plus years, many of those ideas are still what drives the usage of games for developing and testing artificial intelligence. Games provide an abstracted, simplified version of the real world in which the complexity of problems is distilled into actions, states, rewards, and clear-cut winners and losers.

Although Shannon, Alan Turing, and numerous other early A.I. luminaries worked on the challenge of computer chess, and there were some notable successes along the way — such as Massachusetts Institute of Technology programmer Richard Greenblatt’s MacHack in the 1960s — it wasn’t really until May 1997 that computer chess truly grabbed the world’s attention.

Deep Blue vs Kasparov: How a computer beat best chess player in the world - BBC News

That was the month and year that IBM’s “Deep Blue” system bested chess world champion Garry Kasparov in a six-game match. Deep Blue was an astonishing example of brute-force computing in action. It used enormous parallel hardware, consisting of 30 top-end microprocessors, to examine the ramifications of 200 million board positions every second. IBM’s chess-playing supercomputer was also equipped with a memory bank of hundreds of thousands of previous master-level games upon which it could draw. (Ironically, one move that helped unsettle Kasparov early in the match was actually a failure on the part of the system, in which it defaulted to picking a move at random, a glitch Kasparov mistook for creativity.)

Nearly 14 years later, in February 2011, IBM had its next headline-conquering A.I. gaming triumph when its Watson A.I. faced off against former champions Ken Jennings and Brad Rutter in a multipart televised special of the game show Jeopardy. “I had been in A.I. classes and knew that the kind of technology that could beat a human at Jeopardy was still decades away,” Jennings told me for my book Thinking Machines. “Or at least I thought that it was.”

IBM/YouTube

In the event, Watson obliterated the pair, en route to winning the $1 million prize money. “It really stung to lose that badly,” said Jennings, who holds the record for the longest winning streak in the game’s history. As the game came to a close, he scribbled a phrase on his answer board and held it up for the cameras: “I for one welcome our new computer overlords.”

Gameplaying and machine learning

More recent gameplaying A.I. demonstrations have largely been the work of DeepMind, which has focused on games as part of its stated goal to “solve intelligence.” Perhaps the most notable achievement was AlphaGo, a Go-playing bot that defeated world champion Lee Sedol — four games to one — in a 2016 series watched by 60 million people. There was also the aforementioned Atari-playing A.I., and AlphaStar, which sought to master the real-time strategy game StarCraft II.

Compared to the brute-force computing efforts of, say, Deep Blue, these are instead demonstrations of machine learning techniques. This is partly by necessity. Go, for example, features far more possible board positions than chess, making it tough to employ brute force. The opening move in a chess game allows for 20 possible moves. The first player in Go has 361 possibilities. In totality, Go features more allowable board positions than the total number of atoms in the known universe. That’s a tall order for brute-force computing, even allowing for the increases in hardware since 1997.
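For a rough sense of why brute force runs out of road, consider how quickly the search tree grows with each additional move. The snippet below uses commonly cited approximate average branching factors (around 35 for chess and 250 for Go); these are ballpark assumptions for illustration, not exact figures.

```python
# Rough illustration of how quickly game trees explode.
# The average branching factors (~35 for chess, ~250 for Go) are commonly cited
# approximations, assumed here purely for illustration.
CHESS_BRANCHING = 35
GO_BRANCHING = 250

for depth in (2, 4, 6, 8):
    print(
        f"depth {depth}: chess ~ {CHESS_BRANCHING ** depth:.2e} positions, "
        f"Go ~ {GO_BRANCHING ** depth:.2e} positions"
    )
```

Even at a depth of eight moves, the Go tree is already billions of times larger than the chess tree, which is why learning-based approaches became necessary.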

AlphaGo
DeepMind/YouTube

As approaches like deep reinforcement learning have advanced, modern game-playing A.I. systems have, like A.I. as a whole, largely switched from following prelearned rules to learning on their own. This, too, has opened up a new advantage to the use of games as a proving ground for A.I. systems: free data.

As symbolic A.I. gave way to today’s data-hungry machine learning tools, games offered researchers a more plentiful source of the data they needed to perform their demonstrations. Demis Hassabis, the CEO and co-founder of DeepMind, made this point in a November 2020 interview with Azeem Azhar of the newsletter Exponential View. “We were a small startup, we didn’t have access to a lot of data from applications … and so we had to synthesize our own data,” Hassabis said. “If you use games, whether that’s board games like Go or simulations, like computer games, video games, you can run them for as long as you like and generate as much synthetic data as you want.”

Case in point: AlphaGo played itself more than 10 million times to achieve its Go-playing prowess. In non-gameplaying scenarios, this mass of data would have to be gathered from elsewhere. In the case of a gameplaying A.I., it can be generated by the system itself.
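A minimal sketch shows how self-play turns the game itself into a data generator. It assumes a trivially simple game (tic-tac-toe) and a random move picker standing in for the neural networks a system like AlphaGo actually uses; the point is only that every finished game yields labeled training examples for free.

```python
import random

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

def winner(board):
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game(rng):
    board = [" "] * 9
    history = []  # (board snapshot, move, player) triples recorded as the game unfolds
    player = "X"
    while legal_moves(board) and winner(board) is None:
        move = rng.choice(legal_moves(board))  # random policy stands in for a learned one
        history.append(("".join(board), move, player))
        board[move] = player
        player = "O" if player == "X" else "X"
    result = winner(board) or "draw"
    # Label every recorded position with the final outcome: free training data
    return [(snapshot, move, mover, result) for snapshot, move, mover in history]

if __name__ == "__main__":
    rng = random.Random(0)
    dataset = [example for _ in range(1000) for example in self_play_game(rng)]
    print(f"Generated {len(dataset)} labeled positions from 1,000 self-play games")
```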

Real-world usage

There is, it should be stated, an element of P. T. Barnum “roll up, roll up, see the incredible intelligent computer” to public gameplaying A.I. demonstrations. It turns machine learning research into something approaching an Olympic sport. Far more people watched IBM’s Watson win at Jeopardy! than have ever cited the research papers describing backpropagation, one of the most famous algorithms in modern machine learning. IBM’s stock surged in 1997, following the Kasparov chess game, and again in 2011 after Watson’s Jeopardy win.

But gameplaying A.I. isn’t just a cynical grab for attention. “The ultimate goal, of course, is not merely to solve games per se,” Adrien Ecoffet, a research scientist at OpenAI, told Digital Trends. “The problem framing itself is very general, so that algorithms that can solve games well can also be useful in practical applications. In our work, we show that the same algorithm we used to solve Atari games can also be used to solve a challenging robotics problem. In addition to robotics, Go-Explore has already seen some experimental research in language learning, where an agent learns the meaning of words by exploring a text-based game, and for discovering potential failures in the behavior of a self-driving car in order to avoid those failures in the future.”

Jeff Clune, a research team leader at OpenAI, told Digital Trends that DeepMind has successfully applied reinforcement learning and machine learning to practical, real-world problems such as controlling stratospheric balloons and cooling data centers.

Meanwhile, Huizinga pointed out that reinforcement learning tools are widespread in recommendation systems that determine what video or advertisement to show to users online. Similarly, search algorithms used to allow A.I. agents to find their way in video games also form the “backbone algorithm” for automatic route planning in navigation systems.
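One widely used example of the kind of search algorithm Huizinga is describing is A*. The sketch below is a minimal version on a made-up grid; the grid, costs, and coordinates are assumptions for the sake of the example, but the same basic routine underlies both game character movement and turn-by-turn route planning.

```python
import heapq

# Minimal A* pathfinding on a small, made-up grid ('#' marks obstacles).
GRID = [
    "....#",
    ".##.#",
    "....#",
    ".#...",
    ".....",
]

def neighbors(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield (nr, nc)

def a_star(start, goal):
    def h(pos):  # Manhattan-distance heuristic: an optimistic guess of remaining cost
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (estimated total, cost so far, position, path)
    best_cost = {start: 0}
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        for nxt in neighbors(pos):
            new_cost = cost + 1
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + h(nxt), new_cost, nxt, path + [nxt]))
    return None  # no route exists

if __name__ == "__main__":
    print(a_star((0, 0), (4, 4)))
```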

“While there are, to the best of our knowledge, no commercially applied versions of Go-Explore yet, it may not be long before we start seeing practical applications,” Huizinga said. And, with it, most likely a whole lot of other gameplaying A.I. systems.

A paper describing the Uber AI Labs and OpenAI reinforcement learning project was recently published in the journal Nature.
