
Chess. Jeopardy. Go. Why do we use games as a benchmark for A.I.?

Growing up, many of us were told that playing computer or video games was a waste of time. However, when it comes to training artificial intelligence, not only is it not a waste of time, but it could be key to developing the intelligent agents of tomorrow.

Several years ago, researchers from Google DeepMind impressively demonstrated that cutting-edge A.I. could be used to master several classic Atari games, without being explicitly taught how to play them. Since then, multiple researchers have tried their hand at reinforcement learning systems that use trial and error to learn how to master games.
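In reinforcement learning, “trial and error” has a concrete meaning: the agent takes an action, observes a reward, and nudges its estimate of how valuable that action was. As a rough illustration (not any lab’s actual code), here is a bare-bones tabular Q-learning loop in Python, assuming a hypothetical Gym-style environment with discrete states and actions:

```python
import random
from collections import defaultdict

def q_learning(env, episodes=5000, alpha=0.1, gamma=0.99, eps=0.1):
    """Bare-bones trial-and-error learning. Nobody tells the agent
    the rules of the game; it acts, observes rewards, and updates
    a table of action-value estimates. Assumes a hypothetical
    Gym-style env with discrete, hashable states."""
    Q = defaultdict(float)  # (state, action) -> estimated return
    actions = list(range(env.action_space.n))
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Occasionally explore at random; otherwise exploit
            # the best action found so far.
            if random.random() < eps:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done, _ = env.step(action)
            # Nudge the estimate toward the observed reward plus
            # the discounted value of the best follow-up action.
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (
                reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

Systems like DeepMind’s Atari player replace the table with a deep neural network, but the act-observe-update loop is the same.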

"First return, then explore”: Go-Explore solves all previously unsolved Atari games

Now, researchers from Uber AI Labs and OpenAI have come up with a way to refine these tools even further, one that enables them to perform at a high level in complex games that gameplaying A.I. agents have previously struggled with.


“The new algorithm that we developed, Go-Explore, outperforms previous machine learning algorithms on many Atari games, including the notoriously hard exploration games Montezuma’s Revenge and Pitfall,” Joost Huizinga, a research scientist at OpenAI, told Digital Trends.

Not only did it “outperform” previous systems, but Go-Explore is the first algorithm to beat all levels of the fiendishly difficult Montezuma’s Revenge and rack up near-perfect scores on Pitfall.

Montezuma’s Revenge
Uber AI Labs/OpenAI

It does this by remembering the promising approaches it has already tried and returning to those high-scoring moments, rather than always starting over from the beginning of the game. Since Atari games didn’t usually allow this (people who grew up with games that allow regular save points don’t realize how lucky they are!), the researchers used an emulator that let them save states and reload them at any time.
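The mechanics of that loop are easy to sketch. Below is a minimal, illustrative Python version of the “first return, then explore” idea; the environment methods (save_state, load_state) and the cell function are hypothetical stand-ins, not the actual Go-Explore code, which adds smarter cell selection and a robustification phase.

```python
import random

def cell(obs):
    """Hypothetical cell function: group similar game frames into
    one archive entry (the paper aggressively downscales frames)."""
    return hash(bytes(obs))  # placeholder for real downscaling

def go_explore(env, iterations=10_000, steps_per_visit=100):
    """Minimal sketch of Go-Explore's return-then-explore loop.
    Assumes a hypothetical emulator env exposing save_state() and
    load_state() snapshots alongside reset()/step()."""
    obs = env.reset()
    # Archive: coarse cell -> best snapshot and score seen there.
    archive = {cell(obs): {"state": env.save_state(), "score": 0}}
    for _ in range(iterations):
        # 1. Pick a previously reached cell (uniformly, for brevity).
        entry = random.choice(list(archive.values()))
        # 2. First return: reload that snapshot instead of replaying
        #    the whole game from the start.
        env.load_state(entry["state"])
        score = entry["score"]
        # 3. Then explore: take random actions from that point on.
        for _ in range(steps_per_visit):
            obs, reward, done, _ = env.step(env.action_space.sample())
            score += reward
            c = cell(obs)
            # Remember any new cell, or a better route to an old one.
            if c not in archive or score > archive[c]["score"]:
                archive[c] = {"state": env.save_state(), "score": score}
            if done:
                break
    return archive
```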

Playing through a collection of 55 Atari games — an increasingly standard benchmark for reinforcement learning algorithms — Go-Explore outperformed the existing state-of-the-art systems on 85.5% of the titles.

It’s an impressive demonstration of artificial intelligence in action. And, despite being just about games, it could have exciting real-world applications.

Checkmates and robot overlords

Right from the start — before “artificial intelligence” had even been coined as the official name of the discipline — researchers in the field were interested in games. In 1949, Claude Shannon, one of A.I.’s founding figures, laid out his explanation for why making a computer play chess would be a worthy endeavor.

Games like chess, Shannon wrote, present a sharply defined problem, with both a set of allowable operations and an ultimate goal. They weren’t too difficult a challenge to feasibly solve, yet still required intelligence to excel at, while also possessing a discrete (meaning non-continuous) structure in keeping with the step-by-step manner in which computers solve problems.

While the technology driving these systems has changed immeasurably in the intervening 70-plus years, many of those ideas still drive the use of games for developing and testing artificial intelligence. Games provide an abstracted, simplified version of the real world in which the complexity of problems is distilled into actions, states, rewards, and clear-cut winners and losers.

Although Shannon, Alan Turing, and numerous other early A.I. luminaries worked on the challenge of computer chess, and there were some notable successes along the way — such as Massachusetts Institute of Technology programmer Richard Greenblatt’s MacHack in the 1960s — it wasn’t really until May 1997 that computer chess truly grabbed the world’s attention.

Deep Blue vs Kasparov: How a computer beat best chess player in the world - BBC News

That was the month and year that IBM’s “Deep Blue” system bested chess world champion Garry Kasparov in a six-game match. Deep Blue was an astonishing example of brute-force computing in action. It used enormous parallel hardware, consisting of 30 top-end microprocessors, to examine the ramifications of 200 million board positions every second. IBM’s chess-playing supercomputer was also equipped with a memory bank of hundreds of thousands of previous master-level games upon which it could draw. (Ironically, the move that unnerved Kasparov in the first game was actually a failure on the part of the system, in which it defaulted to picking a move at random; Kasparov mistook the glitch for creativity.)
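The “brute force” in question is, at heart, an exhaustive game-tree search: consider every legal move, every reply to each move, and so on to some depth, then score the resulting positions. A toy minimax routine shows the shape of it; the Board interface here is hypothetical, and Deep Blue’s real search added alpha-beta pruning, custom chess hardware, and much else:

```python
def minimax(board, depth, maximizing):
    """Toy brute-force game-tree search. Assumes a hypothetical
    Board with legal_moves(), apply(move) returning the resulting
    position, and evaluate() scoring a position for the maximizer."""
    if depth == 0 or not board.legal_moves():
        return board.evaluate()  # static score at the leaf
    scores = (minimax(board.apply(m), depth - 1, not maximizing)
              for m in board.legal_moves())
    return max(scores) if maximizing else min(scores)
```

Examining 200 million positions per second is what made searching such a tree to useful depths feasible.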

Nearly 15 years later, in February 2011, IBM had its next headline-conquering A.I. gaming triumph when its Watson A.I. faced off against former champions Ken Jennings and Brad Rutter in a multipart televised special of the game show Jeopardy!. “I had been in A.I. classes and knew that the kind of technology that could beat a human at Jeopardy! was still decades away,” Jennings told me for my book Thinking Machines. “Or at least I thought that it was.”

IBM/YouTube

In the event, Watson obliterated the pair, en route to winning the $1 million prize money. “It really stung to lose that badly,” said Jennings, who holds the record for longest winning streak in the game’s history. As the game came to a close, he scribbled a phrase on his answer board and held it up for the cameras: “I for one welcome our new robot overlords.”

Gameplaying and machine learning

More recent gameplaying A.I. demonstrations have largely been the work of DeepMind, which has focused on games as part of its stated goal to “solve intelligence.” Perhaps the most notable achievement was AlphaGo, a Go-playing bot that defeated world champion Lee Sedol — four games to one — in a 2016 series watched by 60 million people. There was also the aforementioned Atari-playing A.I., and AlphaStar, which sought to master the real-time strategy game StarCraft II.

Compared to the brute-force computing efforts of, say, Deep Blue, these are instead demonstrations of machine learning techniques. This is partly by necessity. Go, for example, features far more possible board positions than chess, making it tough to employ brute force. The opening move in a chess game allows for 20 possible moves; the first player in Go has 361 possibilities. In total, Go features more allowable board positions than there are atoms in the known universe. That’s a tall order for brute-force computing, even allowing for the increases in hardware since 1997.
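The arithmetic is brutal. With roughly b legal moves per turn, a full search to depth d has to examine on the order of b^d positions, so Go’s wider board pulls away from chess almost immediately. A few lines of Python make the point (the branching factors are the first-move counts quoted above, held constant for simplicity):

```python
# Positions to examine at depth d with constant branching factor b.
for d in (2, 4, 8):
    chess = 20 ** d   # ~20 options for the opening move in chess
    go = 361 ** d     # 361 options for the first move in Go
    print(f"depth {d}: chess ~{chess:.1e}, Go ~{go:.1e}")

# depth 2: chess ~4.0e+02, Go ~1.3e+05
# depth 4: chess ~1.6e+05, Go ~1.7e+10
# depth 8: chess ~2.6e+10, Go ~2.9e+20
```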

AlphaGo
DeepMind/YouTube

As approaches like deep reinforcement learning have advanced, modern game-playing A.I. systems have, like A.I. as a whole, largely switched from following prelearned rules to learning on their own. This, too, has opened up a new advantage to the use of games as a proving ground for A.I. systems: free data.

As symbolic A.I. gave way to today’s data-hungry machine learning tools, games offered researchers a more plentiful source of the data they needed to perform their demonstrations. Demis Hassabis, the CEO and co-founder of DeepMind, made this point in a November 2020 interview with Azeem Azhar of the newsletter Exponential View. “We were a small startup, we didn’t have access to a lot of data from applications … and so we had to synthesize our own data,” Hassabis said. “If you use games, whether that’s board games like Go or simulations, like computer games, video games, you can run them for as long as you like and generate as much synthetic data as you want.”

Case in point: AlphaGo played itself more than 10 million times to achieve its Go-playing prowess. In non-gameplaying scenarios, this mass of data would have to be gathered from elsewhere. In the case of a gameplaying A.I., it can be generated by the system itself.
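A minimal sketch of that self-play idea, in Python: the two-player game and agent interfaces below are hypothetical stand-ins, and the real AlphaGo pipeline (policy and value networks guided by Monte Carlo tree search) is vastly more elaborate.

```python
def self_play_dataset(game, agent, num_games=10_000):
    """Generate training data by having one agent play both sides.
    Assumes a hypothetical game with reset()/step(move)/winner and
    an agent with an act(state) method."""
    dataset = []
    for _ in range(num_games):
        state = game.reset()
        history = []
        done = False
        while not done:
            move = agent.act(state)  # same agent plays both sides
            history.append((state, move))
            state, done = game.step(move)
        # Label every position in the game with the final outcome,
        # yielding (state, move, result) examples to train on.
        dataset.extend((s, m, game.winner) for s, m in history)
    return dataset
```

Every pass through the loop mints fresh training examples, which is precisely the synthetic-data advantage Hassabis describes.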

Real-world usage

There is, it should be stated, an element of P. T. Barnum “roll up, roll up, see the incredible intelligent computer” to public gameplaying A.I. demonstrations. It turns machine learning research into something approaching an Olympic sport. Far more people watched IBM’s Watson win at Jeopardy! than have ever cited the research papers describing backpropagation, perhaps the most famous algorithm in modern machine learning. IBM’s stock surged in 1997, following the Kasparov chess match, and again in 2011 after Watson’s Jeopardy! win.

But gameplaying A.I. isn’t just a cynical grab for attention. “The ultimate goal, of course, is not merely to solve games per se,” Adrien Ecoffet, a research scientist at OpenAI, told Digital Trends. “The problem framing itself is very general, so that algorithms that can solve games well can also be useful in practical applications. In our work, we show that the same algorithm we used to solve Atari games can also be used to solve a challenging robotics problem. In addition to robotics, Go-Explore has already seen some experimental research in language learning, where an agent learns the meaning of words by exploring a text-based game, and for discovering potential failures in the behavior of a self-driving car in order to avoid those failures in the future.”

Jeff Clune, a research team leader at OpenAI, told Digital Trends that DeepMind has successfully applied reinforcement learning and machine learning to practical, real-world problems such as controlling stratospheric balloons and cooling data centers.

Meanwhile, Huizinga pointed out that reinforcement learning tools are widespread in recommendation systems that determine what video or advertisement to show to users online. Similarly, search algorithms used to allow A.I. agents to find their way in video games also form the “backbone algorithm” for automatic route planning in navigation systems.
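The classic example of such a search algorithm is A*, which games have used for decades to steer characters around obstacles. Here is a compact grid version in Python, assuming four-way movement and unit step costs; swap road segments in for grid cells and you have the skeleton of a route planner:

```python
import heapq

def a_star(grid, start, goal):
    """Compact A* pathfinding on a 2D grid of 0 (free) / 1 (wall).
    Returns a list of cells from start to goal, or None."""
    def h(p):  # Manhattan-distance estimate of remaining cost
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                # Priority = cost so far + heuristic estimate to goal.
                heapq.heappush(frontier,
                               (cost + 1 + h(nxt), cost + 1,
                                nxt, path + [nxt]))
    return None  # no route exists
```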

“While there are, to the best of our knowledge, no commercially applied versions of Go-Explore yet, it may not be long before we start seeing practical applications,” Huizinga said. And, with it, most likely practical applications for a whole lot of other gameplaying A.I. systems.

A paper describing the Uber AI Labs and OpenAI reinforcement learning project was recently published in the journal Nature.

Luke Dormehl
Former Digital Trends Contributor