
Chess. Jeopardy. Go. Why do we use games as a benchmark for A.I.?

Growing up, many of us were told that playing computer or video games was a waste of time. However, when it comes to training artificial intelligence, not only is it not a waste of time, but it could be key to developing the intelligent agents of tomorrow.

Several years ago, researchers from Google DeepMind memorably demonstrated that cutting-edge A.I. could learn to master classic Atari games without being explicitly taught how to play them. Since then, multiple researchers have tried their hand at building reinforcement learning systems that use trial and error to master games.

"First return, then explore”: Go-Explore solves all previously unsolved Atari games

Now, researchers from Uber AI Labs and OpenAI have come up with a way to refine these tools even further, enabling them to perform at a high level in more complex games that gameplaying A.I. agents have previously struggled with.

“The new algorithm that we developed, Go-Explore, outperforms previous machine learning algorithms on many Atari games, including the notoriously hard exploration games Montezuma’s Revenge and Pitfall,” Joost Huizinga, a research scientist at OpenAI, told Digital Trends.

Not only did it “outperform” previous systems, but Go-Explore is the first algorithm to beat all levels of the fiendishly difficult Montezuma’s Revenge and rack up near-perfect scores on Pitfall.

[Image: Montezuma’s Revenge pyramid. Credit: Uber AI Labs/OpenAI]

It does this by remembering the previously successful approaches it has tried and returning to those high-scoring moments, rather than always starting from the beginning of the game. Since Atari games didn’t usually allow this (people who grew up with games that allow regular save points don’t realize how lucky they are!), the researchers used an emulator that lets them save states and reload them at any time.
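To get a feel for how this works in practice, here is a minimal sketch of the “first return, then explore” loop in Python. It illustrates the idea rather than reproducing the team’s actual code: the emulator interface (save_state, restore_state, and so on) and the cell representation are hypothetical stand-ins.

```python
import random

def downscale(observation):
    """Map a raw observation to a coarse 'cell' so similar states collide."""
    return tuple(observation[::8])  # toy downsampling; the paper uses smarter schemes

def go_explore(env, iterations=1000, explore_steps=50):
    """Minimal 'first return, then explore' loop. `env` is a hypothetical
    emulator wrapper with save/restore support."""
    start_obs = env.reset()
    # Archive maps each cell to (saved emulator state, best score reaching it).
    archive = {downscale(start_obs): (env.save_state(), 0.0)}

    for _ in range(iterations):
        # 1. Select a previously visited cell (uniformly here; the real
        #    algorithm weights cells by visit counts and other heuristics).
        cell = random.choice(list(archive.keys()))
        state, score = archive[cell]

        # 2. First return: reload the emulator to that exact moment instead
        #    of replaying the game from the beginning.
        env.restore_state(state)
        obs = env.current_observation()

        # 3. Then explore: take random actions from there, saving any new
        #    or higher-scoring cells discovered along the way.
        for _ in range(explore_steps):
            obs, reward, done = env.step(env.sample_action())
            score += reward
            new_cell = downscale(obs)
            if new_cell not in archive or score > archive[new_cell][1]:
                archive[new_cell] = (env.save_state(), score)
            if done:
                break
    return archive
```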

Playing through a collection of 55 Atari games — representing an increasingly standard benchmark for reinforcement learning algorithms — Go-Explore was able to beat other state-of-the-art systems at these titles 85.5% of the time.

It’s an impressive demonstration of artificial intelligence in action. And, despite being built around games, it could have exciting real-world applications.

Checkmates and robot overlords

Right from the start — before “artificial intelligence” had even been coined as the official name of the discipline — researchers in the field were interested in games. In 1949, Claude Shannon, one of A.I.’s founding figures, laid out his case for why making a computer play chess would be a worthy endeavor.

Games like chess, Shannon wrote, present a sharply defined problem, featuring both allowable operations and an ultimate goal. They were not so difficult a challenge as to be infeasible to solve, yet still required intelligence to excel at, while also possessing a discrete (meaning non-continuous) structure in keeping with the step-by-step manner in which computers solve problems.

While the technology driving these systems has changed immeasurably in the intervening 70-plus years, many of those ideas are still what drives the usage of games for developing and testing artificial intelligence. Games provide an abstracted, simplified version of the real world in which the complexity of problems is distilled into actions, states, rewards, and clear-cut winners and losers.

Although Shannon, Alan Turing, and numerous other early A.I. luminaries worked on the challenge of computer chess, and there were some notable successes along the way — such as Massachusetts Institute of Technology programmer Richard Greenblatt’s MacHack in the 1960s — it wasn’t really until May 1997 that computer chess truly grabbed the world’s attention.

[Video: Deep Blue vs Kasparov: How a computer beat the best chess player in the world – BBC News]

That was the month and year in which IBM’s “Deep Blue” system bested world chess champion Garry Kasparov in a six-game match. Deep Blue was an astonishing example of brute-force computing in action. It used massively parallel hardware, built around 30 top-end microprocessors, to examine the ramifications of 200 million board positions every second. IBM’s chess-playing supercomputer was also equipped with a memory bank of hundreds of thousands of previous master-level games on which it could draw. (Ironically, the move that most rattled Kasparov in the first game was actually a failure on the part of the system, which had defaulted to picking a move at random; Kasparov mistook the glitch for creativity.)
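Under the hood, Deep Blue’s custom chips were running a massively scaled-up version of classic game-tree search. The sketch below shows the textbook form of that idea, minimax search with alpha-beta pruning. It is a generic illustration built on a hypothetical game interface, not IBM’s actual system, which also drew on opening books, endgame databases, and hand-tuned hardware evaluation.

```python
def alphabeta(game, state, depth, alpha, beta, maximizing):
    """Textbook minimax with alpha-beta pruning. `game` is a hypothetical
    interface exposing legal_moves, apply, is_terminal, and evaluate."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # static score of the position

    if maximizing:
        value = float("-inf")
        for move in game.legal_moves(state):
            child = game.apply(state, move)
            value = max(value, alphabeta(game, child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the opponent will never allow this branch
        return value
    else:
        value = float("inf")
        for move in game.legal_moves(state):
            child = game.apply(state, move)
            value = min(value, alphabeta(game, child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value
```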

Nearly 14 years later, in February 2011, IBM had its next headline-conquering A.I. gaming triumph when its Watson A.I. faced off against former champions Ken Jennings and Brad Rutter in a multipart televised special of the game show Jeopardy!. “I had been in A.I. classes and knew that the kind of technology that could beat a human at Jeopardy was still decades away,” Jennings told me for my book Thinking Machines. “Or at least I thought that it was.”

[Video: IBM/YouTube]

In the event, Watson obliterated the pair en route to winning the $1 million prize money. “It really stung to lose that badly,” said Jennings, who holds the record for the longest winning streak in the game’s history. As the game came to a close, he scribbled a phrase on his answer board and held it up for the cameras: “I for one welcome our new computer overlords.”

Gameplaying and machine learning

More recent gameplaying A.I. demonstrations have largely been the work of DeepMind, which has focused on games as part of its stated goal to “solve intelligence.” Perhaps the most notable achievement was AlphaGo, a Go-playing bot that defeated world champion Lee Sedol — four games to one — in a 2016 series watched by 60 million people. There was also the aforementioned Atari-playing A.I., and AlphaStar, which sought to master the real-time strategy game StarCraft II.

Compared to the brute-force computing efforts of, say, Deep Blue, these are instead demonstrations of machine learning techniques. This is partly by necessity. Go, for example, features far more possible board positions than chess, making it tough to employ brute force. The opening move in a chess game allows for 20 possible moves; the first player in Go has 361 possibilities. In total, Go features more allowable board positions than there are atoms in the known universe. That’s a tall order for brute-force computing, even allowing for the increases in hardware since 1997.
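A back-of-the-envelope calculation makes the gap concrete. Using commonly cited (and necessarily rough) estimates for branching factor and game length:

```python
# Rough game-tree size estimates using commonly cited figures.
chess_branching, chess_length = 35, 80   # avg. legal moves, typical game length
go_branching, go_length = 250, 150

chess_tree = chess_branching ** chess_length   # about 10^123
go_tree = go_branching ** go_length            # about 10^359
atoms = 10 ** 80                               # order-of-magnitude atom count

print(f"Chess game tree: ~10^{len(str(chess_tree)) - 1}")
print(f"Go game tree:    ~10^{len(str(go_tree)) - 1}")
print(f"Go tree vs. atoms in the universe: ~10^{len(str(go_tree // atoms)) - 1}x larger")
```

Even at Deep Blue’s 200 million positions per second, exhaustively searching a tree of that size is out of the question, which is why Go programs had to learn which branches were worth examining at all.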

[Image: AlphaGo. Credit: DeepMind/YouTube]

As approaches like deep reinforcement learning have advanced, modern game-playing A.I. systems have, like A.I. as a whole, largely switched from following prelearned rules to learning on their own. This, too, has opened up a new advantage to the use of games as a proving ground for A.I. systems: free data.
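Boiled down to its simplest form, that trial-and-error learning looks like the tabular Q-learning loop sketched below. This is a minimal illustration with a hypothetical environment interface, not DeepMind’s code; its Atari systems replaced the table with a deep neural network. Notice that every pass through the loop also manufactures fresh experience, which is where the “free data” comes in.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Minimal tabular Q-learning. `env` is a hypothetical interface with
    reset(), step(action) -> (next_state, reward, done), and an `actions` list."""
    q = defaultdict(float)  # (state, action) -> estimated long-term value

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])

            next_state, reward, done = env.step(action)

            # Nudge the estimate toward the observed reward plus the
            # discounted value of the best follow-up action.
            best_next = max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```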

As symbolic A.I. gave way to today’s data-hungry machine learning tools, games offered researchers a more plentiful source of the data they needed to perform their demonstrations. Demis Hassabis, the CEO and co-founder of DeepMind, made this point in a November 2020 interview with Azeem Azhar of the newsletter Exponential View. “We were a small startup, we didn’t have access to a lot of data from applications … and so we had to synthesize our own data,” Hassabis said. “If you use games, whether that’s board games like Go or simulations, like computer games, video games, you can run them for as long as you like and generate as much synthetic data as you want.”

Case in point: AlphaGo played itself more than 10 million times to achieve its Go-playing prowess. In non-gameplaying scenarios, this mass of data would have to be gathered from elsewhere. In the case of a gameplaying A.I., it can be generated by the system itself.
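Schematically, that self-generated data comes from a loop like the one below, where the agent and game interfaces are hypothetical stand-ins and AlphaGo’s real pipeline is far more elaborate. Each finished game is labeled with its outcome and becomes a batch of training examples.

```python
def generate_self_play_data(agent, game, num_games):
    """Play the agent against itself and label every position with the
    final result, turning one playthrough into many training examples."""
    dataset = []
    for _ in range(num_games):
        state = game.initial_state()
        trajectory = []
        while not game.is_over(state):
            move = agent.choose_move(state)  # current policy picks a move
            trajectory.append((state, move))
            state = game.apply(state, move)

        outcome = game.winner(state)  # e.g. +1 / -1 / 0 from first player's view
        dataset.extend((s, m, outcome) for s, m in trajectory)
    return dataset
```

In a real pipeline, the agent is then retrained on that dataset and the loop repeats, so the quality of the synthetic data improves alongside the player generating it.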

Real-world usage

There is, it should be stated, an element of P. T. Barnum-style “roll up, roll up, see the incredible intelligent computer” showmanship to public gameplaying A.I. demonstrations. It turns machine learning research into something approaching an Olympic sport. Far more people watched IBM’s Watson win at Jeopardy! than have ever cited the research papers describing backpropagation, the most famous algorithm in modern machine learning. IBM’s stock surged in 1997, following the Kasparov chess match, and again in 2011 after Watson’s Jeopardy! win.

But gameplaying A.I. isn’t just a cynical grab for attention. “The ultimate goal, of course, is not merely to solve games per se,” Adrien Ecoffet, a research scientist at OpenAI, told Digital Trends. “The problem framing itself is very general, so that algorithms that can solve games well can also be useful in practical applications. In our work, we show that the same algorithm we used to solve Atari games can also be used to solve a challenging robotics problem. In addition to robotics, Go-Explore has already seen some experimental research in language learning, where an agent learns the meaning of words by exploring a text-based game, and for discovering potential failures in the behavior of a self-driving car in order to avoid those failures in the future.”

Jeff Clune, a research team leader at OpenAI, told Digital Trends that DeepMind has successfully applied reinforcement learning and machine learning to practical, real-world problems such as controlling stratospheric balloons and optimizing data center cooling.

Meanwhile, Huizinga pointed out that reinforcement learning tools are already widespread in the recommendation systems that decide which video or advertisement to show users online. Similarly, the search algorithms originally used to help A.I. agents find their way around video games also form the “backbone algorithm” for automatic route planning in navigation systems.
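The standard example of such a backbone is A* search, the pathfinding routine that has steered video game characters around maps for decades (Huizinga didn’t name a specific algorithm, so A* is our illustration). Here is a minimal grid version, a self-contained sketch rather than anything lifted from a production navigation system:

```python
import heapq

def a_star(start, goal, walkable):
    """Minimal A* on a 4-way grid. Cells are (x, y) tuples; `walkable`
    is a function deciding which cells can be entered."""
    def heuristic(cell):  # Manhattan distance: admissible on a 4-way grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(heuristic(start), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            new_cost = cost + 1
            if walkable(nxt) and new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier,
                               (new_cost + heuristic(nxt), new_cost, nxt, path + [nxt]))
    return None  # no route exists

# Example: route across an open 5x5 grid.
# a_star((0, 0), (4, 4), lambda c: 0 <= c[0] < 5 and 0 <= c[1] < 5)
```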

“While there are, to the best of our knowledge, no commercially applied versions of Go-Explore yet, it may not be long before we start seeing practical applications,” Huizinga said. And, with it, most likely a whole lot of other gameplaying A.I. systems.

A paper describing Uber AI Labs and OpenAI’s reinforcement learning project was recently published in the journal Nature.
