
Chess. Jeopardy. Go. Why do we use games as a benchmark for A.I.?

Growing up, many of us were told that playing computer or video games was a waste of time. However, when it comes to training artificial intelligence, not only is it not a waste of time, but it could be key to developing the intelligent agents of tomorrow.

Several years ago, researchers from Google DeepMind impressively demonstrated that cutting-edge A.I. could be used to master several classic Atari games, without being explicitly taught how to play them. Since then, multiple researchers have tried their hand at reinforcement learning systems that use trial and error to learn how to master games.

"First return, then explore”: Go-Explore solves all previously unsolved Atari games

Now, researchers from Uber AI Labs and OpenAI have found a way to refine these tools further, enabling them to perform at a high level in more complex games that gameplaying A.I. agents have previously struggled with.

“The new algorithm that we developed, Go-Explore, outperforms previous machine learning algorithms on many Atari games, including the notoriously hard exploration games Montezuma’s Revenge and Pitfall,” Joost Huizinga, a research scientist at OpenAI, told Digital Trends.

Not only did it “outperform” previous systems, but Go-Explore is the first algorithm to beat all levels of the fiendishly difficult Montezuma’s Revenge and rack up near-perfect scores on Pitfall.

[Image: Montezuma’s Revenge pyramid level. Uber AI Labs/OpenAI]

It does this by remembering the previous successful approaches it has tried and returning to these high-scoring moments, rather than always starting from the beginning of the game. Since Atari games didn’t usually allow this (people who grew up with games that allow regular save points don’t realize how lucky they are!), the researchers used an emulator that let them save states and reload them at any time.
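To make the idea concrete, here is a minimal sketch of the “first return, then explore” loop in Python. The emulator interface (save_state, load_state, sample_action) and the cell representation are hypothetical stand-ins for illustration, not the team’s actual implementation:

```python
import random

def cell_of(observation):
    # Compress the observation into a coarse "cell" key so that similar
    # game states share one archive entry. (Assumption: any hashable
    # summary works for a sketch; the paper downsamples the game screen.)
    return tuple(observation[::8])

def go_explore(env, iterations=1000, explore_steps=100):
    obs = env.reset()
    # Archive: cell -> (saved emulator state, best score reached there)
    archive = {cell_of(obs): (env.save_state(), 0.0)}

    for _ in range(iterations):
        # FIRST RETURN: restore a previously visited state exactly via
        # the emulator's save state, rather than replaying from scratch.
        cell = random.choice(list(archive))
        state, score = archive[cell]
        env.load_state(state)

        # THEN EXPLORE: take exploratory actions from that point on.
        for _ in range(explore_steps):
            obs, reward, done = env.step(env.sample_action())
            score += reward
            c = cell_of(obs)
            # Remember the highest-scoring way of reaching each cell.
            if c not in archive or score > archive[c][1]:
                archive[c] = (env.save_state(), score)
            if done:
                break
    return archive
```

The key design choice is that returning is cheap and reliable (a state reload), so all of the agent’s randomness is spent on genuinely new territory rather than on re-reaching old ground.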

Playing through a collection of 55 Atari games — representing an increasingly standard benchmark for reinforcement learning algorithms — Go-Explore was able to beat other state-of-the-art systems at these titles 85.5% of the time.

It’s an impressive demonstration of artificial intelligence in action. And, despite just being a game, it could have exciting real-world applications.

Checkmates and robot overlords

Right from the start — before “artificial intelligence” had even been coined as the name of the discipline — researchers in the field were interested in games. In 1949, Claude Shannon, one of A.I.’s founding figures, laid out why making a computer play chess would be a worthy endeavor.

Games like chess, Shannon wrote, present a sharply defined problem, featuring both allowable operations and an ultimate goal. They weren’t too difficult a challenge to feasibly solve, yet still required intelligence to excel at, while also possessing a discrete (meaning non-continuous) structure in keeping with the step-by-step manner in which computers solve problems.

While the technology driving these systems has changed immeasurably in the intervening 70-plus years, many of those ideas are still what drives the usage of games for developing and testing artificial intelligence. Games provide an abstracted, simplified version of the real world in which the complexity of problems is distilled into actions, states, rewards, and clear-cut winners and losers.

Although Shannon, Alan Turing, and numerous other early A.I. luminaries worked on the challenge of computer chess, and there were some notable successes along the way — such as Massachusetts Institute of Technology programmer Richard Greenblatt’s MacHack in the 1960s — it wasn’t really until May 1997 that computer chess truly grabbed the world’s attention.

[Video: Deep Blue vs Kasparov: How a computer beat the best chess player in the world - BBC News]

That was the month and year that IBM’s “Deep Blue” system bested chess world champion Garry Kasparov in a six-game match. Deep Blue was an astonishing example of brute-force computing in action. It used enormous parallel hardware, consisting of 30 top-end microprocessors, to examine the ramifications of 200 million board positions every second. IBM’s chess-playing supercomputer was equipped with a memory bank of hundreds of thousands of previous master-level games upon which it could draw. (Ironically, a famous move Deep Blue played in the first game was actually a failure on the part of the system, in which it defaulted to picking a move at random, which Kasparov mistook for creativity.)

Nearly 15 years later, in February 2011, IBM had its next headline-conquering A.I. gaming triumph when its Watson A.I. faced off against former champions Ken Jennings and Brad Rutter in a multipart televised special of the game show Jeopardy! “I had been in A.I. classes and knew that the kind of technology that could beat a human at Jeopardy was still decades away,” Jennings told me for my book Thinking Machines. “Or at least I thought that it was.”


In the event, Watson obliterated the pair en route to winning the $1 million prize money. “It really stung to lose that badly,” said Jennings, who holds the record for the longest winning streak in the game’s history. As the game came to a close, he scribbled a phrase on his answer board and held it up for the cameras: “I for one welcome our new computer overlords.”

Gameplaying and machine learning

More recent gameplaying A.I. demonstrations have largely been the work of DeepMind, which has focused on games as part of its stated goal to “solve intelligence.” Perhaps the most notable achievement was AlphaGo, a Go-playing bot that defeated world champion Lee Sedol — four games to one — in a 2016 series watched by 60 million people. There was also the aforementioned Atari-playing A.I., and AlphaStar, which sought to master the real-time strategy game StarCraft II.

Compared to the brute-force computing efforts of, say, Deep Blue, these are instead demonstrations of machine learning techniques. This is partly by necessity. Go, for example, features far more possible board positions than chess, making it tough to employ brute force. The opening move in a chess game allows for 20 possible moves. The first player in Go has 361 possibilities. In total, Go features more allowable board positions than there are atoms in the known universe. That’s a tall order for brute-force computing, even allowing for the increases in hardware since 1997.
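The arithmetic behind that claim is easy to check. The figure below uses 3^361 as an upper bound, since each of Go’s 361 intersections can be empty, black, or white; the count of strictly legal positions is smaller, but it still dwarfs the commonly cited estimate of around 10^80 atoms in the observable universe:

```python
chess_first_moves = 20        # 16 pawn moves plus 4 knight moves
go_first_moves = 19 * 19      # any empty intersection: 361

go_positions_upper_bound = 3 ** 361  # roughly 1.7e172
atoms_in_known_universe = 10 ** 80   # order-of-magnitude estimate

print(go_first_moves)                                      # 361
print(go_positions_upper_bound > atoms_in_known_universe)  # True
```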

[Image: AlphaGo. DeepMind/YouTube]

As approaches like deep reinforcement learning have advanced, modern game-playing A.I. systems have, like A.I. as a whole, largely switched from following prelearned rules to learning on their own. This, too, has opened up a new advantage to the use of games as a proving ground for A.I. systems: free data.

As symbolic A.I. gave way to today’s data-hungry machine learning tools, games offered researchers a more plentiful source of the data they needed to perform their demonstrations. Demis Hassabis, the CEO and co-founder of DeepMind, made this point in a November 2020 interview with Azeem Azhar of the newsletter Exponential View. “We were a small startup, we didn’t have access to a lot of data from applications … and so we had to synthesize our own data,” Hassabis said. “If you use games, whether that’s board games like Go or simulations, like computer games, video games, you can run them for as long as you like and generate as much synthetic data as you want.”

Case in point: AlphaGo played itself more than 10 million times to achieve its Go-playing prowess. In non-gameplaying scenarios, this mass of data would have to be gathered from elsewhere. In the case of a gameplaying A.I., it can be generated by the system itself.
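A rough sketch shows why self-play is such a cheap source of data: every game the agent plays against itself becomes a batch of labeled training examples. Game and policy here are hypothetical placeholders, not DeepMind’s code:

```python
def self_play_games(policy, Game, num_games):
    dataset = []
    for _ in range(num_games):
        game, trajectory = Game(), []
        while not game.over():
            move = policy(game.state())
            trajectory.append((game.state(), move))
            game.play(move)
        # Label every position in the game with the final outcome, then
        # reuse the whole trajectory as training data.
        outcome = game.winner()
        dataset.extend((state, move, outcome) for state, move in trajectory)
    return dataset
```

Run it with a faster policy and more games, and the dataset grows without any human labeling, which is exactly the point Hassabis is making.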

Real-world usage

There is, it should be stated, an element of P.T. Barnum “roll up, roll up, see the incredible intelligent computer” to public gameplaying A.I. demonstrations. It turns machine learning research into something approaching an Olympic sport. Far more people watched IBM’s Watson win at Jeopardy! than have ever cited the research papers describing backpropagation, perhaps the most famous algorithm in modern machine learning. IBM’s stock surged in 1997, following the Kasparov chess match, and again in 2011 after Watson’s Jeopardy! win.

But gameplaying A.I. isn’t just a cynical grab for attention. “The ultimate goal, of course, is not merely to solve games per se,” Adrien Ecoffet, a research scientist at OpenAI, told Digital Trends. “The problem framing itself is very general, so that algorithms that can solve games well can also be useful in practical applications. In our work, we show that the same algorithm we used to solve Atari games can also be used to solve a challenging robotics problem. In addition to robotics, Go-Explore has already seen some experimental research in language learning, where an agent learns the meaning of words by exploring a text-based game, and for discovering potential failures in the behavior of a self-driving car in order to avoid those failures in the future.”

Jeff Clune, a research team leader at OpenAI, told Digital Trends that DeepMind has successfully applied reinforcement learning and machine learning to practical, real-world problems such as controlling stratospheric balloons and cooling data centers.

Meanwhile, Huizinga pointed out that reinforcement learning tools are widespread in recommendation systems that determine what video or advertisement to show to users online. Similarly, the search algorithms that allow A.I. agents to find their way around video games form the “backbone algorithm” for automatic route planning in navigation systems.
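Huizinga doesn’t name the algorithm, but A* search is the classic technique shared by video game pathfinding and satnav route planning, so a minimal grid version may be a reasonable illustration of what that “backbone” looks like:

```python
import heapq

def a_star(walkable, start, goal):
    # walkable: set of (x, y) cells that can be entered.
    # Heuristic: Manhattan distance, admissible on a 4-connected grid.
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (priority, cost, cell, path)
    best_cost = {start: 0}
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            new_cost = cost + 1
            if nxt in walkable and new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(
                    frontier, (new_cost + h(nxt), new_cost, nxt, path + [nxt])
                )
    return None  # no route exists
```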

“While there are, to the best of our knowledge, no commercially applied versions of Go-Explore yet, it may not be long before we start seeing practical applications,” Huizinga said. And, with it, most likely a whole lot of other gameplaying A.I. systems.

A paper describing the Uber AI Labs and OpenAI reinforcement learning project was recently published in the journal Nature.
