Chess. Jeopardy. Go. Why do we use games as a benchmark for A.I.?

Growing up, many of us were told that playing computer or video games was a waste of time. However, when it comes to training artificial intelligence, not only is it not a waste of time, but it could be key to developing the intelligent agents of tomorrow.

Several years ago, researchers at Google DeepMind impressively demonstrated that cutting-edge A.I. could master several classic Atari games without being explicitly taught how to play them. Since then, multiple researchers have tried their hand at building reinforcement learning systems that use trial and error to learn how to master games.


Now, researchers from Uber AI Labs and OpenAI have found a way to refine these tools even further, enabling them to perform at a high level in complex games that gameplaying A.I. agents have previously struggled with.


“The new algorithm that we developed, Go-Explore, outperforms previous machine learning algorithms on many Atari games, including the notoriously hard exploration games Montezuma’s Revenge and Pitfall,” Joost Huizinga, a research scientist at OpenAI, told Digital Trends.

Not only did it “outperform” previous systems, but Go-Explore is the first algorithm to beat all levels of the fiendishly difficult Montezuma’s Revenge and rack up near-perfect scores on Pitfall.


It does this by remembering promising approaches it has already tried and returning to those high-scoring moments, rather than always starting over from the beginning of the game. Since Atari games didn’t usually allow this (people who grew up with games that offer regular save points don’t realize how lucky they are!), the researchers used an emulator that lets them save states and reload them at any time.
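That loop — remember promising states, return to them via a saved state, then explore from there — can be sketched in a few lines. What follows is a simplified illustration, not the published implementation: the toy one-dimensional environment, the position-based "cell" representation, and the uniform cell selection are all stand-ins for the richer versions described in the paper.

```python
import random

class ToyEnv:
    """Deterministic toy 1-D world standing in for an Atari emulator.
    The agent starts at position 0; its score is the position reached."""
    def __init__(self):
        self.pos = 0

    def get_state(self):          # the emulator "save state"
        return self.pos

    def set_state(self, state):   # the emulator "load state"
        self.pos = state

    def step(self, action):       # action in {-1, +1}
        self.pos = max(0, self.pos + action)
        return self.pos

def go_explore(iterations=2000, seed=0):
    rng = random.Random(seed)
    env = ToyEnv()
    # Archive: cell -> (saved emulator state, best score reaching that cell)
    archive = {0: (env.get_state(), 0)}
    for _ in range(iterations):
        # 1. Select a cell from the archive (uniformly here; the real
        #    algorithm weights selection toward promising cells).
        cell = rng.choice(list(archive.keys()))
        state, score = archive[cell]
        # 2. First return: reload the saved state instead of replaying
        #    the whole game from the start.
        env.set_state(state)
        # 3. Then explore: take a short burst of random actions,
        #    archiving any new or improved cells found along the way.
        for _ in range(5):
            score = env.step(rng.choice([-1, 1]))
            new_cell = env.pos  # cell = coarse description of the state
            if new_cell not in archive or score > archive[new_cell][1]:
                archive[new_cell] = (env.get_state(), score)
    return max(s for _, s in archive.values())

print(go_explore())
```

Because the frontier of the archive keeps advancing, the best score grows far beyond what any single burst of random play could reach — the same reason returning-before-exploring pays off in hard-exploration Atari games.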

Playing through a collection of 55 Atari games — representing an increasingly standard benchmark for reinforcement learning algorithms — Go-Explore was able to beat other state-of-the-art systems at these titles 85.5% of the time.

It’s an impressive demonstration of artificial intelligence in action. And, despite just being a game, it could have exciting real-world applications.

Checkmates and robot overlords

Right from the start — before the term “artificial intelligence” had even been coined as the discipline’s official name — researchers in the field were interested in games. In 1949, Claude Shannon, one of A.I.’s founding figures, laid out his case for why making a computer play chess would be a worthy endeavor.

Games like chess, Shannon wrote, present a sharply defined problem, with a fixed set of allowable operations and an ultimate goal. They were not so difficult as to be infeasible, yet still required intelligence to excel at, while also possessing a discrete (meaning non-continuous) structure in keeping with the step-by-step manner in which computers solve problems.

While the technology driving these systems has changed immeasurably in the intervening 70-plus years, many of those ideas are still what drives the usage of games for developing and testing artificial intelligence. Games provide an abstracted, simplified version of the real world in which the complexity of problems is distilled into actions, states, rewards, and clear-cut winners and losers.
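Shannon’s criteria can be made concrete in a handful of lines. Here is a toy sketch using the stick-taking game of Nim — chosen purely for illustration — showing how a game reduces to discrete states, allowable operations, and a clear-cut ultimate goal:

```python
class Nim:
    """A tiny game with the structure Shannon described: discrete states,
    a fixed set of allowable operations, and a clear-cut ultimate goal."""
    def __init__(self, sticks=10):
        self.sticks = sticks          # state: sticks remaining
        self.player = 0               # whose turn it is

    def legal_actions(self):          # the allowable operations
        return [n for n in (1, 2, 3) if n <= self.sticks]

    def step(self, take):
        assert take in self.legal_actions()
        self.sticks -= take
        if self.sticks == 0:          # ultimate goal: take the last stick
            return f"player {self.player} wins"
        self.player = 1 - self.player
        return None                   # game continues

game = Nim(4)
game.step(3)          # player 0 takes 3, leaving one stick
print(game.step(1))   # player 1 takes the last stick -> "player 1 wins"
```

Everything an algorithm needs — the state, the legal moves, and the winner — is spelled out explicitly, which is exactly what makes games such tractable testbeds compared with the messy real world.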

Although Shannon, Alan Turing, and numerous other early A.I. luminaries worked on the challenge of computer chess, and there were some notable successes along the way — such as Massachusetts Institute of Technology programmer Richard Greenblatt’s MacHack in the 1960s — it wasn’t really until May 1997 that computer chess truly grabbed the world’s attention.


That was the month IBM’s Deep Blue system bested world chess champion Garry Kasparov in a six-game match. Deep Blue was an astonishing example of brute-force computing in action. It used massively parallel hardware, built around 30 top-end microprocessors, to examine the ramifications of 200 million board positions every second. IBM’s chess-playing supercomputer was also equipped with a memory bank of hundreds of thousands of previous master-level games on which it could draw. (Ironically, one of the match’s most celebrated moves was actually a failure on the part of the system, in which it defaulted to picking a move at random, which Kasparov mistook for creativity.)

Nearly 14 years later, in February 2011, IBM had its next headline-conquering A.I. gaming triumph when its Watson A.I. faced off against former champions Ken Jennings and Brad Rutter in a multipart televised special of the game show Jeopardy! “I had been in A.I. classes and knew that the kind of technology that could beat a human at Jeopardy! was still decades away,” Jennings told me for my book Thinking Machines. “Or at least I thought that it was.”


In the event, Watson obliterated the pair, en route to winning the $1 million prize money. “It really stung to lose that badly,” said Jennings, who holds the record for longest winning streak in the game’s history. As the game came to a close, he scribbled a phrase on his answer board and held it up for the cameras: “I for one welcome our new robot overlords.”

Gameplaying and machine learning

More recent gameplaying A.I. demonstrations have largely been the work of DeepMind, which has focused on games as part of its stated goal to “solve intelligence.” Perhaps the most notable achievement was AlphaGo, a Go-playing bot that defeated world champion Lee Sedol — four games to one — in a 2016 series watched by 60 million people. There was also the aforementioned Atari-playing A.I., and AlphaStar, which sought to master the real-time strategy game StarCraft II.

Compared with the brute-force computing of, say, Deep Blue, these are instead demonstrations of machine learning techniques. That is partly out of necessity. Go, for example, features far more possible board positions than chess, making brute force impractical. The opening move in a chess game allows for 20 possible moves; the first player in Go has 361 possibilities. In total, Go features more legal board positions than there are atoms in the known universe. That’s a tall order for brute-force computing, even allowing for the advances in hardware since 1997.
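The arithmetic behind that scale is easy to check. The rough sketch below treats the opening branching factors as constant for every move, which overstates the true count (legal moves shrink as the board fills) but conveys the order of magnitude; the 10^80 figure is the commonly cited estimate for atoms in the observable universe.

```python
# Rough upper bounds on game-tree size after n plies (half-moves), using
# the opening branching factors from the text: 20 for chess, 361 for Go.
# This ignores the fact that the number of legal moves changes over the
# course of a game, so it is an order-of-magnitude illustration only.
CHESS_BRANCHING, GO_BRANCHING = 20, 361
ATOMS_IN_UNIVERSE = 10 ** 80   # commonly cited order-of-magnitude estimate

for plies in (2, 10, 40):
    chess = CHESS_BRANCHING ** plies
    go = GO_BRANCHING ** plies
    print(f"{plies:>2} plies: chess ~10^{len(str(chess)) - 1}, "
          f"go ~10^{len(str(go)) - 1}")

# Even a modest 40-ply lookahead in Go dwarfs the atom count:
print(GO_BRANCHING ** 40 > ATOMS_IN_UNIVERSE)   # True
```

Exhaustively searching a tree that outgrows the universe’s atom count within a few dozen moves is hopeless, which is why AlphaGo leaned on learned evaluation rather than raw enumeration.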


As approaches like deep reinforcement learning have advanced, modern gameplaying A.I. systems have, like A.I. as a whole, largely switched from following hand-coded rules to learning on their own. This, too, has opened up a new advantage to using games as a proving ground for A.I. systems: free data.

As symbolic A.I. gave way to today’s data-hungry machine learning tools, games offered researchers a more plentiful source of the data they needed to perform their demonstrations. Demis Hassabis, the CEO and co-founder of DeepMind, made this point in a November 2020 interview with Azeem Azhar of the newsletter Exponential View. “We were a small startup, we didn’t have access to a lot of data from applications … and so we had to synthesize our own data,” Hassabis said. “If you use games, whether that’s board games like Go or simulations, like computer games, video games, you can run them for as long as you like and generate as much synthetic data as you want.”

Case in point: AlphaGo played itself more than 10 million times to achieve its Go-playing prowess. In non-gameplaying scenarios, this mass of data would have to be gathered from elsewhere. In the case of a gameplaying A.I., it can be generated by the system itself.
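A minimal sketch of what “generating your own data” looks like, using a random policy on the simple stick-taking game of Nim as a stand-in for a real learned agent. The point is that every position visited during self-play comes labeled with the eventual outcome, at no data-collection cost:

```python
import random

def random_policy(sticks):
    """Stand-in for a learned policy: pick a legal take (1-3) at random."""
    return random.choice([n for n in (1, 2, 3) if n <= sticks])

def self_play_game(sticks=10):
    """Play one game of Nim against itself, recording (state, action, player)
    so each position can later be labeled with the eventual winner."""
    history, player = [], 0
    while sticks > 0:
        action = random_policy(sticks)
        history.append((sticks, action, player))
        sticks -= action
        winner = player               # whoever moved last took the final stick
        player = 1 - player
    # Label every recorded position with the final outcome: +1 if the player
    # to move went on to win, -1 otherwise. This is the "free" training data.
    return [(s, a, 1 if p == winner else -1) for s, a, p in history]

random.seed(0)
dataset = [ex for _ in range(1000) for ex in self_play_game()]
print(len(dataset))   # thousands of labeled examples from pure self-play
```

Scale the same idea up to tens of millions of games with a network improving between batches, and you have the self-play engine that powered AlphaGo.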

Real-world usage

There is, it should be stated, an element of P. T. Barnum “roll up, roll up, see the incredible intelligent computer” to public gameplaying A.I. demonstrations. It turns machine learning research into something approaching an Olympic sport. Far more people watched IBM’s Watson win at Jeopardy! than have ever cited the research papers describing backpropagation, the most famous algorithm in modern machine learning. IBM’s stock surged in 1997, following the Kasparov chess match, and again in 2011 after Watson’s Jeopardy! win.

But gameplaying A.I. isn’t just a cynical grab for attention. “The ultimate goal, of course, is not merely to solve games per se,” Adrien Ecoffet, a research scientist at OpenAI, told Digital Trends. “The problem framing itself is very general, so that algorithms that can solve games well can also be useful in practical applications. In our work, we show that the same algorithm we used to solve Atari games can also be used to solve a challenging robotics problem. In addition to robotics, Go-Explore has already seen some experimental research in language learning, where an agent learns the meaning of words by exploring a text-based game, and for discovering potential failures in the behavior of a self-driving car in order to avoid those failures in the future.”

Jeff Clune, a research team leader at OpenAI, told Digital Trends that DeepMind has successfully applied reinforcement learning and machine learning to practical, real-world problems such as controlling stratospheric balloons and carrying out data center cooling.

Meanwhile, Huizinga pointed out that reinforcement learning tools are widespread in recommendation systems that determine what video or advertisement to show to users online. Similarly, search algorithms used to allow A.I. agents to find their way in video games also form the “backbone algorithm” for automatic route planning in navigation systems.
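The “backbone algorithm” Huizinga is most likely alluding to is A*-style heuristic search, the classic pathfinding method shared by video game agents and route planners. A minimal grid-world version follows; the grid, cost model, and Manhattan-distance heuristic are illustrative choices, not anything specified in the interview.

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 4-connected grid. grid[r][c] == 1 marks a wall.
    Returns the shortest path from start to goal as a list of cells."""
    rows, cols = len(grid), len(grid[0])
    # Admissible heuristic: Manhattan distance to the goal.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    # Frontier entries: (estimated total cost, cost so far, cell, path).
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)),
                                          cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (0, 2))
print(len(path) - 1)   # moves in the shortest route around the wall -> 6
```

Swap grid cells for road intersections and unit step costs for travel times, and the same search drives turn-by-turn navigation.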

“While there are, to the best of our knowledge, no commercially applied versions of Go-Explore yet, it may not be long before we start seeing practical applications,” Huizinga said. And, with it, most likely a whole lot of other gameplaying A.I. systems.

A paper describing the Uber AI Labs and OpenAI reinforcement learning project was recently published in the journal Nature.
