
As AI gets smarter, humans need to stop being sore losers

Earlier this month, Google’s DeepMind team made history when its AlphaGo software managed to defeat professional Go player Lee Sedol in a five-game match. The contest was billed as a battle between man and machine — and it saw the human player largely outclassed by his AI opponent.

Artificial intelligence will only grow more sophisticated in the coming years, becoming a bigger factor in everyday life as the technology matures. With artificial minds growing ever more powerful, humans may have to change the game to maintain superiority.

Back to the 90s

To get a true impression of how much progress has been made in the field of artificial intelligence in recent years, it’s useful to compare the AlphaGo AI facing Lee with IBM’s Deep Blue computer, which faced Chess grandmaster Garry Kasparov in the 1990s.

At the time, Kasparov was widely considered to be the best Chess player on the face of the planet. He had already seen off an AI opponent quite handily, dispatching IBM’s Deep Thought computer — named for the fictional system capable of deciphering the answer to life, the universe, and everything in The Hitchhiker’s Guide to the Galaxy — in a two-game series held in 1989.

Undeterred, IBM continued development of its Chess-playing computer. In 1996, a new iteration of the project known as Deep Blue was transported to Philadelphia to face Kasparov. The computer became the first to win a game against the reigning world champion under normal time controls, but was dominated by Kasparov after that early victory and lost the series 4-2, with two draws contributing half a point each.

Fifteen months later, a rematch was held in New York City. Deep Blue took the match 3.5-2.5, winning two games outright, losing one, and drawing the remaining three. Frustrated, Kasparov accused IBM of cheating and demanded a rematch. The company flatly refused, and the system was dismantled.

Advance to Go

Kasparov made attempts to explain away the loss to Deep Blue, pitching the idea that the match was a publicity stunt carried out by IBM. He theorized that human players had intervened to improve the computer’s performance, something that the company vehemently denied. Others in the Chess community would suggest that the machine was simply running a problem-solving program, and that it shouldn’t be considered true intelligence, or a real mastery of the game.

That said, the result of the high-profile series was enough proof for many that computers had eclipsed human ability in the game of Chess.

Deep Blue beat Kasparov using the “brute force” method of computation. This technique is an exhaustive search that systematically works its way through all possible solutions until it finds the appropriate option. Its greatest strength is that it will always find a solution if one exists, but it’s let down by the fact that complex queries can take a great deal of time to work through.
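To make the idea concrete, here is an illustrative sketch (not Deep Blue’s actual code, and a far simpler game than Chess): a brute-force search of the toy game of Nim, in which players alternately take one to three stones and whoever takes the last stone wins. The search tries every legal move at every turn, so it is guaranteed to find the right answer — but the work grows exponentially with the size of the game, which is exactly the weakness described above.

```python
# Illustrative only: exhaustively search every line of play in Nim
# (take 1-3 stones per turn; taking the last stone wins) to decide
# whether the player to move can force a win.
def can_win(stones: int) -> bool:
    # A position is winning if at least one move leaves the
    # opponent in a losing position. With zero stones left, the
    # player to move has already lost (no moves remain).
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

print(can_win(4))  # False: every move hands the opponent a won position
print(can_win(5))  # True: taking one stone leaves the opponent with 4
```

The search always terminates with the correct answer, but without tricks like memoization or pruning, its running time balloons as the pile grows — the same scaling problem that makes pure brute force impractical for Go.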

Given that brute force had proven so effective in the game of Chess, it became clear that any subsequent challenge would have to change the parameters in some manner. As such, competition veered away from Chess and toward Go.

While Chess and Go are both equally revered as classical strategy games, there is little doubt that the latter is the more complex. A Chess board is made up of 64 squares, compared to the 361 intersections on the playing field of Go. And the fact that Chess centers around putting your opponent in Checkmate, as opposed to the land-grab tactics necessary in Go, makes the latter a more complex problem for a computer to solve.

When discussing these games in relation to computer play, the numbers are all that really matter. There are 400 possible combinations for the first two moves in Chess, compared to 32,490 in Go once symmetrically identical openings are counted together — a figure that rises to a staggering 129,960 when every intersection is counted separately.
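The arithmetic behind these figures can be checked in a few lines. The assumptions here are the standard ones: the Chess figure comes from 20 possible first moves for White times 20 replies for Black, and the raw Go figure from any of the 361 intersections followed by any of the remaining 360, with the two quoted Go numbers differing by a four-fold symmetry reduction.

```python
# Reproducing the opening-move figures quoted above.
chess_openings = 20 * 20        # 20 White moves x 20 Black replies
go_openings_all = 361 * 360     # any intersection, then any other
go_openings_sym = go_openings_all // 4  # four-fold symmetry reduction

print(chess_openings)   # 400
print(go_openings_all)  # 129960
print(go_openings_sym)  # 32490
```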

This staggering complexity means that brute force techniques alone are not enough to crack the game of Go. Alongside an extensive training program against both computer and human opposition, AlphaGo relied on a potent mixture of different approaches.

Monte Carlo tree search, an algorithm devised to help computers make quick and potent decisions during gameplay, was implemented to help AlphaGo prioritize between options under the tight time constraints of competitive Go. Meanwhile, neural networks inspired by biological brains provided a groundwork for the system to actively learn.
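The tree-search half of that mixture can be sketched in miniature. The following is a toy Monte Carlo tree search — not AlphaGo’s implementation, and applied to the simple game of Nim rather than Go — showing the four standard MCTS phases: selection, expansion, random simulation, and backpropagation. Instead of searching every line of play, it spends its limited budget of simulations on the most promising moves, which is how the algorithm copes with time constraints.

```python
import math
import random

# Toy MCTS for Nim: players alternately take 1-3 stones;
# whoever takes the last stone wins.

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones, self.parent, self.move = stones, parent, move
        self.children = []
        self.wins = 0    # wins counted from the *parent's* perspective
        self.visits = 0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if m <= self.stones and m not in tried]

def ucb(child, parent_visits):
    # Upper Confidence Bound: trade off exploiting good moves
    # against exploring rarely visited ones.
    return (child.wins / child.visits
            + math.sqrt(2 * math.log(parent_visits) / child.visits))

def rollout(stones):
    # Play random moves to the end; return 1 if the side to move
    # from this position ends up winning.
    plies = 0
    while stones:
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        plies += 1
    return 1 if plies % 2 == 1 else 0

def mcts_best_move(stones, iterations=2000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB while fully expanded.
        while not node.untried_moves() and node.children:
            node = max(node.children, key=lambda c: ucb(c, node.visits))
        # 2. Expansion: add one untried child, if the game isn't over.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new position.
        result = rollout(node.stones)
        # 4. Backpropagation: flip the winner's perspective each level up.
        while node is not None:
            node.visits += 1
            node.wins += 1 - result
            result = 1 - result
            node = node.parent
    # The most-visited move is the one the search trusts most.
    return max(root.children, key=lambda c: c.visits).move

random.seed(0)
# From 5 stones, taking 1 leaves the opponent a lost position.
print(mcts_best_move(5))
```

Where AlphaGo departs from this sketch is in the simulation and selection steps: rather than purely random playouts and raw win counts, its neural networks supply learned estimates of how promising each move and position is, focusing the search far more sharply.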

A rigorous training regimen is another element that sets AlphaGo apart from Deep Blue. Google’s computer played countless practice games against human and machine opponents, with its neural networks learning from each result. The researchers working on the project have described the process as trial and error, which bears a closer resemblance to the way a human prepares for professional competition than brute force methodology does.

These techniques were enough to defeat Lee — with the exception of the anomalous fourth game, where Lee managed to conquer the AI.
