
Google outraces Facebook to AI breakthrough by beating a Go champ

Games have always been a preferred domain for artificial intelligence developers to test their mettle. The fixed, rule-bound systems of games allow for a clean environment in which a focused AI can take on a human counterpart with some objective measure of relative success. Now a team at Google has passed another important milestone in the history of AI gaming, creating the first system to defeat a professional player of the ancient Chinese game of Go.

Starting with tic-tac-toe in 1954, and then checkers in 1994, computers have been steadily working their way through increasingly complex games, matching and then surpassing the best that humanity has to offer. Chess was long held as a bastion of human intellect too subtle for computers to master, until 1997, when IBM’s Deep Blue famously defeated Garry Kasparov, one of the greatest players in chess history. More recently, IBM racked up another success when its Watson defeated two Jeopardy champions in 2011. Google made headlines last year with a generalized AI that taught itself more than a dozen Atari games based on pixel input alone.

Go has long been a holy grail for AI researchers due to its combination of relatively simple rules and immense strategic complexity. Originating in China over 2,500 years ago, Go has amassed millions of devoted players, and is considered a high intellectual pursuit, particularly in Japanese and Chinese culture. Players alternate placing black or white stones on a grid with the goal of capturing one another’s pieces or fully surrounding sections of the board for points. The rules are straightforward, but because players can place stones almost anywhere on the board, the game has roughly 10^170 possible board positions. That’s more than the number of atoms in the known universe, and many orders of magnitude more than the number of possible chess positions.
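For the curious, the scale of those numbers is easy to check yourself. The sketch below computes a loose upper bound by treating each of the board’s 361 points as empty, black, or white (real legal positions are fewer, and the atoms and chess figures are common rough estimates, not exact counts):

```python
# Loose upper bound on Go board configurations: each of the 19x19 = 361
# points can be empty, black, or white (not all such layouts are legal).
points = 19 * 19
upper_bound = 3 ** points

atoms_in_universe = 10 ** 80   # common order-of-magnitude estimate
chess_positions = 10 ** 47     # Shannon's rough estimate for chess

print(len(str(upper_bound)))            # 173 digits, i.e. ~10^172
print(upper_bound > atoms_in_universe)  # True
print(upper_bound > chess_positions)    # True
```

Even this crude bound dwarfs the chess figure by more than a hundred orders of magnitude, which is why brute force alone never stood a chance.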


Traditional AI approaches to games use search trees to run through the possible ways a game could play out from the current state, in order to make the most informed decision. This brute-force method, which leverages raw computing power to evaluate more possibilities than an intuition-reliant human ever could, has always been hopelessly insufficient in the face of Go’s open-ended complexity.
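To see what that tree search looks like in miniature, here is a minimal minimax sketch for tic-tac-toe, a game small enough to search exhaustively (this is an illustration of the classic technique, not Google’s code):

```python
# Exhaustive minimax over tic-tac-toe: explore every way the game can play
# out and back up the result. Board is a tuple of 9 cells ('X', 'O', None);
# X is the maximizing player.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable score for X: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full: draw
    nxt = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + (player,) + board[i+1:], nxt) for i in moves]
    return max(scores) if player == 'X' else min(scores)

# From an empty board, perfect play by both sides ends in a draw.
print(minimax((None,) * 9, 'X'))  # 0
```

Tic-tac-toe’s full game tree has only a few hundred thousand nodes, so this finishes in seconds. Scaling the same idea to Go’s state space is where brute force collapses.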



Google’s team instead relied on neural networks, an approach to intelligent systems that runs inputs through layers of virtual neurons that loosely mimic animal brain function. The result is measured against a desired goal, and then connection strengths within the network are tweaked. Through repetition, this allows systems to dynamically “learn,” arriving at solutions and strategies that were never directly programmed in. AlphaGo, Google’s system, comprised 12 neural network layers, including a “policy network” that selected a move after the board state was run through the other layers, and a “value network” that predicted the winner based on a given move.
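The core loop described above, run the inputs through, measure the result against the goal, tweak the connection strengths, can be shown with a single toy neuron. This sketch (a deliberately tiny stand-in for AlphaGo’s many-layered networks) learns the logical AND function by nudging its weights after every example:

```python
import math
import random

def sigmoid(z):
    """Squash a weighted sum into a 0-1 'neuron activation'."""
    return 1.0 / (1.0 + math.exp(-z))

# Training data for logical AND: (inputs, desired output).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # connection strengths
bias = 0.0
lr = 0.5  # learning rate: how hard each tweak pushes

for _ in range(5000):  # "through repetition..."
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + bias)
        err = out - target              # measure against the desired goal
        grad = err * out * (1 - out)    # slope of the sigmoid
        w[0] -= lr * grad * x1          # tweak the connection strengths
        w[1] -= lr * grad * x2
        bias -= lr * grad

for (x1, x2), target in data:
    pred = sigmoid(w[0] * x1 + w[1] * x2 + bias)
    print((x1, x2), round(pred))  # matches the AND truth table
```

AlphaGo’s networks apply the same principle at vastly greater scale, with millions of weights adjusted across its 12 layers instead of two.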

Thirty million moves from games between human experts were run through the network until it could predict human moves 57 percent of the time, up from the previous record of 44 percent. Wanting to do more than just mimic human players, the team then set AlphaGo to play thousands of games against itself, developing its own, never-programmed strategies by adjusting connections and reinforcing decisions that led to victories, relying on the Google Cloud Platform for the necessary computing oomph. More technical nitty-gritty on how AlphaGo was developed can be found in an article published by the team in Nature.
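That self-play phase, reinforce whatever decisions led to a win, can also be illustrated in miniature. In this toy (an illustration of the reinforcement idea, not AlphaGo’s actual algorithm), two copies of the same policy play a trivial one-shot game where the higher pick wins, and the winner’s choice gets a weight bump; nothing about “pick 3 is best” is programmed in, yet the policy discovers it:

```python
import random

random.seed(42)
weights = [1.0, 1.0, 1.0]  # the policy's preference for picking 1, 2, or 3

def sample_action(w):
    """Pick an action with probability proportional to its weight."""
    return random.choices(range(3), weights=w)[0]

for _ in range(2000):  # thousands of games of self-play
    a = sample_action(weights)  # the policy's move
    b = sample_action(weights)  # its opponent: a copy of the same policy
    if a > b:
        weights[a] += 0.1  # reinforce the decision that led to victory
    elif b > a:
        weights[b] += 0.1

best = max(range(3), key=lambda i: weights[i])
print(best + 1)  # the policy converges on picking 3
```

AlphaGo’s version of this is far more sophisticated, with neural networks standing in for the weight table, but the feedback loop of playing yourself and reinforcing winning choices is the same.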

AlphaGo was then put to the test. First it took on the reigning top Go computer programs, winning all but one of 500 games. Then came the real test: challenging three-time European Go champion Fan Hui. Behind closed doors last October, AlphaGo went 5-0 against Hui, marking the first time that a computer program has ever bested a professional Go player.

Coincidentally, Facebook also just announced its efforts to tackle Go with artificial intelligence in a public post from founder Mark Zuckerberg. Although Facebook has apparently made substantial progress in the last year, Google appears to have beaten it to the punch by declaring AlphaGo’s victory over Fan Hui. It may be all fun and games for now, but conquering challenges like Go that were previously thought insurmountable has larger implications for the progress of connectionist AI and machine learning, which have the potential to become extremely powerful tools for analyzing messy, real-world problems.

Will Fulton
Former Digital Trends Contributor
Will Fulton is a New York-based writer and theater-maker. In 2011 he co-founded mythic theater company AntiMatter Collective…