
Google's DeepMind division teaches a digital ant-like creature to play soccer

The artificial intelligence from Google’s DeepMind Technologies division is impressively versatile, there’s no doubt. Its AlphaGo program became the first in history to defeat a professional player at Go, the Chinese board game whose human players had stumped computers for years, most recently besting world-ranked player Lee Sedol. It has demonstrated a prowess for video games, too: it taught itself to emerge victorious in 49 different games for the Atari 2600 console and to navigate a digital 3D maze called Labyrinth. And now, Google’s AI has learned how to play a sport of a different nature: soccer.

DeepMind’s latest experiment involves teaching an ant-like digital bug to maneuver a soccer ball into a goal. It’s a simple enough task in theory, but exceedingly complex when you “go in blind,” that is, attempt to learn it without an inkling of the game’s rules or mechanics.

In getting a grasp on the basics, explained DeepMind researcher David Silver in a blog post, DeepMind takes a very human-like approach: “Humans excel at solving a wide variety of challenging problems, from low-level motor control through high-level cognitive tasks,” he wrote. DeepMind, similarly, uses reinforcement learning not only to teach itself the game’s physics and rules, but also to use that newfound knowledge to win at it consistently.

The software’s newly developed soccer skills are thanks to what the DeepMind team calls “asynchronous reinforcement learning,” a technique designed to tackle “continuous control” problems, which involve an unknown number of constantly shifting variables. The approach, in tandem with DeepMind’s many other techniques, helps it figure out complex problems and games “without any prior knowledge of [their] dynamics,” said Silver.
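To get a feel for what “learning without prior knowledge of the dynamics” means, here is a deliberately tiny sketch, far simpler than DeepMind’s actual neural-network setup: a tabular Q-learning agent that learns to push a ball along a one-dimensional pitch toward a goal, purely from trial, error, and reward. The task, state space, and all parameter values are illustrative inventions for this example, not anything from DeepMind’s code.

```python
import random

# Toy 1-D "push the ball to the goal" task: ball positions 0..N, goal at N.
# The agent is never told the rules; it only sees states and rewards.
N = 10
ACTIONS = [+1, -1]  # nudge the ball right or left

def step(state, action):
    """Environment dynamics, hidden from the learner."""
    nxt = max(0, min(N, state + action))
    reward = 1.0 if nxt == N else 0.0
    return nxt, reward, nxt == N

# Q-table: expected future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N + 1) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration
random.seed(0)

for episode in range(300):
    s, done = 0, False
    for _ in range(100):  # cap episode length
        # Epsilon-greedy: mostly exploit what we know, occasionally explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Standard Q-learning update toward reward plus discounted best future value.
        best_next = 0.0 if done else max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
        if done:
            break

# After training, the greedy policy pushes the ball toward the goal from every position.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N)}
```

The agent starts with zero knowledge, yet the learned policy ends up choosing “push right” from every position, knowledge recovered entirely from reward signals rather than programmed rules.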

Other recent developments helped significantly speed up the learning process: DeepMind’s neural network now “exploits the multi-threading capabilities of standard CPUs,” Silver said, allowing it to “execute many instances” at once. And a large-scale, server-based system dubbed Gorila speeds up computations “by an order of magnitude.” Beyond DeepMind’s quick mastery of soccer, the improvements yielded gains of 300 percent in the AI’s mean scores on Atari games, approaching “human-level” performance, said Silver.
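The “many instances at once” idea can be sketched in a few lines: several worker threads each gather their own noisy experience and apply small updates to one shared parameter, without waiting for each other. This is only a minimal illustration of asynchronous, lock-free updates in the spirit of that approach; the worker function, target value, and step size are all made up for the example, and Python threads interleave rather than run truly in parallel the way DeepMind’s CPU threads do.

```python
import random
import threading

# One shared parameter that every worker reads and updates asynchronously.
shared_w = [0.0]

def worker(seed, steps=1000):
    """Each worker learns from its own noisy samples of a hidden target (5.0)."""
    rng = random.Random(seed)
    for _ in range(steps):
        sample = 5.0 + rng.gauss(0, 1)
        # Small step toward the sample, applied directly to shared state.
        # Occasional lost updates from racing threads are tolerated:
        # the remaining updates still pull the parameter toward the target.
        shared_w[0] += 0.01 * (sample - shared_w[0])

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# shared_w[0] now sits near the hidden target value of 5.0.
```

Because each thread contributes updates independently, adding workers multiplies the experience gathered per second, which is the intuition behind the reported speedups.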

Soccer’s just the beginning, of course. DeepMind hopes to apply asynchronous reinforcement learning to tasks that involve “robotic manipulation,” meaning unfamiliar objects and environments that machines currently require human oversight to navigate successfully. And in the long term, the AI division is working toward philanthropic efforts in other areas. Its healthcare initiative, DeepMind Health, seeks to leverage artificial intelligence to alert healthcare workers to patients with potentially dangerous complications.