
Animals, algorithms, and obstacle courses: Welcome to the A.I. Olympics


A.I. has an intelligence problem. More specifically, how do you measure intelligence as computer scientists work toward the dream of a machine with something approaching a mind, as you or I might consider it?

Throughout the field’s history, researchers have put forward multiple hypothetical tests, from the famous Turing Test to Apple co-founder Steve Wozniak’s proposed Coffee Test. But there is little scientific rigor behind these approaches. Could measuring A.I. intelligence using tests designed for animals help?

That’s what a new competition, organized by the Leverhulme Centre for the Future of Intelligence in Cambridge and the Prague-based research institute GoodA.I., hopes to find out. Called the “Animal-A.I. Olympics,” it will take place for the first time this June. It aims to test autonomous A.I. agents, developed by some of the world’s top research groups, by putting them through a series of problem-solving exercises designed by leading animal cognition experts.

“People have long been interested in the question of whether we will ever have artificial systems that are capable of doing everything that humans can do,” Matthew Crosby, a postdoctoral A.I. researcher working on the project, told Digital Trends. “Being capable of doing everything that animals can do is considered a stepping stone towards that.”

“Will we ever have artificial systems that are capable of doing everything that humans can do?”

Unlike previous A.I. competitions, which focused on single-domain challenges like chess or Go, the Animal-A.I. Olympics is made up of 100 different tests, divided into 10 categories. These will range in difficulty, but all are designed to test some aspect of bot intelligence. This includes areas like an A.I.’s understanding of object permanence (“can an A.I. understand that, even when object A moves behind object B, object A still exists?”), preference relationships (“is more food better than less food?”), and the ability to memorize a scene so as to complete a maze in the dark. The exact tests planned are being kept secret so that researchers cannot prepare their A.I. systems too well in advance.
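To get a feel for what one of these tests asks of an agent, here is a minimal sketch of the preference-relationship task in Python. The environment, its API, and the scoring below are hypothetical illustrations written for this article, not the competition’s actual code.

```python
import random

# A toy version of the "preference" test ("is more food better than less
# food?"). Everything here is a hypothetical illustration, not the
# Animal-A.I. Olympics' real environment or API.

class PreferenceTest:
    """Two piles of food appear; the agent picks one and is scored on
    how much food it collects."""

    def reset(self):
        # Observation: the visible sizes of the left and right food piles.
        self.piles = (random.randint(1, 5), random.randint(1, 5))
        return self.piles

    def step(self, action):
        # action: 0 = go to the left pile, 1 = go to the right pile.
        return self.piles[action]  # reward = food collected

def greedy_agent(observation):
    # An agent that "understands" the preference relationship simply
    # heads for the larger pile.
    left, right = observation
    return 0 if left >= right else 1

env = PreferenceTest()
score = sum(env.step(greedy_agent(env.reset())) for _ in range(100))
print(f"Food collected over 100 trials: {score}")
```

An agent that has genuinely learned the relationship will score near the maximum; one choosing at random will not.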

The winning agent — whose creators will take home $10,000 for their efforts — will be the one that exhibits high levels of performance across the board. This ability to adapt from one task to the next is intended to showcase a broader, more generalized kind of intelligence. In doing so, the competition will test today’s artificial intelligence systems in a whole new way.

Easy problems are hard, hard problems are easy

Much of what the original generation of artificial intelligence researchers thought about intelligence turned out to be incorrect. They believed that building an A.I. which could, for instance, play chess like a math prodigy would result in a machine as intelligent as a math prodigy. A bit like starting a puzzle by piecing together the most complicated parts first, they figured they were bypassing the simplest steps. These could surely be filled in later. After all, once you had programmed your chess grandmaster robot, how tough could it be to backtrack and make a computer that could simulate the learning of an infant? Not very, they reasoned. They were wrong.

This isn’t a new observation. In the 1980s, an A.I. researcher named Hans Moravec, today an adjunct faculty member at the Robotics Institute of Carnegie Mellon University, put forward a hypothesis known as “Moravec’s Paradox.” In an insight that turned the A.I. world upside down, Moravec observed that it is “comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers.” On the other hand, it is “difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”


To put it another way, the hard problems are easy and the easy problems are hard.

There are plenty of reasons for this, but one of the big ones is that early A.I. and robotics research focused on elaborate microworlds in which every variable could be perfectly controlled. Chess, to return to an earlier example, is a game based around clearly defined states, board positions, and moves that are either legal or illegal. It depicts a static world in which both players have access to complete information, so long as they know the moves of each piece and can see the board.

Building an A.I. which could play chess like a math prodigy would result in a machine as intelligent as a math prodigy

The real world isn’t like chess. We assume that what a child does is simple, because it is something most of us can do without much thought, but there is an extraordinary amount of abstraction and complexity involved. An idea like object permanence, the concept that things which disappear from view aren’t gone forever, develops in human babies at around 4 to 7 months. But conveying it to a computer is a challenge of an entirely different kind from teaching one to play Go. These are the areas the Animal-A.I. Olympics will test.

“I think we’ve identified interesting areas where A.I. is perhaps not doing as well as people might expect from outside the A.I. community,” Crosby explained. “That’s probably why the tests seem quite simple for people who aren’t working with A.I. themselves. But they’re limitations that a lot of people working within A.I. are actually quite worried about.”

Advances in machine learning

Previously, there would have been no hope of machines completing these tests. In recent years, however, there has been real progress toward a more brain-like type of learning. It’s no coincidence that, when Moravec’s Paradox was put forward in the 1980s, A.I. was just starting to transition away from the pre-programmed, rule-based world of symbolic A.I. toward machine learning, built around algorithms loosely modeled on the human brain. This was the start of the revolution.
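The difference is easy to show in miniature. In the toy sketch below, written for this article rather than drawn from the competition, a symbolic system’s knowledge is a rule written by hand, while a perceptron, one of the oldest brain-inspired learning algorithms, reaches similar behavior by adjusting numeric weights from labeled examples.

```python
# Symbolic A.I.: knowledge is an explicit, hand-written rule.
def symbolic_is_spam(subject):
    return "free" in subject.lower()

# Machine learning: a perceptron learns numeric weights from labeled
# examples instead. The tiny dataset here is invented for illustration.
examples = [("free money now", 1), ("team meeting notes", 0),
            ("free prize inside", 1), ("lunch tomorrow", 0)]
weights = {}

for _ in range(10):  # a few passes over the training data
    for subject, label in examples:
        words = subject.lower().split()
        score = sum(weights.get(w, 0.0) for w in words)
        predicted = 1 if score > 0 else 0
        # On a mistake, nudge the weight of every word that was present.
        for w in words:
            weights[w] = weights.get(w, 0.0) + 0.1 * (label - predicted)

print(sorted(weights.items(), key=lambda kv: -kv[1])[:3])
```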

Jump forward to the present day, and there has been enormous promise in fields like deep reinforcement learning, which teaches an A.I. to take actions in an environment so as to maximize some kind of reward. (Here’s a look at how far researchers came in 2018.) This is the research area that has given us high-profile illustrations of A.I.’s ability to iteratively improve itself, learning to play classic Atari video games without ever being shown how to do so.


Crosby said that he expects deep reinforcement learning A.I. to do very well in many of the tests laid out in the Animal-A.I. Olympics. But there will still be challenges. A reinforcement learning A.I. is frequently trained on the same environment over and over again in order to become accomplished at a particular task. It starts as a blank slate and gets smarter by making mistakes and then learning from those mistakes.
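Here is a minimal sketch of that loop, using tabular Q-learning, one of the simplest reinforcement learning algorithms, on a five-square corridor with food at one end. Everything below, from the environment to the parameter values, is an illustration written for this article, not code from the competition.

```python
import random

# A blank-slate agent (all Q-values start at zero) repeatedly replays the
# same tiny environment and learns from its mistakes.
N_STATES = 5         # a corridor; the food sits on the far-right square
ACTIONS = [-1, +1]   # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in range(len(ACTIONS))}

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        values = [Q[(state, a)] for a in range(len(ACTIONS))]
        # Explore when the values are tied or occasionally at random;
        # otherwise act greedily on what has been learned so far.
        if random.random() < EPSILON or values[0] == values[1]:
            action = random.randrange(len(ACTIONS))
        else:
            action = values.index(max(values))
        next_state = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate toward the reward plus the
        # best value currently predicted for the next state.
        best_next = max(Q[(next_state, a)] for a in range(len(ACTIONS)))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        state = next_state

# After training, the agent should prefer "right" (toward the food)
# in every square of the corridor.
print([Q[(s, 1)] > Q[(s, 0)] for s in range(N_STATES - 1)])
```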

In this case, the team is making available “playground” test environments to train the A.I. agents. However, the exact environment the tests will be run on will not be available until the day of the competition.

“There’s a big question right now concerning how much innate knowledge should you build into your agent,” Crosby said. “Should you start your agent completely from scratch and then expect it to learn everything? Or can you build in some information about the environment? When we are born, we’re born with highly structured brains that are already well adapted to existing in the type of world that we find ourselves in. So maybe the correct thing to do is to start an intelligent agent well-adapted for the world it is going to operate in. Some of the attempts to encode this kind of innate knowledge are really interesting.”

How to build a mind?

In some ways, it’s possible to argue that the animal cognition tests are, in themselves, an abstraction and simplification of the real world. There is continued debate among animal cognition experts over what exactly the tests prove, and whether or not they are capable of establishing a genuine hierarchy of intelligence among animals. Much of the disagreement centers on how much understanding of an environment is necessary to solve a puzzle.

Crosby acknowledges this issue — and the risk of reading intelligence into A.I. behavior that can be explained in simpler ways — but argues that this is still useful. “Having such a wide variety of tests, which have been taken from broad [scientific] literature, is the best we can do at the moment,” he said.


One thing that we cannot do is assume that this test will, by itself, prove the existence of a generalized intelligence. As he observes, “a general intelligence should be able to solve all of these tests, but an A.I. that solves all of them would not necessarily have general intelligence.”

In some ways, it’s related to the question of why it is so difficult to build a model of the brain. The human brain packs roughly 100,000 neurons and 900 million synaptic connections into each cubic millimeter. This is, evidently, far too much complexity for even the world’s biggest supercomputers to process. But a honey bee has just 960,000 neurons in total, while a cockroach, at around one million neurons, has only slightly more. Why, then, have the world’s best-resourced research groups not yet achieved, through their deep learning neural networks, a general intelligence (if that is what it would be) on the level of either of these two creatures? The answer is that this is not the problem being worked on. And even if it were, we are not yet capable of solving it.


“There’s a big difference between neural networks as they’re commonly used in deep learning, and the neural network of the brain,” Crosby said. “When you take a neural network in deep learning, it is highly abstracted to solve mathematical operations. It’s also highly optimized to be able to do machine learning, so that you can quickly backtrack through it to update the weights in your network and do so with a simplified calculation. Neurons in the actual brain are very, very complex things. They have all these chemical properties which just aren’t modeled in standard deep learning neural networks. When you see a deep learning neural network that has a comparable size to a bee’s brain, [what is being computed] is a lot simpler than what an actual neuron is processing. There’s still a lot of discussion about the level of detail you need to simulate in a neuron to capture all of its behavior.”
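The simplification Crosby describes is easy to see in code. The sketch below, with made-up numbers and written for this article, is essentially everything a single deep-learning “neuron” does: a weighted sum, a squashing function, and a weight update obtained by differentiating that simple calculation.

```python
import math

# One artificial "neuron": a weighted sum passed through a sigmoid.
# None of the chemical or temporal dynamics of a biological neuron
# appear anywhere in this calculation.
def neuron(weights, bias, inputs):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

weights, bias = [0.5, -0.3], 0.1    # illustrative starting values
inputs, target = [1.0, 2.0], 1.0

for step in range(1000):
    out = neuron(weights, bias, inputs)
    # "Backtracking" through the neuron: the gradient of the squared
    # error with respect to the weighted sum has a simple closed form.
    grad = 2 * (out - target) * out * (1 - out)
    weights = [w - 0.5 * grad * x for w, x in zip(weights, inputs)]
    bias -= 0.5 * grad

print(f"Output after training: {neuron(weights, bias, inputs):.4f}")
```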

What happens next?

As a result, you should be very, very skeptical of any headlines emerging from the Animal-A.I. Olympics claiming that, for instance, A.I. has reached the level of a rat or another animal. But that doesn’t mean the competition is without value. Far from it.

“[I don’t want this to just be a one-off] competition, which is then over, but rather a useful ongoing testbed for A.I. researchers,” Crosby said. “It’s a step towards general intelligence. If, in the future, you’re coming up with an approach that you think should be able to solve these problems, we’re going to make available the ability to test it.”


This could turn out to be the beginning of something very exciting for artificial intelligence — particularly if the excitement of building more generalized intelligent agents prompts more work in this area.

“It’s the kind of idea where, even if it fails, it’s still an interesting enough project that it will be worthwhile,” Crosby said. And if an intelligent agent manages to ace all of the tests? “If you pass all the tests that we’re going to set out, it would be a massive breakthrough in A.I.”
