Computer scientists develop AI that gets curious about its surroundings

Curiosity-Driven Exploration by Self-Supervised Prediction
Artificial intelligence is showing a greater range of abilities and use-cases than ever, but it’s still relatively short on desires and emotions. That could be changing, however, courtesy of research at the University of California, Berkeley, where computer scientists have developed an AI agent that’s naturally (or, well, as naturally as any artificial agent can be) curious.

In tests, the researchers set the AI playing games such as Super Mario Bros. and a basic 3D shooter called VizDoom, and in both, it displayed a propensity for exploring its environment.

“Recent success in AI, specifically in reinforcement learning (RL), mostly relies on having explicit dense supervision — such as rewards from the environment that can be positive or negative,” Deepak Pathak, a researcher on the project, told Digital Trends. “For example, most RL algorithms need access to the dense score when learning to play computer games. It is easy to construct a dense reward structure in such games, but one cannot assume the availability of an explicit dense reward-based supervision in the real world with similar ease.”
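The distinction Pathak draws between dense and sparse supervision can be made concrete with a toy sketch (illustrative only, not code from the project): a dense reward gives the agent feedback at every step, such as a change in game score, while a sparse reward stays at zero until some goal is reached, which is the setting curiosity-driven methods aim to handle.

```python
# Dense vs. sparse reward signals, illustrated with a toy episode.
# In the dense case the agent gets a learning signal at every step;
# in the sparse case it sees nothing until the goal is reached.

def dense_reward(score_before, score_after):
    # Signal at every step: the change in the game score.
    return score_after - score_before

def sparse_reward(reached_goal):
    # Almost always zero; a single signal only at the end.
    return 1.0 if reached_goal else 0.0

# A three-step episode: score goes 0 -> 10 -> 10 -> 30,
# and the goal is only reached on the final step.
transitions = [(0, 10), (10, 10), (10, 30)]
dense = [dense_reward(b, a) for b, a in transitions]
sparse = [sparse_reward(False), sparse_reward(False), sparse_reward(True)]
```

The dense trace gives feedback on every step, while the sparse trace is silent until the end, which is why sparse-reward environments are so much harder for standard RL.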

But given that Super Mario is — last time we checked — a game, how does this differ from AI like DeepMind's agents that learned to play Atari games? According to Pathak, the answer lies in its approach to what it is doing. Rather than simply trying to complete a game, it sets out to find novel things to do.

“The major contribution of this work is showing that curiosity-driven intrinsic motivation allows the agent to learn even when rewards are absent,” he said.

This, he notes, is similar to the way we show curiosity as humans. “Babies entertain themselves by picking up random objects and playing with toys,” Pathak continued. “In doing so, they are driven by their innate curiosity, and not by external rewards or the desire to achieve a goal. Their intrinsic motivation to explore new, interesting spaces and objects not only helps them learn more about their immediate surroundings, but also learn more generalizable skills. Hence, reducing the dependence on dense supervision from the environment with an intrinsic motivation to drive progress is a fundamental problem.”
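The core mechanism behind this kind of curiosity can be sketched in miniature. In the paper, the intrinsic reward is the error of a learned forward model that predicts the next state's features from the current state and action: transitions the agent can already predict become boring, while surprising ones stay rewarding. The sketch below is a deliberately simplified, linear stand-in for that idea (the actual work uses neural networks over learned feature embeddings, and all names here are illustrative):

```python
import numpy as np

class ForwardModel:
    """Toy linear forward model: predicts next-state features from
    the current state and action. Purely illustrative of the
    prediction-error idea, not the paper's architecture."""

    def __init__(self, state_dim, action_dim, lr=0.05):
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])

    def update(self, state, action, next_state):
        # One gradient step on the squared prediction error.
        x = np.concatenate([state, action])
        error = self.predict(state, action) - next_state
        self.W -= self.lr * np.outer(error, x)

def intrinsic_reward(model, state, action, next_state):
    # Curiosity signal: how surprised the model is by what happened.
    error = model.predict(state, action) - next_state
    return 0.5 * float(error @ error)

# A transition is rewarded less and less as the model learns it,
# pushing the agent toward states it cannot yet predict.
rng = np.random.default_rng(0)
model = ForwardModel(state_dim=4, action_dim=2)
s, a, s_next = rng.normal(size=4), rng.normal(size=2), rng.normal(size=4)
r_before = intrinsic_reward(model, s, a, s_next)
for _ in range(200):
    model.update(s, a, s_next)
r_after = intrinsic_reward(model, s, a, s_next)
```

After repeated visits the same transition yields a much smaller curiosity bonus, which is what drives the agent toward novelty rather than repetition.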

Although it’s still relatively early in the project, the team now wants to build on its research by applying the ideas to real robots.

“Curiosity signal would help the robots explore their environment efficiently by visiting novel states, and develop skills that could be transferred to different environments,” Pathak said. “For example, the VizDoom agent learns to navigate hallways, and avoid collisions or bumping into walls on its own, only by curiosity, and these skills generalize to different maps and textures.”

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…