Here’s your stop: Google DeepMind’s new AI can help you navigate the subway system

Humans take reasoning for granted, but logic isn’t always self-evident to machines, which have to be hard-coded to make the connections that can support basic deductions.

Google’s DeepMind is looking to change that. The London-based artificial intelligence company has developed a system that performs relatively simple tasks in a sophisticated — and increasingly human — way, reports the Guardian.

While plenty of programs can guide you through the subway, DeepMind’s differentiable neural computer is one of the first systems to combine external memory with deep learning, allowing it to train itself on such tasks without hard-coded instructions.

[Video: a differentiable neural computer performing a family tree inference task]

Deep learning has become the go-to method for machine learning over the past few years, achieving unprecedented success in tasks like image and speech recognition. A DeepMind-developed program called AlphaGo used deep learning to defeat one of the world’s best Go players earlier this year. But although these systems do very well at their specific tasks, they stumble when it comes to more general skills.

“Until very recently, it was far from obvious how deep learning could be used to allow a system to acquire the algorithms needed for conscious deliberate reasoning,” Professor Geoff Hinton, considered the father of deep learning, told the Guardian.

To overcome this, DeepMind integrated its system with an external memory that enables it to retain relevant information and draw on that data much as a human draws on working memory.
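To make the idea concrete, here is a minimal sketch of content-based memory addressing, the core mechanism the Nature paper describes for reading from the external memory: the system queries memory with a key vector and retrieves a softly weighted blend of the rows that match. This is an illustration, not DeepMind's code, and all names and sizes here are ours.

```python
# Illustrative sketch (not DeepMind's implementation): content-based read
# from an external memory matrix, the mechanism that lets a differentiable
# neural computer store information and recall it later by similarity.
import numpy as np

def cosine_similarity(key, memory):
    # Similarity between a query key and every row of the memory matrix.
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    return memory @ key / norms

def content_read(memory, key, beta):
    # Soft attention over memory rows; larger beta sharpens the focus.
    scores = beta * cosine_similarity(key, memory)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # The read vector is a weighted blend of memory rows. Because every
    # step is differentiable, the whole system trains end to end.
    return weights @ memory, weights

memory = np.random.randn(16, 8)              # 16 slots, 8-dim entries
key = memory[3] + 0.1 * np.random.randn(8)   # noisy query for slot 3
read_vector, weights = content_read(memory, key, beta=5.0)
print(weights.argmax())                      # 3: recalled by content
```

Because reads and writes are soft rather than discrete lookups, gradients flow through the memory, which is what lets the network learn for itself what to store and when to retrieve it.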

The AI was able to determine the quickest route between London Underground stops and navigate its way around the notoriously complicated subway system, according to a study published in the journal Nature. It also performed relatively well on basic reading comprehension tests.
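For contrast, this is what the hard-coded approach to that task looks like: a breadth-first search over a toy fragment of the Tube map. The graph below is illustrative, not the real network; the point is that the DNC learned to answer such route queries from examples rather than being handed an algorithm like this.

```python
# A classical, hand-written shortest-route search over a tiny, made-up
# slice of the London Underground. The DNC had to learn this behavior.
from collections import deque

tube = {
    "Oxford Circus": ["Bond Street", "Tottenham Court Road", "Green Park"],
    "Bond Street": ["Oxford Circus", "Baker Street"],
    "Tottenham Court Road": ["Oxford Circus", "Holborn"],
    "Green Park": ["Oxford Circus", "Victoria"],
    "Baker Street": ["Bond Street"],
    "Holborn": ["Tottenham Court Road"],
    "Victoria": ["Green Park"],
}

def shortest_route(start, goal):
    # Standard BFS: explores stations in order of hops from the start,
    # so the first path to reach the goal is a shortest one.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in tube[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(shortest_route("Baker Street", "Victoria"))
# ['Baker Street', 'Bond Street', 'Oxford Circus', 'Green Park', 'Victoria']
```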

“I’m wary of saying now we have a machine that can reason,” Google DeepMind researcher Alex Graves told the Guardian. “We have something that has an improved memory — a different kind of memory that we believe is a necessary component of reasoning. It’s hard to draw a line in the sand.”

Regardless of semantics, programs that demonstrate basic reasoning may one day replace more limited systems like Siri, and mark a step toward a form of AI that better resembles the human mind.

Dyllan Furness
Former Digital Trends Contributor
Dyllan Furness is a freelance writer from Florida. He covers strange science and emerging tech for Digital Trends, focusing…