How DeepMind’s artificial intelligence will make Google even smarter

Google DeepMind

Google is ringing in 2014 with a spending spree, first dropping $3.2 billion to acquire Nest Labs and now spending a reported $400 million (or more) on the UK-based artificial intelligence outfit DeepMind.

It’s no secret that Google has an interest in artificial intelligence; after all, technologies derived from AI research help fuel Google’s core search and advertising businesses. AI also plays a key role in Google’s mobile services, its autonomous cars, and its growing stable of robotics technologies. And with the addition of futurist Ray Kurzweil to its ranks in 2012, Google also has the grandfather of “strong AI” on board, a man who forecasts that intelligent machines may exist by midcentury.

If all this sounds troubling, don’t worry: Google’s acquisition of DeepMind isn’t about fusing a mechanical brain with faster-than-human robots and giving birth to the misanthropic Skynet computer network from the Terminator franchise. But it does raise key questions: What exactly is artificial intelligence, and what does Google hope to accomplish by buying companies like DeepMind?

Top-down versus bottom-up AI

In general terms, AI refers to machines doing intellectual tasks at a level comparable to humans. That means reasoning, planning, learning, and using language to communicate at a high level. It probably also includes sensing and interacting with the physical world, although whether those are requirements depends on whom you ask.

AI research is almost as old as computers, going back to the 1950s. Early efforts (sometimes called symbolic or “top-down” AI) were basically collections of rules. The idea was that with enough explicit rules (like IF person(bieber) IS arrested(drunk driving) THEN respond(LOL!)), systems could make decisions and act autonomously – it was just a question of writing enough rules and waiting for computing hardware powerful enough to handle it all. Top-down AI works well when a defined “knowledge base” can be constructed. For instance, in the 1970s, Stanford’s “Mycin” expert system diagnosed blood-borne infections better than many human internists, and in the 1980s the University of Pittsburgh’s “Caduceus” extended the idea to over 1,000 different diseases. In other words, AI in real life isn’t new.
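To make the rules-and-knowledge-base idea concrete, here is a minimal sketch of a top-down system in Python. The findings, rules, and conclusions are invented purely for illustration and have nothing to do with Mycin's actual knowledge base.

```python
# A toy "top-down" expert system: explicit IF-THEN rules, no learning.
# All findings and conclusions below are made up purely for illustration.

RULES = [
    # (findings that must all be present, conclusion to draw)
    ({"fever", "stiff_neck", "headache"}, "suspect meningitis"),
    ({"fever", "cough", "chest_pain"}, "suspect pneumonia"),
    ({"fatigue", "pale_skin"}, "suspect anemia"),
]

def diagnose(findings):
    """Fire every rule whose conditions are all contained in the findings."""
    return [conclusion for conditions, conclusion in RULES if conditions <= findings]

print(diagnose({"fever", "cough", "chest_pain", "headache"}))
# -> ['suspect pneumonia']
```

The appeal and the weakness are the same thing: the system does exactly what its rules say, and nothing more.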

A Roomba’s path, captured in a long-exposure photo, represents an example of bottom-up AI.

But top-down AI can’t cope with stuff outside its rules-and-knowledge sets. Dealing with the unknown – like an autonomous car navigating the constantly changing conditions on the street – requires an inconceivably large number of rules. So researchers developed behavioral or “bottom-up” AI. Instead of writing thousands (or millions or billions) of rules, researchers built systems with simple behaviors (like “move left” or “read the next word”) and showed those systems which actions worked in different contexts – typically by “rewarding” them with points. Some bottom-up AI technologies are based on real-world neuroscience; for instance, neural networks simulate synaptic connections akin to a biological brain. As they’re trained, bottom-up systems develop behaviors – learn – to cope with unforeseen circumstances in ways top-down AI never managed. Real-world technologies developed in part from bottom-up AI include things like the Roomba vacuum, Siri’s speech recognition, and Facebook’s face recognition. Again, AI in the real world.
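As a rough illustration of the rewarding-with-points idea, here is a minimal Python sketch of an agent that learns, by trial and reward alone, to walk to the right end of a five-cell corridor. The environment and all the numbers are invented for the example; real systems are vastly larger, but the learning loop has the same shape.

```python
# A toy "bottom-up" learner: try simple actions, keep whatever earns reward.
# The corridor environment below is invented purely for illustration.
import random

ACTIONS = ["left", "right"]
GOAL = 4                       # rightmost cell of a five-cell corridor
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}   # learned action values

for episode in range(500):
    state = 0
    while state != GOAL:
        # Mostly take the best-known action, but explore at random 30% of the time.
        if random.random() < 0.3:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = max(0, state - 1) if action == "left" else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Nudge the value of (state, action) toward reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = next_state

print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)})
# After training, the learned choice is "right" in every cell.
```

Nothing in that loop mentions the layout of the corridor; the behavior emerges entirely from rewarded trial and error, which is exactly what makes the approach robust to situations nobody wrote a rule for.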

What is machine learning?

Google’s acquisition of DeepMind is partly about “deep learning,” or ways of teaching bottom-up AI systems about complex concepts. Teaching bottom-up systems means throwing data at them and rewarding correct interpretation or behavior – this is called “supervised” training, because the data is already labelled with the correct answers. Of course, most data in the real world (pictures, video feeds, sounds, etc.) is not labelled – or not labelled well. Very basically, deep learning pre-trains bottom-up AI systems on unlabeled (or semi-labelled) data, leaving the systems free to draw their own conclusions. The pre-trained systems then get feedback on their performance from systems that received supervised training – and they catch on very fast, thanks to their previous experience. Layer these systems on top of each other, and you get programs that can quickly cope with unknown and unlabeled data – just the kind of thing Google deals with by the thousands of gigabytes, twenty-four hours a day, seven days a week. Artificial intelligence researchers with connections to DeepMind have indicated the company’s research has recently produced significant advances in this type of machine learning.
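The following Python sketch illustrates the general shape of that idea under heavy simplification: a representation is learned first from plenty of unlabelled data (here, a simple linear projection stands in for real layer-wise pre-training), and only then is a small labelled set used to train a classifier on top. All of the data is synthetic, and none of this reflects DeepMind's actual methods.

```python
# Pre-train on unlabelled data, then learn the task from a small labelled set.
# Everything here is synthetic and simplified purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# 1. Plenty of unlabelled data: 20-dimensional points that secretly live on a
#    2-dimensional structure (the "concept" the unsupervised stage can discover).
latent = rng.normal(size=(5000, 2))
mixing = rng.normal(size=(2, 20))
unlabelled = latent @ mixing + 0.05 * rng.normal(size=(5000, 20))

# 2. Unsupervised pre-training: learn a compact representation (here, the top
#    two principal directions) from the unlabelled data alone.
mean = unlabelled.mean(axis=0)
_, _, vt = np.linalg.svd(unlabelled - mean, full_matrices=False)

def encode(x):
    """Project raw inputs onto the representation learned without labels."""
    return (x - mean) @ vt[:2].T

# 3. A small labelled set: the label depends on the hidden 2-D structure.
labelled_latent = rng.normal(size=(200, 2))
labelled_x = labelled_latent @ mixing + 0.05 * rng.normal(size=(200, 20))
labelled_y = (labelled_latent[:, 0] > 0).astype(float)

# 4. Supervised stage: fit a simple logistic classifier on the learned features.
feats = encode(labelled_x)
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = p - labelled_y
    w -= 0.1 * feats.T @ grad / len(feats)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(feats @ w + b)))) > 0.5
print(f"training accuracy with only 200 labels: {(pred == labelled_y).mean():.2f}")
```

The division of labour is the point: the expensive representation-learning happens where data is cheap (unlabelled), and the scarce labelled data is only needed for the final, much easier step.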


Sounds silly? Google’s already been at it for years. In 2012 it constructed a (comparatively small) neural network and showed it images culled from YouTube for a week. What did it learn to recognize without any guidance from humans or labelled data? Cats. (Figures, right?) “It basically invented the concept of a cat,” Google fellow Jeff Dean told the New York Times. A year ago Google picked up image-recognition technology developed by Geoffrey Hinton at the University of Toronto and quickly put it to work on photos.google.com (login required) – it got Hinton part time, too. Last summer Google released word2vec, open-source deep-learning software that runs on everyday hardware and can figure out relationships between words without hand-labelled training data – that could have huge implications for software deducing the concepts and intentions behind written and spoken language. A Google researcher speaking on background indicated he had high hopes for its use in education and information science.
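For a feel of the word relationships involved, here is a short example using the open-source gensim library and the pre-trained Google News vectors Google published alongside word2vec. (gensim isn't part of Google's release; it's simply one convenient way to load the vectors, and the roughly 1.6 GB model downloads on first use.)

```python
# Explore word relationships with the pre-trained Google News word2vec vectors.
# Requires the gensim package; the large pre-trained model downloads on first use.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# The classic analogy: king - man + woman lands near "queen".
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Plain similarity scores fall out of the same vectors.
print(vectors.similarity("Paris", "France"))
```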

What could Google do with deep learning?

What does Google see in DeepMind’s deep learning technology and (perhaps) applications that’s worth hundreds of millions of dollars? Nobody is saying – and both Google and DeepMind representatives declined to comment. But Google has many operations that could benefit:

  • Video recognition – Google says users upload more than 100 hours of new video to YouTube every minute. Google already scans new content looking for copyright violations and inappropriate material, but systems with deep-learning capabilities could take the idea much further, perhaps recognizing people, objects, brands, products, places, and events. Of course, one focus could be piracy and copyright violations (potentially worth hundreds of millions to Google all by itself). But the technology could also better curate the millions of videos on YouTube, making suggestions and related videos much smarter.
  • Speech recognition and translation – Google Translate is already well regarded, but deep-learning neural networks could make it even better. Imagine traveling to a country where you don’t know the language and speaking with someone in a store using your smartphone; its microphone could hear their speech and pump an English translation into an earbud for you, then translate your speech for them. It’s not far-fetched: Microsoft Research has used the same deep-learning ideas pioneered by Geoffrey Hinton to significantly reduce error rates in speech recognition; combined with Bing Translator, they even have speech recognition, translation, and text-to-speech happening in near-real time.

  • Better search – Google’s empire is based on search, and Google has long used heuristics to refine results. (Searching for “football” this week will turn up more Super Bowl-related results than three months ago – at least for U.S. users.) Deep-learning technologies mean Google can better understand what people are searching for, producing better results. The same technology can also let Google better understand new information – think social-media posts, news items, and just-published Web pages – faster, delivering the “freshest” results more reliably.
  • Security – Deep learning and neural networks excel at pattern recognition, whether that’s pixels in an image or behaviors exhibited by users’ accounts or devices. Google could use deep-learning technologies to protect accounts and improve users’ trust in Google (no easy task these days). Security technology augmented by machine learning could not only look for suspicious behavior on individual accounts, but (perhaps more usefully) look at activity across the full breadth of Google’s services, identifying and shutting down malicious attempts to hack, phish, and manipulate users or employees.
  • Social – Google is already using deep learning technologies in Google+, so don’t be surprised when deep learning augments more social (and mobile) offerings. After all, Google needs to distinguish itself from competitors. Obvious examples include improved face recognition in videos and photos, as well as recognizing places and events, but the technology could go further, recognizing objects (skis, cameras, cars, holiday decor), products, clothing – heck, even types of food. After all, pictures of cats are only outnumbered on social networks by pictures of lunch.
  • Let’s not forget ecommerce – The bulk of Google’s revenue comes from online advertising, where deep-learning technologies could be applied to targeting users even more precisely with ads. But Google also wants to sell users movies, music, books, and apps via Google Play – and let’s not forget Google has been trying (not very successfully) to sell goods online via efforts like Google Shopping. Just as deep-learning technologies can enrich social experiences, they can power product recommendations and custom offers, perhaps helping Google compete with the likes of Amazon and Groupon.

Google will have to walk a fine line: Any of these applications could dramatically increase Google’s “creep factor” as it leverages our personal data. Curiously, Google’s acquisition of DeepMind reportedly includes oversight by an internal ethics board.

Will DeepMind help the “Google Brain”?

So what about the effort to create an artificial intelligence on par with human intellect? Sadly for fans of robot overlords, the DeepMind acquisition is at best peripheral to that effort, and probably unrelated.

“I’m glad to hear the news about Google’s acquisition of DeepMind, since it will attract more attention to this field,” noted Pei Wang, an artificial general intelligence researcher at Temple University. “However, in my opinion, reinforcement learning and deep learning are not enough to give us ‘thinking machines.’”


Part of the problem is scale. Google’s cat-recognizing neural network ran across some 16,000 processor cores and had roughly a billion connections, while a human brain has an estimated 100 billion neurons and 100 to 500 trillion synapses. Even Google doesn’t have that kind of computing horsepower sitting around.
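Putting the figures quoted above side by side (and treating the network's connections as the nearest analogue of synapses, which is itself generous) gives a rough sense of the gap:

```python
# Back-of-the-envelope comparison using the figures quoted above.
network_connections = 1e9           # ~1 billion connections in the 2012 cat network
brain_synapses = (100e12, 500e12)   # 100 to 500 trillion synapses in a human brain

low, high = (s / network_connections for s in brain_synapses)
print(f"the brain has roughly {low:,.0f}x to {high:,.0f}x more synapses")
# -> the brain has roughly 100,000x to 500,000x more synapses
```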

More significantly, a “node” in a neural network – even one trained by deep learning – doesn’t correspond to a biological neuron. We still have only a general idea of how neurons work. If we want to build human-level intelligence by emulating biological processes, that means modeling the physical and chemical details of neurons – and that will take even more computing power. Efforts have been made: In 2005, a 27-processor cluster took 50 days to simulate one second of the activity of 100 billion neurons; since then, the biggest brain-simulation effort has probably been IBM’s 24,576-node attempt to simulate a cat-scale brain – although its neurons were vastly simplified models rather than biologically faithful ones.

In other words, Google is still a long way from achieving the processing scale of a human brain, let alone understanding how it works. Even with DeepMind.
