Google Assistant 2.0 isn’t just a minor evolution. It’s a game-changing upgrade

Folding devices like the Galaxy Fold and Huawei Mate X represent the next major shift in phone design, but what about the next, next? What will change the way we interact with our phones once we’re done folding them in half?

Google gave us a teaser during the Google I/O 2019 keynote presentation, demonstrating the prowess of Google Assistant when the masses of data it requires to operate are shifted from the cloud to the device. Voice control has been part of our smartphone experience for a while, but the speed, versatility, and accuracy of this advanced system could be a game-changer.

Meet Google Assistant 2.0

What did Google announce? A next-generation version of the Google Assistant we currently know and love from our Android phones, Google Nest products, and even Android Auto. Google Assistant uses three complex algorithms to understand, predict, and act upon what we’re saying, which together require 100GB of storage and a network connection to operate. Google announced it has used deep learning to combine and shrink those algorithmic models down to 500MB, small enough to fit happily on our phones and to stop network latency from slowing down responses and actions.
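Google hasn’t published the exact recipe it used to get from 100GB to 500MB, but post-training quantization is one common way to shrink a trained model for on-device use. Here’s a minimal, hypothetical sketch using TensorFlow Lite; the model paths are made up for illustration, and this shows the general technique rather than Google’s actual Assistant pipeline:

```python
# Hypothetical sketch of shrinking a trained model for on-device use
# with post-training quantization in TensorFlow Lite. This illustrates
# the general technique only, not Google's actual Assistant pipeline;
# "saved_speech_model" is a made-up path.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_speech_model")

# Quantize 32-bit float weights down to 8-bit integers, cutting the
# stored model size by roughly 4x, usually at a small accuracy cost.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()

with open("assistant_model.tflite", "wb") as f:
    f.write(tflite_model)
```

Once a model is compact enough to ship on the phone itself, requests can be answered locally, which is where the latency win comes from.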

Google CEO Sundar Pichai said using the next-generation Assistant is so fast it’ll make tapping the screen seem slow.

“I think this is going to transform the future of the Assistant,” Pichai said.

Hyperbole? No. The demo was mind-blowing. The verbal commands came back-to-back, and included setting timers, opening apps, performing searches, carrying out basic phone operations, and even taking a selfie. A second demo showed how Assistant could quickly and easily generate message and email replies, pulling in content from Google Photos and search. It all used continued conversation, without repeating the Hey Google wake word, along with natural commands, often across multiple apps.

Scott Huffman, Google’s vice president of engineering for Google Assistant, summed up what the new Assistant could do, saying: “This next generation Assistant will let you instantly operate your phone with your voice, multi-task across apps, and complete complex actions, all with nearly zero latency.”

Simply put, Google is giving you the tools to confidently speak to your phone, and have it work faster than when you touch it. This has the potential to transform the way we use our devices, and even the overall design of the software and hardware, in the future.

Transformative

Integrating a reliable, fast version of Google Assistant into our phones without the need for a network connection is the final hurdle for creating a truly voice-operated device. Voice-controlled programs like this need to be genuinely helpful, and until they can do everything we ask with little or no alteration in the way we speak, they won’t become indispensable. The on-device Assistant is a massive step toward this.

Recently, Google has pushed for changes in how we summon Assistant on our phones, with many new devices using a short press of the sleep/wake key to open it rather than an on-screen action, and many now shipping with a dedicated Google Assistant button as well. This walkie-talkie-style action makes it easier to call up the Assistant without looking at the phone, ready for verbal control through a pair of headphones, and it is crucial for speeding up and simplifying the launch process.

Removing the need for a wake word, such as Hey Google, and introducing continued conversation are also key. Continued conversation is already part of Google Home, but not of Assistant on our phones, and without it the speed required for truly seamless voice control wouldn’t be possible. All of this combined gives you a look at Google’s plan to make Assistant part of our regular phone routine.

Speed is everything, because with it comes convenience. Without it, there’s only frustration. You can reply to messages now using dictation, but you have to go through a series of steps first, and Assistant can’t always help. Using voice is faster, provided the software is accurate and responsive enough. Google Assistant 2.0 looks like it will achieve this goal, and using our phones for more than just basic, often-repeated tasks may be about to become a quicker, less screen-intensive process.

Scenarios

Less screen-intensive? Definitely. If we trust the software to do what we ask of it, even in the most basic situations, we will look at our phones less. We can carry out simple tasks now using Assistant and our voice, but not with the level of accuracy, versatility, and speed shown at Google I/O.

It’s the versatility that shouldn’t be overlooked. Performing multiple tasks in succession, without manually flicking through apps or making a string of gesture-based selections, will make our phones more natural to use. It’s the way we perform tasks in the real world: how we tell others what we want them to do, or communicate what we’re about to do.

However, the concept of a voice-controlled phone isn’t without problems. First, doing all this will take some practice. Understanding how to use voice, from which commands it can accept to how to end a conversation, requires patience, and retraining our brains not to resort to a finger or a gesture on our phones will take some time.

Not only that, it will require us to become more comfortable using voice for control, particularly outside the home. It will also require accepting that Google will know more about us, and that careless talk to a phone in public could open up privacy problems. We’ll all have to be more vigilant about what we share with Google, and about which actions we carry out in public, when we start to use voice more often.

Google’s not the first

The on-stage Assistant demo was easily the most comprehensive and relatable example we’ve seen so far of how voice can transform our phone use, but Google isn’t the first to try to harness the power of speech for device control, or to explore the speed of on-device A.I. processing.

Huawei made excellent use of on-device A.I. for image recognition and other camera-related features when it introduced the Kirin 970 processor, which had a Neural Processing Unit (NPU) onboard, ready to take the A.I. strain rather than leave the processing in the hands of a cloud-based system. The speed benefits were enormous, and unique at the time. Huawei has since demonstrated the NPU’s abilities in interesting ways and outlined how it sees A.I. shaping the future, while some other manufacturers have soldiered on with cloud-driven A.I., with poor results.
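To make “on-device” concrete, the sketch below loads the hypothetical quantized model from the earlier example and runs inference with the TensorFlow Lite interpreter, with no network round trip anywhere in the loop; an NPU like the Kirin 970’s accelerates exactly this kind of local work. Again, this is an illustration of the general approach, not any vendor’s actual code:

```python
# Hypothetical sketch: running a quantized model entirely on-device with
# the TensorFlow Lite interpreter. "assistant_model.tflite" is the
# made-up file from the earlier sketch; no cloud call is involved.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="assistant_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input shaped to whatever the model expects.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)

interpreter.invoke()  # Inference runs locally, so latency is hardware-bound.
result = interpreter.get_tensor(output_details[0]["index"])
```

On a phone, the same interpreter can hand work off to dedicated silicon through an accelerator delegate, which is what made the Kirin 970’s camera features feel instant.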

When Samsung launched its own virtual assistant, Bixby, in 2017, the goal was to create an assistant that could cover everything we’d normally do with a touch command. Samsung’s Injong Rhee told Digital Trends at the time, “What we’re looking at is revolutionizing the interface.” Bixby isn’t the best example of a capable voice assistant, but Samsung’s prediction of a revolution, should the technology work correctly, still holds.

When will it happen?

What we’re on the cusp of here, now that Google has found a way to squeeze 100GB of complex data modeling into 500MB, is the development of phone interfaces, apps, and potentially even hardware designs that rely on us looking and touching less, and speaking more. Pichai wasn’t exaggerating when he called this breakthrough a “significant milestone.”

We won’t even have to wait long to try it out. Huffman promised that the next-generation Assistant will first come to the new Pixel phones (meaning the Pixel 4) later in 2019. Assistant is available on the vast majority of Android smartphones, and although the feature will debut on the new Pixel and Android Q software, more phones will almost certainly get it in the future.

The question is, are you ready to use voice as often as you use touch to control your phone?

Andy Boxall