
Google Assistant 2.0 isn’t just a minor evolution. It’s a game-changing upgrade


Folding devices like the Galaxy Fold and Huawei Mate X represent the next major shift in phone design, but what comes after that? What will change the way we interact with our phones once we’re done folding them in half?

Google gave us a teaser during the Google I/O 2019 keynote presentation, as it demonstrated the prowess of Google Assistant when the masses of data it requires to operate are shifted from the cloud to the device. Voice control has been part of our smartphone experience for a while, but the speed, versatility, and accuracy of this advanced system could be a game-changer.

Meet Google Assistant 2.0

What did Google announce? A next-generation version of the Google Assistant we currently know and love from our Android phones, Google Nest products, or even Android Auto. Google Assistant uses three complex algorithms to understand, predict, and act upon what we’re saying, which together require 100GB of data storage and a network connection to operate. Google announced it has used deep learning to combine and shrink those algorithmic models down to 500MB, which means the Assistant fits happily on our phones, and network latency no longer slows responses and actions down.
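Google hasn’t published exactly how it achieved that reduction, but one widely used family of techniques for shrinking neural networks is quantization: storing each weight as an 8-bit integer plus a scale factor instead of a 32-bit float, which cuts storage roughly fourfold before any pruning or model merging. The sketch below is purely illustrative of that general idea, not Google’s actual method; the function names and numbers are invented for the example.

```python
import array

def quantize(weights, bits=8):
    """Map float weights onto signed integers plus a single scale factor."""
    scale = max(abs(w) for w in weights) / (2 ** (bits - 1) - 1)
    q = array.array("b", (round(w / scale) for w in weights))  # 1 byte each
    return q, scale

def dequantize(q, scale):
    """Approximately recover the original floats."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.031, 1.27]
q, scale = quantize(weights)

float_bytes = len(weights) * 4  # stored as 32-bit floats
int_bytes = len(q)              # stored as 8-bit integers
print(float_bytes, "->", int_bytes)  # 16 -> 4
```

The recovered values are close to, but not exactly, the originals; accepting that small loss of precision in exchange for a much smaller model is the core trade-off behind on-device A.I.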

[Image: Google Assistant demo at Google I/O. Julian Chokkattu/Digital Trends]

Google CEO Sundar Pichai said using the next-generation Assistant is so fast it’ll make tapping the screen seem slow.

“I think this is going to transform the future of the Assistant,” Pichai said.

Hyperbole? No. The demo was mind-blowing. The verbal commands came back-to-back, and included setting timers, opening apps, performing searches, carrying out basic phone operations, and even taking a selfie. A second demo showed how Assistant could quickly and easily compose message and email replies, pulling in content from Google Photos and Search. It used continued conversation, without repeating the Hey Google wake word, along with natural commands, often across multiple apps.

Next Generation Google Assistant: Demo 2 at Google I/O 2019

Scott Huffman, Google’s vice president of engineering for Google Assistant, summed up what the new Assistant could do, saying: “This next generation Assistant will let you instantly operate your phone with your voice, multi-task across apps, and complete complex actions, all with nearly zero latency.”

Simply put, Google is giving you the tools to confidently speak to your phone, and have it work faster than when you touch it. This has the potential to transform the way we use our devices, and even the overall design of the software and hardware, in the future.


Integrating a reliable, fast version of Google Assistant into our phones, without the need for a network connection, is the final hurdle for creating a truly voice-operated device. Voice-controlled programs like this need to be genuinely helpful for us to use them, and until they can do everything with little or no alteration in the way we speak, they won’t become indispensable. The on-device Assistant is a massive step toward this.

Speed is everything, because with it comes convenience.

Recently, Google has pushed for changes in how we summon Assistant on our phones, with many new devices using a short press of the sleep/wake key to open Assistant rather than an on-screen action, and many phones now come with a dedicated Google Assistant button. This walkie-talkie action makes it easier to call up the Assistant without looking at the phone, ready for verbal control through a pair of headphones, and is crucial for speeding up and simplifying the launch process.

Removing the need for a wake word, such as Hey Google, and introducing continued conversation is also key. Continued conversation is already part of Google Home, but not Assistant on our phones; without it, the speed required for truly seamless voice control wouldn’t be possible. All of this combined gives you a look at Google’s plan to make Assistant part of our regular phone routine.

Speed is everything, because with it comes convenience. Without it, there’s only frustration. You can reply to messages now using dictation, but you have to go through a series of steps first, and Assistant can’t always help. Using voice is faster, provided the software is accurate and responsive enough. Google Assistant 2.0 looks like it will achieve this goal, and using our phones for more than basic, often-repeated tasks may be about to become a quicker, less screen-intensive process.


Less screen intensive? Definitely. If we trust the software to do what we ask, even in the most basic situations, we will look at our phones less. We can carry out simple tasks now, using Assistant and our voice, but not with the same level of accuracy, versatility, and speed shown at Google I/O.


It’s the versatility that shouldn’t be overlooked. Performing multiple tasks, all in succession, without manually flicking through apps or making multiple gesture-based selections, will make our phones more natural to use. It’s the way we perform tasks in the real world, and how we tell others what we want them to do, or communicate what we’re about to do. It’s all very natural.

Retraining our brains not to resort to using a finger or gesture on our phones will take some time.

However, the concept of a voice-controlled phone isn’t without problems. First, all this will take some practice. Understanding how to use voice, from which commands it can accept to how to end a conversation, requires patience, and retraining our brains not to resort to a finger or gesture will take some time.

Not only that, it will require us to become more comfortable using voice for control, mostly outside the home. It will also demand acceptance that Google will know more about us, and that careless talk to a phone in public could open up privacy problems. We’ll all have to be more vigilant about what we share with Google, and which actions we carry out in public, when we start to use voice more often.

Google’s not the first

The on-stage Assistant demo was easily the most comprehensive and relatable example we’ve seen so far of how voice can transform our phone use; but Google isn’t the first to try to harness the power of speech for device control, or to explore the speed of on-device A.I. processing.

Huawei made excellent use of on-device A.I. for image recognition and other camera-related features when it introduced the Kirin 970 processor, which had a Neural Processing Unit (NPU) onboard, ready to take the A.I. strain rather than leave the processing to a cloud-based system. The speed benefits were enormous, and unique at the time. Huawei has since demonstrated the NPU’s abilities in interesting ways and outlined how it sees A.I. shaping the future, while some other manufacturers have struggled on with cloud-driven A.I., with poor results.

[Image: Huawei’s Kirin 970 chip]

When Samsung launched its own virtual assistant, Bixby, in 2017, the goal was to create an assistant that could cover everything we’d normally do with a touch command. Samsung’s Injong Rhee told Digital Trends at the time, “What we’re looking at is revolutionizing the interface.” Bixby isn’t the best example of a capable voice assistant, but Samsung’s prediction of a revolution should it work correctly is accurate.

When will it happen?

What we’re on the cusp of here, now that Google has found a way to squeeze 100GB of competent and complex data modeling into 500MB, is the development of phone interfaces, apps, and potentially even hardware designs that rely on us looking and touching less, and speaking more. Pichai wasn’t exaggerating when he called this breakthrough a “significant milestone.”

We won’t even have to wait long to try it out. Huffman promised that the next-generation Assistant will first come to the new Pixel phones (meaning the Pixel 4) later in 2019. Assistant is available on the vast majority of Android smartphones, and although the feature will debut on the new Pixel and Android Q software, more phones will almost certainly get it in the future.

The question is, are you ready to use voice as often as you use touch to control your phone?


Andy Boxall
Andy is a Senior Writer at Digital Trends, where he concentrates on mobile technology, a subject he has written about for…