
Gemini app finally gets the world-understanding Project Astra update

Gemini Live App on the Galaxy S25 Ultra broadcast to a TV showing the Gemini app with the camera feature open
Nirave Gondhia / Digital Trends

At MWC 2025, Google confirmed that its experimental Project Astra assistant would roll out widely in March. The feature has now started reaching users, albeit in a phased manner, beginning with Android smartphones.

On Reddit, one user shared a demo video showing a new “Share Screen With Live” option when the Gemini assistant is summoned. The Gemini Live interface has also gained two new options for live video and screen sharing.


Google has also confirmed to The Verge that these features are now rolling out. Until now, Gemini’s contextual on-screen awareness was limited to the “Ask about screen” feature.

Project Astra is the future of Gemini AI


In case you aren’t familiar, Project Astra is Google’s most futuristic take on an AI assistant: one that can understand text, audio, images, video, and a live camera feed in real time. Google DeepMind research director Greg Wayne likened it to a “little parrot on your shoulder that’s hanging out with you and talking to you about the world.”

When you summon Gemini and enable the Share Screen With Live option in any app, it will analyze the on-screen content and answer queries based on it. For example, users can ask it to describe the current activity in an app, break down or summarize an article they are reading, or discuss the scene during video playback.

The more impressive capability is the world-understanding system. When you switch Gemini to Live mode, it now shows a video feed option that opens the camera. In this mode, if you point your camera at any object, Gemini can see it, comprehend it, and answer questions based on what it sees.


Whether it’s pointing the camera at a book passage, asking Gemini to tell you more about a monument, getting decor advice, or solving problems written on a board or in a book, Gemini Live’s Project Astra upgrade can do it all. It is not too different from Apple’s Visual Intelligence on iPhones, or the open-source HuggingSnap app, which promises an offline world-understanding AI.

Digital Trends saw a demo of Project Astra at MWC earlier this year and got an early taste of a massively upgraded AI assistant experience on smartphones. It is worth pointing out that Gemini Live’s Project Astra upgrade will be limited to customers who pay for a Gemini Advanced subscription.

For now, the Project Astra update doesn’t seem to be rolling out widely. I have a Gemini Advanced subscription via the Google One AI Premium bundle, but I don’t see the feature yet on any of my Pixel phones running the latest stable version of Android 15 or the beta build of Android 16.

Nadeem Sarwar