
Google just quietly announced a step toward AI seeing the world better than humans can


Google has announced new updates to AI Overviews in Google Search, specifically new screens in Circle to Search.

What was spotted more subtly by beta testers over at AndroidAuthority, though, is just how smart this can be.


The new features let you take an image containing text and have Google summarize it, extract the text, and explain what's going on with further information.

Essentially, the AI smarts behind this feature not only see the image and pick out the text, but also simplify it on one hand and examine it in more depth on the other.

While we could dwell on how terrifying the thought of future machines seeing on those two levels at the same time could be, let's focus on its uses right now.

How do the new Circle to Search features help?

Explain. This feature lets you see who wrote the text and highlights the main copy. There’s also an AI-generated blurb with links to relevant websites.

Summarization. This helps clarify what's on the page, both what's highlighted and, if deemed relevant, more of it. It looks quite similar to Gemini's current summarization feature.

Extract. This takes text out of the image and breaks it down into an organized layout, with sub-headings where needed, while keeping it as minimal as possible for clarity.

What’s the future for this tech?

The obvious next step would be to use this in Google Lens, so all those rich features are available as you point your camera at something of interest.

Imagine this on an AR display, pulling in information about what is being seen while adding depth and context to suit the situation or individual. A futuristic dream, previously reserved for sci-fi, appears to be fast approaching reality.

Luke Edwards
Luke has over two decades of experience covering tech, science and health. Among many others, Luke writes about health tech…
Thanks to Gemini, you can now talk with Google Maps

Google is steadily rolling out contextual improvements to Gemini that make it easier for users to derive AI’s benefits across its core products. For example, opening a PDF in the Files app automatically shows a Gemini chip to analyze it. Likewise, summoning it while using an app triggers an “ask about screen” option, with live video access, too.
A similar treatment is now being extended to the Google Maps experience. When you open a place card in Maps and bring up Gemini, it now shows an “ask about place” chip right above the chat box. Gemini has been able to access Google Maps data for a while now using the system of “apps” (formerly extensions), but it now proactively appears inside the Maps application.

The name is pretty self-explanatory. When you tap on the “ask about place” button, the selected location is loaded as a live card in the chat window to offer contextual answers. 

Gemini app finally gets the world-understanding Project Astra update

At MWC 2025, Google confirmed that its experimental Project Astra assistant will roll out widely in March. It seems the feature has started reaching users, albeit in a phased manner, beginning with Android smartphones.
On Reddit, one user shared a demo video that shows a new “Share Screen With Live” option when the Gemini Assistant is summoned. Moreover, the Gemini Live interface also received two new options for live video and screen sharing.
Google has also confirmed to The Verge that the aforementioned features are now rolling out. So far, Gemini has only been capable of contextual on-screen awareness courtesy of the “Ask about screen” feature.

Project Astra is the future of Gemini AI

Cost-cutting strips Pixel 9a of the best Gemini AI features in Pixel 9

The Pixel 9a has been officially revealed, and while it's eye candy, there are some visible cutbacks compared to the more premium Pixel 9 and 9 Pro series phones. The cutbacks we don't see include less RAM than the Pixel 9 phones, which can limit the new mid-ranger's ability to run AI applications despite it using the same Tensor G4 chipset.

Google's decision to limit the RAM to 8GB, compared to the 12GB on the more premium Pixel 9 phones, sacrifices some of its ability to run AI tasks locally. Ars Technica has reported that, as a result of the cost-cutting, the Pixel 9a runs an "extra extra small" (XXS) variant of the Gemini Nano 1.0 model that drives on-device AI functions, instead of the "extra small" variant on the Pixel 9.
