
Google just quietly announced a step toward AI seeing the world better than humans can


Google has announced new updates to AI Overviews in Google Search, specifically new screens in Circle to Search.

What beta testers over at Android Authority spotted, more subtly, was just how smart this can be.


The new features will let you capture an image containing text and have Google summarize it, extract the text, and explain it with further information.

Essentially, the AI behind this feature not only sees the image and picks out the text, but can both simplify that text and examine it in more depth.

While we could dwell on how unsettling it is to imagine future machines seeing on both of those levels at once, let's focus on what it's useful for right now.

How do the new Circle to Search features help?

Explain. This feature identifies who wrote the text and highlights the main copy. There's also an AI-generated blurb with links to relevant websites.

Summarization. This clarifies what's on the page, covering both the highlighted text and, where relevant, the content around it. It looks quite similar to Gemini's current summarization feature.

Extract. This takes text out of the image and breaks it down into an organized layout, with sub-headings where needed, while keeping it as minimal as possible for clarity.

What’s the future for this tech?

The obvious next step would be to use this in Google Lens, so all those rich features are available as you point your camera at something of interest.

Imagine this on an AR display, pulling in information about what is being seen while adding depth and context to suit the situation or individual. A futuristic dream, previously reserved for sci-fi, appears to be fast approaching reality.

Luke Edwards
Luke has over two decades of experience covering tech, science and health. Among many others, Luke writes about health tech…