This new Google Lens feature looks like it’s straight out of a sci-fi movie

Google has introduced AR Translate as part of a trio of Google Lens updates aimed at pushing image translation forward. The company demonstrated the feature at its Search On 2022 conference on Wednesday, showing how AR Translate uses AI to make an image containing a foreign language look natural after its text has been translated.

Currently, when Lens translates text in an image, it overlays colored blocks that mask parts of the background. AR Translate better preserves the image by removing those blocks and swapping in the translated text directly, so the result looks as though it were the original photo.
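The erase-and-replace idea can be illustrated with a toy sketch. The code below is an assumption-laden stand-in, not Google's method: it fakes the inpainting step by filling a known text bounding box with the average of the pixels just outside it, where Lens actually uses a learned inpainting model (the Magic Eraser technology mentioned later). The function name, box format, and grayscale-array image are all invented for illustration.

```python
import numpy as np

def erase_text_region(img: np.ndarray, box: tuple) -> np.ndarray:
    """Crudely 'inpaint' a text bounding box (top, left, bottom, right)
    by filling it with the average value of the 1-pixel border around it.
    This only illustrates the erase step; a real pipeline would use a
    learned inpainting model, then render the translated string over
    the filled region in a matching font size and color."""
    top, left, bottom, right = box
    out = img.copy()
    # Sample a 1-pixel border around the box as the background estimate.
    border = np.concatenate([
        img[max(top - 1, 0), left:right],
        img[min(bottom, img.shape[0] - 1), left:right],
        img[top:bottom, max(left - 1, 0)],
        img[top:bottom, min(right, img.shape[1] - 1)],
    ])
    out[top:bottom, left:right] = border.mean()
    return out

# Toy example: a flat gray "photo" with a dark text block in the middle.
photo = np.full((10, 10), 200.0)
photo[4:6, 3:7] = 0.0          # the original (untranslated) text
clean = erase_text_region(photo, (4, 3, 6, 7))
```

After the call, the dark text block is replaced by the surrounding gray, leaving a clean background for the translated text to be drawn onto.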

What you love about Translating with Lens is now even better. 💡

With major advancements in AI, translated text appears seamlessly integrated, as if it was part of the original picture. Turning text… into context! #SearchOn

— Google (@Google) September 28, 2022

Google said it optimized its machine learning models so that AR Translate can run in just 100 milliseconds. That speed is made possible by the same technology Google uses for Magic Eraser, whether you're translating a screenshot or pointing the live Lens camera at a poster.

AR Translate is an impressive innovation that looks as though it's taken straight out of a sci-fi film. People will not only be able to translate the posters they see at a museum, zoo, or other tourist attraction when they travel abroad, but one day they may also be able to translate street signs and storefront signs faster than the blink of an eye. We're not sure whether the live Lens camera will handle signs with raised lettering, like those on buildings. Even so, it's good to see Google continuing to improve one of its most impressive (and helpful) features.

AR Translate will roll out on the Google app later this year.

Cristina Alexander
Cristina Alexander has been writing since 2014, from opining about pop culture on her personal blog in college to reporting…