Google Lens has been changing how smartphone users use their device's camera for a year now. Using deep machine learning to analyze images captured through a device's camera, the app can perform tasks like telling you about a book when you take a photo of its cover, identifying shops or locations from a picture of them, or connecting to a Wi-Fi network when the camera is pointed at a label showing the login details.
In a new blog post, Google has given more details about the redesign of Google Lens that launched last week. One major feature is the ability to search based on visual information rather than text. Say you see a cute dog and want to know what breed it is. With traditional text-based search, you would have to look up individual breeds and compare images, or find a full listing of dog breeds and hope to spot the right one. With Google Lens, you can use your camera to capture an image of the dog and have Google identify the breed from that image.
Similarly, you can also search from images to identify items of a similar style. If you see an outfit that you like while you are out and about, or a home decor item like a beautiful lamp, then you can snap an image with Lens and it will search not only for the original item, but also for similar items that have the same style. This process works through a machine learning algorithm that looks through hundreds of millions of images online to pull out the salient visual features of a particular item, allowing Lens to identify both an item and other similar items from just an image.
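The style-matching idea described above is often implemented by comparing feature vectors ("embeddings") that a neural network extracts from each image. As a minimal sketch (assuming hypothetical hand-made vectors and item names, not anything from Google's actual system), similar items are the ones whose vectors have the highest cosine similarity to the query image's vector:

```python
import numpy as np

# Hypothetical feature vectors for catalog items, standing in for the
# embeddings a real visual-search network would extract from photos.
catalog = {
    "lamp_a": np.array([0.9, 0.1, 0.3]),
    "lamp_b": np.array([0.8, 0.2, 0.4]),
    "sofa":   np.array([0.1, 0.9, 0.2]),
}

def most_similar(query, catalog):
    """Return catalog item names ranked by cosine similarity to the query."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(catalog, key=lambda name: cosine(query, catalog[name]),
                  reverse=True)

# Features of the photographed lamp: both lamps rank above the sofa.
query = np.array([0.85, 0.15, 0.35])
print(most_similar(query, catalog))
```

At Lens's scale the same ranking runs against hundreds of millions of indexed images, using approximate nearest-neighbor search rather than a full sort.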
One challenge for Lens is getting it to work with text. Teaching the camera to understand text requires optical character recognition (OCR), which lets Lens identify written characters even when they appear in different fonts, at an angle, or in poor colors or lighting. With the updated OCR in Lens, you can now copy and paste text from a physical document onto your phone using your camera.
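At its simplest, character recognition means matching an observed glyph against known character shapes. The toy sketch below (hypothetical 3x3 bitmaps, nothing like Lens's actual neural models) classifies a glyph by finding the template that differs from it in the fewest pixels, which is also why noise, odd fonts, and bad lighting make OCR hard:

```python
# Toy OCR: classify a glyph bitmap by nearest template (Hamming distance).
# Real systems use neural networks trained over many fonts and conditions;
# this only illustrates the matching principle.

TEMPLATES = {
    "I": ["010",
          "010",
          "010"],
    "L": ["100",
          "100",
          "111"],
    "T": ["111",
          "010",
          "010"],
}

def recognize(glyph):
    """Return the character whose template bitmap differs from the
    glyph in the fewest pixels."""
    def distance(a, b):
        return sum(p != q for ra, rb in zip(a, b) for p, q in zip(ra, rb))
    return min(TEMPLATES, key=lambda ch: distance(glyph, TEMPLATES[ch]))

# A slightly noisy "L" (one pixel flipped) is still recognized.
noisy_L = ["100",
           "110",
           "111"]
print(recognize(noisy_L))  # → "L"
```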
Google is betting that as smartphones get better cameras, we'll use them more and more not only as digital devices, but also for interacting with the real world.