Unveiled earlier in 2017 at Google I/O, the first public version of the artificially intelligent computer vision program Google Lens is now part of the new Google Pixel 2. During Wednesday's October 4 event in San Francisco, Google shared a preview of Lens that will ship inside the new Pixel 2 smartphone, with integration into both Google Photos and Google Assistant.
Google Lens is the tech giant's computer vision software, which pulls information from a photograph either to save time by skipping the typing or to teach us something new about the things we see around us. The tool effectively mixes Google search with a camera, and while the Pixel 2 only contains a preview of the feature, the platform already offers a few promising shortcuts.
During the event, Google's Aparna Chennapragada showed how the new feature lets the smartphone's camera serve as a sort of keyboard. When users take a photo of something with text on it, like a flyer, Google Lens lets them highlight and copy details such as email addresses, phone numbers, websites, and street addresses. The shortcut makes it easy to look up a location on Google Maps or call a phone number without typing it out.
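To make the idea concrete, here is a minimal sketch of the general technique, using the open-source Tesseract OCR engine and regular expressions rather than anything from Google; the file name and patterns are placeholders for illustration.

```python
# Illustrative sketch only: pulling actionable text out of a photo with
# open-source OCR (pytesseract) plus regular expressions. This is not
# Google's implementation, just the general technique Lens automates.
import re

import pytesseract  # assumes the Tesseract OCR engine is installed locally
from PIL import Image

# Hypothetical input file, e.g. a photo of a flyer.
text = pytesseract.image_to_string(Image.open("flyer.jpg"))

# Extract the kinds of strings Lens lets you copy with a tap.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)
phones = re.findall(r"\+?\d[\d\s().-]{7,}\d", text)
urls = re.findall(r"https?://\S+|www\.\S+", text)

print(emails, phones, urls)
```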
Besides serving as a visual shortcut to typing in long and unusual email addresses, Google Lens is also designed to help users understand the objects they see, starting with art and entertainment. Snapping a photo of a piece of art will pull up the artist's name and other works they painted. See a movie poster? Lens will tell you whether the flick is worth watching. Snapping photos of album covers and book covers likewise leads you to more details on the work.
The preview inside the Pixel 2 is just a start for the computer vision software. When the software was first announced, Google laid out a long list of possibilities, including translating text, getting more details on a business, reading Wi-Fi network settings, and learning the name of that flower you just spotted.
Google’s computer vision also works with existing photos, powering a number of tools inside the native Google Photos app on the Pixel 2. Searching for specific objects, people and even famous landmarks is possible through the program’s auto-tagging feature.
Google Lens is based on machine learning. Google essentially used the millions of photos in its search results to train the software to recognize what a specific object looks like. With enough photos, the program can learn to recognize the Eiffel Tower on a cloudy day, lit up at night, or even blurred from camera shake, and still correctly identify what is in the photo.
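As a rough illustration of that idea, the sketch below runs a photo through an off-the-shelf image classifier pretrained on millions of labeled images (torchvision's ResNet-50, not Google's own model); the photo file name is a placeholder.

```python
import torch
from PIL import Image
from torchvision import models

# Load a ResNet-50 pretrained on ImageNet, a dataset of over a million
# labeled photos. This stands in for Google's far larger training corpus.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # resize, crop, and normalize as the model expects

img = Image.open("eiffel_tower.jpg")  # hypothetical photo
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

# Print the most likely label and its confidence.
top = probs.argmax().item()
print(weights.meta["categories"][top], float(probs[0, top]))
```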
Chennapragada said that Google Lens will continue to improve with use. For example, she said, Google's voice recognition at first wouldn't always recognize speech correctly, particularly with factors like accents. Now, after several years of development, Google's voice recognition has a 95 percent accuracy rate.
Google CEO Sundar Pichai said that the object recognition AI built by Google had a 39 percent accuracy rate. Using what's called AutoML, essentially artificial intelligence building more AI programs, that accuracy rate has improved to 43 percent and is continuing to climb.
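For a sense of what "AI building AI" means in miniature, the toy sketch below lets a program, rather than a human, search over small neural-network configurations. It uses scikit-learn's random search and is only an analogy; Google's AutoML performs neural architecture search at far larger scale.

```python
# AutoML, loosely: software searches over model designs instead of a human
# hand-tuning them. A toy random search over small neural-net shapes.
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Candidate "designs" the search program may pick from.
search_space = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32), (128, 64)],
    "alpha": [1e-4, 1e-3, 1e-2],
    "learning_rate_init": [1e-3, 1e-2],
}

search = RandomizedSearchCV(
    MLPClassifier(max_iter=300),
    search_space,
    n_iter=8,        # try 8 random configurations
    cv=3,            # score each by 3-fold cross-validation
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```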
“This is why we are excited about the shift from mobile first to AI first, it’s radically rethinking how computers work,” Pichai said during the presentation. “Computers should adapt to how people live their life, rather than people adapting to computers.”
Google Lens will first be available on the Pixel 2, accessed by tapping the Lens icon inside both Google Photos and Google Assistant.