
Google Photos now using A.I. to simplify editing and sharing images

Google rolling out A.I.-enhanced features to Google Photos app

Sundar Pichai stands in front of a Google logo at Google I/O 2021.

On May 8, 2018, Google CEO Sundar Pichai took the stage at Google I/O to deliver his keynote presentation, which centered on artificial intelligence. Now, some of those A.I.-based features are rolling out across many of Google's products and services, including Google Photos, in the form of "suggested actions." While Color Pop was the first feature to arrive after Google I/O, all of the new A.I.-enhanced features are now available in the Google Photos app.

Over 5 billion pictures are viewed every day in Photos, and Google sees A.I. as the answer to speeding up the editing and sharing process. Suggested actions, as the name implies, are context-sensitive actions that display automatically while viewing individual photos. For example, using facial recognition, Photos will know who's in a picture and offer a one-tap option to share it with that person (assuming this is someone in your contact list whose face Google has already learned). If that person appears in multiple images, Photos will even suggest sharing all of them — again, with just a single tap.

When it comes to editing, different corrections will be suggested based on the look of the photo. If an image is underexposed, a simple “Fix brightness” suggestion will pop up automatically. Other suggested actions will be less subtle, including the “Pop color” option that desaturates the background to draw attention to your subject. Sure, selective color is one of the more notorious photographic clichés today, but the impressive part here is how the app is able to accurately differentiate the subject from the background. On images where the software is able to pick out the subject, Google Assistant will suggest the edit.
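The interesting engineering problem in "Pop color" is the subject segmentation, which requires a trained model; once a subject mask exists, the effect itself is simple. As a minimal sketch (not Google's implementation), assuming the mask is already given, the background can be desaturated like this:

```python
import numpy as np

def color_pop(image: np.ndarray, subject_mask: np.ndarray) -> np.ndarray:
    """Keep the subject in color and desaturate everything else.

    image: H x W x 3 RGB array (uint8). subject_mask: H x W boolean array
    marking subject pixels. In practice the mask would come from a
    segmentation model -- here it is simply an input to the sketch.
    """
    # Luminance via the Rec. 601 weights, broadcast back to 3 channels.
    gray = (image @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)
    gray_rgb = np.stack([gray] * 3, axis=-1)
    # Where the mask is True, keep original colors; elsewhere use gray.
    return np.where(subject_mask[..., None], image, gray_rgb)

# Tiny demo: a 2x2 red image where only the top-left pixel is "the subject".
img = np.array([[[255, 0, 0], [255, 0, 0]],
                [[255, 0, 0], [255, 0, 0]]], dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
out = color_pop(img, mask)
print(out[0, 0])  # subject pixel stays red: [255 0 0]
print(out[0, 1])  # background pixel becomes gray
```

The hedge bears repeating: everything above other than the grayscale conversion is placeholder — the hard part Google demonstrated is producing `subject_mask` accurately.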


Even more impressive — and likely more useful — is the ability to add color to a black-and-white photograph. When viewing a monochrome image, Photos will offer to "colorize" it. Like magic, tapping the button turns the image into a full-color photograph. Not surprisingly, showing off this feature earned a chorus of cheers from the audience. Adobe demonstrated a very similar technology last year at the annual Adobe MAX conference.


Another audience-favorite feature was much more mundane, but no less useful. Take a picture of a document, and Google Photos will be able to automatically convert the image into a PDF — even if the photo was shot at a wonky angle. The app recognizes the document within the frame, crops it, and corrects the perspective as necessary. Admittedly, this is likely one of those features you will use rarely — but when you need it, you will really appreciate having it.
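The perspective fix described above is a classic computer vision operation: once the document's four corners are found (the machine-learning part), a 3x3 homography maps the skewed quadrilateral onto an upright rectangle. Below is a minimal sketch of that second step using the standard direct linear transform; the corner coordinates are invented for illustration, and a real app would then warp the pixels with something like OpenCV's `warpPerspective`:

```python
import numpy as np

def homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Solve for the 3x3 perspective transform mapping 4 src points to 4 dst points.

    src, dst: arrays of shape (4, 2). Standard direct linear transform with
    the bottom-right matrix entry fixed to 1 -- a textbook method, not
    Google's actual pipeline.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Made-up corners of a skewed document photo, mapped to an upright page.
src = np.array([[120, 80], [900, 140], [860, 1150], [90, 1050]])
dst = np.array([[0, 0], [800, 0], [800, 1100], [0, 1100]])
H = homography(src, dst)

# Applying H to a source corner (homogeneous coordinates) lands on its target.
p = H @ np.array([120, 80, 1.0])
print(p[:2] / p[2])  # ~ [0, 0]
```

With exactly four correspondences the solve is exact, so every detected corner maps precisely onto the output rectangle; detecting those corners robustly in a "wonky" photo is the part that needs the heavier machinery.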

If the suggested actions work as well in practice as they did in the recorded demonstration, it will likely be the most important update to Google Photos yet. Suggested actions will begin rolling out to users "in the next couple of months," according to Pichai.

Although not related to photography, Google also demonstrated new machine vision capabilities coming to Google Lens. Simply by pointing the camera at things, you’ll be able to learn more about them, from looking up words on a restaurant menu, to identifying the building in front of you, to analyzing an outfit you like and automatically being shown similar styles. Again, it remains to be seen how this works in practice, but if it’s anywhere close to the performance we saw in Google’s presentation, this will be a very impressive new feature.

Later during I/O, Google announced a partners program for Google Photos, which will allow third-party apps to integrate with the photo platform.

Updated on May 16, 2018: Updated post to reflect A.I. enhancements are now available. 

Steven Winkelman
Former Digital Trends Contributor
Steven writes about technology, social practice, and books. At Digital Trends, he focuses primarily on mobile and wearables…