AI can now predict whether or not humans will think your photo is awesome

Everypixel Aesthetics ranks uploaded photos. Image: Everypixel
Just how good is that photo you just snapped? Everypixel Aesthetics thinks it may have the answer. The new neural network is designed both to auto-tag a photo and to estimate the probability that it is a good one.

The tool comes from Everypixel, a startup that is looking to change the stock photography market by creating a search tool that browses multiple platforms at once, giving the little guy just as much exposure as the stock photo giants.

The Aesthetics tool, still in beta testing, allows users to upload a photo and get an auto-generated list of tags, as well as a percentage rating of the “chance that this image is awesome.” According to the developers, the neural network was trained to view an image in much the same way a human photo editor would, looking at factors such as color, sharpness, and subject. The system was trained on a dataset of 946,894 images.
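For the curious, here is roughly what sending a photo to a scoring service like this could look like in code. This is a minimal Python sketch, not Everypixel's documented API: the endpoint URL, request fields, credentials, and response shape are assumptions made purely for illustration.

import requests

# Hypothetical endpoint and credentials; check the service's own API docs.
API_URL = "https://api.everypixel.com/v1/quality"
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"

def score_photo(path):
    """Upload a photo and return the service's 'awesomeness' score (0 to 1)."""
    with open(path, "rb") as image_file:
        response = requests.post(
            API_URL,
            files={"data": image_file},       # assumed form-field name
            auth=(CLIENT_ID, CLIENT_SECRET),  # assumed basic-auth scheme
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: {"quality": {"score": 0.87}, "status": "ok"}
    return response.json()["quality"]["score"]

print("Chance this image is awesome: {:.0%}".format(score_photo("photo.jpg")))

Under those assumptions, a returned score of 0.7 would correspond to the 70 percent “chance this image is awesome” figure shown on the site.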

The system is designed to help curate the best images — but just how accurate is it? As early users report, the system seems to be fairly good at recognizing factors like whether or not the image is sharp and if the composition is interesting, but it is certainly far from a pair of human eyes. One user doodled a black brushstroke on a white canvas in Photoshop and still got a rating over 70 percent. The program is not the first of its kind either — Dreamstime recently started using artificial intelligence to help photo editors speed up the approval process.

While the results of just how “awesome” a photo is may not be accurate for every image, the auto-tagging tool could prove useful, generating a list of keywords from object recognition as well as less concrete terms, like love, happiness, and teamwork. Clicking on a keyword will bring up an Everypixel search for other images with that same tag, or users can copy and paste the list of keywords.

The tool may have some bugs to work out, but it’s free to upload a photo and see just what a robot thinks of your work.
