AI can now predict whether or not humans will think your photo is awesome

Just how good is that photo you snapped? Everypixel Aesthetics thinks it may have the answer. The new neural network is designed to both auto-tag a photo and estimate the probability that it is a good one.

The tool comes from Everypixel, a startup that is looking to change the stock photography market by creating a search tool that browses multiple platforms at once, giving the little guy just as much exposure as the stock photo giants.

The Aesthetics tool, still in beta testing, allows users to upload a photo and get an auto-generated list of tags, as well as a percentage rating the “chance that this image is awesome.” According to the developers, the neural network was trained to view an image much as a human photo editor would, looking at factors such as color, sharpness, and subject. The network learned from a training dataset of 946,894 images.
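The article does not detail how the service is accessed programmatically, but the workflow it describes (upload a photo, get back a quality score) maps naturally onto a simple REST client. The Python sketch below is purely illustrative: the endpoint URL, authentication header, and response fields are assumptions for the sake of the example, not the documented Everypixel API.

# Hypothetical sketch of a client for an aesthetics-scoring service.
# The endpoint, credential, and response shape are assumptions.
import requests

API_URL = "https://api.example.com/v1/quality"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                         # placeholder credential

def score_photo(path: str) -> float:
    """Upload an image and return the service's 'awesomeness' score (0 to 1)."""
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            files={"data": f},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
    response.raise_for_status()
    payload = response.json()
    # Assume the service returns something like {"quality": {"score": 0.87}}
    return payload["quality"]["score"]

if __name__ == "__main__":
    print(f"Chance this image is awesome: {score_photo('photo.jpg'):.0%}")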

The system is designed to help curate the best images — but just how accurate is it? As early users report, the system seems to be fairly good at recognizing factors like whether or not the image is sharp and if the composition is interesting, but it is certainly far from a pair of human eyes. One user doodled a black brushstroke on a white canvas in Photoshop and still got a rating over 70 percent. The program is not the first of its kind either — Dreamstime recently started using artificial intelligence to help photo editors speed up the approval process.

While the “awesome” rating may not be accurate for every image, the auto-tagging tool could prove useful, generating a list of keywords from object recognition as well as less concrete terms like love, happiness, and teamwork. Clicking on a keyword brings up an Everypixel search for other images with the same tag, or users can copy and paste the list of keywords.
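A small helper can mimic what the web interface does when a keyword is clicked: turn each tag into a search query. The search URL format and the sample tags below are illustrative assumptions, not Everypixel's actual implementation.

# Hypothetical sketch: map generated keywords to search URLs,
# the way the web UI links each tag to a new search.
import urllib.parse

def tags_to_search_urls(tags: list[str]) -> dict[str, str]:
    """Return a search URL for each keyword (assumed URL format)."""
    base = "https://www.everypixel.com/search?q="
    return {tag: base + urllib.parse.quote_plus(tag) for tag in tags}

print(tags_to_search_urls(["love", "happiness", "teamwork"]))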

The tool may have some bugs to work out, but it is free to upload a photo and see just what a robot thinks of your work.
