This viral AI image fooled the world, and you may have already seen it

Think you could spot an AI-generated image? Well, this viral image tricked lots of folks online this weekend — and you just might be one of them.

The absurd image of the Pope in a puffy white coat that spread across Twitter was, in fact, generated with Midjourney. It quickly became a meme, but very few people were commenting on the true source of the image.

The AI-generated image of the Pope wearing a puffy coat.

Celebrities like Chrissy Teigen jumped in, admitting that they were also fooled by the Pope’s puffer jacket.

I thought the pope’s puffer jacket was real and didnt give it a second thought. no way am I surviving the future of technology

— chrissy teigen (@chrissyteigen) March 26, 2023

While the image is certainly convincing, there are technical details that give away the photo as a fake. The most notable is the shadow cast by the glasses across the Pope’s face, which looks strange and unnatural.

An AI image of the Pope in a puffy coat.

This Pope image is far from the only AI-generated image that has taken off. An image of Trump being handcuffed and arrested also went viral last week, though that one didn’t fool as many people, perhaps because of the gravity of the subject matter.

In the case of the Pope, it was the perfect storm of believability. And it likely won’t be the last. As the situation with AI continues to evolve (and as more of these tools become available in applications like Adobe Express or Bing Image Creator), the internet is going to be filled with AI-generated images. Whether or not we find new ways of identifying these images, we’ll all need to be a bit more careful with how we judge the images that show up in our social media feeds.

Luke Larsen
Senior Editor, Computing
Luke Larsen is the Senior editor of computing, managing all content covering laptops, monitors, PC hardware, Macs, and more.