Fake AI images are showing up in Google search — and it’s a problem

AI-generated images showing up in Google search.
Digital Trends

Right now, if you type “Israel Kamakawiwoʻole” into Google search, you don’t see one of the singer’s famous album covers, or an image of him performing one of his songs on his iconic ukulele. What you see first is an image of a man sitting on a beach with a smile on his face — but not a photo of the man himself taken with a camera. It’s a fake photo generated by AI. In fact, when you click on the image, it takes you to the Midjourney subreddit, where the series of images was initially posted.

I first saw this posted on X (formerly known as Twitter) by Ethan Mollick, a Wharton professor who studies AI.

Looking at the photo up close, it’s not hard to see all the traces of AI left behind in it. The fake depth of field effect is applied unevenly, the texture on his shirt is garbled, and of course, he’s missing a finger on his left hand. But none of that is surprising. As good as AI-generated images have become over the past year, they’re still pretty easy to spot when you look closely.

The real problem, though, is that these images are showing up as the first result for a famous, known figure without any watermarks or indications that they are AI-generated. Google has never guaranteed the authenticity of its image search results, but there’s something that feels very troubling about this.

Now, there are some possible explanations for why this happened in this particular case. The Hawaiian singer, commonly known as Iz, passed away in 1997, and Google always wants to feed the latest information to users. Given that there haven’t been many new articles or much discussion about Iz since then, it’s not hard to see why the algorithm picked this up. And while it doesn’t feel particularly consequential for Iz, it’s not hard to imagine examples that would be much more problematic.

Even if we don’t continue to see this happen at scale in search results, it’s a prime example of why Google needs to have rules around this. At the very least, it seems like AI-generated images should be marked clearly in some way before things get out of hand. If nothing else, at least give us a way to automatically filter out AI images. Given Google’s own interest in AI-generated content, however, there are reasons to think it may want to find ways to sneak AI-created content into its results without clearly marking it.

Luke Larsen
Former Senior Editor, Computing
Luke Larsen was the Senior Editor of Computing, managing all content covering laptops, monitors, PC hardware, Macs, and more.