

OpenAI’s new tool can spot fake AI images, but there’s a catch

Images generated by artificial intelligence (AI) have been causing plenty of consternation in recent months, with people understandably worried that they could be used to spread misinformation and deceive the public. Now, ChatGPT maker OpenAI is apparently working on a tool that can detect AI-generated images with 99% accuracy.

According to Bloomberg, OpenAI’s tool is designed to detect images created by its own Dall-E 3 image generator. Speaking at the Wall Street Journal’s Tech Live event, Mira Murati, chief technology officer at OpenAI, claimed the tool is “99% reliable.” The tech is being tested internally, but there’s no release date yet.

[Image: OpenAI Dall-E 3 alpha test version. Credit: MattVidPro AI]

If it is as accurate as OpenAI claims, the tool could give the public confidence in knowing whether the images they see are genuine or AI-generated. Still, OpenAI has not revealed how it will alert people to AI images, whether through a watermark, a text warning, or something else.

It’s worth noting that the tool is only designed to detect Dall-E images, and it may not be able to spot fakes generated by rival services like Midjourney, Stable Diffusion, and Adobe Firefly. That might limit its usefulness in the grand scheme of things, but anything that can highlight fake images could have a positive impact.

Continuing development

[Image: Cartoon characters hooked to their phones. Credit: Dall-E / OpenAI]

OpenAI has launched tools in the past that were designed to spot content produced by its own chatbots and generators. Earlier in 2023, the company released a tool that it claimed could detect text written by ChatGPT, but it withdrew the tool just a few months later after admitting it was highly inaccurate.

Alongside the new image-detection tool, Murati also discussed the company’s attempts to curb ChatGPT’s tendency to “hallucinate,” or confidently produce made-up information. “We’ve made a ton of progress on the hallucination issue with GPT-4, but we’re not where we need to be,” she said, suggesting that work on GPT-5, the follow-up to the GPT-4 model that underpins ChatGPT, is well underway.

In March 2023, a slate of tech leaders signed an open letter urging OpenAI to pause work on anything more powerful than GPT-4, warning of “profound risks to society and humanity.” That request seems to have fallen on deaf ears.

Whether OpenAI’s new tool will be any more effective than its last effort, which was canceled due to its unreliability, remains to be seen. What’s certain is that development work continues at a fast pace, despite the obvious risks.

Alex Blake