
Even OpenAI has given up trying to detect ChatGPT plagiarism

OpenAI, the creator of the wildly popular artificial intelligence (AI) chatbot ChatGPT, has shut down the tool it developed to detect content created by AI rather than humans. The tool, dubbed AI Classifier, was shuttered just six months after its launch due to its “low rate of accuracy,” OpenAI said.

Since ChatGPT and rival services have skyrocketed in popularity, there has been a concerted pushback from various groups concerned about the consequences of unchecked AI usage. For one thing, educators have been particularly troubled by the potential for students to use ChatGPT to write their essays and assignments, then pass them off as their own.


OpenAI’s AI Classifier was an attempt to allay the fears of these and other groups. The idea was that it could determine whether a piece of text was written by a human or an AI chatbot, giving people a tool both to assess students fairly and to combat disinformation.

Yet even from the start, OpenAI did not seem to have much confidence in its tool. In a blog post announcing the tool, OpenAI declared that “Our classifier is not fully reliable,” noting that it correctly identified AI-written texts from a “challenge set” just 26% of the time.

The decision to drop the tool was not announced with much fanfare, and OpenAI has not published a dedicated announcement on its website. Instead, the company updated the blog post that introduced the AI Classifier, stating that “the AI classifier is no longer available due to its low rate of accuracy.”

The update continued: “We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.”

Better tools are needed


The AI Classifier was not the only tool developed to detect AI-crafted content; rivals like GPTZero exist and will continue to operate despite OpenAI’s decision.

Past attempts to identify AI writing have backfired spectacularly. For instance, in May 2023, a professor mistakenly flunked their entire class after enlisting ChatGPT to detect plagiarism in their students’ papers. Needless to say, ChatGPT got it badly wrong, and so did the professor.

It’s cause for concern when even OpenAI admits it can’t reliably detect text generated by its own chatbot. The admission comes at a time of increasing anxiety about the destructive potential of AI chatbots and calls for a temporary suspension of development in this field. If AI has as much of an impact as some people are predicting, the world is going to need stronger tools than OpenAI’s failed AI Classifier.


Alex Blake