
Even OpenAI has given up trying to detect ChatGPT plagiarism

OpenAI, the creator of the wildly popular artificial intelligence (AI) chatbot ChatGPT, has shut down the tool it developed to detect content created by AI rather than humans. The tool, dubbed AI Classifier, was shuttered just six months after launch due to its “low rate of accuracy,” OpenAI said.

Since ChatGPT and rival services have skyrocketed in popularity, there has been a concerted pushback from various groups concerned about the consequences of unchecked AI usage. For one thing, educators have been particularly troubled by the potential for students to use ChatGPT to write their essays and assignments, then pass them off as their own.


OpenAI’s AI Classifier was an attempt to allay the fears of these and other groups. The idea was it could determine whether a piece of text was written by a human or an AI chatbot, giving people a tool to both assess students fairly and combat disinformation.


Yet even from the start, OpenAI did not seem to have much confidence in its tool. In a blog post announcing the tool, OpenAI declared that “Our classifier is not fully reliable,” noting that it correctly identified AI-written texts from a “challenge set” just 26% of the time.

The decision to drop the tool was made with little fanfare, and OpenAI has not published a dedicated announcement on its website. Instead, the company updated the blog post that introduced the AI Classifier, stating that “the AI classifier is no longer available due to its low rate of accuracy.”

The update continued: “We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.”

Better tools are needed


The AI Classifier is not the only tool developed to detect AI-crafted content; rivals like GPTZero exist and will continue to operate despite OpenAI’s decision.

Past attempts to identify AI writing have backfired spectacularly. For instance, in May 2023, a professor mistakenly flunked their entire class after enlisting ChatGPT to detect plagiarism in their students’ papers. Needless to say, ChatGPT got it badly wrong, and so did the professor.

It’s cause for concern when even OpenAI admits it can’t reliably detect plagiarism created by its own chatbot. It comes at a time of increasing anxiety about the destructive potential of AI chatbots and calls for a temporary suspension of development in this field. If AI has as much of an impact as some people are predicting, the world is going to need stronger tools than OpenAI’s failed AI Classifier.

Alex Blake