
Bot or not? This browser extension will identify text written by A.I.

Figuring out whether the things you read on the internet are true can be challenging. Thanks to a new web plug-in, determining whether stories were written by a human or an A.I. is now a whole lot easier. GPTrue or False is a browser extension for Chrome and Firefox that lets users select text on a website (50 words or more) and have it evaluated to determine the likelihood that it was written by OpenAI’s GPT-2 A.I. model rather than a human.

GPT-2 is a text-generating algorithm that lets users seed it with the start of a piece of text, such as a newspaper article; it then dreams up the rest in terrifyingly convincing fashion. While some have used it for creative purposes, such as generating ever-changing text adventure games, others are rightfully concerned about what it could mean for the spread of fake news.
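That seed-then-continue process is called autoregressive generation: the model repeatedly predicts the next word from everything written so far. A minimal sketch of the idea, using a tiny made-up bigram table in place of GPT-2's neural network (the vocabulary and phrases here are invented for illustration):

```python
import random

# Toy, hypothetical "language model": for each word, a list of plausible
# next words. GPT-2 does the same job at vastly larger scale, predicting
# the next token from the entire text generated so far.
BIGRAMS = {
    "the": ["president", "market", "city"],
    "president": ["announced", "said"],
    "announced": ["a", "the"],
    "said": ["the"],
    "a": ["new"],
    "new": ["policy"],
    "market": ["crashed"],
    "city": ["announced"],
    "policy": ["today"],
}

def generate(seed, length=8, rng=None):
    """Continue a seed phrase one word at a time (autoregressively)."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    words = seed.split()
    for _ in range(length):
        choices = BIGRAMS.get(words[-1])
        if not choices:  # no known continuation; stop early
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("the president"))
```

The real model replaces the lookup table with learned probabilities over a ~50,000-token vocabulary, which is what makes its continuations so convincing.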

GPTrue or False runs the selected text through OpenAI’s GPT-2 Detector model, and then works out the probability that the text was human-generated rather than created by a machine.
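Under the hood, a detector like this is a binary classifier: it produces two raw scores (logits), one for "human-written" and one for "machine-generated," and a softmax turns them into the probability the extension displays. A minimal sketch of that final step, with logit values made up for illustration:

```python
import math

def softmax(logits):
    """Convert raw classifier scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from the detector's two output classes:
# index 0 = human-written ("real"), index 1 = GPT-2-generated ("fake").
logits = [2.3, -1.1]
p_real, p_fake = softmax(logits)
print(f"human-written: {p_real:.1%}, machine-generated: {p_fake:.1%}")
```

Whichever probability is higher becomes the verdict, with the percentage indicating how confident the classifier is.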


“I’d say the problem is pretty relevant in today’s world,” Giulio Starace, the creator of GPTrue or False, told Digital Trends. “As synthetic data generation gets more and more sophisticated, we become more and more vulnerable to being swayed in one way or another. Fake news generated by machines is one example, but consider also fake reviews. In a world where reviewing has been open-sourced, this system can be abused and consumers arbitrarily swayed to a particular business.”

Starace said he was inspired to create the plugin after seeing a tweet from Tesla’s director of A.I., Andrej Karpathy. On November 6, Karpathy tweeted a request for a Chrome extension to help spot GPT-2 text online. “I saw the tweet and figured, ‘hey, I can probably do that,’” Starace said. The extension is available for Chrome and Firefox.

One final word of caution: While this detector is impressively accurate when it comes to spotting an A.I.’s scribblings, you’ll still want to use some common sense. Just as a spam filter occasionally misclassifies legitimate emails, the detector may flag a few human-written pieces as machine-generated, or let A.I.-written text slip through the cracks.

“It may be the case that GPT-2 generated text and human-generated text sometimes have overlapping characteristics leading the detector to accidentally think that a human-generated portion of text is actually machine-generated,” Starace said. “There are some funny examples of this on my Twitter where people show that the detector wrongly classifies a speech from Trump as computer-generated, and likewise on an excerpt by James Joyce. So either Trump and James Joyce are robots, or the detector is imperfect.”

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…