Figuring out whether the things you read on the internet are true can be challenging. Thanks to a new web plug-in, determining whether stories were written by a human or an A.I. is now a whole lot easier. GPTrue or False is a browser extension for Chrome and Firefox that lets users select text on a website (50 words or more) and have it evaluated to determine the likelihood that it was written by OpenAI’s GPT-2 A.I. model rather than a human.
GPT-2 is a text-generating algorithm: users seed it with the start of a piece of text, such as a newspaper article, and it dreams up the rest in terrifyingly convincing fashion. While some have used it for creative purposes, such as generating ever-changing text adventure games, others are rightfully concerned about what it could mean for the spread of fake news.
GPTrue or False runs the selected text through OpenAI’s GPT-2 Detector model and reports the probability that the text was written by a human rather than generated by a machine.
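Under the hood, a detector like this is a binary classifier: it outputs a raw score (logit) for each of the two classes, "machine-generated" and "human-written", and the probability shown to the user comes from normalizing those scores with a softmax. As a rough sketch of that final step (the function name and logit values are illustrative, not taken from the extension's actual code):

```python
import math

def detector_probability(fake_logit: float, real_logit: float) -> float:
    """Convert a binary detector's two raw logits into P(human-written)
    using the standard two-class softmax."""
    exp_fake = math.exp(fake_logit)
    exp_real = math.exp(real_logit)
    return exp_real / (exp_fake + exp_real)

# A pair of logits leaning strongly toward "real" (human-written)
print(round(detector_probability(-2.0, 3.0), 4))  # close to 1.0
```

The extension's 50-word minimum exists because short snippets give the classifier too little signal, so the resulting probability would be close to a coin flip either way.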
“I’d say the problem is pretty relevant in today’s world,” Giulio Starace, the creator of GPTrue or False, told Digital Trends. “As synthetic data generation gets more and more sophisticated, we become more and more vulnerable to being swayed in one way or another. Fake news generated by machines is one example, but consider also fake reviews. In a world where reviewing has been open-sourced, this system can be abused and consumers arbitrarily swayed to a particular business.”
Starace said he was inspired to create the plugin after seeing a tweet from Tesla’s director of A.I., Andrej Karpathy. On November 6, Karpathy tweeted a Chrome extension request to help spot GPT-2 text online. “I saw the tweet and figured, ‘hey, I can probably do that,’” Starace said. The extension can be downloaded here.
One final word of caution: While this detector is impressively accurate when it comes to spotting an A.I.’s scribblings, you’ll still want to use some common sense. Just as a spam filter occasionally mislabels a legitimate email, the detector may flag a few human-written pieces as machine-generated, or let A.I.-written text slip through as human.
“It may be the case that GPT-2 generated text and human-generated text sometimes have overlapping characteristics leading the detector to accidentally think that a human-generated portion of text is actually machine-generated,” Starace said. “There are some funny examples of this on my Twitter where people show that the detector wrongly classifies a speech from Trump as computer-generated, and likewise on an excerpt by James Joyce. So either Trump and James Joyce are robots, or the detector is imperfect.”