A.I.-generated text is supercharging fake news. This is how we fight back

Last month, an A.I. startup backed by sometime A.I. alarmist Elon Musk announced a new artificial intelligence it claimed was too dangerous to release to the public. While “only” a text generator, OpenAI’s GPT-2 was reportedly capable of generating text so freakishly humanlike that it could convince people it was, in fact, written by a real flesh-and-blood human being.

To use GPT-2, a user needed only to feed it the start of a document; the algorithm then took over and completed it in a highly convincing manner. Give it the opening paragraphs of a newspaper story, for instance, and it would manufacture “quotes” and assorted other details.

Such tools are becoming increasingly common in the world of A.I. — and the world of fake news, too. The combination of machine intelligence and, perhaps, the distinctly human unintelligence that allows disinformation to spread could prove a dangerous mix.

Fortunately, a new A.I. developed by researchers at MIT, IBM’s Watson A.I. Lab and Harvard University is here to help. And just like a Terminator designed to hunt other Terminators, this one — called GLTR — is uniquely qualified to spot bot impostors.

Fighting the good fight

As its creators explain in a blog post, text generation tools like GPT-2 open up “paths for malicious actors to … generate fake reviews, comments or news articles to influence the public opinion. To prevent this from happening, we need to develop forensic techniques to detect automatically generated text.”

GLTR uses the same models that underpin fake-text generation by GPT-2. By examining a piece of text and predicting which words the algorithm would most likely have picked to follow one another, it can deliver a verdict on whether the text was written by a machine. The tool is available for users to try online. (If anyone has ever told you that your own writing is too machine-like, this might be your chance to prove them wrong!)
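The core idea can be sketched in a few lines of code. The real GLTR queries GPT-2 itself; the toy bigram model below is purely illustrative, but the ranking logic is the same: for each word, ask the language model how highly it ranks that word given the preceding context. Machine-generated text tends to stick to words the model ranks near the top, while human writing dips into the unpredictable tail.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus_tokens):
    """Count next-word frequencies for each preceding word."""
    model = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        model[prev][nxt] += 1
    return model

def rank_of_next_word(model, prev, word):
    """Rank (1 = most likely) of `word` among the model's predictions
    following `prev`; None if the word was never seen in that context."""
    ranked = [w for w, _ in model[prev].most_common()]
    return ranked.index(word) + 1 if word in ranked else None

def word_ranks(model, tokens):
    """Per-word ranks across a text. Consistently low ranks suggest
    the text follows the model's most predictable paths -- the
    statistical fingerprint GLTR visualizes."""
    return [rank_of_next_word(model, prev, word)
            for prev, word in zip(tokens, tokens[1:])]

corpus = "the cat sat on the mat and the cat ran".split()
model = train_bigram_model(corpus)
print(word_ranks(model, "the cat sat".split()))
```

GLTR presents exactly this kind of per-word rank information visually, color-coding each word by how predictable the model found it, so a human reviewer can spot suspiciously "top-ranked" text at a glance.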

GPT-2 generates synthetic text samples in response to the model being primed with an arbitrary input. The model is chameleon-like—it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing. OpenAI

Until now, it’s been relatively easy for humans to pick out writing generated by machines — usually because it is overly formulaic or, in creative writing, makes little to no sense. That’s fast changing, though, and the creators of GLTR think that tools such as this will therefore become more necessary.

“We believe that machines and humans excel at detecting fundamentally different aspects of generated text,” Sebastian Gehrmann, a Ph.D. candidate in Computer Science at Harvard, told Digital Trends. “Machine learning algorithms are great at picking up statistical patterns such as the ones we see in GLTR. However, at the moment machines do not actually understand the content of a text. That means that algorithms could be fooled by completely nonsensical text, as long as the patterns match the detection. Humans, on the other hand, can easily tell when a text does not make any sense, but cannot detect the same patterns we show in GLTR.”

Hendrik Strobelt, a data scientist at IBM Research, told us that figuring out whether a piece of text comes from a human origin will become more of a pressing issue. “[Our current] visual tool might not be the solution to that, but it might help to create algorithms that work like spam detection algorithms,” he said. “Imagine getting emails or reading news, and a browser plugin tells you for the current text how likely it was produced by model X or model Y.”

A cat and mouse game

Similar games of one-upmanship, in which A.I. tools are used to spot fakes created by other A.I.s, are taking place across the tech industry. This is particularly true when it comes to fake news. For example, “deepfakes” have caused plenty of alarm with their promise of realistically superimposing one person’s head onto another’s body.

To help counter deepfakes, researchers from Germany’s Technical University of Munich have developed an algorithm called XceptionNet that’s designed to quickly spot faked videos posted online. Speaking with Digital Trends last year, one of the brains behind XceptionNet suggested a similar approach involving a possible browser plugin that runs the algorithm continuously in the background.

It seems likely that others are working on solutions for spotting the A.I. behind other forms of machine-masquerading-as-humans, such as Google’s Duplex voice calling tech or the spate of artificial intelligences capable of accurately mimicking celebrity voices and making them say anything the user wants.


This kind of cat-and-mouse game will come as no great shock to anyone who has followed the world of hacking. Hackers spot vulnerabilities in systems and exploit them; somebody notices and patches the hole; the hackers move on to the next vulnerability. In this case, however, the escalation involves cutting-edge artificial intelligence.

“In the future, we will see increasingly common [use and] abuse of algorithmically generated text,” Gehrmann continued. “In only a few years, algorithms could potentially be used to influence the public opinion on products, movies, personalities, or politics on a larger and larger scale. Therefore, tools to detect fake content will become more and more relevant for real-world use. As researchers, we see it as our goal to develop detection methods at a faster rate than the generation methods to combat and extinguish this abuse.”

Now we just have to hope that the good guys can work harder and faster than the bad ones. Unfortunately, if history has taught us anything, it’s that there’s no guarantee this will be the case. Keep your fingers crossed that it is!
