
Researchers have found a new way to spot the latest deepfakes

Fortunately, most of the high-profile deepfakes so far have been frivolous things like splicing Sly Stallone’s face onto Arnold Schwarzenegger’s body in Terminator 2 or, err, Nicolas Cage’s face onto Nicolas Cage impressions, but that doesn’t mean the technology doesn’t pose some serious ethical issues. Ever-more-realistic deepfakes, which use artificial intelligence to make it appear as though people did things they never did, could be used to create political “fake news” and put to no end of other malicious uses.

It’s crucial, then, that there are ways to sort the real from the fake — and to spot deepfakes wherever they crop up.

Previous attempts to do this have focused on statistical methods. A new approach (well, new in its application to deepfakes) relies on the discrete cosine transform, a frequency-analysis technique first proposed in 1972 in the signal-processing community. When applied to images produced by the machine-learning models behind deepfakes, generative adversarial networks (GANs), it reveals specific artifacts in the high-frequency range, which makes it possible to determine whether an image was created using machine learning.
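For background (this is the standard textbook form, not anything specific to the new paper): the two-dimensional DCT-II commonly used in image processing rewrites an N-by-N image block I as a weighted sum of cosine functions,

\[
X_{u,v} = \alpha(u)\,\alpha(v) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} I_{x,y}\,
\cos\!\left[\frac{\pi (2x+1) u}{2N}\right]
\cos\!\left[\frac{\pi (2y+1) v}{2N}\right],
\qquad
\alpha(0) = \sqrt{\tfrac{1}{N}},\quad \alpha(k) = \sqrt{\tfrac{2}{N}} \text{ for } k > 0,
\]

where small indices u and v correspond to the smooth, low-frequency content that dominates natural photos, and large indices capture the fine, high-frequency detail where generated images tend to leave artifacts.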

“We chose a different approach than previous research by converting the images into the frequency domain using the discrete cosine transformation,” Joel Frank from the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum told Digital Trends. “[As a result, the images are] expressed as the sum of many different cosine functions. Natural images consist mainly of low-frequency functions. In contrast, images generated by GANs exhibit artifacts in the high-frequency range — for example, a grid structure emerges in the frequency domain. Our approach detects these artifacts.”
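Here is a rough sketch of that idea in code. It is an illustrative example only, not the team’s published detector: the function name and the cutoff threshold are made up for demonstration. It converts a grayscale image to the frequency domain with a 2D DCT and measures how much of the spectral energy sits in the high-frequency band, where GAN fingerprints such as grid patterns tend to show up.

# Illustrative sketch of DCT-based frequency analysis (not the researchers'
# published method; the cutoff value is a hypothetical placeholder).
import numpy as np
from scipy.fft import dctn

def high_frequency_ratio(image: np.ndarray, cutoff: float = 0.75) -> float:
    """Return the fraction of spectral energy in the high-frequency band."""
    coeffs = dctn(image.astype(float), norm="ortho")  # 2D discrete cosine transform
    energy = coeffs ** 2
    h, w = coeffs.shape
    # Coefficients far from the top-left corner represent higher frequencies.
    yy, xx = np.mgrid[0:h, 0:w]
    high_band = (yy / h + xx / w) / 2.0 > cutoff
    return float(energy[high_band].sum() / energy.sum())

# Natural photos concentrate their energy in low frequencies, so an unusually
# large ratio can be one signal that an image was generated rather than captured:
# score = high_frequency_ratio(grayscale_image_array)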

Of course, as with any computer security issue, there’s no guarantee this vulnerability on the part of deepfakes will persist long-term. Already, the deepfakes created today look a world better than the ones created just a year ago, lacking many of the more “uncanny valley” features that accompanied early examples.

Frank said that the team’s experiments have, so far, “demonstrated that these artifacts stem from structural problems of deep learning algorithms. Thus, we believe these findings will have relevance in the future.” But he acknowledges that machine learning is evolving at an “incredible pace.”

The conference where this work was shown off, this week’s International Conference on Machine Learning (ICML), drew more than 6,000 researchers, all working to advance the field in some way or another. As a result, “one can never be 100% sure,” Frank said.

It works for now, though. Sometimes, that’s the best a security professional can hope for.
