Fortunately, most of the high-profile deepfakes so far have been frivolous things like splicing Sly Stallone’s face onto Arnold Schwarzenegger’s body in Terminator 2 or, err, Nicolas Cage’s face onto Nicolas Cage impressions, but that doesn’t mean the technology poses no serious ethical issues. Ever-more-realistic deepfakes, which use artificial intelligence to make it appear as though people did things they never did, could be used to create political “fake news” and for no end of other malicious purposes.
It’s crucial, then, that there are ways to sort the real from the fake — and to spot deepfakes wherever they crop up.
Previous attempts to do this have focused on statistical methods. A new approach (well, new in its application to deepfakes) relies on the discrete cosine transform, a tool introduced in 1972 and long used in the signal processing community. This frequency-analysis technique examines images generated by machine-learning models called generative adversarial networks (GANs) and looks for telltale artifacts in the high-frequency range. The method can thus be used to determine whether an image was created using machine-learning techniques.
“We chose a different approach than previous research by converting the images into the frequency domain using the discrete cosine transformation,” Joel Frank from the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum told Digital Trends. “[As a result, the images are] expressed as the sum of many different cosine functions. Natural images consist mainly of low-frequency functions. In contrast, images generated by GANs exhibit artifacts in the high-frequency range — for example, a grid structure emerges in the frequency domain. Our approach detects these artifacts.”
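The idea Frank describes can be sketched in a few lines of code. The example below is a minimal illustration of the general principle, not the team’s actual pipeline: the smooth gradient standing in for a “natural” image, the checkerboard standing in for a GAN’s grid artifact, and the frequency cutoff are all illustrative choices of our own.

```python
import numpy as np

def dct2(img):
    """2-D DCT-II of a square image via basis-matrix multiplication."""
    n = img.shape[0]
    k = np.arange(n)
    # DCT-II basis: basis[k, m] = cos(pi * (2m + 1) * k / (2n))
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    return basis @ img @ basis.T

def high_freq_fraction(img, cutoff=0.75):
    """Fraction of spectral magnitude beyond the cutoff on either axis."""
    spec = np.abs(dct2(img))
    n = spec.shape[0]
    lo = int(n * cutoff)
    mask = np.zeros_like(spec, dtype=bool)
    mask[lo:, :] = True   # high vertical frequencies
    mask[:, lo:] = True   # high horizontal frequencies
    return spec[mask].sum() / spec.sum()

n = 64
# A smooth gradient: energy concentrated in low-frequency cosines.
smooth = np.outer(np.linspace(0, 1, n), np.linspace(0, 1, n))
# The same image with a fine checkerboard superimposed, mimicking
# the grid structure that shows up in GAN-generated images.
artifact = smooth + ((np.indices((n, n)).sum(axis=0) % 2) * 0.2)

print(high_freq_fraction(smooth))    # near zero for the smooth image
print(high_freq_fraction(artifact))  # noticeably larger with the grid
```

A real detector would go further, for example by training a classifier on the spectra of many real and generated images, but the gap between those two numbers is the core signal the approach exploits.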
Of course, as with any computer security issue, there’s no guarantee this vulnerability on the part of deepfakes will persist long-term. Already, the deepfakes created today look a whole lot better than the ones created just a year ago, lacking many of the “uncanny valley” features that accompanied early examples.
Frank said that the team’s experiments have, so far, “demonstrated that these artifacts stem from structural problems of deep learning algorithms. Thus, we believe these findings will have relevance in the future.” But he acknowledges that machine learning is evolving at an “incredible pace.”
The conference where this work was shown off, this week’s International Conference on Machine Learning (ICML), drew more than 6,000 researchers, all working to advance the field in some way or another. As a result, “one can never be 100% sure,” Frank said.
It works for now, though. Sometimes, that’s the best a security professional can hope for.