
Thanks to A.I., there is finally a way to spot ‘deepfake’ face swaps online


The ability to use deep-learning artificial intelligence to realistically superimpose one person’s face onto another person’s body sounds like good, wholesome fun. Unfortunately, it has a sinister side, too, as evidenced by phenomena like the popularity of “deepfake” pornography starring assorted celebrities. It’s part of a wider concern about fake news and the ease with which cutting-edge tech can be used to fraudulent effect.


Researchers from Germany’s Technical University of Munich want to help, however — and they are turning to some of the same A.I. tools in their fight. What they have developed is an algorithm called XceptionNet that quickly spots faked videos posted online. It could be used to identify misleading videos on the internet so that they can be removed when necessary. Or, at the very least, to reveal to users when a video has been manipulated in some way.

“Ideally, the goal would be to integrate our A.I. algorithms into a browser or social media plugin,” Matthias Niessner, a professor in the university’s Visual Computing Group, told Digital Trends. “Essentially, the algorithm [will run] in the background, and if it identifies an image or video as manipulated it would give the user a warning.”

The team started by training a deep-learning neural network with a dataset of more than 1,000 videos and 500,000 images. By showing the computer both the doctored and undoctored images, the machine learning tool was able to figure out the differences between the two — even in cases where this would be difficult to spot for a human.
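The training setup described above — show a classifier labeled real and doctored examples, and let it learn the difference — can be illustrated with a toy sketch. This is not the paper's XceptionNet (a deep convolutional network trained on images); it is a minimal logistic-regression classifier on synthetic two-dimensional "artifact features," purely to show how a real-vs.-fake detector is fit from labeled examples:

```python
import math
import random

random.seed(0)

def make_samples(n, shift):
    # Hypothetical 2-D "artifact features"; fake samples are shifted
    # slightly, standing in for the subtle traces doctoring leaves.
    return [(random.gauss(shift, 1.0), random.gauss(shift, 1.0)) for _ in range(n)]

real = [(x, 0) for x in make_samples(200, 0.0)]   # label 0 = real
fake = [(x, 1) for x in make_samples(200, 1.5)]   # label 1 = fake
data = real + fake
random.shuffle(data)

# Logistic regression trained with plain stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(200):
    for (f1, f2), y in data:
        z = w[0] * f1 + w[1] * f2 + b
        p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of "fake"
        g = p - y                        # gradient of log-loss w.r.t. z
        w[0] -= lr * g * f1
        w[1] -= lr * g * f2
        b -= lr * g

correct = sum(
    1 for (f1, f2), y in data
    if (1.0 / (1.0 + math.exp(-(w[0] * f1 + w[1] * f2 + b))) > 0.5) == (y == 1)
)
accuracy = correct / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The real system replaces the hand-made features with a deep network that learns its own features directly from pixels, but the principle — supervised training on paired real and forged media — is the same.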

“For compressed videos, our user study participants could not tell fakes apart from real data,” Niessner continued. The A.I., on the other hand, was able to distinguish between the two with ease. Where humans were right 50 percent of the time, the equivalent of random guessing, the convolutional neural network correctly classified compressed videos anywhere from 87 percent to 98 percent of the time. This is particularly impressive because compressed images and video are harder to distinguish than uncompressed pictures.

Compared to other fraudulent image-spotting algorithms, XceptionNet is way ahead of the curve. It’s another amazing illustration of the power of artificial intelligence and, in this case, of how it can be used for good.

A paper describing the work, titled “FaceForensics: A Large-scale Video Data Set for Forgery Detection in Human Faces,” is available to read online.

Luke Dormehl
Former Digital Trends Contributor