
Thanks to A.I., there is finally a way to spot ‘deepfake’ face swaps online


The ability to use deep learning artificial intelligence to realistically superimpose one person’s face onto another person’s body sounds like good, wholesome fun. Unfortunately, it’s got a sinister side, too, as evidenced by phenomena like the popularity of “deepfake” pornography starring assorted celebrities. It’s part of a wider concern about fake news and the ease with which cutting-edge tech can be used to fraudulent effect.

Researchers from Germany’s Technical University of Munich want to help, however — and they are turning to some of the same A.I. tools in their fight. What they have developed is an algorithm called XceptionNet that quickly spots faked videos posted online. It could be used to identify misleading videos on the internet so they can be removed when necessary or, at the very least, to alert users that a video has been manipulated in some way.


“Ideally, the goal would be to integrate our A.I. algorithms into a browser or social media plugin,” Matthias Niessner, a professor in the university’s Visual Computing Group, told Digital Trends. “Essentially, the algorithm [will run] in the background, and if it identifies an image or video as manipulated it would give the user a warning.”


The team started by training a deep-learning neural network on a dataset of more than 1,000 videos and 500,000 images. Shown both doctored and undoctored images, the machine learning tool was able to figure out the differences between the two — even in cases that would be difficult for a human to spot.
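The training details aren’t spelled out here, but conceptually the setup is a standard binary classifier built on the Xception architecture that gives the algorithm its name. Below is a minimal, hypothetical sketch in Keras that assumes face crops have already been sorted into “real” and “fake” folders; the directory layout, image size, and hyperparameters are illustrative assumptions rather than the researchers’ actual configuration.

```python
# Hypothetical sketch: fine-tuning an Xception-based binary classifier
# to label face crops as real or manipulated. Paths and hyperparameters
# are illustrative assumptions, not the researchers' exact setup.
import tensorflow as tf

IMG_SIZE = (299, 299)  # Xception's default input resolution

# Assumes a directory layout like faces/train/real/*.jpg and faces/train/fake/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "faces/train",
    labels="inferred",
    label_mode="binary",
    image_size=IMG_SIZE,
    batch_size=32,
)

# Pretrained Xception backbone with a small binary head on top
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=IMG_SIZE + (3,),
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # Xception expects inputs in [-1, 1]
    base,
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the face crop is manipulated
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=5)
model.save("xception_fake_detector.keras")
```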

“For compressed videos, our user study participants could not tell fakes apart from real data,” Niessner continued. The A.I., on the other hand, can easily distinguish between the two. Where humans were right 50 percent of the time — the equivalent of random guessing — the convolutional neural network correctly classified compressed videos anywhere from 87 percent to 98 percent of the time. That is particularly impressive, since compressed images and video are harder to distinguish than uncompressed pictures.
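To give a sense of how per-image predictions like these could feed the kind of background warning Niessner describes, here is a hypothetical sketch that samples frames from a video, scores each one with a previously trained detector, and flags the clip if the average “fake” probability crosses a threshold. The model file name, sampling rate, and threshold are assumptions made for illustration, and a real pipeline would detect and crop faces rather than resize whole frames.

```python
# Hypothetical sketch: scoring a (possibly compressed) video by averaging
# per-frame predictions from a previously trained real/fake classifier.
import cv2
import numpy as np
import tensorflow as tf

# Assumed file name of a saved detector like the one sketched above
model = tf.keras.models.load_model("xception_fake_detector.keras")

def video_fake_score(path: str, every_nth: int = 10) -> float:
    """Return the mean 'fake' probability over sampled frames of a video."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            # A real pipeline would detect and crop the face first;
            # resizing the whole frame keeps this sketch short.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            resized = cv2.resize(rgb, (299, 299)).astype(np.float32)
            scores.append(float(model.predict(resized[None, ...], verbose=0)[0, 0]))
        idx += 1
    cap.release()
    return float(np.mean(scores)) if scores else 0.0

if video_fake_score("suspect_clip.mp4") > 0.5:  # threshold chosen arbitrarily
    print("Warning: this video appears to have been manipulated.")
```

Averaging over a handful of sampled frames keeps a check like this cheap enough to run quietly in the background, which is where a browser or social media plugin would want it.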

Compared to other algorithms for spotting fraudulent images, XceptionNet is way ahead of the curve. It’s another amazing illustration of the power of artificial intelligence and, in this case, of how it can be used for good.

A paper describing the work, titled “FaceForensics: A Large-scale Video Data Set for Forgery Detection in Human Faces,” is available to read online.

Luke Dormehl