A.I. is getting better and better at producing fake videos, for everything from amusingly adding Nicolas Cage into movies to maliciously spreading fake news. Now Samsung has developed software which makes creating fake videos even easier.
The new A.I. software was developed at Samsung’s A.I. Center in Moscow. As described in a paper available on the pre-print archive arXiv, it marks a new development in the technology. Previously, most deepfake software required a very large number of images of a particular person’s face in order to map that face onto a video. But the new software can create somewhat convincing fakes from just a few images of a person. Potentially, it could even work with a single image of a face.
The quality of fakes produced by A.I. is still highly variable, and how convincing a fake will be depends on factors like the lighting of the original and target images and how closely the two resemble each other.
To demonstrate the new software, the Samsung team shared a video showing fun applications like “living portraits,” in which images of celebrities like Marilyn Monroe and Salvador Dali are brought to life. There’s even a video clip of the Mona Lisa, animated to show the abilities of the software.
But the potential for abuse of this technology is serious, as demonstrated in a doctored clip of politician Nancy Pelosi which is currently doing the rounds on Facebook.
The authors of the paper, Egor Zakharov and colleagues, are aware of this potential for abuse and seem mindful of it. “We believe that telepresence technologies in AR, VR and other media are to transform the world in the not-so-distant future,” they write on YouTube. “We realize that our technology can have a negative use for the so-called ‘deepfake’ videos. However, it is important to realize that Hollywood has been making fake videos (aka ‘special effects’) for a century, and deep networks with similar capabilities have been available for the past several years.”
The authors describe their software as democratizing special effects, and write that “the net effect of democratization on the [w]orld has been positive, and mechanisms for stemming the negative effects have been developed. We believe that the case of neural avatar technology will be no different.”
Arguably, the ability to doctor videos with this kind of software is not so different from the ability to doctor images with Photoshop. But as the software becomes more common, it’s important to remember that just because you see something in a video doesn’t mean it’s real.