YouTube took down an incredibly realistic — and fake — video purporting to show Kim Kardashian West discussing a shadowy organization called “Spectre” and mocking her fans for violating copyright. The takedown on Monday could give public figures a new weapon in the fight against deepfakes, but it won’t help much as fake videos increasingly target everyday people.
Deepfakes — or incredibly realistic fake videos — have grown from a nuisance to a worldwide problem as they’ve become harder and harder to detect. The videos can essentially make it look like someone said something they never did, and could be used for all kinds of nefarious purposes, from pranks to politics.
The Kardashian deepfake, uploaded to YouTube on May 29 by anti-advertising activists Brandalism, was removed because of a copyright claim by publisher Condé Nast. The deepfake was made from footage that the publisher’s Vogue magazine uploaded in April.
“It certainly shows how the existing legal infrastructure could help,” Henry Ajder, head of communications and research analysis at Deeptrace, told Digital Trends. “But it seems to be available for the privileged few.”
Ajder said that Deeptrace, an organization building a system to detect deepfakes on the web, has noticed an increase in deepfakes being uploaded to YouTube, both from the U.S. and around the globe. The Kardashian copyright claim has the potential to set a new precedent for when and how these kinds of videos are taken down, he added. It’s a tricky problem, since no one has decided whether manipulated videos fall under fair use. Taking videos like these down opens giant tech companies up to accusations that they’re impinging on freedom of expression.
But if deepfakes are subject to copyright claims — like the Kardashian video apparently is — it could give sites a fairly simple way to take down misleading fake videos. We reached out to YouTube to see if this is part of a new policy for deepfakes, but have yet to hear back. Brandalism also did not respond to a request for comment.
While this gives some ammo in the fight against deepfakes, there’s still a long way to go. For one, Condé Nast is a huge company that can easily make a copyright claim on YouTube (and likely makes many every day). If someone made a deepfake of you, for example, it wouldn’t be quite so easy. Someone could record you, then manipulate the footage to make it look like you said or did something you never did. If they recorded the footage, they own it — so there’s no copyright claim.
The problem could be even worse than that, according to Ajder. “Someone could scrape images from Facebook and make a video of you doing something you never did,” he said.
That’s already happening, Ajder said. A huge number of deepfake targets have been women, including victims of fake revenge porn whose faces are pasted onto other people’s bodies. Once a video is out there, there’s not much someone can do to take it down.
“The legal recourse to take down deepfakes of individuals is sparse,” Ajder said. “We don’t have the infrastructure in place to deal with these problems.”
Some are working to change that. Rep. Yvette Clarke (D-NY) recently introduced a bill that would impose rules on deepfakes, but they would be largely unenforceable in the Wild West of the web.
Not everyone seems to be following YouTube’s approach. Another Brandalism deepfake, showing Facebook CEO Mark Zuckerberg praising Spectre, has more than 100,000 views on Facebook-owned Instagram. That video, made from a CBS News interview with Zuckerberg, remained online as of Monday morning despite CBS requesting that Instagram remove it for an “unauthorized use of the CBSN” trademark, a CBS spokesperson said. The Kardashian video is still online on both Twitter and Instagram, and a Zuckerberg deepfake is still up on Brandalism’s YouTube page.
YouTube did take down a doctored video purporting to show Nancy Pelosi slurring her words, but it remained up on Facebook with a note saying that the video was fake. The Pelosi video seemed much more intentionally malicious than the Brandalism videos, which could be considered parody. YouTube will shut down misinformation when pushed, but Facebook would prefer not to be an arbiter of truth.
Neither of the massive tech companies seems prepared for the coming wave of deepfakes targeting individuals, however.
“There’s been pushback for public figures,” said Ajder. “But deepfakes are on track for an exponential increase in reputation damage or misinformation.”