
Why tech companies are ill-equipped to combat the internet’s deepfake problem

[Image: A deepfake of Mark Zuckerberg]

How do you solve a problem like deepfakes? It’s a question that everyone from tech companies to politicians is having to ask with the advent of new, increasingly accessible tools that allow for the creation of A.I.-manipulated videos in which people’s likenesses are reappropriated in once-unimaginable ways.

Such videos are sometimes created for satirical or darkly comedic purposes. Earlier this year, a deepfake video showed Facebook CEO Mark Zuckerberg gleefully boasting about his ownership of user data. Meanwhile, a PSA about fake news, ventriloquized by Jordan Peele, depicted Barack Obama calling his presidential successor a “total and complete dipshit.” With the 2020 presidential election looming on the horizon, there’s more concern than ever about how deepfakes could be abused to help spread misinformation.

“I think people should be deeply concerned about deepfake technology,” David Wright, director of Trilateral Research and a stakeholder in the EU’s SHERPA project, which examines the ethical use of artificial intelligence, told Digital Trends. “It will continue to evolve and become even more difficult to distinguish between what is real and what isn’t. Porn sites will continue to exploit celebrities — voices and images — with deepfake technologies. Cyber gangs will inevitably use deepfake technology for ultra-sophisticated spear phishing. We can expect right-wing politicos and their henchmen to use it to deceive voters and undermine the reputations of their opponents. They will be aided and abetted by foreign powers interfering in electoral processes.”

Recently, Democratic Representative Adam Schiff asked Facebook, Twitter and Google how they plan to combat the spread of doctored videos and images, including deepfakes. All three have said that they’re working on the problem. But is this a problem that’s even possible to solve?

Hunting the deepfakes

Fortunately, the spread of deepfakes isn’t taking place in isolation. As the tools to create them improve and become more pervasive, researchers are also working on the flipside of the issue: developing the means by which A.I. technologies can spot deepfakes with high levels of accuracy. At Drexel University, a team in the Multimedia and Information Security Lab recently developed a deep neural network that can spot manipulated images. Similar tools have been developed at other universities, such as Germany’s Technical University of Munich.
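
To make that concrete, here is a minimal sketch of what such a detector might look like: a simple binary real-versus-fake classifier run over individual video frames, written in PyTorch. The architecture and names are illustrative assumptions, not the Drexel team’s actual network.

    # Minimal sketch of a binary real-vs-fake frame classifier (PyTorch).
    # Illustrative only; this is not the Drexel team's actual model.
    import torch
    import torch.nn as nn

    class FakeFrameDetector(nn.Module):
        def __init__(self):
            super().__init__()
            # Small convolutional stack that learns low-level pixel artifacts
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, 2)  # logits: [real, fake]

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = FakeFrameDetector()
    frame = torch.randn(1, 3, 224, 224)  # one RGB video frame
    probs = torch.softmax(model(frame), dim=1)
    print(f"P(fake) = {probs[0, 1]:.2f}")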

“With the current rate that new deepfake technologies are being produced, it is nearly impossible for forensic researchers to keep up.”

These tools aren’t quite ready for prime time, however — as Brian Hosler, one of the researchers behind the Drexel University deepfake finder, readily admits. “Of the detectors that we have seen, many of them are very accurate, but only work given certain assumptions,” said Hosler. “Any difference from these assumptions, such as the use of different deepfake software, or changing the aspect ratio of the video, could potentially cripple the detector.”

One major problem is that current detectors’ assumptions revolve around the artifacts left by the software used to create deepfakes. These are quirks that, right now, an astute person may be able to notice, such as inconsistent blinking or odd lip movement; elements that betray that the resulting videos reside somewhere in the uncanny valley.
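
One widely cited heuristic along these lines is the eye aspect ratio (EAR), which drops sharply whenever an eye blinks; a face that never blinks naturally across thousands of frames is suspect. Below is a minimal sketch of the calculation, assuming the six landmark points per eye come from some facial-landmark detector (the coordinates here are made up for illustration).

    # Illustrative eye-aspect-ratio (EAR) check, one classic heuristic for
    # flagging the unnatural blinking of early deepfakes. The landmark
    # coordinates below are hypothetical; real ones would come from a
    # facial-landmark detector.
    import numpy as np

    def eye_aspect_ratio(eye):
        """eye: array of six (x, y) landmark points around one eye."""
        a = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
        b = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
        c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
        return (a + b) / (2.0 * c)

    # EAR drops toward zero during a blink; a face whose EAR never dips
    # below a threshold (roughly 0.2) across many frames may be synthetic.
    open_eye = np.array([[0, 2], [2, 4], [4, 4], [6, 2], [4, 0], [2, 0]], float)
    print(f"EAR for an open eye: {eye_aspect_ratio(open_eye):.2f}")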

But deepfake technology is getting better all the time, meaning that visual anomalies like the floaty heads of early deepfakes (bear in mind that those were only a couple of years ago) have already been largely fixed. While there continue to be visual markers, like odd facial depths and distorting motion blur when a face moves too quickly, these are increasingly things that your average viewer may not spot. That’s particularly true if that viewer, already accustomed to variable image quality online, has no reason to believe that what they are watching might be faked.

[Embedded video: “Terminator learns how to smile” deepfake]

“With the current rate that new deepfake technologies are being produced, it is nearly impossible for forensic researchers to keep up,” Hosler continued. “The biggest hurdle for creating widely applicable and reliable deepfake detectors is the consistency of the videos themselves. Deepfakes was originally an online community and the name of a specific piece of software, but now refers to any A.I.-generated or edited video. The methods used to create these videos can vary widely, and make reliable detection difficult.”

Even something as simple as recompressing an existing video can be enough to cover up whatever traces detectors use to spot deepfakes. That’s a particular problem for the most widely used video-distribution platforms, which recompress uploaded images and videos to save on file sizes. These newly compressed files frequently introduce additional video artifacts that overwrite the very ones deepfake detectors use as clues.
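
A rough way to see the effect is to recompress a frame and measure what changed. The sketch below (illustrative, using Pillow and NumPy on a random stand-in image) shows how a second JPEG pass layers fresh compression noise over exactly the kind of low-level residuals a detector would be looking for.

    # Demo: recompression overwrites low-level forensic traces.
    # Uses a random image as a stand-in for a real video frame.
    import io
    import numpy as np
    from PIL import Image

    def jpeg_roundtrip(img, quality):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    original = Image.fromarray(
        np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
    once = jpeg_roundtrip(original, quality=75)   # the "uploaded" copy
    twice = jpeg_roundtrip(once, quality=50)      # the platform's recompression

    # The per-pixel residual after the second pass is new compression noise
    # layered on top of whatever artifacts a deepfake pipeline left behind.
    residual = np.abs(np.asarray(once, int) - np.asarray(twice, int))
    print(f"Mean per-pixel change from recompression: {residual.mean():.1f}")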

How to deploy them

But developing the right detectors is only part of the problem. The much thornier issue is how to deploy these tools once they are robust enough to be used. Right now, nobody seems to know the answer. Banning deepfakes, while conceivably possible from a technical standpoint, doesn’t make much sense. Banning image-editing software because it might be used for nefarious purposes is a bit like banning keyboards because someone wrote some nasty comments on the internet.

“It would be great if social media and technology companies could automatically scan every video uploaded to their website …”

In a letter dated July 31, Twitter’s director of public policy and philanthropy, Carlos Monje, said that the platform makes efforts to stay on top of malicious deepfakes. “If we become aware of the use of deepfakes to spread misinformation in violation of our policies governing election integrity, we will remove that content,” Monje wrote.

There are two challenges when it comes to meeting this goal. The first is a subjective one: discerning between content that is designed to spread misinformation and content produced for satire. A popular YouTube video from 2012 shows Barack Obama “singing” Carly Rae Jepsen’s pop hit “Call Me Maybe.” It was produced by editing together hundreds of micro-clips of Obama saying individual words. At the time of writing, it has racked up more than 50 million views. Meanwhile, a recent manipulated video slowed down footage of Nancy Pelosi slightly to make it appear as though she was slurring her words. It was tweeted out by President Trump on May 24, and has so far racked up more than 95,000 likes. Both the 50 million views of the Obama song and the 95,000 likes of the Pelosi video are considerably higher than the approximately 80,000 votes that swung the last presidential election.

“PELOSI STAMMERS THROUGH NEWS CONFERENCE” pic.twitter.com/1OyCyqRTuk

— Donald J. Trump (@realDonaldTrump) May 24, 2019

Which of these videos (neither of which, I should add, is a deepfake) is designed to spread misinformation? Many of us will conclude that the latter is more of a calculated political move than the former. But explaining to a bot why one type of vocal manipulation is fine (even welcome), while the other potentially is not, is very difficult. What if the second video were tweeted by a clearly satirical, nonpartisan account called, say, DrunkFakes? How about if it were then retweeted, without context, by a political opponent? Is a deepfake of Apprentice-era Donald Trump fine, but one of him post-inauguration not? These kinds of contextual, nuanced arguments matter.

But if the battle against deepfakes is going to be fought seriously, decisions will need to be taken quickly, authoritatively, and without accusations of bias. Acting quickly is crucial, but so is making the right decision.

The problem of scale

This is where things are hampered by the second of the two big challenges: scale. “It would be great if social media and technology companies could automatically scan every video uploaded to their website, but the reality is that it’s nearly impossible given the amount of video content that is uploaded every day,” Hosler said.

According to stats from May 2019, around 500 hours of video are uploaded to YouTube every single minute of the day. That scale makes manual detection all but impossible, even as it makes immediate action all the more necessary. “We can’t rely on technology companies having infinite resources,” Hosler said.
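
The back-of-the-envelope arithmetic makes the point starkly. A quick calculation, using only the 500-hours-per-minute figure above, looks something like this:

    # Scale of the moderation problem, from YouTube's ~500 hours of video
    # uploaded per minute (May 2019 figure cited above).
    HOURS_UPLOADED_PER_MINUTE = 500
    hours_per_day = HOURS_UPLOADED_PER_MINUTE * 60 * 24  # 720,000 hours/day
    # Assume a human reviewer could watch eight hours of footage per shift:
    reviewers_needed = hours_per_day / 8                 # 90,000 reviewers
    print(f"{hours_per_day:,} hours/day would need {reviewers_needed:,.0f} "
          "full-time reviewers just to watch everything once")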

“The current [anti-deepfake] technology isn’t yet ready to be used by the average person.”

Recently, there has been a more concerted push to make platforms like Facebook legally responsible for the content they host. Whether or not these proposals amount to anything remains to be seen. When it comes to deepfakes, though, the task is far more challenging than monitoring posts for certain keywords or for links to undesirable websites.

Politicians are quickly embracing deepfakes as one of the big challenges to be solved, however, and they have theoretical solutions to offer. For example, New York Democratic Representative Yvette Clarke has suggested that altered media must be labeled as such using a watermark, with civil and criminal penalties for creators or uploaders who fail to label their videos correctly.
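
As a sketch of what that labeling idea could look like in its very simplest form, here is a toy example that stamps an image file with a machine-readable disclosure tag using a PNG metadata text chunk. The tag name is hypothetical, and a real scheme would need a watermark that survives the recompression discussed earlier, since plain metadata is trivially stripped.

    # Toy disclosure label for altered media, stored as PNG metadata.
    # The "DisclosureLabel" tag is hypothetical, not part of any standard.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_as_altered(in_path, out_path):
        img = Image.open(in_path)
        meta = PngInfo()
        meta.add_text("DisclosureLabel", "ALTERED MEDIA")
        img.save(out_path, pnginfo=meta)  # out_path should be a .png file

    def is_labeled(path):
        return Image.open(path).text.get("DisclosureLabel") == "ALTERED MEDIA"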

Meanwhile, the California Legislature is considering a bill that would prohibit a person or entity from knowingly distributing deceptive audio or visual media of a candidate within 60 days of an election. Again, however, the question of intent is tricky. As the bill notes, it would not represent a blanket ban on political deepfakes, but rather be aimed at those distributed “with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.” It would also presumably be difficult to use such a law against deepfakes created outside of the U.S.

With plenty of concern but no solutions quite ready to go, the battle over deepfakes will continue. What would the ideal solution look like? On the technological front, some kind of robust, real-time tool for blocking or calling attention to questionable videos would surely be desirable. But education that raises awareness of the risks of deepfakes is also important. And while all of this is going on, hopefully researchers will be holed up developing the tools that could one day be baked into browsers or built into system back ends to ensure that deepfake news is kept at bay.

“The current technology isn’t yet ready to be used by the average person,” Hosler said. “The good news is that, in the hands of an expert, these tools can be hugely informative, and effective in fighting the spread of misinformation. For now, we will have to take everything on a case-by-case basis, but we will continue to pursue tools that are accurate and accessible to everyone.”

Editors' Recommendations

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…
ChatGPT just plugged itself into the internet. What happens next?
OpenAI's website open on a MacBook, showing ChatGPT plugins.

OpenAI just announced that ChatGPT is getting even more powerful with plugins that allow the AI to access portions of the internet. This expansion could simplify tasks like shopping and planning trips without the need to access various websites for research.

This new web integration is in testing with select partners at the moment. The list includes Expedia, FiscalNote, Instacart, Kayak, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier.

Read more
Google missed big chance with ChatGPT-like tech, report claims
Google Logo

Google missed a golden opportunity to lead the way with its own ChatGPT-like chatbot technology tool two years ago, but an overly cautious attitude from those at the top prevented the company from releasing it, according to a Wall Street Journal report on Tuesday.

The two Google researchers who created the powerful conversational AI technology reportedly told colleagues at the time that their creation could revolutionize how people searched on the internet and worked with computers.

Read more
Here’s why Bing Chat conversation lengths are now limited
A sad robot holds a kitchen timer that's in the red.

Bing Chat seems to now limit the length of conversations, in an attempt to avoid the AI's occasional, unfortunate divergence from what you might expect from a helpful assistant.

Bing Chat has only been live for a little over a week, and Microsoft is already restricting the usage of this powerful tool that should be able to help you get through a busy day.  Microsoft has analyzed the results of this initial public outing and made a few observations about the circumstances that can lead Bing Chat to become less helpful.

Read more