How artists and activists are using deepfakes as a force for good

In a recently released video, what appears to be a tense Brett Kavanaugh speaks before members of the United States Congress. “It’s time to set the record straight,” he begins. Over the next few minutes, the Supreme Court Justice admits it’s possible that he committed sexual assault and expresses remorse for the way he responded to the allegations by Christine Blasey Ford in his testimony. “For that, I take responsibility and I apologize.”

Thing is, this scene isn’t real. The footage is doctored, and Kavanaugh never actually said those things.

In reality, Kavanaugh denied and dismissed the charges and cast himself as the victim. The video described above is from a series of deepfaked clips that envision a future where divisive public figures like Kavanaugh, Alex Jones, and Mark Zuckerberg take responsibility for their past transgressions.

The series, titled Deep Reckonings, is the brainchild of Stephanie Lepp — an artist who aims to elicit positive change in the world by leveraging deepfake technology to help people see and imagine better versions of themselves.

It’s a lofty and somewhat abstract project, but Lepp isn’t alone in her efforts. She’s part of a growing league of creators who aim to use deepfake technology for good.

Deepfake it till you make it

Deepfakes have had a controversial journey so far. The technology has been used widely for nefarious purposes like pornography creation and disinformation campaigns, which has brought it under sharp scrutiny from both governments and tech companies that fear the technology’s weaponization.

“Given that the overwhelming majority of deepfakes are nefarious in nature, it’s understandable that we’ve focused on their weaponization,” says Lepp. “But this focus has prevented us from realizing their prosocial potential. Specifically, deepfakes can be used for purposes of education, health, and social change.”

Stephanie Lepp

She argues that, similar to how virtual reality has been used to help patients recover from brain injuries by letting them interact with virtual memories, deepfakes can be employed for psychological healing in trauma victims. For example, imagine a scenario where doctors could script deepfakes of an addict’s sober future self and use that to encourage them down the path of recovery.

The concept, at least in theory, is sound. Jonathan Gratch, director of virtual human research at the University of Southern California’s Institute for Creative Technologies, has found that seeing yourself in VR can be highly motivating, and that the same concept could easily be applied to deepfake footage. He suggests that if a patient’s face were subtly blended into their doctor’s face, the patient would be more likely to follow the doctor’s advice.

More than memes and misinformation

Despite the fact that negative applications of deepfakes tend to get more attention, positive applications like Lepp’s are on the rise. Within the past couple years, the technology has made appearances in the fields of storytelling, prosocial projects, and more.

The ALS Association’s Project Revoice, for example, enables amyotrophic lateral sclerosis patients who’ve lost their ability to speak to continue using their voice. How? By using deepfakes to create personalized synthetic vocal tracks that can be played on demand with a soundboard.

In a separate project from the nonprofit antimalaria organization Malaria Must Die, celebrity athlete David Beckham delivered a message in nine different languages (and voices) thanks to deepfaked audio and video that made his lips match the words.

In one particularly striking campaign from earlier in 2020, the Massachusetts Institute of Technology’s Center for Advanced Virtuality sought to educate the public on misinformation by producing a deepfake of former U.S. President Richard M. Nixon delivering the contingency speech prepared in 1969 in case the Apollo 11 crew were unable to return from the moon.

These kinds of public service announcements and awareness campaigns are just the tip of the iceberg. Deepfake tools have also helped to simplify processes in the entertainment industry that otherwise demand high-end equipment and painstaking manual work, such as de-aging and voice cloning. Every face in a recent music video by The Strokes was fake, for instance, so that the band’s roughly 40-year-old members could look like they were 20.

Ohad Fried, a senior lecturer of computer science at Israel’s Interdisciplinary Center Herzliya, says that thanks to deepfakes, “what used to take years of artist time can now be achieved by independent small studios. This is always good news for diversity and quality of the media we consume.”

Tipping the scales

However, deepfake technology’s potential to do harm — especially as it gets more accessible — remains a concern. Aviv Ovadya, founder of the Thoughtful Technology Project, agrees that the ability to create synthetic media can have “numerous positive impacts, for storytelling, for those with disabilities, and by enabling more seamless communication across languages.” But at the same time, he warns there’s still a lot of room for harm when the technology goes mainstream and a lot of work that needs to be done to minimize these risks.

“Even these positive use cases can unintentionally lead to real and significant harm,” he told Digital Trends. “Excerpts from art pieces attempting to create empathy can also be taken out of context and misused.”

“The goal should be to build this technology in a way that mitigates those negative impacts as much as possible.”

Experts have repeatedly sounded the alarm about the need to channel more resources into detection programs and formal ethics guidelines, though some caution that legal intervention could end up hampering free speech. No one is quite sure yet which direction deepfakes will ultimately take. As with any emerging technology, a balance will eventually be reached, and the responsibility will fall on tech companies, policymakers, and creators to ensure the scales remain tipped toward the good side.

Ovadya also suggests limiting deepfake tools’ accessibility for the masses until researchers are able to “complete some of the fortifications that we need to protect our society from the potential negative impacts. The goal should be to build this technology in a way that mitigates those negative impacts as much as possible at the very least.”

For now, though, Lepp will spend her time focusing on her next deepfake protagonist: Donald Trump and his concession speech.
