
How artists and activists are using deepfakes as a force for good

In a recently released video, what appears to be a tense Brett Kavanaugh speaks before members of the United States Congress. “It’s time to set the record straight,” he begins. Over the next few minutes, the Supreme Court Justice admits it’s possible that he committed sexual assault and expresses remorse for the way he responded to the allegations by Christine Blasey Ford in his testimony. “For that, I take responsibility and I apologize.”
This story is part of Tech for Change: an ongoing series in which we shine a spotlight on positive uses of technology, and showcase how they're helping to make the world a better place.

Thing is, this scene isn’t real. The footage is doctored, and Kavanaugh never actually said those things.


In reality, Kavanaugh denied and disregarded the charges and played the victim. The video described above is from a series of deepfaked clips that envision a future in which divisive public figures like Kavanaugh, Alex Jones, and Mark Zuckerberg take responsibility for their past transgressions.


The series, titled Deep Reckonings, is the brainchild of Stephanie Lepp — an artist who aims to elicit positive change in the world by leveraging deepfake technology to help people see and imagine better versions of themselves.

It’s a lofty and somewhat abstract project, but Lepp isn’t alone in her efforts. She’s part of a growing league of creators who aim to use deepfake technology to do good.

Deepfake it ’till you make it

Deepfakes have had a controversial journey so far. The technology has been used widely for nefarious purposes like pornography creation and disinformation campaigns, which has brought it under sharp scrutiny from both governments and tech companies that fear the technology’s weaponization.

“Given that the overwhelming majority of deepfakes are nefarious in nature, it’s understandable that we’ve focused on their weaponization,” says Lepp. “But this focus has prevented us from realizing their prosocial potential. Specifically, deepfakes can be used for purposes of education, health, and social change.”


She argues that, similar to how virtual reality has been used to help patients recover from brain injuries by letting them interact with virtual memories, deepfakes can be employed for psychological healing in trauma victims. For example, imagine a scenario where doctors could script deepfakes of an addict’s sober future self and use that to encourage them down the path of recovery.

The concept, at least in theory, is sound. Jonathan Gratch, director of virtual human research at the University of Southern California’s Institute for Creative Technologies, has found that seeing yourself in VR can be highly motivating, and that the same concept could easily be applied to deepfake footage. He suggests that if a patient’s face was subtly blended into their doctor’s face, the patient would be more likely to follow the doctor’s advice.

More than memes and misinformation

Although negative applications of deepfakes tend to get more attention, positive applications like Lepp’s are on the rise. Within the past couple of years, the technology has made appearances in the fields of storytelling, prosocial projects, and more.


The ALS Association’s Project Revoice, for example, enables amyotrophic lateral sclerosis patients who’ve lost their ability to speak to continue using their voice. How? By using deepfakes to create personalized synthetic vocal tracks that can be played on demand with a soundboard.

In a separate project from the nonprofit anti-malaria organization Malaria Must Die, celebrity athlete David Beckham delivered a message in nine different languages (and voices) thanks to deepfaked audio and video that made his lips match the words.


In one particularly striking campaign from earlier in 2020, the Massachusetts Institute of Technology’s Center for Advanced Virtuality sought to educate the public on misinformation by producing a deepfake of former U.S. President Richard M. Nixon delivering the contingency speech written in 1969 in the event the Apollo 11 crew were unable to return from the moon.

These kinds of public service announcements and awareness campaigns are just the tip of the iceberg. Deepfake tools have also helped to simplify processes in the entertainment industry that otherwise demand high-end equipment and time-consuming labor, such as de-aging, voice cloning, and much more. Every face in a recent music video by The Strokes was fake, for instance, so that the band’s roughly 40-year-old members could look like they are 20.

Ohad Fried, a senior lecturer of computer science at Israel’s Interdisciplinary Center Herzliya, says that thanks to deepfakes, “what used to take years of artist time can now be achieved by independent small studios. This is always good news for diversity and quality of the media we consume.”

Tipping the scales

However, deepfake technology’s potential to do harm — especially as it gets more accessible — remains a concern. Aviv Ovadya, founder of the Thoughtful Technology Project, agrees that the ability to create synthetic media can have “numerous positive impacts, for storytelling, for those with disabilities, and by enabling more seamless communication across languages.” But at the same time, he warns there’s still a lot of room for harm when the technology goes mainstream and a lot of work that needs to be done to minimize these risks.

“Even these positive use cases can unintentionally lead to real and significant harm,” he told Digital Trends. “Excerpts from art pieces attempting to create empathy can also be taken out of context and misused.”


Experts have repeatedly sounded the alarm, calling for more resources to be channeled into detection programs and official ethics guidelines, though legal intervention could end up hampering free speech. No one is quite sure yet which direction deepfakes will ultimately take. As with any emerging technology, a balance will eventually be struck, and the responsibility will fall on tech companies, policymakers, and creators to ensure the scales remain tipped toward the good side.

Ovadya also suggests limiting deepfake tools’ accessibility for the masses until researchers are able to “complete some of the fortifications that we need to protect our society from the potential negative impacts. The goal should be to build this technology in a way that mitigates those negative impacts as much as possible at the very least.”

For now, though, Lepp will spend her time focusing on her next deepfake protagonist: Donald Trump and his concession speech.

Shubham Agarwal
Former Digital Trends Contributor
Shubham Agarwal is a freelance technology journalist from Ahmedabad, India. His work has previously appeared in Firstpost…