
Why deepfakes will soon be as commonplace as Photoshop

Did you know you can make a deepfake video from the comfort of your own home, or on your phone? Download one of the plethora of face-swap or deepfake apps casually available from your local app store and you too can influence an election.

Okay, not really, and we definitely don’t endorse that. But in the hype and concern around deepfake technology, and its very real misuse, the simple truth is that this technology isn’t going away. In fact, it’s going to become as commonplace as Photoshop, especially if the app developers working on deepfake tech have anything to say about it: We could soon see hyper-targeted ads with our own faces on them.

Roman Mogylnyi, the CEO of RefaceAI, a startup based in Ukraine, said the team has been working with machine learning since 2011 and pivoted to making its own apps based on deepfake tech in 2014. RefaceAI has already released a photo-deepfake app called Reflect, and is on the verge of releasing both its own video-deepfake app and a web service that will help detect deepfake videos.

“We saw from the very beginning that this technology was being misused,” Mogylnyi said. “We thought we could do it better.”


RefaceAI has already worked with some film production companies — although Mogylnyi couldn’t say which ones — to use its technology to swap the faces of actors onto body doubles, at a cost far less than what it would have been to fly the actors back to set and reshoot the scenes in question. This, Mogylnyi said, is what the company sees as the future of deepfakes. “We want to use it for marketing, for personalizing ads, for gifs, for entertainment materials in general,” he said.

The future of media

RefaceAI isn’t the only company trying to get ahead of this inevitable marketing curve. Carica, an app based out of South Korea, is also developing deepfake GIFs and videos that let a person graft their face onto a popular reaction GIF to send to friends and family. The app already features pop-up advertisements that incorporate the user’s face into the ad’s photo or video.


“We want our company to become a media company,” Carica engineer Joseph Jang told Digital Trends. Deepfakes are just the way they’re starting out. “I think media companies will start adopting this feature. It will become an option you have, just like a filter. It will become just so normal for people.”

Both Jang and Mogylnyi used the proliferation of Photoshop as the model for where they see deepfakes going: so common as to be unremarkable, if still a bit controversial. And for both of them, the political and ethical problems wrapped up in deepfakes are really just run of the mill.

Shamir Allibhai, the CEO of the deepfake-detecting platform Amber Video, told Digital Trends that his overriding view was that deepfake technology, like most other kinds of tech, is amoral.


“It would be like saying Microsoft Word is immoral, is evil because potential terrorists will use it to espouse violent, extremist ideology, which will inspire others to espouse violent extremist ideology,” he said. “I very much see this technology in the same vein.”

Amber Video is a platform that advertises its services as combating deepfakes to “prevent fraud, reputation loss, and stakeholder mistrust.” Allibhai said he didn’t know if he was ready to buy something just because his face was on it, but he did agree that, eventually, the technology will be pervasive.

“It’s a mirror on society,” he said. “You see this in technologies like the Twitter platform. You see the best of humanity and the worst of us. Deepfake technology will also mirror society. There are people who will use it for lighthearted satire and poke fun at authoritarian regimes. But we will also try to sow chaos.”

Weaponization of deepfakes

The biggest fear remains the potential abuse and the weaponization of deepfakes. As they become more common, we could even see them deployed not just for wide-scale disinformation campaigns, but for practical jokes or high school bullying. If the technology is accessible enough, anyone can use it.

Both Carica and RefaceAI said they are taking steps to mitigate potential abuse: RefaceAI with its deepfake-detection web service, and Carica with content moderators. The whole Carica company is just nine people right now, Jang said, and they trade off content moderation duties.


“This was really the first question we asked ourselves,” said RefaceAI’s Mogylnyi. “We’ve been dealing with this in terms of political scandals in Ukraine for years. We understand our tech can be misused. Even Photoshop can and has been used for bullying. But on the other hand, we’re providing an antidote for it. We did have users upload sexual content to our app, and we banned those users right away.”

Carica is a small enough operation that hands-on moderation is what it has to work with for now. But overall, Jang wasn’t worried.

“This happens with all new technology,” Jang said. “First, it’s driven by misuse, but then after that phase, it becomes available for everyone and people use it for normal things.”
