Don’t be fooled — this automated system sneakily manipulates video content

Transferring One Video Into the Style of Another

A team of researchers at Carnegie Mellon University has developed an artificial intelligence system that automatically transfers content from one film into the style of another. In the vein of “deep fakes” — the A.I.-augmented videos infamous for superimposing one person’s face onto another’s body — the CMU system presents another case for how difficult it could be to distinguish fiction from reality in the future.

The CMU researchers have named their system Recycle-GAN, after generative adversarial networks (GANs), the class of algorithms that helps make deep fakes possible by applying the style of one image or video to another.

In a video released earlier this month, the researchers demonstrated how a source video of Barack Obama speaking can be processed to make it seem as though Donald Trump is mouthing the words. Or a monologue from John Oliver can be transformed into one from Stephen Colbert. Recycle-GAN isn’t limited to human faces, either. The researchers also show how the system can make a daffodil bloom with the same mechanics as a hibiscus.

The end result isn’t perfect — a slew of digital artifacts around the edges of the edited faces make it clear that things aren’t exactly as they seem. Still, it’s pretty impressive.

“Recycle-GAN encodes both spatial and temporal information,” Aayush Bansal, a CMU Ph.D. student who worked on the project, told Digital Trends. “Spatial constraints enable it to learn transformation from one domain to another, and the temporal information helps in better learning stylistic information and improve the spatial transformation.”
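The temporal constraint Bansal describes can be made concrete with a toy sketch. The idea behind the “recycle” loss (a minimal illustration, not the paper’s actual networks or losses): map two consecutive frames from one domain into the other, predict the next frame there, map it back, and compare with the true next frame. The linear functions below are hypothetical stand-ins for the learned generators and temporal predictor.

```python
import numpy as np

# Hypothetical stand-ins for the learned networks (assumptions for
# illustration only): simple linear maps between "domains" X and Y.
G_xy = lambda x: 2.0 * x            # X -> Y generator (toy)
G_yx = lambda y: 0.5 * y            # Y -> X generator (toy)
P_y = lambda y1, y2: 2 * y2 - y1    # temporal predictor in Y (linear extrapolation)

def recycle_loss(x1, x2, x3):
    """Recycle loss for three consecutive frames from domain X:
    translate two frames into Y, predict the next Y frame, translate
    the prediction back to X, and compare with the real third frame."""
    y1, y2 = G_xy(x1), G_xy(x2)
    y3_pred = P_y(y1, y2)           # where should the Y video go next?
    x3_rec = G_yx(y3_pred)          # map that prediction back to X
    return float(np.mean((x3 - x3_rec) ** 2))

rng = np.random.default_rng(0)
frames = [rng.standard_normal(4) for _ in range(3)]
print(recycle_loss(*frames))
```

Minimizing this quantity alongside the usual per-frame (spatial) adversarial losses is what pushes the translation to respect motion over time, not just appearance frame by frame.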

Bansal said he was motivated to develop Recycle-GAN by an urge to, in a sense, resurrect the dead. “One of my life goals is to bring back … Charlie Chaplin in our movies,” he said. As such, Bansal sees the system as a tool for artists, such as moviemakers, and data-hungry researchers.

Bansal acknowledged that bad actors could exploit a tool like Recycle-GAN to perform troubling manipulations similar to those we’ve seen with deep fakes, including fake news videos and fake porn. However, he hopes his team’s approach could provide a solution for identifying deep fakes rather than fuel to the fire.

“Our approach enables generation of data which could be used to train a simple machine learning model that can discriminate between real and fake,” he said. “Generating this fake data was hard earlier because most deep fakes out there require human intervention or manual supervision, and as such we could never get an automatic way to detect them. However, now we have an automatic way to generate such data that we can train models that could detect fake content with some reliable accuracy.”
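The “simple machine learning model” Bansal mentions could be as basic as the sketch below. This is not the team’s detector — it is a toy example under stated assumptions: synthetic Gaussian vectors stand in for features extracted from real frames and automatically generated fake frames, and a nearest-class-mean rule stands in for the classifier.

```python
import numpy as np

# Toy stand-in data (assumption): 8-dimensional "features" of 500 real
# frames and 500 automatically generated fake frames, drawn from two
# overlapping Gaussian distributions.
rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=(500, 8))
fake = rng.normal(0.8, 1.0, size=(500, 8))

# A deliberately simple detector: classify by nearest class mean.
mu_real, mu_fake = real.mean(axis=0), fake.mean(axis=0)

def predict(x):
    """Return 1 ('fake') if x is closer to the fake-class mean, else 0."""
    return int(np.linalg.norm(x - mu_fake) < np.linalg.norm(x - mu_real))

samples = np.vstack([real, fake])
labels = np.array([0] * 500 + [1] * 500)
preds = np.array([predict(s) for s in samples])
acc = (preds == labels).mean()
print(f"training accuracy: {acc:.2f}")
```

The point of the quote is the data, not the model: once fakes can be generated automatically and at scale, even simple classifiers like this have enough labeled examples to learn from.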

Bansal presented his team’s work at the European Conference on Computer Vision in Germany earlier this month.

Dyllan Furness
Dyllan Furness is a freelance writer from Florida. He covers strange science and emerging tech for Digital Trends, focusing…