
How the Welcome to Chechnya documentary found security in face-swapping VFX

Welcome To Chechnya (2020): Official Trailer | HBO

This year’s Academy Award season already includes plenty of firsts due to the pandemic-related circumstances surrounding the ceremony, but one film’s presence among the early Oscar contenders is particularly noteworthy: Welcome to Chechnya.

Director David France’s documentary chronicles the antigay purges in the Russian republic of Chechnya and the efforts of activists working to rescue victims. While it’s no surprise the powerful film is a potential nominee in the Oscars’ Best Documentary Feature category, Welcome to Chechnya is also the first documentary to be on the shortlist of contenders in the Best Visual Effects category, thanks to the method it used to protect the identities of people featured in the film.

In order to strike a balance between protecting Welcome to Chechnya’s interview subjects and preserving the emotional resonance of their experiences, France and visual effects supervisor Ryan Laney combined digital face replacement, machine-learning software, and a group of actors and activists who let their own faces serve as digital stand-ins for some of the individuals in the film.

Digital Trends spoke to France and Laney about their use of face-swapping as identity protection in Welcome to Chechnya and how it could shape the future of documentary filmmaking.


Digital Trends: How did you feel when you first heard Welcome to Chechnya was included on the Oscars’ visual effects shortlist?

Ryan Laney: We were floored. Honestly, David and I had a conversation in December about submitting for the Visual Effects Society Awards, and we were thinking of best supporting visual effects for it. [It felt like] a supporting role. But when we heard about the Academy considering us for Best Visual Effects, we were just in a state of disbelief that it was real.

It’s easy to compare this particular effect to the deepfake videos we’ve seen, but it doesn’t feel quite the same when you’re watching the film. How did you settle on using this particular technique?

Laney: Yeah, it is a machine-learning process, so it does share some lineage with deepfakes. But deepfakes are inherently nonconsensual. The actor doesn’t know they’re being used and the subject in the film doesn’t know they’re being used, and it’s an attempt to fool the audience. But David was very careful about his subjects, the volunteers who lent their faces, and how he spoke to the audience.

David France: Obviously, we wanted to disguise the people in the film in ways that would make them feel comfortable to participate and tell their stories, knowing that they are literally being hunted around the globe in order to keep them from speaking. We began approaching the question of how to disguise them very indirectly, and it certainly had nothing to do with deepfakes.

When we started talking with Ryan, he initially proposed something he called “style transfer” that would take, for example, a piece of art like a Picasso and use it in a kind of algorithmic way to replace the face with a new skin. That was really interesting, but it was also disquieting. It didn’t give us the kind of human face we were looking for, and that ultimately led us to discussing how we could put an actual face into this process.
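(A note for the technically curious: the “style transfer” France describes is closely related to the classic neural style transfer recipe, which re-renders one image in the textures of another by matching deep-network features. The sketch below, in Python with PyTorch, illustrates that general technique under our own assumptions about layer choices, weights, and step counts; it is not the film’s implementation, just the most common textbook formulation.)

```python
# Illustrative neural style transfer (Gatys-style), not the film's code.
# Layer indices, loss weights, and step counts are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {21}               # conv4_2
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 through conv5_1

def extract(x):
    """Run x through VGG, collecting content and style activations."""
    content, style = [], []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content.append(x)
        if i in STYLE_LAYERS:
            style.append(x)
    return content, style

def gram(feat):
    """The Gram matrix summarizes texture while discarding spatial layout."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

def stylize(face, artwork, steps=200, style_weight=1e6):
    """Re-skin a face crop (1x3xHxW, ImageNet-normalized) with an artwork's texture."""
    target_content, _ = extract(face)
    _, target_style = extract(artwork)
    target_grams = [gram(s) for s in target_style]

    output = face.clone().requires_grad_(True)
    opt = torch.optim.Adam([output], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        content, style = extract(output)
        c_loss = F.mse_loss(content[0], target_content[0])
        s_loss = sum(F.mse_loss(gram(s), g) for s, g in zip(style, target_grams))
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return output.detach()
```

As France notes, the result keeps the structure of the original face while repainting its surface, which is exactly why it anonymizes without reading as human.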


At first, we were calling it “face transplants” or “face doubles.” And when we looked at an early example, we were astonished at how effective it was at not just anonymizing the people who needed to be hidden, but also allowing us to see the intimacy of their expressions, from the horrors of what they’ve been through and the uncertainty of their situation in the underground, to their hope for a better life. You could read all of that through this process safely, behind the face of a volunteer.

We then reached out to some activists in New York and asked them if they would perform this work as a kind of a human shield to protect the lives of the people. And in the end, that’s exactly what they did.
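(For readers wondering how a consensual face double differs mechanically from a deepfake, the underlying machinery is broadly similar: most face-swap tools train one shared encoder with a separate decoder per identity, then cross them at inference time. The Python/PyTorch sketch below is a bare-bones illustration of that general architecture under assumed layer sizes and training details; it is not Laney’s actual pipeline, which involves far more alignment, compositing, and review work.)

```python
# Minimal shared-encoder / two-decoder face-swap sketch (illustrative only).
# Layer sizes, resolutions, and the training setup are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 128 -> 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU()  # 32 -> 16
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid()
        )

    def forward(self, z):
        return self.net(z)

# One encoder is shared; each identity gets its own decoder. During training,
# each decoder learns to reconstruct its own identity from the shared latent
# space. At swap time, the decoders are crossed.
encoder = Encoder()
decoder_subject = Decoder()    # trained on the person who must stay hidden
decoder_volunteer = Decoder()  # trained on the consenting face double

def swap(subject_frame):
    """Encode the subject's aligned face crop, decode it as the volunteer's face."""
    with torch.no_grad():
        latent = encoder(subject_frame)
        return decoder_volunteer(latent)

frame = torch.rand(1, 3, 128, 128)  # stand-in for an aligned face crop
print(swap(frame).shape)            # torch.Size([1, 3, 128, 128])
```

The crucial difference from a deepfake is not the architecture but the consent on both sides of the swap and the disclosure to the audience.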


There are so many different methods typically used to disguise someone’s identity in a documentary. What other techniques did you try, and why didn’t they work for the film?

France: We tried using a kind of rotomation — a sort of A Scanner Darkly approach — to render the individuals as cartoonlike figures. My initial assumption was that the audience would learn how to watch them and experience the journey with them in this two-dimensional rendering, but what it didn’t do was disguise them. It was a lot of work and a lot was changed, but certain aspects of their presentation were still there. It emphasized those elements in a way — a lot like the way caricatures bring out individuals’ uniqueness and make them even more identifiable. So we realized that approach wasn’t going to work.
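(The reason the cartoon treatment backfired shows up even in a toy version of it: edge-preserving smoothing plus bold line work keeps exactly the contours that make a face recognizable. The OpenCV sketch below is a crude, hypothetical stand-in for real rotomation, included only to illustrate that effect; the parameter values are arbitrary.)

```python
# Toy cartoonlike rendering: flatten colors, then overlay hard line work.
# This is an illustration, not the rotomation process used on the film.
import cv2

def cartoonize(frame):
    # Flatten color regions while keeping edges sharp.
    smoothed = cv2.bilateralFilter(frame, d=9, sigmaColor=75, sigmaSpace=75)
    # Threshold the luminance so strong edges become black lines on white.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 7)
    edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, 9, 2)
    # Keep the smoothed color everywhere except along the edge lines.
    return cv2.bitwise_and(smoothed, smoothed, mask=edges)

frame = cv2.imread("frame.png")  # hypothetical input frame
cv2.imwrite("frame_cartoon.png", cartoonize(frame))
```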

We then tried blurry ovals — the standard blurry or pixelated look on their faces. That omitted all of their humanity, though. We went to artists and asked them to reinterpret the faces, but we felt that had the journalistically problematic impact of reinterpreting their journeys, and putting an artist’s impression in between reality and the audience. So those techniques didn’t work, either.
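(The “blurry ovals” France mentions are trivial to produce, which is part of why they are so common in witness footage. A minimal sketch in Python with OpenCV might look like the following; the Haar-cascade detector and block size are assumptions for illustration, not anything used on the film.)

```python
# Standard pixelated-face treatment, the approach the filmmakers rejected.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def pixelate_faces(frame, blocks=12):
    """Detect faces and replace each with a chunky, low-resolution mosaic."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        face = frame[y:y + h, x:x + w]
        # Downscale, then upscale with nearest-neighbor to get visible blocks.
        small = cv2.resize(face, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
        frame[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST)
    return frame
```

The technique hides identity, but, as France says, it erases expression along with it, which is the whole problem for a film built on intimacy.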


We even tried a Snapchat-like technology where we would put glasses on them — or masks or new noses or things like that — to disguise them in some way. What that wasn’t doing, though, was helping us tell this really urgent, human story. We kept losing the human aspect of it.

It wasn’t until we saw Ryan’s first pass on the face swap using a volunteer that we knew we had something that would allow us to show the movie to an audience. We had promised everyone in the film that we wouldn’t release it until they were satisfied with their disguises and their presentation. But every early attempt we came up with was so far away from what we knew they would accept that we never showed it to them. Ultimately, it was months into the R&D on this that we finally had something we could present to them as a possibility and get their sign-off.

There’s this really amazing, pivotal moment late in the film when one of the individuals, Grisha, goes public and the volunteer’s face layered over his own dissolves away just as he reveals his real name, Maxim. How did that scene develop behind the scenes? 

France: We had long discussions about whether to cover him in the first place, knowing that he would eventually go public. We realized that we wanted the audience to know that people were being covered. That was a part of the way we were telling the story of their danger. If we didn’t cover Maxim, it would have suggested he wasn’t in the same kind of danger the others were facing.


He was in mortal danger through the first two-thirds of the film, so we felt it was important to cover him. We did try some experimentation with the point in the press conference when the Grisha face melts away to reveal Maxim’s face, and eventually settled on that moment in the film when he is called out by name — his official name — for the first time, and the camera swings to him.

That’s the moment when he is most exposed and really the most courageous. We chose that moment to give him a close-up in order to give the audience a chance to understand what it must be like for him at that moment, to become so open and so at risk, but also be so brave.

Unlike most films, Welcome to Chechnya makes a decision to tell its audience upfront about the visual effects it uses. Like you mentioned, they’re a part of the story. What were the early discussions like around how to present the visual effects in the film?

Laney: We wanted to make sure the audience would be aware of where we were touching pixels, and that came from a couple of angles. One was this idea of media integrity, because this is a journalistic project. Changing faces in a blockbuster film is no big deal, but when you’re talking about journalism, it’s a different story. We wanted to be honest and upfront about what we were doing.


Witness testimony also has a very specific visual language, and people know instantly that blurry ovals or fogging over faces are indicative of danger — that the person being covered should not be seen on film. And so, along with finding the fine line between hiding faces and using blurry ovals, we wanted to tie what we were doing to that visual language, too.

David talked earlier about wanting to train the audience’s eyes on the effect. We ended up softening things a little extra in the first 20 or so shots to help acclimate the audience to it.

Protecting the identities of everyone in the film is so important. What steps did you take to make sure their disguises were secure?

Laney: The mechanism involves something like an encryption key, so without the encryption key, it’s impossible to reverse engineer what we did. We felt good about that aspect of things. But there was a lot of adjustment to security overall for the film. We built a secret lab that was entirely offline, so all of our turnovers were on a hand-delivered drive that was given to me in person. All of the transfers we did for dailies and work reviews were done in an encrypted fashion, with passwords that weren’t shared online.
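(Laney’s “encryption key” is an analogy for the trained face-swap model itself, but the hand-offs he describes also rest on ordinary symmetric encryption: material locked with a key that never travels over a network cannot be recovered without that key. A minimal, hypothetical sketch using Python’s cryptography library, not the production’s actual tooling, follows.)

```python
# Symmetric encryption sketch: only the holder of `key` can recover the file.
# File names and workflow are illustrative assumptions.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # kept offline and hand-carried, never sent over a network
cipher = Fernet(key)

with open("dailies_reel.mov", "rb") as f:   # hypothetical turnover file
    encrypted = cipher.encrypt(f.read())

with open("dailies_reel.enc", "wb") as f:
    f.write(encrypted)

# Later, someone holding the same key restores the original:
# original = Fernet(key).decrypt(encrypted)
```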


There were a lot of other security considerations, too. Cell phones weren’t allowed in the workspace. No smartwatches or anything that could record were allowed, and all the machines in the workspace were offline.

Have you heard from other visual effects artists or studios about your work on the film? This feels like such a smart, new use of the techniques that are becoming popular lately. 

Laney: I have heard from some of them, and the response has been great. We’re just thrilled people see what we’re trying to do. Thanos in Avengers: Endgame used a similar kind of deep learning for some of his facial reconstruction, and I think this idea of using machine-learning techniques is going to work its way into more visual effects as time goes on.

But more than just that, it’s that we had more than 400 shots — over an hour of face veiling — in a documentary. That’s a big change for documentary filmmaking. There’s now this tool for filmmakers to tell their stories in ways that haven’t been done before, and it also provides some additional security for witnesses to tell their stories and do it in a human way. They don’t need to be monsters in the shadows. They can have a voice and be in the light and have their story translated effectively and truthfully. Their expressions and emotions can really come through in the film.

The documentary Welcome to Chechnya is available now on HBO and the HBO Max streaming service.
