Netflix and acclaimed director Martin Scorsese initially seemed like an odd pairing for The Irishman, the filmmaker’s adaptation of Charles Brandt’s 2004 book I Heard You Paint Houses. Any uncertainty about the collaboration was swiftly dispelled by the film’s success both on the big screen and on the streaming service, however, and it’s now regarded as a frontrunner in the upcoming awards season.
Chronicling the life of alleged mafia hitman Frank Sheeran and his possible connection to the disappearance of Teamsters union leader Jimmy Hoffa, The Irishman features impressive performances by leading men Robert De Niro, Al Pacino, and Joe Pesci as they portray a trio of real-life figures whose lives intertwined over the course of several decades. The film follows the characters across a wide range of ages using visual effects, but it doesn’t rely on the usual high-tech helmets, motion-capture suits, or facial markers, opting instead for an innovative new technology.
Leading the visual effects team was Industrial Light & Magic’s Pablo Helman, who previously worked with Scorsese on 2016’s Silence and has twice been nominated for an Academy Award for his work in visual effects. Digital Trends spoke with Helman about the challenges of developing and testing the visual effects technology used in The Irishman, and maybe most importantly, convincing one of the most celebrated directors of all time to trust digital de-aging techniques on some of Hollywood’s most famous faces.
Digital Trends: Going back to the early days of this project, how did you convince Martin Scorsese that this kind of digital de-aging was not only possible, but would work with his style of filmmaking?
Pablo Helman: It all comes with trust. Working on films, you rely on each other for all kinds of things. That’s one part of it. We also talked about technology, and Marty is open to trying new things when they make sense. So I made a strong argument for trying to make the movie that way.
What was the gist of that discussion about the technology?
Well, we talked about different methodologies. There was no way to make this movie by bringing in another actor and replacing his face because of the way the movie was going to be edited, back and forth and back and forth over time. It would be impossible to replace the bodies over and over again. So that option went away.
The second option was to cast a younger actor. Marty wanted to keep the connection between the characters [at] different ages, and you want the audience to keep that connection, too — so it was going to be very difficult having the three main actors, plus three younger actors, plus all the makeup each of them would have to wear, and so on. So, there was really no other way to do it. Once we were thinking we’d need to go with the digital de-aging, we had to show him what that would look like.
How did you settle on a particular type of digital de-aging? There are so many ways studios are handling this type of effect lately.
Marty said, “If we’re going to go with de-aging, Robert De Niro is not going to be somebody wearing all of that technology on his face. He’s not going to be wearing [facial] markers. He’s not going to be wearing a helmet with little cameras. He’s not going to be wearing a gray pajama suit, [and] he’s going to want to be on set. So if you can figure it out, then we can do it.”
I got really excited about that because it was a great opportunity for us to push the technology toward the natural place where we want it to go. If you think about the history of performance capture, it started with tracking the bodies manually. Then we put markers on actors’ bodies and tracked them that way, because you could see so much more, and we eventually trained the computer to track those markers. And then markers migrated to the face, and we got computers to track those markers, too. So the natural progression is for us to push for no markers whatsoever. I jumped at trying to push filmmaking in that direction, but first I had to show it to Marty.
What were the early tests like?
I proposed bringing De Niro in and having him reenact a scene from Goodfellas, because that was a way to lay the groundwork, and it was something Marty knew really well. So [De Niro] comes in and does the scene as a 74-year-old actor, and we were able to change him to look the way he did 30 to 40 years ago. After that, they were ready to trust me.
Which scene from Goodfellas did you have De Niro reenact?
It was the Pink Cadillac scene, after they steal millions of dollars and everybody starts buying stuff. De Niro’s character told everybody not to buy anything with the money, but then one of the characters shows up with a new pink Cadillac, and De Niro says, “What the fuck is wrong with you?”
We chose this scene because he was so over the top in it. Those extreme performances are so difficult to work on, and we wanted to test the behavioral likeness of the de-aging effect — because it’s not just about looking like a 40-year-old man, it’s also the way he behaves. That makes him who he is. That was going to be very difficult to catch without markers, so that test proved we could do this without markers and without interfering with the actor.
So what did you come up with to capture all of those details without markers?
Well, if we don’t have markers, then the only things we do have are the actor, who is a 3-D object in front of the camera, and the lighting that hits the actor. So we set out to capture the lighting and the textures created by that lighting, and make some 3-D geometry out of it. To do that, we had to acquire the most information possible. We had one camera, the director’s camera, to catch the action. But if we had a way to capture the action from different points of view, with different cameras, we could triangulate from each of the cameras to create that 3-D geometry.
So we came up with a three-camera rig. The center camera is still the director camera, and then to the left and right of the center camera were what we call “witness cameras,” which are infrared cameras. The software we created, Flux, takes a look at the information that comes through the three cameras and then triangulates whatever is in front of the camera to create 3-D geometry from it.
So you got three cameras’ worth of shading and depth and information from different perspectives?
Exactly. At the end of the day, the more information you have, the better the chances of creating something out of nothing.
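ILM’s Flux software is proprietary, but the geometric principle Helman describes — recovering a 3-D point by triangulating its observations in multiple calibrated cameras — is standard multi-view geometry. The sketch below is a minimal, hypothetical illustration of that idea using the linear (DLT) method with two views; the camera matrices and point are invented for the example and have nothing to do with the actual rig.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2-D pixel observations of the same point in each view.
    Returns the 3-D point that best satisfies both projections.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3-D point X: x * (P[2] @ X) = P[0] @ X, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest
    # singular value (the approximate null vector of A).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3-D point into both cameras...
X_true = np.array([0.5, 0.2, 4.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]

# ...then recover it from the two 2-D observations alone.
X_est = triangulate(P1, P2, x1, x2)
```

With more than two witness cameras, each extra view simply adds two more rows to the matrix, making the recovered geometry more robust — which is the "more information" Helman refers to.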
You’ve worked on some films with massive, spectacle-driven visual effects, as well as films with a subtle approach to visual effects. The Irishman falls into the latter category. How does your approach to visual effects change when you take on a project like this, where subtlety is so important?
You have to take a look at things you overlook when you’re dealing with bigger atmospheric effects or things [that] are action-driven. With this, the camera is not moving, the framing is forehead to chin, and there is nowhere to hide. Everything in front of you is about performance, and you have to look at the performances of the actors and deconstruct that performance to do what you need to do.
So what ends up happening is, you start to understand what makes a concerned look, or a happy look, and what makes all of these different emotions come through. You start to understand that the chin moves a specific way and the nose or eyebrows all move in specific ways based on the emotion coming through, too. You really need to understand what makes that character look like the iconic actor that you’re going by.
How does the technology catch and interpret all of that information?
So, this piece of software we wrote did away with markers, but it gave us all the pixels that the camera sees in somebody’s face. Now, instead of 200 markers, you have thousands of markers, constantly moving. So the software takes a look at that and captures all kinds of things like, for instance, the way the face moves during dialogue.
A lot of the time [with visual effects used on a person’s face], you lose the weight of the dialogue, because when we put a consonant together with a phoneme, our face vibrates in a specific way. It’s like a rhythm. With markers, you don’t have access to that rhythm, because the software is not sensitive enough. Without markers, though, the computer is more sensitive to those kinds of things, and it captures them for you. Now, the dialogue and the whole performance starts to come together, because you see everything moving together at the same time.
And when two actors are together and their lines are hitting on the other person the right way, and the other person is understanding what they’re saying and reacting to it, and their eyes and faces and bodies are registering all of it, that also affects the rhythm of the performance. If you can capture all of that, then you end up with a fabulous shot.
You’ve mentioned in a few interviews that you want The Irishman to be a referendum on visual effects technology in films. What do you mean by that?
So, what I meant was that technology is there to allow the performance to come through. The director wants to have control, and the actors also want to have control. So it would be great for the technology we use to stay out of the way of where the performance originates. Technology needs to service the story, not the other way around.
No director likes to be told, “You can’t move the camera that way,” or “You can’t move the actors that way.” You shouldn’t have to tell a director not to light the actors in a specific way, or that they have to wear a helmet or a suit for technology reasons. So the idea is that, little by little, as technology gets better and better, we stay away from things that could affect the performance.
At the end of the day, the performances and the stories that are told by the performances are the things that the audience needs to get. That’s when the audience gets connected to characters: when the performances are as true as possible.