Capcom’s Resident Evil series has often masterfully created tension and scares from the sounds you hear while tiptoeing around ominous environments. While the zombies and monstrosities that walk amongst Resident Evil’s protagonists are usually visually unsettling, the audio is what takes the terror to the next level. Both the PS1 original and the 2019 remake of Resident Evil 2 are prime examples of Capcom’s sound design prowess in horror games.
I spoke with Resident Evil 2 audio director Kentaro Nakashima about the process of creating sounds for the remake. Nakashima’s insightful responses are sure to please longtime Resident Evil fans, as well as those curious about what goes into designing audio for video games.
Steven Petite, Contributor, Digital Trends
The sound design for the original Resident Evil 2, in my estimation, was ahead of its time. From the different reload sounds to footsteps to the creepy noises in the darkness, a sizable portion of the atmosphere and terror came from what you heard. Can you talk a little bit about what it’s like reworking the sound design for such an iconic game, 20 years later?
Kentaro Nakashima, Audio Director, Capcom
It was a challenge that I gladly accepted. For the reboot of Resident Evil 2, we approached the sound direction from a number of different angles in a way that would “betray” the sound of the original, but in a good way. Sound is very important when it comes to fear, and with modern technology we were able to produce sounds that weren’t possible at the time of the original. This challenge greatly motivated the entire sound team and influenced every aspect of the design, helping us to uncompromisingly produce sounds of horror that I believe no one’s heard before.
Have any of the iconic sounds from the original carried over into the remake?
There were no sounds that we took straight from the original, partly out of a desire to differentiate ourselves. However, there will be downloadable content featuring some of the original music and sound effects. With that DLC, we’re hoping fans will enjoy the nostalgia of playing through the game with the original music and sounds in place.
Given the advances in technology since the original, what kinds of sounds is the team excited to implement that weren’t possible in 1998?
There are three technologies at work.
Real-time Binaural System (The first-ever stereophonic sound technology of its kind)
We presented a paper on this technology at an Audio Engineering Society (AES) Conference.
Generally speaking, stereophonic sound in games had previously been implemented using a plugin that would modify regular sound with an effect to make it stereophonic. However, the sound it produced would be of lower quality and would sound more distant. The real-time binaural system we use fixes this problem. It’s the first-ever technology of its kind, and we’re excited for players to experience sound with a great deal more presence.
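The core idea behind binaural rendering can be sketched simply: a mono source is convolved with a separate head-related impulse response (HRIR) per ear, and the resulting timing and level differences place the sound in space around the listener's head. This is a toy illustration of that principle, not Capcom's actual system; the HRIRs below are invented values, not measured data.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono source for two ears by convolving it with a
    head-related impulse response (HRIR) per ear. Interaural timing
    and level differences are what place the sound around the head."""
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)

# Toy HRIRs (not measured data): the right ear hears the source two
# samples later and quieter, as if it sits to the listener's left.
hrir_left = np.array([1.0, 0.2])
hrir_right = np.array([0.0, 0.0, 0.6, 0.1])

mono = np.array([1.0, 0.0, 0.0, 0.0])  # a single click
left, right = binaural_render(mono, hrir_left, hrir_right)
```

A real-time system performs this convolution per audio buffer, with measured HRIRs selected by the source's direction relative to the head, but the principle is the same.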
Impulse Response Creation
Reverb is an important aural effect that can be used to express not only the size of a room but also its texture and condition, and high-quality reverb necessitates the use of impulse responses (IRs). The normal way to implement impulse responses is to choose an approximation of what you want from a range of presets, and then adjust the sound as needed. For the remake, however, we decided to actually record the reverb we needed for every room and hallway in every stage, thus creating our own IRs. Doing so enabled us to modify and touch up the reverb in subtle ways that further heighten the player’s immersion.
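The IR workflow described here boils down to convolution reverb: convolving a dry signal with a room's recorded impulse response reproduces that room's reverb tail. A minimal sketch of the technique, using a synthetic decaying IR in place of a real recording:

```python
import numpy as np

def convolution_reverb(dry, impulse_response, wet_mix=0.5):
    """Apply a room's impulse response to a dry signal by convolution,
    reproducing that room's reverb tail. `wet_mix` blends the original
    (dry) signal with the reverberated (wet) one."""
    wet = np.convolve(dry, impulse_response)
    out = wet_mix * wet
    out[:len(dry)] += (1.0 - wet_mix) * np.asarray(dry, dtype=float)
    return out

# Synthetic IR standing in for a recording made in an actual room:
# a direct impulse followed by an exponentially decaying tail.
ir = 0.8 ** np.arange(8)
dry = np.zeros(16)
dry[0] = 1.0  # a single click
wet_signal = convolution_reverb(dry, ir)
```

With a recorded IR in place of the synthetic one, the output takes on the reverberant character of the room where the IR was captured.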
Dolby Atmos Support
We also gave the utmost consideration when it came to support for immersive surround sound. One result from that was support for Dolby Atmos, which we implemented while we were mixing the music. Our goal was to fully accentuate the music in a way that would really envelop the player. Household AV amps have begun to support Dolby Atmos, as have more and more movies, but there are still relatively few games that use the technology. I really hope our support for the latest audio technologies translates into players having all-new experiences.
Can you talk about the process of creating new sounds for the game? We’re interested in hearing about the equipment and technology used.
Recording and Creating Sounds
In this process, we use voice recordings, binaural recordings, Foley recordings, environmental sound recordings, IR recordings, prop recordings, and instrument recordings. For those who are curious, the microphones we used were from Schoeps, Shure, and Sennheiser.
Processing and Adjustments
From there, we move into processing, adjustments, and synth sound creation using digital audio workstations (DAWs). The tools we use at this stage are REAPER, Nuendo, and Pro Tools.
Middleware Implementation: Audiokinetic Wwise
Here is where we add sounds, set up transitions, set up ducking, configure audio bus settings, and so on. This is also when we set the values that change according to the in-game situation. For example, this might encompass sounds that change based on the player’s health gauge, or switching music tracks based on cues received from the game.
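The health-gauge example maps a game-side value onto an audio property, which middleware such as Wwise expresses as an RTPC (real-time parameter control) curve. A simplified piecewise-linear version of that idea; the low-pass-cutoff curve below is illustrative, not taken from the game:

```python
def rtpc_map(value, points):
    """Piecewise-linear mapping from a game-side value to an audio
    property, in the spirit of an RTPC curve in audio middleware.
    `points` is a sorted list of (game_value, audio_value) pairs."""
    x0, y0 = points[0]
    if value <= x0:
        return y0  # clamp below the first point
    for x1, y1 in points[1:]:
        if value <= x1:
            t = (value - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
        x0, y0 = x1, y1
    return y0  # clamp past the last point

# Illustrative curve (not from the game): as health falls from 100
# to 0, a low-pass cutoff drops from 20 kHz (clear) to 800 Hz (muffled).
curve = [(0, 800.0), (100, 20000.0)]
cutoff_full = rtpc_map(100, curve)  # healthy: full bandwidth
cutoff_hurt = rtpc_map(25, curve)   # badly hurt: muffled mix
```

The audio engine re-evaluates such curves every frame, so the mix tracks the player's state continuously rather than snapping between discrete presets.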
Game Engine Implementation: RE Engine
Sounds are implemented against the game’s visuals. Because the sound is affected by everything shown in-game, a variety of techniques are used here depending on the situation. Animations are shown at particularly major progression points in the game, and we had to put sound to them. When working with animation, the standard technique is to play the appropriate sounds based on timing markers in the animation, and I believe it’s still used for almost all animations. One issue with this technique, though, is that the cost of changing or fixing anything is extremely high. So we made improvements by building a tool that automatically plays the audio based on values from the animation’s transition data.
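A tool that derives audio timing from the animation data itself, rather than from hand-placed markers, might look conceptually like the sketch below. The per-frame data format and the footstep rule are invented for illustration; they are not Capcom's actual pipeline.

```python
def events_from_animation(foot_heights, frame_rate=30.0):
    """Derive audio events directly from animation data instead of
    hand-placed markers: fire a footstep whenever the foot's height
    crosses down to the floor. If the animation is re-timed, the
    events move with it automatically -- no markers to re-place."""
    events = []
    prev = None
    for i, h in enumerate(foot_heights):
        if prev is not None and prev > 0.0 and h <= 0.0:
            events.append(("footstep", i / frame_rate))  # (sound, seconds)
        prev = h
    return events

# Foot height per frame: the foot lifts, falls, and plants twice.
frames = [0.0, 0.1, 0.2, 0.1, 0.0, 0.0, 0.15, 0.05, 0.0]
events = events_from_animation(frames)
```

The cost saving comes from that last property: when animators polish a motion, the derived events stay correct without any manual fixes.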
Most gamers aren’t familiar with the design process. Could you enlighten us on the workflow for creating sounds for particular moments, like cutscenes and boss fights? Do you see the game in action and then figure out where and when to add sound effects?
The Cutscene Production Process is as follows:
- Attending Motion Capture Voice Recordings
- Recording ADR Voices
- Finalizing Cutscene Animations
- Writing Music and Creating Sound Effects
- Middleware Implementation
The above steps are the general process for implementing the audio, but there is also more to it, such as when changes have to be made after completion. With changes, it depends a little on exactly what is entailed, but we use whatever methods are appropriate.
From a technical perspective, we use the latest technologies to help curtail the costs of any necessary changes. For example, in cutscenes, the sound is divided into music, sound effects, and voiceovers, and then implemented as either 5.1 channel or 7.1 channel surround sound. Said another way, the sounds for cutscenes are arranged together into one of three large groups. The issue with this is that even something like a camera angle changing in the animation will incur a cost to fix the sound accordingly.
To mitigate this, we used middleware to implement the sounds. Just like the in-game scenes, the positions at which sounds should be played are retrieved from the game itself, enabling us to make updates based on changes to the camera or dialogue at low costs. That, in turn, gave us the freedom to be much more creative with the audio.
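Retrieving a sound's position from the game and deriving its pan and attenuation relative to the camera could be sketched as follows. This is a deliberately minimal 2D stand-in for what the middleware computes, with made-up names and math:

```python
import math

def position_audio(source, camera_pos, camera_forward):
    """Derive gain and left/right pan for a sound from in-game 2D
    positions (x, z), so the mix follows camera changes automatically.
    `camera_forward` must be a unit vector."""
    dx = source[0] - camera_pos[0]
    dz = source[1] - camera_pos[1]
    dist = math.hypot(dx, dz)
    gain = 1.0 / (1.0 + dist)  # simple distance attenuation
    fx, fz = camera_forward
    cross = fx * dz - fz * dx  # > 0 when the source is to the left
    dot = fx * dx + fz * dz
    angle = math.atan2(cross, dot)
    pan = max(-1.0, min(1.0, angle / (math.pi / 2)))  # -1 right .. +1 left
    return gain, pan

# Source dead ahead of the camera, one unit away; then hard left.
gain_front, pan_front = position_audio((0.0, 1.0), (0.0, 0.0), (0.0, 1.0))
gain_left, pan_left = position_audio((-1.0, 0.0), (0.0, 0.0), (0.0, 1.0))
```

Because gain and pan are recomputed from live positions every frame, a changed camera angle in a cutscene updates the mix for free instead of requiring a re-authored surround stem.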
The Boss Fight Production Process:
- Creating Boss Specifications: During this process, we make suggestions from an audio angle, which the planning section uses to create the specifications.
- Preparing Tentative Audio: We start creating the sound effects and music based on the spec document and the boss’s design.
- Middleware Implementation: We add sound to the animations.
- Fixes and Adjustments: We continually fix and adjust the audio as needed according to polishing done to the animations.
The number of animations depends on the boss, but in general there are about 300-500 motions, and thanks to technological advances, we are able to make updates to these at a low cost.
Can you discuss the challenge of developing audio frights with the over-the-shoulder camera angle versus the fixed camera of early Resident Evil games? Oftentimes, the original Resident Evil 2’s frights were set up by the camera perspective.
We did encounter some trouble in having to rethink how to make the audio work, now that the camera isn’t fixed in place. An over-the-shoulder (OTS) camera stays much closer to the player than a fixed camera, so we developed audio that more closely matches the mindset and feelings an OTS camera evokes.
Our use of the real-time binaural system was also specifically because of the OTS camera. By adapting our approach to the new camera angle, we were able to better utilize sound to instill fear into the player.
Going along with the change in perspective, jump scares, a prominent feature in early Resident Evil games, have largely been replaced by atmospheric horror. Would you say that the remake has more of a modern touch in this regard?
Yes. The sharp, modern visuals of this reimagining exude their own unique atmosphere. In my opinion, both the game’s visuals and the audio need to share that same atmosphere in all points. By doing so, we’re able to use sound to create an even greater sense of immersion.
Resident Evil 7 biohazard took the series back to its roots in terms of horror, but it was in first person. Can you discuss how different it is creating sounds for a third-person game like Resident Evil 2? Do you think it’s harder to create spooky noises when using a third-person perspective?
A larger field of vision brings with it a greater sense of security, which in turn makes it harder to manufacture fear than with a first-person point of view. Unseen terror opens up some great audio opportunities, but with a third-person perspective’s field of view being so wide, sounds for anything unseen end up being far away. The player wanders through a lot of buildings in Resident Evil 2, so in addition to impactful frightening sounds, such as sudden loud noises that break the silence, we recorded and used a great deal of environmental sound. Together with the high-quality visuals, we were able to produce some great sounds that really make you afraid of being alone in a room.
Our readers would definitely like to hear about the process of creating zombie sounds. The sounds that zombies make have changed throughout the series, and we’d love to hear about that evolution.
The zombie sounds in the reimagining of Resident Evil 2 have been tuned for both dripping viscosity and looming peril.
Dripping Viscosity: We recorded all zombie sounds in a Foley studio in order to really emphasize visceral sounds such as blood and flesh. We used things like real meat, vegetables, and slime as materials, then processed the recordings to create sounds.
Looming Peril: To accomplish this, we used the stereophonic real-time binaural system. It was vital in bringing the voices of looming zombies to life in frantic situations. We use environmental sounds to create a quiet calmness, and then interrupt it with the binaural sounds of a nearby zombie suddenly rushing the player.