Ever stopped at a beautiful scene in a video game and wondered how it works underneath? Adrian Courrèges, a software engineer living in Tokyo, took the time to reverse-engineer the 2011 game Deus Ex: Human Revolution to find out exactly what happens to each and every frame on its way from code to a video game world.
The first step is to build the physical objects of the room, creating a normal map and a depth map. The normal map records which way every visible surface is facing – the engine draws the larger objects first, then smaller and smaller ones until everything is represented.
The depth map is the room as seen from the player's point of view, with closer objects appearing darker and getting lighter the farther away they are.
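As a rough sketch of what that geometry pass might look like (not the game's actual code – all of the names and structures here are made up for illustration), a renderer can record a surface normal and a normalized depth value for every pixel:

```python
# Hypothetical geometry ("G-buffer") pass: for every pixel, keep the surface
# orientation and the distance from the camera so later passes can reuse them.
width, height = 4, 4
gbuffer = {
    "normal": [[(0.0, 0.0, 1.0)] * width for _ in range(height)],
    "depth":  [[1.0] * width for _ in range(height)],
}

def write_gbuffer_pixel(gbuffer, x, y, surface_normal, view_depth, far_plane):
    # Normals are unit vectors (nx, ny, nz); depth is normalized so 0.0 is
    # right at the camera (drawn dark) and 1.0 is at the far plane (drawn light).
    gbuffer["normal"][y][x] = surface_normal
    gbuffer["depth"][y][x] = min(view_depth / far_plane, 1.0)

write_gbuffer_pixel(gbuffer, 1, 2, (0.0, 1.0, 0.0), view_depth=5.0, far_plane=100.0)
```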
Then the rendering engine creates a map of the shadows in the room, one for each major light. To keep the rendering time low for this step, only the largest objects cast shadows, and only the most important lights get a shadow map. These shadow maps are combined with the depth map, and each area of the screen is assigned a value based on how well lit it is.
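Here's a simplified sketch of the classic shadow-map test this combination boils down to (the function and its parameters are invented for the example, not taken from the game):

```python
def shadow_factor(shadow_map, shadow_x, shadow_y, pixel_depth_from_light, bias=0.005):
    # The shadow map stores how far the light could "see" in each direction.
    # If our pixel is farther from the light than that, something is blocking
    # the light, so the pixel is in shadow (0.0); otherwise it is lit (1.0).
    occluder_depth = shadow_map[shadow_y][shadow_x]
    return 0.0 if pixel_depth_from_light - bias > occluder_depth else 1.0

# A pixel at depth 0.8 behind an occluder recorded at depth 0.5 is in shadow.
shadow_map = [[0.5, 1.0], [1.0, 1.0]]
print(shadow_factor(shadow_map, 0, 0, 0.8))  # -> 0.0
```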
Using ambient occlusion, the rendering engine gives definition to the edges and creases of objects in the room. This adds soft shadowing to the surfaces the major light sources don't account for, and it will be used later to refine the final shading.
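A heavily simplified version of the screen-space ambient occlusion idea looks something like this (assuming a plain 2D depth map and a crude "count the closer neighbors" heuristic, much cruder than the game's real technique):

```python
def ambient_occlusion(depth_map, x, y, radius=2, threshold=0.02):
    # Look at nearby pixels in the depth map and count how many sit in front
    # of this one. The more crowded the neighborhood, the more occluded (and
    # therefore darker) this pixel's corner or crease is.
    height, width = len(depth_map), len(depth_map[0])
    center = depth_map[y][x]
    occluded = samples = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nx, ny = x + dx, y + dy
            if (dx or dy) and 0 <= nx < width and 0 <= ny < height:
                samples += 1
                if center - depth_map[ny][nx] > threshold:
                    occluded += 1
    # 1.0 = fully open, lower values = tucked into a corner.
    return 1.0 if samples == 0 else 1.0 - occluded / samples
```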
Any smaller point lights in the scene are computed next. Not much is done with those numbers yet, but their brightness and color contributions are recorded for every pixel.
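Conceptually, that accumulation could look like the sketch below – each small light's contribution fades with distance and gets added to a per-pixel running total (the light format and falloff are assumptions for illustration):

```python
def accumulate_point_lights(pixel_position, lights):
    # Sum the color each nearby point light contributes to this pixel, with a
    # simple inverse-square falloff. The total is stored for the shading pass.
    total = [0.0, 0.0, 0.0]
    for light in lights:
        dx = light["position"][0] - pixel_position[0]
        dy = light["position"][1] - pixel_position[1]
        dz = light["position"][2] - pixel_position[2]
        attenuation = light["intensity"] / (1.0 + dx * dx + dy * dy + dz * dz)
        for i in range(3):
            total[i] += light["color"][i] * attenuation
    return tuple(total)

lights = [{"position": (0, 2, 0), "color": (1.0, 0.6, 0.3), "intensity": 4.0}]
print(accumulate_point_lights((0, 0, 0), lights))
```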
Now it's time to actually start creating the image the player sees. The final color of every pixel is determined from all of the light and shadow values of the previous steps, the material and texture of the object itself, and sometimes a small environment texture of the room to improve the quality of reflections.
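Putting the pieces together for one pixel might look roughly like this toy combination of the earlier values (the exact formula the game uses is more involved):

```python
def shade_pixel(albedo, direct_light, point_light, ambient, occlusion, shadow,
                reflection, reflectivity):
    # The material color (albedo) is lit by the shadowed main light plus the
    # accumulated point lights; the ambient term is dimmed by the occlusion
    # factor; shiny surfaces mix in a bit of the environment reflection.
    color = []
    for i in range(3):
        lit = albedo[i] * (direct_light[i] * shadow
                           + point_light[i]
                           + ambient[i] * occlusion)
        color.append(lit * (1.0 - reflectivity) + reflection[i] * reflectivity)
    return tuple(color)
```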
The next step is to render any flat or transparent objects, as well as volumetric lights like the halo given off by smaller light sources. These are the small touches that flesh out the video game world and give it added immersion.
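Transparent surfaces are typically layered on top with standard "over" blending – a sketch of the idea, not the engine's actual blend setup:

```python
def blend_over(source_color, source_alpha, destination_color):
    # A transparent surface (glass, a light halo) contributes source_alpha of
    # its own color and lets the already-rendered scene show through the rest.
    return tuple(source_color[i] * source_alpha
                 + destination_color[i] * (1.0 - source_alpha)
                 for i in range(3))

# A half-transparent white halo over a dark background brightens it halfway.
print(blend_over((1.0, 1.0, 1.0), 0.5, (0.1, 0.1, 0.2)))
```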
Just like in real life, lights glow, so the rendering engine creates a bloom effect on any light sources strong enough to warrant it.
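Bloom usually starts with a "bright pass" that keeps only the pixels above some brightness threshold; that image is then blurred and added back on top of the frame. A minimal sketch of the bright pass (the threshold and luminance weights are common conventions, not values from the game):

```python
def bright_pass(color, threshold=1.0):
    # Keep only pixels brighter than the threshold; everything else goes black.
    # Blurring this image and adding it back produces the glow around lights.
    luminance = 0.2126 * color[0] + 0.7152 * color[1] + 0.0722 * color[2]
    return color if luminance > threshold else (0.0, 0.0, 0.0)
```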
Next, anti-aliasing is used to smooth the jagged edges of objects. It takes a bit of graphics processing power to do this quickly, but the effect is worth it if you can afford the overhead.
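Post-process anti-aliasing boils down to finding sharp brightness jumps between neighboring pixels and blending across them. Here's a deliberately crude one-row sketch of that idea (the real techniques are much smarter about which direction to blend):

```python
def smooth_row(row, edge_threshold=0.1):
    # Where brightness jumps sharply between a pixel and its right-hand
    # neighbor, average the two colors so the stair-step edge softens.
    def luma(c):
        return 0.2126 * c[0] + 0.7152 * c[1] + 0.0722 * c[2]
    out = list(row)
    for x in range(len(row) - 1):
        if abs(luma(row[x]) - luma(row[x + 1])) > edge_threshold:
            out[x] = tuple((row[x][i] + row[x + 1][i]) * 0.5 for i in range(3))
    return out
```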
Now any gamma and color correction is done. That's mostly at the discretion of the game designers: if they want the shadows of the game to be darker so you can't see well, or want to brighten things up for a happier scene, this is when that happens.
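A bare-bones version of that last color pass might apply a brightness multiplier, an artistic tint, and a gamma curve, something like this sketch (the values are placeholders, not the game's settings):

```python
def correct_pixel(color, gamma=2.2, brightness=1.0, tint=(1.0, 1.0, 1.0)):
    # Brightness and tint are simple artistic grading; the gamma curve maps
    # the renderer's linear light values to what the display expects.
    corrected = []
    for i in range(3):
        value = min(max(color[i] * brightness * tint[i], 0.0), 1.0)
        corrected.append(value ** (1.0 / gamma))
    return tuple(corrected)

print(correct_pixel((0.2, 0.2, 0.25), brightness=0.8, tint=(1.0, 0.95, 0.9)))
```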
Finally, the user interface is drawn on the screen. That doesn’t take very long at all, since it’s usually a static, or at least not very dynamic, image.
Courrèges also includes a nice timeline showing roughly how long each piece of the process takes. There's a lot more technical detail in his full write-up, and it's worth a read if you're interested in exactly what happens at each step.