
New advance in 3D graphics may make the next Avengers movie look even more realistic

Are you having trouble distinguishing real-life objects from digital creations in the latest movies and TV shows? Well, it's only going to get harder from here on out, thanks to a new method developed by a group of researchers.

In short, they’ve figured out how to improve the way graphics software can render light as it interacts with extremely small details on the surface of materials. That means you could finally see the metallic glitter in Captain America’s shield, the tiny sparkles in the Batmobile’s paint job, and super-realistic water animations.


As a brief explainer, the reflections of light bouncing off a material’s tiny surface details are called “glints,” and until now, graphics software could only render glints in still images. But according to Professor Ravi Ramamoorthi at the University of California San Diego, his team of researchers has improved the rendering method, enabling software to create those glints 100 times faster than the current approach. That speedup makes glints practical in actual animations, not just in stills.

Ramamoorthi and his colleagues plan to reveal their rendering method at SIGGRAPH 2016 in Anaheim, California, later this month. He indicated that today’s super-high display resolutions were what drove him and his team to create a new rendering technique. Right now, graphics software assumes that a material’s surface is smooth, and artists fake metal finishes and metallic car paints by using flat textures. The result can be grainy and noisy.

“There is currently no algorithm that can efficiently render the rough appearance of real specular surfaces,” Ramamoorthi said in a press release. “This is highly unusual in modern computer graphics, where almost any other scene can be rendered given enough computing power.”

[Image: Ocean]

The new rendering solution reportedly doesn’t require a massive amount of computational power. The method takes an uneven, detailed surface and breaks each of its pixels down into patches covered in thousands of microfacets, which are light-reflecting points smaller than pixels. For each microfacet, the software computes a vector perpendicular to the material’s surface at that point, known as the point’s “normal.” This normal determines how light actually reflects off the material.
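To make the idea concrete, here is a minimal sketch of how per-microfacet normals can be derived from a height field of surface bumps. This is an illustration only, not the paper's actual data structures; the function name `microfacet_normals` and the height-field representation are assumptions for the example.

```python
import numpy as np

def microfacet_normals(heights, cell_size=1.0):
    """Estimate a unit normal at every point of a height field.

    `heights` is a 2D array of surface heights (one entry per
    microfacet-sized sample). The normal at each point is the vector
    perpendicular to the local surface slope.
    Illustrative sketch only, not the paper's method.
    """
    # Local slopes of the surface in x and y.
    dz_dx, dz_dy = np.gradient(heights, cell_size)
    # A surface z = h(x, y) has (unnormalized) normal (-dh/dx, -dh/dy, 1).
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(heights)])
    # Normalize each normal to unit length.
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals
```

A perfectly flat surface yields normals pointing straight up, while a bumpy one yields the scattered normals that produce glints.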

According to Ramamoorthi, a microfacet will reflect light back to the virtual camera only if its normal resides “exactly halfway” between the ray projected from the light source and the ray that bounces off the material’s surface. The distribution of the collective normals within each patch of microfacets is calculated, and then used to figure out which of the normals actually are in the halfway position.
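The “exactly halfway” condition can be sketched as a simple half-vector test: the microfacet contributes only when its normal aligns with the normalized sum of the light and view directions. The function name and tolerance below are assumptions for illustration, not from the paper.

```python
import numpy as np

def reflects_to_camera(normal, light_dir, view_dir, tol=1e-3):
    """Return True if a microfacet sends light toward the camera.

    `light_dir` and `view_dir` are unit vectors pointing away from the
    surface toward the light and camera. The facet reflects to the
    camera only when its normal lies (almost) exactly along the half
    vector between them. Illustrative sketch only.
    """
    h = light_dir + view_dir          # unnormalized half vector
    h /= np.linalg.norm(h)            # normalize to unit length
    # dot(normal, h) == 1 means perfect alignment; allow a tiny tolerance.
    return float(np.dot(normal, h)) > 1.0 - tol
```

In practice a small tolerance stands in for the finite size of a pixel and the light source.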

Ultimately, what makes this method faster than the current rendering algorithm is that it works with this distribution instead of calculating how light interacts with each individual microfacet. Ramamoorthi said it can approximate the normal distribution at each surface location and then compute the amount of net reflected light easily and quickly. In other words, expect to see highly realistic metallic, wooden, and liquid surfaces in more movies and TV shows in the near future.
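The speedup described above can be sketched as follows: fit a simple distribution (here a 2D Gaussian over normal slopes) to a patch's normals once, then evaluate it at the half vector, instead of testing every microfacet individually. The function name `patch_reflectance` and the Gaussian model are assumptions for this example, not the paper's exact position-normal distribution.

```python
import numpy as np

def patch_reflectance(patch_normals, half_vec):
    """Approximate a patch's reflected energy toward the camera.

    `patch_normals` is an (N, 3) array of unit normals in one patch;
    `half_vec` is the unit half vector between light and view rays.
    We fit one Gaussian to the distribution of normals (projected to
    2D slopes) and evaluate its density at the half vector, rather
    than looping over every microfacet. Illustrative sketch only.
    """
    xy = patch_normals[:, :2]                    # project normals to 2D
    mean = xy.mean(axis=0)
    cov = np.cov(xy, rowvar=False) + 1e-8 * np.eye(2)  # regularize
    d = half_vec[:2] - mean
    inv = np.linalg.inv(cov)
    norm_const = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    # Gaussian density at the half vector: high when many normals
    # point the "right" way, near zero otherwise.
    return norm_const * np.exp(-0.5 * d @ inv @ d)
```

A patch whose normals cluster around the half vector returns a large value (a bright glint); an off-axis half vector returns almost nothing, and the cost no longer scales with the number of microfacets.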

To check out the full paper, Position-Normal Distributions for Efficient Rendering of Specular Microstructure, grab the PDF file here. The other researchers on Ramamoorthi’s team are Ling-Qi Yan of the University of California, Berkeley; Milos Hasan of Autodesk; and Steve Marschner of Cornell University. The images accompanying the report were supplied to the press by the Jacobs School of Engineering at the University of California San Diego.

Is that water render just awesome, or what?

Kevin Parrish
Former Digital Trends Contributor