Richly rendered ‘Asteroids’ demo showcases power of Nvidia’s RTX graphics

Nvidia has released a new demo highlighting the advanced graphics capabilities of the company’s Turing architecture, found on the latest RTX-series graphics cards like the flagship GeForce RTX 2080 Ti. The public demo, called Asteroids, showcases Turing’s new mesh shading capabilities, which Nvidia claims will improve image quality and performance when rendering large numbers of complex objects in a scene.

With Turing, Nvidia introduced a new programmable geometric shading pipeline, transferring some of the heavy workload from the CPU onto the GPU. The GPU then applies culling techniques, discarding geometry that cannot be seen, so that the objects that remain (in this demo, asteroids) can be rendered with a high level of detail and image quality.
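The basic culling idea, discarding objects that cannot contribute to the image before any shading work is spent on them, can be illustrated with a minimal CPU-side sketch. The function name and the conservative sphere-vs-view-cone test below are illustrative assumptions, not Nvidia's implementation; on Turing this kind of test runs per thread group inside the task shader.

```python
import math

def frustum_cull(center, radius, cam_pos, cam_dir, fov_cos):
    # Conservative test: keep the object if its bounding sphere may
    # intersect the camera's view cone (fov_cos = cosine of half-angle).
    to_obj = tuple(c - p for c, p in zip(center, cam_pos))
    dist = math.sqrt(sum(d * d for d in to_obj)) or 1e-9
    # Cosine of the angle between the view direction and the object
    cos_angle = sum(d * v for d, v in zip(to_obj, cam_dir)) / dist
    # Widen the cone by the angular radius of the bounding sphere
    return cos_angle + radius / dist >= fov_cos

cam = (0.0, 0.0, 0.0)
fwd = (0.0, 0.0, 1.0)
print(frustum_cull((0, 0, 10), 1.0, cam, fwd, 0.7))   # in front -> True
print(frustum_cull((0, 0, -10), 1.0, cam, fwd, 0.7))  # behind -> False
```

An asteroid behind the camera fails the test and is skipped entirely, which is where the performance win comes from: culled objects generate no further pipeline work.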

“Turing introduces a new programmable geometric shading pipeline built on task and mesh shaders,” Nvidia graphics software engineer Manuel Kraemer explained in a detailed blog post explaining the benefits of mesh shading on Turing. “These new shader types bring the advantages of the compute programming model to the graphics pipeline. Instead of processing a vertex or patch in each thread in the middle of fixed function pipeline, the new pipeline uses cooperative thread groups to generate compact meshes (meshlets) on the chip using application-defined rules.”
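The "compact meshes (meshlets)" Kraemer describes are small chunks of a larger mesh, each sized to fit fixed vertex and primitive budgets so a thread group can process one chunk cooperatively. A simple greedy packing can be sketched on the CPU; the function name, dictionary layout, and the 64-vertex/126-primitive defaults below are illustrative assumptions (real engines tune these limits per GPU), not Nvidia's actual meshlet builder.

```python
def build_meshlets(triangles, max_verts=64, max_prims=126):
    # Greedily pack triangles into meshlets, each limited to max_verts
    # unique vertices and max_prims triangles.
    meshlets = []
    verts = {}   # insertion-ordered set of global vertex indices
    prims = []   # triangles re-indexed into the meshlet's local space
    for tri in triangles:
        new_verts = [v for v in tri if v not in verts]
        if len(prims) + 1 > max_prims or len(verts) + len(new_verts) > max_verts:
            # Current meshlet is full: emit it and start a fresh one
            meshlets.append({"vertices": list(verts), "triangles": prims})
            verts, prims = {}, []
            new_verts = list(dict.fromkeys(tri))
        for v in new_verts:
            verts[v] = True
        # Store each triangle using meshlet-local vertex indices
        local = {v: i for i, v in enumerate(verts)}
        prims.append(tuple(local[v] for v in tri))
    if prims:
        meshlets.append({"vertices": list(verts), "triangles": prims})
    return meshlets
```

Because every meshlet carries its own small local index space, a thread group can fetch just that meshlet's vertices once and shade them cooperatively, which is the "compute programming model" advantage the quote refers to.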

In the demo, Nvidia showed that each asteroid contains 10 levels of detail. Objects are segmented into smaller meshlets, which Turing can render in parallel with more geometry while fetching less data overall. The task shader runs first, checking each asteroid’s position in the scene to determine which level of detail, or LoD, to use. The sub-parts, or meshlets, are then tested by the mesh shader, and the remaining triangles are culled by the GPU hardware. Before Turing, the GPU had to process every triangle individually, which created a bottleneck on both the CPU and the GPU.
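The LoD decision the task shader makes can be sketched as a simple mapping from an object's apparent (angular) size to one of its 10 mesh levels. The function name, the 0.5 starting threshold, and the halving factor below are illustrative assumptions, not the demo's actual heuristic.

```python
def select_lod(distance, radius, num_lods=10, base=2.0):
    # Pick a coarser level of detail as the asteroid's apparent
    # size shrinks with distance from the camera.
    apparent = radius / max(distance, 1e-6)
    lod = 0
    threshold = 0.5  # apparent size at which LoD 0 (finest) is used
    while lod < num_lods - 1 and apparent < threshold:
        threshold /= base  # each level covers half the apparent size
        lod += 1
    return lod

print(select_lod(2.0, 1.0))    # close-up asteroid -> finest level (0)
print(select_lod(500.0, 1.0))  # distant asteroid -> coarse level
```

A nearby asteroid fills much of the screen and gets the densest mesh, while a distant speck gets a mesh with orders of magnitude fewer triangles, which is exactly the reduction the demo's on-screen triangle counters visualize.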

“By combining together efficient GPU culling and LOD techniques, we decrease the number of triangles drawn by several orders of magnitude, retaining only those necessary to maintain a very high level of image fidelity,” Kraemer wrote. “The real-time drawn triangle counters can be seen in the lower corner of the screen. Mesh shaders make it possible to implement extremely efficient solutions that can be targeted specifically to the content being rendered.”

In addition to using this technique to create rich scenes in a game, Nvidia said that the process could also be used in scientific computing.

“This approach greatly improves the programmability of the geometry processing pipeline, enabling the implementation of advanced culling techniques, level-of-detail, or even completely procedural topology generation,” Nvidia said.

Developers can download the Asteroids demo through Nvidia’s developer portal, and the company also posted a video showing how mesh shaders can improve rendering.

Chuong Nguyen
Silicon Valley-based technology reporter and Giants baseball fan who splits his time between Northern California and Southern…