Nvidia’s Turing chip reinvents computer graphics (but not for gaming)

Nvidia’s latest graphics chip design, called “Turing,” was rumored to be the foundation of the company’s next family of GeForce cards for gamers. Nope. When the company showcased the chips during the SIGGRAPH 2018 conference in Vancouver, British Columbia, this week, it highlighted their application in Quadro RTX-branded cards for professionals: the Quadro RTX 8000, the RTX 6000, and the RTX 5000.

The new GPU architecture — Nvidia says it “reinvents computer graphics” — introduces “RT Cores” designed to accelerate ray tracing, a technique in graphics rendering that traces the path of light in a scene so that objects are shaded correctly, light reflects naturally, and shadows fall in their correct locations. Typically this job requires huge amounts of computational power for each frame, taking lots of time to render a photorealistic scene. But Nvidia promises real-time ray tracing, meaning there’s no wait for the cores to render the lighting of each frame.
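The idea the RT Cores accelerate can be shown in a few lines. A minimal, purely illustrative sketch in Python (not how the hardware actually works): cast a ray, find where it hits a sphere, and shade the hit point by its angle to a light source.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def shade(origin, direction, center, radius, light_dir):
    """Lambertian shading: brightness depends on the angle between
    the surface normal at the hit point and the light direction."""
    t = ray_sphere_hit(origin, direction, center, radius)
    if t is None:
        return 0.0  # background
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((h - c) / radius for h, c in zip(hit, center))
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# One primary ray fired straight down the -z axis at a unit sphere.
brightness = shade((0, 0, 0), (0, 0, -1), (0, 0, -3), 1.0, (0, 0, 1))
```

A real renderer repeats this for millions of rays per frame, bouncing each one through reflections and shadows, which is why real-time performance has historically been out of reach.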

For PC gaming, that’s a dramatic leap in visual fidelity. Games today instead rely on rasterization, a technique that converts the 3D scene into the 2D image sent to the connected monitor. To approximate the lighting of the 3D environment, the program uses “shaders” to handle the different levels of light, darkness, and color.
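Rasterization’s central step, projecting 3D vertices down to 2D screen positions, can be sketched as a simplified pinhole projection; this is a textbook illustration, not Nvidia’s actual graphics pipeline.

```python
def project(vertex, focal_length=1.0, width=1920, height=1080):
    """Perspective-project a 3D point (camera looks down -z) onto a
    width x height pixel grid. Returns integer pixel coordinates."""
    x, y, z = vertex
    if z >= 0:
        raise ValueError("vertex must be in front of the camera (z < 0)")
    # Perspective divide: distant points shrink toward the center.
    ndc_x = focal_length * x / -z
    ndc_y = focal_length * y / -z
    # Map normalized coordinates [-1, 1] to pixel coordinates.
    px = int((ndc_x + 1) * 0.5 * width)
    py = int((1 - ndc_y) * 0.5 * height)
    return px, py

# A vertex dead ahead of the camera lands in the middle of the screen.
print(project((0.0, 0.0, -5.0)))  # (960, 540)
```

Everything after this step works in 2D, which is what makes rasterization fast, and why it needs shader tricks to fake the lighting effects ray tracing computes directly.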

“The Turing architecture dramatically improves raster performance over the previous Pascal generation with an enhanced graphics pipeline and new programmable shading technologies,” the company says. “These technologies include variable-rate shading, texture-space shading, and multi-view rendering, which provide for more fluid interactivity with large models and scenes and improved VR experiences.”

According to Nvidia, Turing is its biggest leap since the introduction of CUDA. Not familiar with CUDA? Discrete GPUs once merely accelerated games for better visual fidelity. In 2006, Nvidia introduced the CUDA platform, which lets its chips handle general computing as well. In essence, this lets a graphics chip work in parallel with a PC’s main processor to handle larger loads at a faster pace. Nvidia positions Turing as another such transition point in computing.

In addition to RT Cores for ray tracing, Turing also relies on Tensor Cores to accelerate artificial intelligence. (Nvidia, which apparently showers in money, handed out $3,000 Titan V graphics cards for free to A.I. researchers in June.) Tensor Cores will accelerate video re-timing, resolution scaling, and more for creating “applications with powerful new capabilities.” Turing also includes a new streaming multiprocessor architecture capable of executing 16 trillion floating point operations and 16 trillion integer operations per second in parallel.

The new Quadro RTX 8000 packs 4,608 CUDA cores and 576 Tensor cores, and is capable of 10 GigaRays per second, a measure of how many billions of rays the GPU can trace through a scene each second. The card includes 48GB of onboard memory and can access 96GB by pairing two cards over NVLink.
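To put 10 GigaRays per second in context, here’s a rough back-of-the-envelope calculation (the resolution and frame rate are my assumptions, not Nvidia’s figures): at 1080p and 60 frames per second, that budget works out to roughly 80 rays per pixel per frame.

```python
rays_per_second = 10e9               # 10 GigaRays/s, as quoted for the RTX 8000
width, height, fps = 1920, 1080, 60  # assumed: 1080p at 60 frames per second

pixels_per_second = width * height * fps
rays_per_pixel_per_frame = rays_per_second / pixels_per_second
print(round(rays_per_pixel_per_frame, 1))  # 80.4
```

Whether that is enough depends on the scene: each reflection, refraction, or soft shadow consumes additional rays per pixel, which is why hybrid rendering that mixes rasterization with selective ray tracing remains attractive.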

Meanwhile, the RTX 6000 is similar save for the memory: 24GB onboard and 48GB through NVLink. The RTX 5000 consists of 3,072 CUDA cores, 384 Tensor cores, and 16GB of onboard memory (32GB via NVLink). It’s capable of six GigaRays per second.

Companies already on the Quadro RTX bandwagon include Adobe, Autodesk, Dell, Epic Games, HP, Lenovo, Pixar and more.

For gamers, Nvidia’s next big Turing-based reveal is expected to be the GeForce RTX 2080 — not the previously rumored GTX 1180 — during its pre-show Gamescom press event on August 20. Clever.

Kevin Parrish
Former Digital Trends Contributor