Nvidia’s Titan RTX is the first full Turing GPU with 24GB of memory

It turns out Nvidia's RTX 2080 Ti wasn't a new breed of consumer-facing Titan; it was a Ti. And now, just weeks after the original lineup was unveiled, the real Titan RTX is here, and it's a monster. Although aimed more at enterprises than gamers, the Titan RTX is effectively a bigger, badder 2080 Ti, with the same (but fully unlocked) Turing GPU at its core. It comes with a full complement of CUDA cores, more Tensor cores, a higher boost clock, and more than double the memory of its 2080 Ti counterpart.

As powerful as the 2080 Ti at the top of Nvidia's new Turing generation of RTX graphics cards is, the launch as a whole was a little underwhelming. Only the Ti offered any real generation-over-generation performance gains, the cards were very expensive, and even now few games support the new ray tracing and DLSS features. The Titan RTX isn't going to change that, with a monstrous $2,500 price tag of its own, but its performance should be eye-watering all the same.

The Titan RTX has a fully unlocked TU102 graphics core with a total of 4,608 CUDA cores. That's 256 more than the 2080 Ti's 4,352, though not quite as many as the Titan V's 5,120. The Titan RTX also comes with 576 Tensor cores for DLSS and A.I. tasks, and its clock speeds are 1,350MHz at base and 1,770MHz at boost. Its memory is the same 14Gbps GDDR6 found in the 2080 Ti, but where that card has just 11GB to play with, the Titan RTX has a massive 24GB, as per Anandtech. Nvidia also expanded its L2 cache from 5.5MB to 6MB.
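For a rough sense of what that memory spec means in practice, you can work peak bandwidth out from the data rate and the bus width. The bus widths below (384-bit for the full TU102, 352-bit for the 2080 Ti) aren't stated above, so treat this as a back-of-the-envelope sketch rather than a quoted spec:

    # Peak GDDR6 bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8
    # Bus widths are assumed (full TU102: 384-bit, RTX 2080 Ti: 352-bit),
    # not quoted in the article.
    def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
        return data_rate_gbps * bus_width_bits / 8

    titan_rtx = bandwidth_gbs(14, 384)    # ~672 GB/s
    rtx_2080_ti = bandwidth_gbs(14, 352)  # ~616 GB/s
    print(f"Titan RTX: {titan_rtx:.0f} GB/s, RTX 2080 Ti: {rtx_2080_ti:.0f} GB/s")

On those assumed figures, the Titan RTX's memory isn't just more than twice as capacious; it should also be around 9 percent faster overall.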

Those hardware upgrades improve performance in single- and double-precision tasks by between 10 and 15 percent, but in tasks where the Tensor cores are brought to bear, the Titan RTX can be more than twice as fast as the 2080 Ti.
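To see where a figure of that order comes from on paper, multiply the CUDA core count by two floating-point operations per clock (one fused multiply-add) and the boost clock. A minimal sketch, assuming a 1,635MHz Founders Edition boost clock for the 2080 Ti, a figure not quoted above:

    # Rough peak FP32 throughput: CUDA cores x 2 FLOPs per clock (FMA) x boost clock.
    # The 2080 Ti boost clock (1,635MHz, Founders Edition) is an assumption
    # for illustration; the article doesn't quote it.
    def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
        return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

    titan_rtx = fp32_tflops(4608, 1770)    # ~16.3 TFLOPS
    rtx_2080_ti = fp32_tflops(4352, 1635)  # ~14.2 TFLOPS
    print(f"Titan RTX: {titan_rtx:.1f} TFLOPS, "
          f"about {titan_rtx / rtx_2080_ti - 1:.0%} ahead of the 2080 Ti")

That back-of-the-envelope math lands at roughly 15 percent, consistent with the top of the range above.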

All of that does come at a cost, though. TDP is 20 watts higher at 280W, and the $2,500 price tag makes the Titan RTX more than twice as expensive as the 2080 Ti. It's not quite as expensive as the Titan V, mind you, nor anywhere near the $10,000 the Tesla V100 commanded when it was announced, but it's still out of reach of just about any gamer.

Although a few deep-pocketed gamers will kit out their systems with a pair of these cards, and overclockers will leverage them to take the top spots on the 3DMark leaderboards, the Titan RTX isn't going to find much use in gaming rigs. The dramatic improvement in Tensor core performance shows where the card is really targeted: A.I. and rendering work, as Titan GPUs typically are.
