Nvidia has fully revealed the Titan X at the GPU Technology Conference, and it’s a doozy of a video card.
The Titan X boasts 3,072 CUDA cores and is built on the Maxwell architecture using a 28nm process. In that sense it’s not all new, but rather an improvement on the existing cards in the GTX 900 series.
It also offers 12GB of GDDR5 video memory, the most of any single-GPU video card. This is connected over a 384-bit interface. That distinguishes the card from its 900 series brethren, all of which have a 256-bit (or smaller) memory interface. That adds up to 336 gigabytes per second of memory bandwidth, over 100GB/s more than the GTX 980, and almost twice that of the PlayStation 4.
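That 336GB/s figure follows directly from the bus width and the memory’s data rate. Here’s a quick sanity check of the math, assuming the commonly cited 7Gbps effective GDDR5 transfer rate per pin (a figure not confirmed in the paragraph above):

```python
# Back-of-the-envelope check of the Titan X's quoted memory bandwidth.
# The 7 Gbps effective data rate per pin is an assumption based on
# commonly cited GDDR5 specs for this class of card.

bus_width_bits = 384    # memory interface width
data_rate_gbps = 7.0    # effective transfer rate per pin (assumed)

# Multiply bus width by per-pin rate, then divide by 8 to convert bits to bytes.
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8

print(f"{bandwidth_gb_s:.0f} GB/s")  # prints "336 GB/s"
```

By comparison, the GTX 980’s 256-bit bus at the same data rate works out to 224GB/s, which is where the 100GB/s-plus gap comes from.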
Though obviously very powerful, the card hits a thermal design power of just 250 watts thanks to its Maxwell architecture (rather than Kepler, found in previous Titan cards). That’s less power than the old GTX 780 Ti draws, and the card is fed by an 8-pin plus 6-pin PCIe power connector arrangement.
Additionally, it will support up to 4-way SLI and is compatible with Nvidia GameStream and the Nvidia Shield. Standard video output support includes one DVI, one HDMI and three DisplayPort, though third-party card vendors may change that. The HDMI port conforms to the 2.0 standard. None of that information is particularly surprising, though. We’d expected it to retain all of the usual Nvidia features, and so it does.
Interestingly, unlike the Titan Z, the new card will not offer full-speed double-precision compute, a feature enterprise users often rely on when calculations demand maximum numerical accuracy. However, Nvidia is still targeting certain research applications. It called out machine learning as a particular area of interest, stating the card can cut research tasks that used to take a month and a half on a conventional processor down to just a few days.
Nvidia’s CEO, Jen-Hsun Huang, also spent a significant amount of time talking about neural networks, specifically those used to analyze images. He highlighted how advanced video cards like the Titan X can accelerate these networks, making it possible to categorize images without human intervention, or even convert images to text descriptors.
We should have a first look at the card up tomorrow.