Nvidia Dives Deeper into Supercomputing with Fermi

Graphics technology developer Nvidia has formally introduced its next-generation graphics architecture, codenamed Fermi, which Nvidia CEO Jen-Hsun Huang described as putting a supercomputer into a GPU. Unlike traditional GPUs, which are optimized for rendering and graphics tasks, the Fermi architecture is intended to put general-purpose computing operations and graphics procedures on equal footing, enabling developers to tap the GPU's computing power for tasks like physics simulation, supercomputing, and medical imaging. And, of course, there will still be some applications for gaming.

The Fermi chip sports some 3 billion transistors (compared to 2.3 billion in Intel's quad-core Itanium and eight-core Nehalem-EX CPUs) and 512 cores, offering eight times the double-precision floating-point performance of the previous generation; fast double-precision math is key to many physics simulations. Cards can support up to 6GB of GDDR5 memory, with six 64-bit memory partitions adding up to a 384-bit memory interface. Nvidia promises developers will be able to program Fermi boards in C++ using a Visual Studio-based development environment dubbed Nexus; the boards also offer ECC memory support for catching memory read/write errors.
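To give a sense of what this kind of GPU computing looks like in practice, here is a minimal CUDA sketch of a double-precision vector addition, the sort of floating-point workload Fermi is built to accelerate. The kernel and variable names are illustrative, not taken from Nvidia's announcement or tools.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of double-precision elements.
__global__ void vecAdd(const double *a, const double *b, double *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                // one million elements
    size_t bytes = n * sizeof(double);

    // Allocate and fill host arrays.
    double *ha = (double *)malloc(bytes);
    double *hb = (double *)malloc(bytes);
    double *hc = (double *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = i; hb[i] = 2.0 * i; }

    // Allocate device memory and copy inputs to the GPU.
    double *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check one value.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[42] = %f\n", hc[42]);       // expect 126.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The point of the exercise is that nothing here is graphics-specific: the GPU is simply running a million independent arithmetic operations in parallel, which is exactly the model physics and imaging codes exploit.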

Nvidia is due to release a beta of Nexus on October 15, although it has not offered any release date or pricing for Fermi products.

Industry watchers generally see Nvidia's move toward high-performance computing as a good one for the company. Although it will face stiff competition from Intel, AMD, and system integrators like HP and Dell, Nvidia's architecture is sufficiently distinctive that it could carve out a niche in specialized fields like climate prediction and image processing and analysis, and margins in the high-performance computing market are considerably higher than those on consumer graphics controllers.

Geoff Duncan
Former Digital Trends Contributor