Nvidia reveals first-ever CPU and Hopper GPU at GTC 2022

Nvidia CEO Jensen Huang kicked off the company’s GPU Technology Conference (GTC) with a keynote speech full of announcements. The key reveals include Nvidia’s first-ever discrete CPU, named Grace, as well as its next-generation Hopper architecture, which will arrive later in 2022.

The Grace CPU Superchip is Nvidia’s first discrete CPU, but it won’t be at the heart of your next gaming PC. Nvidia announced the Grace CPU in 2021, but the Superchip, as Nvidia calls it, is something new. It combines two Grace CPU dies, similar to Apple’s M1 Ultra, connected through Nvidia’s NVLink interconnect technology.

A rendering of Nvidia's Grace Superchip.

Unlike the M1 Ultra, however, the Grace Superchip isn’t built for general performance. The 144-core CPU is built for A.I., data science, and applications with high memory requirements. The CPU still uses ARM cores, despite Nvidia’s abandoned $40 billion bid to purchase the company.

In addition to the Grace Superchip, Nvidia showed off its next-generation Hopper architecture. According to speculation, this isn’t the architecture that will power the RTX 4080; instead, it’s built for Nvidia’s data center accelerators. Nvidia is debuting the architecture in the H100 GPU, which will replace the previous-generation A100.

Nvidia calls the H100 the “world’s most advanced chip.” It’s built using chipmaker TSMC’s N4 manufacturing process, packing in a staggering 80 billion transistors. As if that wasn’t enough, it’s also the first GPU to support PCIe 5.0 and HBM3 memory. Nvidia says just 20 H100 GPUs can “sustain the equivalent of the entire world’s internet traffic,” showing the power of PCIe 5.0 and HBM3.

Customers will be able to access the GPU through Nvidia’s fourth-generation DGX servers, which combine eight H100 GPUs and 640GB of HBM3 memory. These machines, according to Nvidia, provide 32 petaflops of A.I. performance, six times as much as the last-gen A100-based systems.

If the DGX doesn’t offer enough power, Nvidia is also offering its DGX H100 SuperPod. This builds on Nvidia renting out its SuperPod accelerators last year, allowing those without the budget for massive data centers to harness the power of A.I. This machine combines 32 DGX H100 systems, delivering a massive 20TB of HBM3 memory and 1 exaflop of A.I. performance.
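The SuperPod figures follow directly from the per-DGX numbers above. A quick back-of-the-envelope check (using only figures stated in this article; the variable names are illustrative, not Nvidia terminology):

```python
# Sanity-check Nvidia's stated DGX H100 SuperPod totals from the per-system specs.
DGX_PER_SUPERPOD = 32        # 32 DGX H100 systems per SuperPod
HBM3_PER_DGX_GB = 640        # each DGX combines eight H100 GPUs with 640GB HBM3 total
AI_PFLOPS_PER_DGX = 32       # 32 petaflops of A.I. performance per DGX

superpod_memory_tb = DGX_PER_SUPERPOD * HBM3_PER_DGX_GB / 1024  # GB -> TB (binary)
superpod_pflops = DGX_PER_SUPERPOD * AI_PFLOPS_PER_DGX

print(superpod_memory_tb)  # 20.0 -> the quoted 20TB of HBM3
print(superpod_pflops)     # 1024 petaflops, i.e. roughly the quoted 1 exaflop
```

The numbers line up: 32 systems at 640GB each gives the quoted 20TB, and 32 × 32 petaflops is 1,024 petaflops, which Nvidia rounds to 1 exaflop.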

Nvidia Hopper GPU family.

Nvidia is debuting the new architecture with its own EOS supercomputer, which includes 18 DGX H100 SuperPods for a total of 4,608 H100 GPUs. Enabling this system is Nvidia’s fourth generation of NVLink, which provides a high bandwidth interconnect between massive clusters of GPUs.
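The EOS GPU count also checks out against the stated topology. A minimal sketch, again using only the figures quoted in this article:

```python
# Cross-check the EOS supercomputer's total GPU count from its stated topology.
SUPERPODS = 18          # 18 DGX H100 SuperPods in EOS
DGX_PER_SUPERPOD = 32   # 32 DGX H100 systems per SuperPod
GPUS_PER_DGX = 8        # eight H100 GPUs per DGX system

total_gpus = SUPERPODS * DGX_PER_SUPERPOD * GPUS_PER_DGX
print(total_gpus)  # 4608, matching Nvidia's stated figure
```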

Nvidia showed that as the number of GPUs scales up, performance of the last-gen A100 flatlines. Hopper and fourth-gen NVLink don’t have that problem, according to the company. As the number of GPUs scales into the thousands, Nvidia says H100-based systems can provide up to nine times faster A.I. training than A100-based systems.

This next-gen architecture provides “game-changing performance benefits,” according to Nvidia. Although exciting for the world of A.I. and high-performance computing, we’re still eagerly awaiting announcements around Nvidia’s next-gen RTX 4080, which is rumored to launch later this year.

Jacob Roach
Senior Staff Writer, Computing