Nvidia GTC 2022: RTX 4090, DLSS 3, and everything else announced

Nvidia’s GTC (GPU Technology Conference) event has ended, and the keynote was packed to the brim with news. The new RTX 40-series graphics cards were obviously the standout announcements, kicking off an entirely new generation of graphics cards for PC gamers.

But as usual, Nvidia’s announcements went far beyond consumer gaming, touching on everything from robotics and self-driving cars to advances in medicine and science.

GeForce RTX 4090 and 4080

Specs for the RTX 4090 graphics card.

The big announcement that kicked off the keynote, of course, was three new graphics cards. The RTX 4090 is the flagship model, which Nvidia says is up to twice as fast as the previous-generation RTX 3090 Ti in traditional rasterized games. This massive card is based on Ada Lovelace, the architecture underpinning everything announced at GTC. It comes with 16,384 CUDA cores and 24GB of GDDR6X memory, and, most notably, it draws the same 450 watts of power as the RTX 3090 Ti.


The RTX 4090 will be available on October 12 and will cost $1,599.

                    RTX 4090       RTX 4080 16GB   RTX 4080 12GB
CUDA cores          16,384         9,728           7,680
Memory              24GB GDDR6X    16GB GDDR6X     12GB GDDR6X
Boost clock speed   2,520MHz       2,505MHz        2,610MHz
Bus width           384-bit        256-bit         192-bit
Power               450W           320W            285W

The RTX 4080 takes a step down from there and comes in two different configurations. The 16GB model will cost $1,199, and the 12GB model will cost $899.

These GPUs are a bit more manageable, drawing 320W and 285W, respectively. However, the RTX 4090 has more than twice the CUDA cores of the RTX 4080 12GB model, which is a massive gap. With such a big difference in specs between the two RTX 4080 models, it will be interesting to see how they stack up in terms of performance.

DLSS 3


Along with the new GPUs, Nvidia also announced the third generation of RTX, which includes DLSS 3. Just like the jump from the original DLSS to DLSS 2, this appears to be a substantial evolution of the technology. This time, the machine learning can generate entire frames, not just pixels, resulting in an even larger boost to frame rates in games.

How fast? Nvidia says up to four times as fast, thanks to a new optical flow accelerator. DLSS 3 actually bundles three different Nvidia technologies into one: DLSS Super Resolution, DLSS Frame Generation, and Nvidia Reflex.

The other big feature of DLSS 3 is its ability to boost CPU-limited games. Nvidia showed a demo of Microsoft Flight Simulator, which fits the bill in an extreme way. Seeing DLSS 3 boost frame rates from 64 fps to 135 fps is seriously impressive, especially in this type of game.

All told, Nvidia says DLSS 3 can be up to four times faster than conventional rendering. DLSS 3 arrives in October, and Nvidia says more than 35 games and applications will support it at launch.

Lastly, Nvidia announced Portal with RTX, a modernized version of the popular PC game, now with DLSS 3 and ray-tracing effects.

Nvidia Drive Thor

Nvidia's Drive Thor for self-driving cars.

Nvidia has been trying to get its foot in the door of the self-driving car industry for years, and its latest product feels like a significant step in the right direction. Nvidia Drive Thor is its next-generation superchip, based on the Hopper GPU architecture and paired with the Nvidia Grace CPU.

Drive Thor is the first automotive platform to have an inference transformer engine.

Nvidia says Drive Thor will be available for automakers’ 2025 car models as the true successor to Drive Orin, which is currently in production. Thor takes the place of Drive Atlan, which had been announced just last year.

Nvidia Drive Concierge

A rendering of Drive Concierge entertainment system.

In addition to the new chip, Nvidia also showed off Drive Concierge, a complete in-vehicle infotainment system. It replaces the instrument cluster on a typical car dashboard with what Nvidia calls a “digital cockpit.” The system supports Android Automotive, which lets automakers customize it. Notably, the announcement comes just a few months after Apple’s landmark next-generation CarPlay announcement, which does something similar with its digital instrument cluster.

Of course, elsewhere in the vehicle, Drive Concierge also gives passengers features like video-conferencing, video streaming, digital assistants, cloud gaming through GeForce Now, and full visualizations of the car.

The entire design was created in Omniverse, which Nvidia says allows carmakers to tweak all these aspects of a car long before it becomes a physical reality.

Omniverse Cloud and Nvidia’s Graphics Delivery Network

Omniverse Cloud being used all over the world.

Omniverse Cloud was announced earlier this year as Nvidia’s complete suite of cloud services for people building out the future of the metaverse — without the need for all that performance in your actual computer. New additions to the suite of services include the robotics simulation application, Nvidia Isaac Sim, as well as the autonomous vehicle simulation, Nvidia Drive Sim.

Interestingly, Omniverse Cloud was mentioned alongside something called the Nvidia Graphics Delivery Network (GDN), the distributed data center network powering Omniverse Cloud. Built on the same infrastructure as GeForce Now, the company’s cloud gaming service, the GDN delivers high-performance, low-latency graphics to anyone who needs them.

Nvidia spoke at length about all the ways Omniverse and digital twins are being used, even stating that, in the future, every software-controlled product must have a digital twin for testing purposes.

Jetson Orin Nano

The two versions of the Jetson Orin Nano.

Nvidia announced the Jetson Orin Nano modules, the latest additions to the Jetson family of small computers built for accelerating AI and robotics workloads. These new “system-on-modules” claim to deliver up to 40 trillion operations per second (TOPS) of AI performance, which Nvidia says is up to 80 times the performance of the previous-generation Jetson Nano.

The Orin Nano modules will be available in January, with the 8GB model starting at $199.

Everything else

  • Nvidia has announced two new large language model services: NeMo, for natural language AI applications, and BioNeMo, for applications in chemistry and biology.
  • Lowe’s has announced it will be making its library of over 600 photorealistic 3D product assets free for other Omniverse creators. The company also discussed its exploration of using interactive digital twins and a Magic Leap 2 AR headset to give employees “superpowers.”
  • Germany’s Deutsche Bahn Rail Network has announced that it’s using digital twins in the Omniverse to develop its future railway system.
  • The second generation of Nvidia OVX has been announced, powered by the new L40 GPU and meant for building “complex industrial digital twins.” The L40 uses third-generation RT cores and fourth-generation Tensor cores for these intense Omniverse workloads, and early systems have already gone to companies like BMW and Jaguar Land Rover. The new OVX systems will be available from Lenovo, Inspur, and Supermicro by early 2023.
  • Nvidia has announced that the H100 Tensor Core GPU has entered full production, with the first Hopper-based products rolling out in October.
  • In the medical world, Nvidia has announced IGX, a combined hardware and software platform designed specifically for use cases such as robotic-assisted surgeries and patient monitoring.
  • Nvidia also demonstrated how IGX has applications in the industrial world, specifically in creating safe autonomous factories where humans and machines collaborate.
  • Nvidia and the Broad Institute of MIT and Harvard announced a partnership that brings the GPU-accelerated Clara Parabricks software to the Terra biomedical data platform, allowing researchers to speed up tasks like genome sequencing by as much as 24x. Nvidia says it’s contributing its own deep learning model to “help identify genetic variants that are associated with diseases.”
  • Nvidia and Booz Allen have announced an “expanded collaboration” to bring GPU-accelerated AI to Booz Allen’s cybersecurity platform, built on Nvidia’s Morpheus framework.

Luke Larsen
Senior Editor, Computing
Luke Larsen is the Senior editor of computing, managing all content covering laptops, monitors, PC hardware, Macs, and more.