Nvidia’s next GPUs will be designed partially by AI

During its GTC 2022 conference, Nvidia discussed how it is using artificial intelligence and machine learning to improve its future graphics cards.

As the company prioritizes AI and machine learning (ML), some of these advancements should already find their way into the upcoming next-gen Ada Lovelace GPUs.

Nvidia’s big plans for AI and ML in next-gen graphics cards were shared by Bill Dally, the company’s chief scientist and senior vice president of research. He talked about how Nvidia’s research and development teams utilize these technologies and what this means for next-gen GPUs.

In short, using these technologies bodes well for Nvidia graphics cards. Dally discussed four major areas of GPU design and the ways in which AI and ML can drastically speed up the work involved.

The goal is an increase in both speed and efficiency, and in one of Dally’s examples, AI and ML cut a standard GPU design task from three hours down to just three seconds.

This is reportedly possible by optimizing four processes that are normally slow and highly detailed.

These are mapping power voltage drops, anticipating errors through parasitic prediction, automating standard cell migration, and addressing various routing challenges. Using artificial intelligence and machine learning can help optimize all of these processes, resulting in major gains in the end product.

Mapping potential voltage drops helps Nvidia track the power flow of next-gen graphics cards. According to Dally, switching from standard tools to specialized AI tools speeds this task up drastically, with the new tech completing it in mere seconds.

Dally said that using AI and ML for mapping voltage drops can increase the accuracy by as much as 94% while also tremendously increasing the speed at which these tasks are performed.
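
To make the idea concrete, here is a minimal, purely illustrative sketch of this kind of learned surrogate. It is not Nvidia's tooling; the features and data are invented. A regressor is trained on features of each power-grid tile and then predicts voltage drop for new tiles almost instantly, which is where an hours-to-seconds speedup would come from.

```python
# Illustrative sketch only -- not Nvidia's tooling. Train a regressor on
# per-tile features of a power grid (synthetic here) so it can stand in for
# a slow power-integrity simulation at inference time.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-tile features: local current demand, metal density,
# distance to the nearest power pad, and decap density.
n_tiles = 5000
X = rng.uniform(0.0, 1.0, size=(n_tiles, 4))

# Stand-in "ground truth" voltage drop that a slow simulator would produce.
y = 0.05 * X[:, 0] + 0.02 * X[:, 2] - 0.01 * X[:, 3] + rng.normal(0, 0.002, n_tiles)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out tiles: {model.score(X_test, y_test):.3f}")

# Predicting over a full chip's worth of tiles takes a fraction of a second,
# versus hours for re-running the physical simulation each time.
predicted_drop = model.predict(X_test)
```

The design point is that the expensive simulator only needs to run once to produce training data; after that, the surrogate answers "what if" questions about the power grid nearly for free.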

Nvidia's slide on automated cell migration. Image: Nvidia

Data flow in new chips is an important factor in how well a new graphics card performs. Therefore, Nvidia uses graph neural networks (GNNs) to identify possible issues in data flow and address them quickly.
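
As a rough illustration of what a graph neural network does here (a toy sketch, not Nvidia's model), each node in a netlist-like graph repeatedly aggregates its neighbors' features, so its final embedding reflects the surrounding structure and can be scored for potential data-flow problems. The graph, features, and weights below are all made up.

```python
# Toy message-passing sketch of a GNN layer, assuming a tiny 5-node graph.
import numpy as np

# Adjacency matrix with self-loops (symmetric, 5 nodes).
A = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1],
], dtype=float)

# Hypothetical per-node features, e.g. fan-out, wire-length estimate, slack.
H = np.random.default_rng(1).normal(size=(5, 3))

# Degree normalization and a weight matrix (learned in a real model).
D_inv = np.diag(1.0 / A.sum(axis=1))
W = np.random.default_rng(2).normal(size=(3, 3))

for _ in range(2):                     # two rounds of message passing
    H = np.tanh(D_inv @ A @ H @ W)     # aggregate neighbors, transform, squash

# A final "head" would turn each node's embedding into a risk score.
scores = H @ np.ones(3)
print("per-node risk scores (toy):", np.round(scores, 3))
```

In a real flow, the weights would be trained against labeled examples of problematic nets rather than drawn at random, but the neighbor-aggregation structure is the essential GNN idea.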

Parasitic prediction through the use of AI is another area in which Nvidia sees improvements, noting increased accuracy, with simulation error rates dropping below 10%.
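
A brief, hypothetical example of how an error figure like that can be read: compare the model's predicted parasitic values against golden results from a full extraction or simulation and report the mean relative error. The numbers below are invented purely for illustration.

```python
# Comparing hypothetical ML-predicted parasitic capacitances against golden
# values from a full extraction, and reporting the mean relative error.
import numpy as np

golden = np.array([1.20, 0.85, 2.10, 0.40, 1.75])      # fF, from a slow extractor
predicted = np.array([1.11, 0.90, 2.25, 0.43, 1.62])   # fF, from the ML model

relative_error = np.abs(predicted - golden) / golden
print(f"mean relative error: {relative_error.mean():.1%}")   # well under 10%
```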

Nvidia has also managed to automate the process of migrating the chip’s standard cells, cutting out a lot of manual work and speeding up the whole task. As a result, 92% of the cell library was migrated by the tool with no errors.

The company is planning to focus on AI and machine learning going forward, dedicating five of its laboratories to researching and designing new solutions in those segments. Dally hinted that we may see the first results of these new developments in Nvidia’s new 7nm and 5nm designs, which include the upcoming Ada Lovelace GPUs. This was first reported by Wccftech.

It’s no secret that the next generation of graphics cards, often referred to as RTX 4000, will be immensely powerful (with power requirements to match). Using AI and machine learning to further the development of these GPUs suggests that we may soon have a real powerhouse on our hands.
