AMD wants to build on AI with its next generation of graphics cards

AMD’s RDNA 3 lineup is still very new, but the company is already looking to the future. In a recent interview, AMD executives talked about the potential of RDNA 4. They also discussed AMD’s approach to artificial intelligence (AI) and how it differs from Nvidia’s.

It seems that AMD may be eager to increase its AI presence going forward. It’s a good call — so far, AMD has been losing the AI battle against Nvidia. Here’s what we know about its plans for RDNA 4.

Japanese website 4Gamer talked to AMD executives David Wang and Rick Bergman, mostly touching on the use of AI in AMD’s best graphics cards. While Nvidia has a longer history with AI, the interview hints that AMD may want its ROCm software suite to compete with Nvidia’s CUDA libraries. In fact, AMD claims that the two platforms are on par.
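
For context, ROCm’s counterpart to CUDA’s programming model is HIP, which deliberately mirrors CUDA’s API so the same kernel source can build for either vendor’s hardware. As a rough sketch (our illustration, not AMD sample code), here’s a minimal HIP vector-add; the launch syntax and memory calls map almost one-to-one onto their CUDA equivalents:

```cpp
// Minimal sketch: a HIP vector-add. HIP is ROCm's CUDA-style programming
// interface; the same source can be compiled for AMD GPUs with hipcc or
// for Nvidia GPUs through HIP's CUDA back end.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // same thread indexing as CUDA
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // hipMallocManaged mirrors cudaMallocManaged: one pointer usable on host and device
    hipMallocManaged(&a, bytes);
    hipMallocManaged(&b, bytes);
    hipMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vec_add<<<blocks, threads>>>(a, b, c, n);  // CUDA-style launch syntax works in HIP
    hipDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    hipFree(a); hipFree(b); hipFree(c);
    return 0;
}
```

That near one-to-one mapping is what makes the on-par claim plausible at the programming-model level; the surrounding library ecosystem is where the comparison gets more contentious.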

As far as the future of AI goes, AMD simply seems to have a different approach to its use in gaming GPUs. Where Nvidia has gone all-in, AMD has adopted AI in consumer cards more slowly, but the company claims that this was a deliberate choice.

“The reason Nvidia is actively trying to use AI even for applications that can be done without it is that Nvidia has installed large-scale inference accelerators in its GPUs. In order to make effective use of them, it seems they are working on themes that mobilize many inference accelerators. That’s their GPU strategy, which is great, but I don’t think we should have the same strategy,” said Wang in the 4Gamer interview, which was machine translated from Japanese.

AMD notes that gamers don’t need or want to pay for features they never use, which is why it believes the inference accelerators in consumer GPUs should be used to improve games themselves rather than for anything else. To that end, AMD points out that it achieved competitive results with FidelityFX Super Resolution (FSR) without using AI, whereas Nvidia’s DLSS relies on it.
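
To make the distinction concrete: FSR 1.0 is built on hand-tuned filters (an edge-adaptive, Lanczos-style upscaling pass followed by sharpening) rather than a trained network. The snippet below is a plain Lanczos-2 resampler in one dimension, purely our illustration of the non-neural approach rather than AMD’s actual EASU/RCAS shader code:

```cpp
// Illustrative sketch of non-neural spatial upscaling, in the spirit of
// FSR 1.0's Lanczos-based approach. The real passes are edge-adaptive GPU
// shaders; this is just a simple CPU filter showing the core idea.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Lanczos-2 kernel: a windowed sinc, the classic hand-crafted resampling filter.
static float lanczos2(float x) {
    if (x == 0.0f) return 1.0f;
    if (std::fabs(x) >= 2.0f) return 0.0f;
    const float px = 3.14159265f * x;
    return 2.0f * std::sin(px) * std::sin(px / 2.0f) / (px * px);
}

// Upscale a 1D grayscale signal by an integer factor using Lanczos-2 resampling.
std::vector<float> upscale1d(const std::vector<float>& src, int scale) {
    std::vector<float> dst(src.size() * scale);
    for (size_t o = 0; o < dst.size(); ++o) {
        float srcPos = ((float)o + 0.5f) / scale - 0.5f;  // map output sample to source space
        int base = (int)std::floor(srcPos);
        float sum = 0.0f, wsum = 0.0f;
        for (int k = base - 1; k <= base + 2; ++k) {      // 4-tap support for Lanczos-2
            int i = std::clamp(k, 0, (int)src.size() - 1); // clamp at the edges
            float w = lanczos2(srcPos - (float)k);
            sum += w * src[i];
            wsum += w;
        }
        dst[o] = sum / wsum;  // normalize so the weights sum to 1
    }
    return dst;
}

int main() {
    std::vector<float> lowRes = {0.0f, 0.2f, 0.9f, 0.4f};
    auto hiRes = upscale1d(lowRes, 2);  // 4 samples -> 8 samples
    for (float v : hiRes) printf("%.3f ", v);
    printf("\n");
}
```

The appeal of this style of upscaler is that it runs on any GPU with no dedicated inference hardware, which is exactly the trade-off AMD is describing.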

Wang’s idea for something fun to implement with the help of AI is improving the behavior and movement of enemies and non-player characters (NPCs) in modern games. As far as image processing goes, AMD seems to have bigger plans.

“Even if AI is used for image processing, AI should be in charge of more advanced processing. Specifically, a theme such as ‘neural graphics,’ which is currently gaining momentum in the 3D graphics industry, may be appropriate,” Wang added.
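
“Neural graphics” broadly means replacing stored or hand-authored graphics data with small neural networks that the GPU evaluates on the fly. Here’s a toy sketch of the idea, with every size and name being our own illustrative assumption rather than anything AMD has described: a tiny MLP that maps a texture coordinate to a color and, once trained, stands in for the texture itself:

```cpp
// Conceptual sketch of "neural graphics": instead of storing pixels, a small
// neural network maps a coordinate to a color, and the GPU evaluates it per
// sample. Toy forward pass only; real systems train the weights to fit an
// image or scene (all sizes and names here are illustrative assumptions).
#include <array>
#include <cmath>
#include <cstdio>

constexpr int HIDDEN = 8;

struct TinyMLP {
    // Weights and biases would come from training; zero-initialized here for brevity.
    std::array<std::array<float, 2>, HIDDEN> w1{};  // input (u, v) -> hidden
    std::array<float, HIDDEN> b1{};
    std::array<std::array<float, HIDDEN>, 3> w2{};  // hidden -> RGB
    std::array<float, 3> b2{};

    std::array<float, 3> eval(float u, float v) const {
        std::array<float, HIDDEN> h;
        for (int j = 0; j < HIDDEN; ++j) {
            float z = w1[j][0] * u + w1[j][1] * v + b1[j];
            h[j] = z > 0.0f ? z : 0.0f;             // ReLU
        }
        std::array<float, 3> rgb;
        for (int c = 0; c < 3; ++c) {
            float z = b2[c];
            for (int j = 0; j < HIDDEN; ++j) z += w2[c][j] * h[j];
            rgb[c] = 1.0f / (1.0f + std::exp(-z));  // sigmoid keeps color in [0, 1]
        }
        return rgb;
    }
};

int main() {
    TinyMLP net;                        // untrained: outputs a flat 0.5 gray
    auto rgb = net.eval(0.25f, 0.75f);  // query the "texture" at (u, v)
    printf("rgb = (%.2f, %.2f, %.2f)\n", rgb[0], rgb[1], rgb[2]);
}
```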

Wang also talked at length about the technologies used in AMD’s latest flagships, the RX 7900 XT and RX 7900 XTX. The multi-draw indirect accelerators (MDIA) in RDNA 3 contribute to a notable performance increase over the previous generation, up to 2.3 times. AMD wants to keep building on that tech and add even more advanced shader capabilities in RDNA 4, and Wang said the company wants to make this the new standard specification for the GPU programming model.
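
Multi-draw indirect itself is a standard graphics API feature: the CPU records a single command, and the GPU pulls per-draw parameters from a buffer. As a hedged sketch of what hardware like RDNA 3’s MDIA is built to accelerate, here is the relevant Vulkan call; everything around it (pipeline, buffers, command buffer state) is assumed to be set up already:

```cpp
// Minimal sketch of multi-draw indirect in Vulkan, the kind of GPU-driven
// draw submission that RDNA 3's MDIA hardware accelerates. Illustrative
// fragment only: assumes a recording command buffer, a bound pipeline with
// vertex/index buffers, and a device-local `indirectBuffer` already filled
// with draw parameters.
#include <vulkan/vulkan.h>

void recordIndirectDraws(VkCommandBuffer cmd, VkBuffer indirectBuffer, uint32_t drawCount) {
    // Each VkDrawIndexedIndirectCommand in the buffer describes one draw:
    // { indexCount, instanceCount, firstIndex, vertexOffset, firstInstance }.
    // The CPU issues a single command; the GPU front end reads all `drawCount`
    // entries from the buffer, so per-draw CPU overhead disappears.
    vkCmdDrawIndexedIndirect(
        cmd,
        indirectBuffer,
        /*offset=*/0,
        drawCount,
        /*stride=*/sizeof(VkDrawIndexedIndirectCommand));
}
```

Processing those buffered draws is front-end work on the GPU, which is why a dedicated accelerator for it can move the needle on real-world performance.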

We also got a brief, although not very specific, teaser of RDNA 4 from Rick Bergman, AMD’s executive vice president of the Computing and Graphics Business Group. Bergman said: “We promise to evolve to RDNA 4 with even higher performance in the near future.”

The use of “near future” is interesting here. AMD’s RDNA 3 lineup is still in its infancy. There are a lot of mobile GPUs coming to laptops, but the desktop range is still very small. It’s hard to imagine RDNA 4 coming out any sooner than the end of 2024 — there are still so many current-gen cards that need to be released. However, when the time comes, it will be interesting to see what AMD does to up its AI game.
