AMD wants to build on AI with its next generation of graphics cards

AMD’s RDNA 3 lineup is still very new, but the company is already looking to the future. In a recent interview, AMD executives talked about the potential of RDNA 4. They also discussed AMD’s approach to artificial intelligence (AI) and how it differs from Nvidia’s.

It seems that AMD may be eager to increase its AI presence going forward. It’s a good call — so far, AMD has been losing the AI battle against Nvidia. Here’s what we know about its plans for RDNA 4.


Japanese website 4Gamer talked to AMD execs David Wang and Rick Bergman, mostly touching on the use of AI in AMD's best graphics cards. While Nvidia has a longer history with AI, the interview hints that AMD wants its ROCm software suite to compete with Nvidia's CUDA libraries. In fact, AMD claims that the two platforms are on par.


As far as the future of AI goes for AMD, the company simply seems to have a different approach to the use of AI in gaming GPUs. Where Nvidia has gone all-in, AMD has been slower to adopt AI in consumer cards, though it claims this was a deliberate choice.


“The reason why Nvidia is actively trying to use AI technology even for applications that can be done without using AI technology is that Nvidia has installed a large-scale inference accelerator in the GPU. In order to make effective use of it, it seems that they are working on a theme that needs to mobilize many inference accelerators. That’s their GPU strategy, which is great, but I don’t think we should have the same strategy,” said Wang in the 4Gamer interview, which was machine translated from Japanese.

AMD argues that gamers shouldn't have to pay for features they never use, which is why it believes inference accelerators should be dedicated to improving games rather than to unrelated workloads. To that end, AMD points out that it was able to achieve competitive performance with FidelityFX Super Resolution (FSR) without using AI, whereas Nvidia's DLSS does rely on it.

Wang’s idea for something fun to implement with the help of AI is improving the behavior and movement of enemies and non-player characters (NPCs) in modern games. As far as image processing goes, AMD seems to have bigger plans.


“Even if AI is used for image processing, AI should be in charge of more advanced processing. Specifically, a theme such as ‘neural graphics,’ which is currently gaining momentum in the 3D graphics industry, may be appropriate,” Wang added.

Wang talked at length about the technologies used within AMD's latest flagships, the RX 7900 XT and RX 7900 XTX. The multi-draw indirect accelerators (MDIA) used in RDNA 3 contribute to a notable performance increase of up to 2.3 times over the previous generation. AMD wants to keep building on that tech and add even more advanced shaders to RDNA 4; Wang said the company wants to make this the new standard specification for the GPU programming model.

We also got a brief, although not very specific, teaser of RDNA 4 from Rick Bergman, AMD’s executive vice president of the Computing and Graphics Business Group. Bergman said: “We promise to evolve to RDNA 4 with even higher performance in the near future.”

The use of “near future” is interesting here. AMD’s RDNA 3 lineup is still in its infancy. There are a lot of mobile GPUs coming to laptops, but the desktop range is still very small. It’s hard to imagine RDNA 4 coming out any sooner than the end of 2024 — there are still so many current-gen cards that need to be released. However, when the time comes, it will be interesting to see what AMD does to up its AI game.
