
AMD is taking the gloves off in the AI arms race

[Image: AMD's CEO presenting the MI300X AI GPU. Credit: AMD]

AMD looks ready to fight. At its Advancing AI event, the company finally launched its Instinct MI300X AI GPU, which we first heard about a few months ago. The exciting development is the performance AMD claims compared to the green AI elephant in the room: Nvidia.

Spec for spec, AMD claims the MI300X beats Nvidia's H100 AI GPU in memory capacity and memory bandwidth, and that it's capable of 1.3 times the H100's theoretical performance in FP8 and FP16 operations. AMD demonstrated this with two large language models (LLMs) using a medium and a large kernel, where the MI300X showed a 1.1x to 1.2x improvement over the H100.

[Image: AMD's CEO presenting the performance of the MI300X AI GPU. Credit: AMD]

In a moment that looked ripped straight out of an Nvidia conference, AMD CEO Lisa Su introduced the MI300X Platform, which combines eight of the GPUs on a single board. AMD says these boards offer the same training performance as Nvidia's H100 HGX and 1.6 times the inference performance. In addition, AMD says you're able to run two LLMs per system, while the H100 HGX can only handle one.

To illustrate how big of a deal this is, AMD brought out Microsoft’s Chief Technology Officer Kevin Scott to talk about how AMD and Microsoft are working together on AI. It’s important to highlight that the wildly popular ChatGPT is run on Microsoft servers, and it was originally trained on thousands of Nvidia GPUs.

[Image: A comparison between AMD's MI300X and Nvidia's H100. Credit: AMD]

In addition to the MI300X, AMD announced the MI300A, which the company says is the first APU for AI. It combines AMD's CDNA 3 AI GPU architecture with 24 Zen 4 CPU cores and 128GB of HBM3 memory. AMD positions it as a balance between AI performance and high-performance computing, bringing the CPU prowess we recently saw in Threadripper 7000 to data centers.


Despite jumping on the AI bandwagon with the rest of the tech world, AMD has taken a backseat to Nvidia up to this point. Nvidia started investing heavily in AI years ago, giving it a massive head start over AMD and Intel. That early investment catapulted Nvidia into becoming a trillion-dollar company this year.

With a GPU cluster positioned to challenge Nvidia's H100, as well as partnerships with Meta and Microsoft, AMD is finally entering the ring. It may exit bruised and bloodied, but it looks ready to put up a fight.

Jacob Roach
Lead Reporter, PC Hardware
Jacob Roach is the lead reporter for PC hardware at Digital Trends. In addition to covering the latest PC components, from…