
AMD just revealed a game-changing feature for your graphics card

AMD logo on the RX 7800 XT graphics card.
Jacob Roach / Digital Trends

AMD is set to reveal a research paper about its technique for neural texture block compression at the Eurographics Symposium on Rendering (EGSR) next week. It sounds like some technobabble, but the idea behind neural compression is pretty simple. AMD says it’s using a neural network to compress the massive textures in games, which cuts down on both the download size of a game and its demands on your graphics card.

We’ve heard about similar tech before. Nvidia introduced a paper on Neural Texture Compression last year, and Intel followed up with a paper of its own that proposed an AI-driven level of detail (LoD) technique that could make models look more realistic from farther away. Nvidia’s claims about Neural Texture Compression are particularly impressive, with the paper asserting that the technique can store 16 times the data in the same amount of space as traditional block-based compression.
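To put that 16x claim in perspective, here's a back-of-the-envelope sketch of how the sizes compare for a single 4K texture. The raw and BC7 figures follow from the formats themselves (BC7 packs each 4x4 block of texels into 16 bytes); the "neural" figure simply applies Nvidia's claimed 16x ratio, since neither Nvidia nor AMD has published a shipping format to measure.

```python
# Illustrative texture-size comparison: uncompressed RGBA8, BC7 block
# compression, and a hypothetical neural format at the 16x ratio Nvidia's
# paper claims over block compression. AMD's method is unpublished, so
# the neural number is an assumption, not a measurement.

def raw_size_bytes(width, height, bytes_per_texel=4):
    """Uncompressed size of a width x height RGBA8 texture."""
    return width * height * bytes_per_texel

def bc7_size_bytes(width, height):
    """BC7 stores each 4x4 block of texels in 16 bytes (8 bits/texel)."""
    return (width // 4) * (height // 4) * 16

W = H = 4096
raw = raw_size_bytes(W, H)   # 64 MiB uncompressed
bc7 = bc7_size_bytes(W, H)   # 16 MiB with block compression
neural = bc7 / 16            # ~1 MiB if the claimed 16x ratio holds

print(f"raw:    {raw / 2**20:.1f} MiB")
print(f"BC7:    {bc7 / 2**20:.1f} MiB")
print(f"neural: {neural / 2**20:.1f} MiB (assuming the claimed 16x ratio)")
```

Multiply that saving across the hundreds of textures in a modern game and it's easy to see why download sizes and VRAM budgets are the headline benefits.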

AMD hasn’t revealed its research yet, so there aren’t a ton of details about how its method would work. The key to Nvidia’s approach is that it leverages the GPU to decompress textures in real time. Texture memory pressure has been an issue in several games released in the past couple of years, from Halo Infinite to The Last of Us Part I to Redfall. In all of these games, you’ll notice low-quality textures if you run out of VRAM, which is particularly noticeable on 8GB graphics cards like the RTX 4060 and RX 7600.


One detail AMD did reveal is that its method should be easier to integrate. The tweet announcing the paper reads, “unchanged runtime execution allows easy game integration.” Nvidia hasn’t said whether its technique is particularly hard to integrate, or whether it will require specific hardware to work (though the latter is probably a safe bet). AMD hasn’t mentioned any particular hardware requirements, either.

We'll present "Neural Texture Block Compression" @ #EGSR2024 in London.

Nobody likes downloading huge game packages. Our method compresses the texture using a neural network, reducing data size.

Unchanged runtime execution allows easy game integration.

— AMD GPUOpen (@GPUOpen) June 25, 2024

At this point, neural compression for textures isn’t a feature available in any game. These are just research papers, and it’s hard to say if they’ll ever turn into features on the level of something like Nvidia’s DLSS or AMD’s FSR. However, the fact that we’re seeing AI-driven compression from Nvidia, Intel, and now AMD suggests that this is a new trend in the world of PC gaming.

It makes sense, too. Features like DLSS have become a cornerstone of modern graphics cards, serving as an umbrella for a large swath of performance-boosting features. Nvidia’s CEO has said the company is looking into more ways to leverage AI in games, from generating objects to enhancing textures. As features like DLSS and FSR continue to become more prominent, it makes sense that AMD, Nvidia, and Intel would look to expand their capabilities.

If we do see neural texture compression become a marketable feature, it will likely show up with the next generation of graphics cards. Nvidia is expected to reveal its RTX 50-series GPUs in the second half of the year, AMD could showcase its next-gen RDNA 4 GPUs in a similar time frame, and Intel’s Battlemage architecture is arriving in laptops in a matter of months through Lunar Lake CPUs.

Jacob Roach
Lead Reporter, PC Hardware
Jacob Roach is the lead reporter for PC hardware at Digital Trends. In addition to covering the latest PC components, from…