AMD’s ‘Nvidia Killer’ GPU may be in America for testing, launches this year

AMD’s long-awaited “Nvidia Killer” graphics card, otherwise known as “Big Navi,” is expected to launch in 2020, following statements made by CEO Lisa Su. Backing up that claim are new rumors of its capabilities, as well as the suggestion that it has already reached U.S. partners and is currently undergoing performance testing.

AMD’s RX 5000 Navi range has gotten off to a great start, with competitive cards in the midrange and budget segments of the market. But with the RX 5700 XT only comparable to AMD’s last high-end card, the short-lived Radeon VII, fans have been waiting for something more impressive. For months, rumors of an internal GPU that could rival Nvidia’s best have been circulating, and that trickle has become a torrent in recent weeks. Following Lisa Su’s admission of Big Navi’s existence, we’re now hearing a lot of exciting things.

The latest rumor is that the card has made it to the shores of the U.S. for validation — an important step to take before the card is released to the public. With plans to debut the card later this year, perhaps at the midyear Computex show, that’s exciting news. Following rumors of the Navi 21 (Big Navi) GPU being twice the physical size of a Navi 10 GPU (the same one used in the RX 5700 XT and 5700), it’s also been reported by posters on the ChipHell forums (via Guru3D) that Navi 21 could have 80 compute units, double that of the Navi 10.

If that proves true, it would suggest one of two things: AMD’s new RDNA architecture has finally allowed it to go beyond the 40 compute unit limit of Graphics Core Next cards (everything before Navi), or it’s using a pair of Navi 10s joined by something like the Infinity Fabric we’ve seen on its Zen 2 CPUs. Considering AMD has doubled up GPUs on high-end graphics cards before, like the Radeon R9 295X2, that’s not an impossibility.

In either case, such a card would have more transistors and more GPU processing cores than Nvidia’s RTX 2080 Ti. That may not mean more performance, but it should at least be comparable.

Guru3D also reports that the Navi 21 GPU will include native ray tracing acceleration. Considering the next-generation PS5 and Xbox Series X consoles are slated to support ray tracing on AMD hardware, this is perhaps no surprise. It’s not yet clear, however, how AMD will implement it: whether it will support the kind of ray tracing enabled in games like Metro: Exodus and Control, or whether it will champion its own, more open standard, like that used in the Crytek Neon Noir demo, which can also run on standard compute GPUs without hardware acceleration.

What we shouldn’t expect is Big Navi turning into a broad range of products, as recent rumors based on EEC filings suggested. While it’s true that GPU company AFOX did register product names for RX 5950 XT, 5950, 5900, and 5800 XT cards with the EEC, it did so without any urging from AMD. This was a preemptive guess by AFOX to get ahead of the game, as Igor’s Lab points out. AFOX doesn’t currently sell any contemporary AMD GPUs either, so it’s possible that even with these names secured, it won’t be able to sell anything like that if and when it becomes available.

Despite that hiccup, though, the rumor mill is spitting out a lot of exciting reports about AMD’s first high-end GPU in more than two years. That does raise the question of how well any of it will stack up against Nvidia’s next-generation Ampere GPUs, which will allegedly launch in 2020 too. But even if Nvidia makes a further leap with Ampere and eclipses even Big Navi’s performance, increased competition is good for graphics card buyers. We’ve seen Nvidia repeatedly cut prices and introduce new cards (Super variants specifically) to counter AMD’s launches. A capable Big Navi debut later this year could prompt Nvidia to create something even more impressive.

Jon Martindale
Jon Martindale is the Evergreen Coordinator for Computing, overseeing a team of writers addressing all the latest how to…