
AMD Radeon R9 Fury X review

AMD's R9 Fury X graphics card swings hard, can't land a knockout punch

AMD Radeon R9 Fury X
MSRP $649.00
“The Fury X is the most impressive video card we’ve seen in years, but it doesn’t deliver the knock-out blow AMD needs.”
  • Compact, attractive
  • Stock liquid cooling
  • Competitive performance-per-watt
  • Liquid cooler’s radiator is unwieldy
  • Doesn’t beat the GTX 980 Ti

The year was 2007, and AMD’s back was against the wall. Its Athlon architecture, which challenged Intel at the turn of the century, was aging. It needed a savior, so it went big, designing an all-new processor called Phenom.

It was an impressive piece of engineering – but it failed to keep up with Intel’s latest, and early models contained a rare but nasty bug. Phenom’s failure to restore the company’s fortunes was a turning point, and AMD processors haven’t been able to go toe-to-toe with Intel’s best ever since.

Now, AMD finds itself at another turning point. The company’s schedule of graphics architectures has fallen behind its chief competitor, Nvidia. AMD’s answer is yet another dramatic all-new design: the Radeon R9 Fury X, the first video card to use High Bandwidth Memory.

Will it make up the widening gap? Or is HBM, like the Phenom, more brilliant in theory than in practice? The future of AMD may ride on this card. So, you know – no pressure!

It’s all about HBM

The Fury line’s spotlight on memory is unusual, as it’s rarely the focus of a new video card release. Aside from the amount of memory (in this case, 4GB), and the width of the memory interface used, there’s typically not much comment on RAM. That’s not to say it’s unimportant – but it is typically a known quantity.

Here, AMD has flipped the script. The GPU architecture, known as Fiji, is technically new, but it’s really a variation of the Hawaii chip in AMD’s Radeon R9 290X, which itself is a variation of its predecessors. Fiji’s 4,096 Stream Processors are arranged into 64 Compute Units and four primary Shader Engines. Compute Unit count per Shader Engine has risen to 16, the most seen so far, and the overall Stream Processor count is well ahead of anything AMD has produced before. The result is a quoted compute performance of 8.6 teraflops, far ahead of anything else on the market – the Nvidia Titan X quotes a peak compute power of 6.14 TFLOPS.
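Those quoted figures fall straight out of simple arithmetic: peak single-precision throughput is the Stream Processor count, times two FLOPs per clock (one fused multiply-add), times the clock speed. Here’s a quick sketch – the clock speeds are assumptions drawn from each card’s published specifications, not anything measured for this review:

```python
# Peak compute = shader count x 2 FLOPs per clock (fused multiply-add) x clock.
def peak_tflops(stream_processors: int, clock_ghz: float) -> float:
    """Theoretical single-precision throughput in teraflops."""
    return stream_processors * 2 * clock_ghz / 1000.0

# Assumed clocks: 1,050MHz for the Fury X, 1,000MHz base for the Titan X.
fury_x = peak_tflops(4096, 1.05)
titan_x = peak_tflops(3072, 1.00)

print(f"Fury X:  {fury_x:.2f} TFLOPS")   # ~8.60, matching AMD's quoted figure
print(f"Titan X: {titan_x:.2f} TFLOPS")  # ~6.14, matching Nvidia's quote
```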

But the basics are the same as any other current Radeon. The Fury X’s chip still fits in the Graphics Core Next family, and is still built on the tried-and-true 28 nanometer production process.

AMD Radeon Fury X
Bill Roberson/Digital Trends

The real news, then, is the memory. We’ve already written on the matter extensively, as AMD leaked out details of High Bandwidth Memory well in advance, but here’s the summary of what you need to know. HBM stacks memory chips vertically and places them very close to the GPU itself. This is a more efficient use of both space and power. AMD says that, relative to a GDDR5 card, memory bandwidth per watt increases up to three and a half times, and overall bandwidth per chip improves almost four-fold.

Certainly, the on-paper results are impressive. The Fury X quotes overall memory bandwidth of 512 gigabytes per second, again well ahead of a GeForce GTX Titan X, which quotes 336 gigabytes per second (the GTX 980 Ti’s bandwidth is also 336GB/s). The real question is whether that bandwidth improvement – and the drastic increase in Stream Processor count – will be enough to put the card in contention with Nvidia’s latest.
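Those bandwidth figures are simple arithmetic as well: bus width in bits, times the per-pin data rate, divided by eight to convert bits to bytes. A quick sketch – the per-pin rates below are assumptions based on first-generation HBM’s 1Gbps signaling and the 7Gbps GDDR5 Nvidia used at the time, not figures from this review:

```python
# Bandwidth = bus width (bits) x per-pin rate (Gbit/s) / 8 bits per byte.
def bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Theoretical memory bandwidth in gigabytes per second."""
    return bus_width_bits * pin_rate_gbps / 8

hbm = bandwidth_gbs(4096, 1.0)   # Fury X: very wide bus, slow per-pin rate
gddr5 = bandwidth_gbs(384, 7.0)  # Titan X / 980 Ti: narrow bus, fast pins

print(f"HBM:   {hbm:.0f} GB/s")    # 512
print(f"GDDR5: {gddr5:.0f} GB/s")  # 336
```

The design trade-off is clear from the numbers: HBM wins by going wide and slow, which is also where its power savings come from.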

The Fury X itself, and pricing

While the Fury X’s specification sheet suggests it competes with the GTX Titan X, on price it actually lines up with the GTX 980 Ti. Nvidia’s second-quickest video card carries an MSRP of $649, and the Fury X mimics it exactly. This makes comparisons between the two rather simple. There’s no need to handicap either because of pricing.

At a glance, that works to AMD’s advantage. For the first time in years the company’s build quality and design exceed the green team’s – and the gap isn’t small. The GTX 980 Ti is a gorgeous card, but it’s also large and air-cooled. You need a lot of horizontal space and a case with solid airflow.

The Fury X comes in at just under eight inches long – about 2.5 inches shorter than not only the Nvidia GTX 980 Ti, but also the GTX 980, 970 and 960 (those three share the same standard PCB length). AMD also ships its card with a liquid cooler. The GTX 980 Ti requires one 8-pin and one 6-pin power connector, while the Fury X needs two 8-pin connections.

It’s not all sunshine and roses, though, because the card’s smaller size and larger cooler cancel each other out. Yes, the Fury X itself is smaller than the GTX 980 Ti, but when the cooler’s mass is included, it’s a bit larger. It also demands a 120mm fan mount dedicated to it, and builders must find a way to route the flexible tubes that carry liquid to and from the radiator. Taken as a whole, the Fury X is arguably more difficult to install in a small case than the GTX 980 Ti – though the particulars will depend on the enclosure you use.


Clearly, the Fury X is quite a bit different from anything AMD has tried in recent years. Its new form of memory results in a card that’s smaller, and more refined. Yet design is only part of the equation. Nvidia has gained the upper hand in recent years because of performance and efficiency, so let’s see if the Fury can turn the tide.

Here are the cards we’re using for comparison. Not every card is used in every comparison. The focus is on how the Fury X stacks up against the GTX 980 Ti, so it’s in every match-up.

All of the testing was performed in our Falcon Northwest Talon test rig. It’s equipped with a Core i7-4770K processor (not over-clocked), 16GB of RAM and two 240GB solid state drives. We used the latest drivers for testing in all cases.

Let’s get to it.


I’ll start with 3DMark, our only synthetic test. While much can be argued about how accurate any synthetic test can be, I’ve found this benchmark to be a wonderfully accurate indicator of general performance. So, what does the demanding Fire Strike loop have to report?

There you have it. Sorry if it’s anti-climactic, but here, with one result, we can more or less gauge if the Fury X is up to beating the GTX 980 Ti. And it isn’t. Instead, it’s basically a tie.

I say that because the particular GTX 980 Ti we have available for this review is slightly overclocked. The base clock is 102MHz quicker, and the boost clock 114MHz quicker, than a standard GTX 980 Ti. That’s not enough to create a major gap, and accordingly the card retails for just $20 more than a standard 980 Ti. But it might be enough to create the tiny, five-percent difference between it and the Fury X.

This result is simultaneously impressive and disappointing. It’s nice to see a compact AMD card going head-to-head with Nvidia and holding its own, but after a couple years of disappointing releases, the red team needed to slug one out of the park.

Battlefield 4

DICE’s well-known and often controversial first-person shooter is no longer among the most demanding games available, but it’s still not easy to play at high resolution with all the details turned on. Let’s start with the game’s performance at 1080p and the Ultra detail preset. Note this is DirectX performance – Mantle unfortunately throws our framerate recording method for a loop, so it cannot be used here.

As you can see, the Fury X comes up a fair bit behind in this scenario. It runs at a whopping 30 percent deficit to the GTX 980 Ti. On the other hand, though it loses, the card still easily produced a framerate almost twice the preferred minimum of 60 frames per second. AMD has repeatedly stated the Fury X should prove most competitive at 4K, so let’s see if that’s true.

AMD wasn’t talking nonsense. At 4K the Fury X makes a huge leap forward. It’s still behind the GTX 980 Ti, but only by a handful of frames per second. We should also note it delivered extremely similar minimum framerates – the GTX 980 Ti hit a minimum of 39, and the Fury X bottomed out at 35. While neither card managed the ideal 60 FPS, gameplay was enjoyable on both cards.

Shadow of Mordor

This award-winning title is a great technical benchmark because it’s among a new wave of cross-platform games designed with consoles in mind that are still capable of putting PCs to task when all the settings are kicked up to maximum. Again, we’ll start at 1080p.

In this comparison, we see a less extreme repetition of Battlefield 4. The Fury X is behind the GTX 980 Ti, but it’s not as far behind as before. All three of these high-end cards have no problem delivering an experience that well exceeds a 60 FPS average, even with the notoriously difficult Ultra texture pack installed.

Let’s move on to 4K.

Again, the Fury X is more competitive at 4K than it is at 1080p, which is important, because only the 4K result really matters. None of these three, not even the mighty Titan X, manage the ideal 60 FPS, but they all come quite close.

The story doesn’t end here, unfortunately. While the Fury X’s average is similar, it hits a minimum framerate of 29, while the GTX 980 Ti goes no lower than 40 FPS. That’s a significant difference, and it was noticeable in game, as the AMD card was prone to occasional fluctuations in framerate.

“I also noticed a strange artifact with the Fury X. Horizontal bands of static would occasionally appear, blinking for a fraction of a second. The problem appeared sporadically, and seemed to occur more often with higher framerates. AMD’s representatives informed me the issue is unusual, and I’ve been working with them towards a solution. This review will be updated if the problem is resolved.”

Grand Theft Auto V

Our latest addition to the benchmark suite, this game requires little introduction. You’re more likely to have played this than any other game we’ve benchmarked except, perhaps, League of Legends. The PC version is a surprisingly well-done port that looks beautiful with the details ramped up, but it’s also extremely demanding.

I didn’t have the chance to run the Titan X through this game, so the GTX 980 Ti and Fury X go heads-up in this match, starting with 1080p.

This game has no presets. For our “ultra” calibration we turned FXAA on, but left MSAA off. We used the highest level of detail available for every other setting.

If the performance story of these cards wasn’t clear already, Grand Theft Auto V fills out the missing details. Yet again, AMD’s latest loses at 1080p resolution – but yet again, the framerate is far higher than required to provide an enjoyable experience. Now it’s time for what really matters: 4K.

Once again, the Fury X closes the gap as the resolution sky-rockets, though it’s not quite enough to catch the GTX 980 Ti. The six-frame gap represents a difference of about 12 percent, which is more substantial than the 3DMark results would lead you to think.

There is an upside, though. At 4K the Fury X had a minimum framerate of 23 FPS, while the GTX 980 Ti went as low as 16. Unlike Shadow of Mordor, where the Nvidia card proved more reliable, Grand Theft Auto V seems to prefer the Fury X. The difference was occasionally apparent in-game, as the GTX 980 Ti suffered more noticeable framerate dips in intense scenes.

Heat is a close call

Heat and power draw were major issues for AMD in the past, and arguably the primary cause of its woes. Less efficient chips mean the cards holding them need more power and bigger, noisier coolers. While the red team has managed to offer value by slashing prices and introducing ever more monstrous cards, many gamers have steered toward Nvidia for quieter, cooler high-performance desktops.

Obviously, High Bandwidth Memory is a play to fix that flaw, but whether it’d be successful wasn’t obvious at a glance. Yes, the memory uses less power, but the GPU itself hasn’t drastically changed. So the question was this – would the benefits of HBM offset AMD’s less efficient GPU?

The answer, it seems, is yes. Take a look for yourself.

AMD’s new Fury X and Nvidia’s GTX 980 Ti performed similarly across the board. The biggest difference is a mere 15 watts while playing Shadow of Mordor. Both cards also exhibited similar, and very quiet, fan operation. In fact, total system noise from our test rig remained the same no matter the load each card faced. What this really means is that video card noise was not a significant contributor to overall system noise, which is quite remarkable.

Internal operating temperatures were different, however. The GTX 980 Ti ran at 54 degrees Celsius at idle, and up to 76C at load. AMD’s liquid-cooled card, though, had an idle temperature of only 35C and a maximum load temperature of 60C. Those figures give a clear edge to the Fury’s cooling configuration, and suggest more over-clocking headroom.


So, here’s the moment of truth. Should you buy the Radeon R9 Fury X instead of the GTX 980 Ti?

Probably not.

The Fury X is an impressive card. Were it alone in its bracket, or priced slightly differently, it’d be easy to recommend. At $649, though, it doesn’t quite unseat the GTX 980 Ti. Nvidia’s card is a bit quicker across the board, has more RAM (which doesn’t seem to matter now, but perhaps it will, someday), and is easier to install.

If you think my reasoning is flimsy, you’d be right. In truth the difference between the Fury X and its green-team opposition is small. You could buy either card and be happy with the results, even if you own a 4K monitor. Yet there’s reason to buy only one, and the GTX 980 Ti’s slight advantages are enough to give it the nod. The Fury X is a technological tour-de-force, but it doesn’t deliver the clear-cut victory AMD needed.



Matthew S. Smith
Matthew S. Smith is the former Lead Editor, Reviews at Digital Trends. He previously guided the Products Team, which dives…