Intel XeSS vs. Nvidia DLSS vs. AMD Super Resolution: supersampling showdown

Dynamic upscaling is a major component of modern games and the latest and greatest graphics cards, but there are different technologies to pick from. Intel’s Xe Super Sampling (XeSS), Nvidia’s Deep Learning Super Sampling (DLSS), and AMD’s FidelityFX Super Resolution (FSR) each take their own approach, and they differ in performance, visual quality, game support, and hardware support.

Although there’s an argument to be made for just turning on whatever your hardware and games support, if you have the choice between them or are considering different graphics cards based on their XeSS, DLSS, and FSR support, it’s important to know the differences between them. Here’s a key breakdown of these supersampling algorithms and which one might be the best fit for you.


Image quality


In general, DLSS leads the pack in image quality thanks to its AI approach, but it’s not the clear leader anymore because of FSR 2.0. The original implementation of FSR was pretty mediocre, but the new 2.0 update puts it on nearly equal footing with DLSS. We really like FSR 2.0 for its hardware support, as it has worked on nearly every GPU made within the past five years.


XeSS is a little different. Unlike DLSS and FSR, there’s not one definitive version. Instead, there’s an Intel Arc exclusive version of XeSS that takes advantage of the XMX cores on Arc GPUs, and also a vendor-agnostic version of XeSS. It may seem like FSR, but that’s not the case. The vendor-agnostic version actually uses AI through DP4a instructions. Recent GPUs are capable of handling these AI calculations, but they don’t have as much throughput as dedicated XMX or Tensor cores. So the DP4a version uses AI, but not to the extent that the full XMX version does.
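To make the DP4a idea concrete, here is a plain-Python model of what such an instruction computes (this is an illustrative sketch of the semantics, not Intel's implementation): a single operation takes two 32-bit words, treats each as four packed signed 8-bit integers, and accumulates their dot product into a 32-bit value. Running many of these in parallel is how recent GPUs handle low-precision AI inference without dedicated matrix cores.

```python
def dp4a(a: int, b: int, acc: int = 0) -> int:
    """Model of a DP4a instruction: dot product of four signed 8-bit
    lanes packed into two 32-bit words, accumulated into `acc`."""
    def lane(word: int, i: int) -> int:
        v = (word >> (8 * i)) & 0xFF
        return v - 256 if v > 127 else v  # reinterpret byte as signed int8
    return acc + sum(lane(a, i) * lane(b, i) for i in range(4))

# Pack (1, 2, 3, 4) and (5, 6, 7, 8) as little-endian byte lanes.
a = 0x04030201
b = 0x08070605
print(dp4a(a, b))  # 1*5 + 2*6 + 3*7 + 4*8 = 70
```

Dedicated XMX or Tensor cores perform whole matrix tiles of this kind of math per clock, which is why the DP4a path has far less AI throughput even though it runs the same class of calculation.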

This DP4a version is quite a bit behind DLSS in terms of image quality, though granted, it’s not a one-for-one comparison. You can see that in Shadow of the Tomb Raider below, where it looks like XeSS is simply running at a lower resolution. FSR 2.0, despite not using AI, is a bit further ahead than the DP4a version of XeSS in terms of image quality.


For detailed images, have a look at our Shadow of the Tomb Raider XeSS performance comparison.

The full XMX version is a bit better, but it still loses some image quality. You can see that in Hitman 3 below, where the leaves wash out at the various quality modes.


For detailed images, have a look at our Hitman 3 XeSS comparison.

Performance

Performance is the other side of the upscaling coin: extra frames aren’t worth much if the image looks terrible, but an upscaler also needs to make a real impact on fps; otherwise, you might as well run at native resolution. It ultimately comes down to how much image quality is worth sacrificing for a higher frame rate, which is why all of these upscalers offer different modes that let you tune quality and performance to your tastes.
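The modes trade quality for speed by changing the internal render resolution: the game renders at a fraction of the output resolution and the upscaler fills in the rest. The scale factors below are DLSS's commonly cited ratios (Quality ≈ 0.667, Balanced ≈ 0.58, Performance = 0.5, Ultra Performance ≈ 0.333); XeSS and FSR 2.0 use broadly similar ratios, so treat this as an illustrative sketch rather than exact vendor numbers.

```python
# Internal render resolution for each quality mode at a 4K output.
# Scale factors are DLSS's commonly cited ratios; other upscalers are similar.
MODES = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 0.333,
}

def render_resolution(out_w: int, out_h: int, scale: float) -> tuple[int, int]:
    """Resolution the game actually renders at before upscaling."""
    return int(out_w * scale), int(out_h * scale)

for mode, scale in MODES.items():
    w, h = render_resolution(3840, 2160, scale)
    print(f"{mode:<17} renders at {w} x {h}, upscaled to 3840 x 2160")
```

Fewer rendered pixels means higher frame rates, which is why Performance and Ultra Performance modes post the biggest fps gains and the biggest image-quality compromises.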

In our Intel Arc A770 and A750 review, we tested the RTX 3060 in Shadow of the Tomb Raider and Hitman 3 using all available quality modes for XeSS and DLSS, and the results are pretty conclusive.


In Shadow of the Tomb Raider, XeSS was able to improve performance by up to 43% by utilizing the Performance mode, but DLSS was able to get a 67% frame rate increase with its own Performance mode. In Ultra Performance mode, DLSS was able to double the frame rate, a much larger improvement than XeSS was able to deliver.
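The percentages above are simple frame-rate ratios. With hypothetical fps figures (not the actual review data), the arithmetic looks like this:

```python
def uplift_percent(native_fps: float, upscaled_fps: float) -> float:
    """Percent frame-rate increase of an upscaled mode over native rendering."""
    return (upscaled_fps / native_fps - 1.0) * 100.0

# Hypothetical example: a game running at 60 fps natively.
print(uplift_percent(60, 86))   # ~43%, an XeSS Performance-style gain
print(uplift_percent(60, 100))  # ~67%, a DLSS Performance-style gain
print(uplift_percent(60, 120))  # 100%, a doubled frame rate
```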


It’s a similar story in Hitman 3. The margins here are basically the same as in Tomb Raider except for DLSS’s Ultra Performance mode, which couldn’t double the frame rate. Even without Ultra Performance, though, DLSS is still the clear winner when it comes to performance.


Surprisingly, XeSS with a proper Intel Arc GPU doesn’t scale as high as the DP4a version in terms of performance. Our testing showed about a 31% increase in Hitman 3 with the Balanced mode with an Intel Arc A750. With the RTX 3060 and the DP4a version, we saw about a 35% increase in this mode.

We should also note that DLSS stands to get even faster with the upcoming 3.0 version, which brings AI-generated frames into play. Nvidia promises big performance gains with DLSS 3, but game support will be limited for a while, and image quality took a noticeable hit in our testing with the RTX 4090. DLSS 3 isn’t an existential threat to XeSS at the moment, but it’s not great for Intel to be lacking a feature that might be more useful in the future.


As for FSR 2.0, it’s usually about on par with DLSS in performance, so while we haven’t tested it directly against XeSS, we’d likely see FSR ahead and XeSS slightly behind, just as with DLSS versus XeSS. FSR doesn’t have AI-generated frames like DLSS 3, however, and it’s not clear how AMD will bridge this gap in the future since its GPUs have no AI hardware, at least for now.

Still, FSR 2.0 was good enough at launch that we started to consider whether DLSS was even necessary anymore. DLSS 3 might change that if you can afford an RTX 4000 series graphics card, but considering most can’t, that may leave FSR as the upscaling king long-term.

Game support

DLSS is the oldest of the three upscaling technologies, and unsurprisingly, it supports the most games. It’s available in dozens of titles, including Cyberpunk 2077, Marvel’s Avengers, and Outriders, and Nvidia is constantly adding support for new games. It is tiered, however, with the greatest number of games supporting DLSS 1 and 2, with DLSS 3 support still limited for now.


FSR is much newer, but that hasn’t held it back from growing an impressive list of supported titles. At the time of publication, the heavy hitters are God of War, Deathloop, and Red Dead Redemption II. FSR 2.0 support is also planned for Hitman 3, Microsoft Flight Simulator, and upcoming games like Forspoken and the PC port of Uncharted.

Generally speaking, if a game has FSR, it’ll have DLSS, and vice versa, though older titles that came out before FSR will often only have DLSS. It seems like we’ll see a similar trend with XeSS, as several games that have XeSS or will support it in the near future also have DLSS and FSR 2.0. For example, both of the games we tested for image quality (Shadow of the Tomb Raider and Hitman 3) support both DLSS and XeSS.

Hardware support

The biggest difference between DLSS, FSR, and XeSS is hardware support — and it may be the difference that defines which is the best upscaling option. DLSS requires an Nvidia RTX graphics card. Not only is the feature limited to Nvidia hardware, but it’s also limited to the latest generations of Nvidia hardware: specifically, you need at least an RTX 2000 card to use DLSS at all and an RTX 4000 to use DLSS 3.


That’s because DLSS requires the Tensor cores on recent Nvidia graphics cards, which handle the AI calculations. FSR doesn’t use AI, so it doesn’t require any particular hardware. The strength of FSR isn’t that a lot of games support it or that it has better image quality compared to DLSS, because it has neither of those; it’s that anyone can use it.

Outside of graphics cards from AMD and Nvidia, FSR also works on integrated graphics, APUs, and graphics cards that are older than a couple of generations. There’s a quality trade-off, but most gamers don’t have a recent Nvidia graphics card. The majority of people are still using older GPUs, an AMD card, or integrated graphics.

XeSS strikes a nice compromise between the two. Like DLSS, XeSS uses dedicated cores — called XMX cores on Intel graphics cards — to handle the AI calculations. XeSS requires these cores to work, so the full version of XeSS will only work on Intel graphics cards. But Intel is making two versions.

This is something we wanted to see out of DLSS. Essentially, Intel is offering developers two different versions of XeSS: one that requires the dedicated XMX cores and another that’s a general-purpose solution for a “wide range of hardware.” The second version uses AI, but it instead handles the calculations with DP4a instructions, which are commonly supported on recent GPUs. This version uses a simpler upscaling model, but it still allows XeSS to work on a wide range of hardware.
