
3 things Intel XeSS needs to nail to beat Nvidia DLSS

With features like ray tracing and increasingly complex visuals, graphics cards need a supersampling solution to maintain playable frame rates. Nvidia pioneered this concept with Deep Learning Super Sampling (DLSS), and Intel's upcoming graphics cards will use a similar feature called XeSS.

In short, XeSS uses machine learning to upscale an image from a smaller internal resolution to a larger external one. DLSS does the same thing, but it requires a recent Nvidia RTX graphics card with dedicated Tensor cores. XeSS doesn’t. Instead, Intel is making two versions available — one that will leverage the dedicated hardware inside its upcoming Arc Alchemist graphics cards, and another that will serve as a general-purpose solution for a wide range of hardware.
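To make that two-path setup concrete, here's a minimal, purely hypothetical sketch of how a game might choose between the versions at startup. Intel hasn't published the XeSS API, so the names below are invented for illustration.

```python
# Hypothetical sketch only: these names are invented to illustrate the
# two-path idea above, not taken from Intel's (unreleased) XeSS SDK.

def pick_xess_path(gpu_has_dedicated_hardware: bool) -> str:
    """Return which XeSS version a game would initialize for the current GPU."""
    if gpu_has_dedicated_hardware:
        # Arc Alchemist cards include dedicated matrix hardware that can run
        # the upscaling model faster.
        return "arc-accelerated"
    # Everything else falls back to the general-purpose version that runs on
    # standard shader hardware.
    return "general-purpose"


print(pick_xess_path(True))   # Arc Alchemist GPU -> arc-accelerated
print(pick_xess_path(False))  # other GPUs        -> general-purpose
```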

That alone gives XeSS a leg up over DLSS, but it’s not enough. Here are three things XeSS needs to take the supersampling crown from Nvidia and cement Intel’s place in the graphics card market.

Quality and performance

XeSS really doesn’t matter if it can’t hit the performance and quality marks set by DLSS. Intel hasn’t demoed the feature running in any games, only in a 4K demonstration that provided vague performance hints. From that, we have an idea of what XeSS is capable of, but Intel still needs to show more.

Intel XeSS quality comparison.

Right now, Intel says XeSS can provide up to a 2x performance improvement over native 4K, and that it can upscale 1080p to an effective 4K with virtually no quality loss. That matches DLSS 2.0, but Nvidia has verifiable performance numbers and comparative screenshots in games you can play.
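To put that claim in perspective, here's the back-of-the-envelope math: rendering at 1080p means shading roughly a quarter of the pixels of native 4K, which is where the headroom for Intel's quoted 2x gain comes from. The realized speedup is lower than 4x because the upscaling pass itself takes GPU time and not all rendering work scales with resolution.

```python
# Pixel counts behind the 1080p-to-4K claim above.
native_4k = 3840 * 2160        # 8,294,400 pixels
internal_1080p = 1920 * 1080   # 2,073,600 pixels

print(native_4k / internal_1080p)  # 4.0 -- the theoretical shading saving per frame
```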

A claim of virtually no quality loss doesn't mean much on its own. Nvidia's first DLSS implementation, for example, was abhorrent in terms of quality, smudging out too much detail to justify any performance gains. Intel can't afford to release XeSS in a similar state, so it needs to nail the performance gains while maintaining as much quality as possible.

To be clear, a 2x performance improvement with virtually no quality loss is the minimum that Intel needs to achieve. That’s the bar DLSS set, and Intel has made it clear that XeSS is squarely targeting Nvidia’s tech. In an ideal world, Intel would push XeSS even further.

XeSS also needs multiple quality modes. DLSS comes with up to four modes that let you tweak the balance between image quality and performance. Each step down in quality shrinks the internal render resolution, giving the upscaling algorithm less information to work with in exchange for higher frame rates.
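For reference, the scale factors Nvidia publishes for DLSS give a sense of what such modes look like at a 4K output. Intel hasn't announced equivalent figures for XeSS, so treat the numbers below as illustrative only.

```python
# Internal render resolutions for a 4K output using DLSS-style quality modes.
# The scale factors mirror Nvidia's published DLSS values; XeSS's own modes,
# if any, are not yet known.
OUTPUT_W, OUTPUT_H = 3840, 2160

SCALE_FACTORS = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

for mode, scale in SCALE_FACTORS.items():
    w, h = round(OUTPUT_W * scale), round(OUTPUT_H * scale)
    print(f"{mode}: renders {w}x{h}, outputs {OUTPUT_W}x{OUTPUT_H}")
```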

The rendering pipeline for Intel XeSS.

Quality modes help DLSS work across a wide range of hardware. A flagship GPU might turn to the Quality mode for a performance boost, but the Performance mode is there for low-end options without a lot of power. Intel hasn’t said if XeSS will support multiple quality modes, but it needs to in order to compete with DLSS.

Easy implementation

One of the benefits of AMD's FidelityFX Super Resolution (FSR) is how easy it is to add to games. Following its launch, the developer of Edge of Eternity said it took only "a few hours" to add to the game, contrasting that with the lengthy process DLSS required.

AMD FidelityFX Super Resolution.

Intel seems to understand this point already. Following the launch of FSR, Nvidia decided to make DLSS available to all developers; previously, developers had to apply and be approved before adding the feature to their games. Intel is launching XeSS with the software development kit (SDK) freely available, which alone is a big deal.

The question is how long the A.I. model takes to produce quality results, and what developers need to do to get XeSS up and running in their games. The easier XeSS is to add into games, the more games will support it. We’ve already seen that with FSR, which experienced rapid adoption following its launch.

Game support is what matters for the long-term success of XeSS, and game support comes through a simple, easy-to-add SDK. This is all the more important for XeSS because Intel is offering two SDKs. If one is hard enough to implement, good luck getting developers to add two.

Ideally, developers will be able to port an implementation from one SDK to the other. As mentioned, XeSS comes in two forms, each of which requires its own SDK. Intel needs to make it easy for developers to add both. If developers are forced to choose, that defeats XeSS's main claim to fame: support for a wide range of hardware.
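One way developers could keep that burden low, sketched below under the assumption that both versions consume the same per-frame inputs, is to hide them behind a single interface inside the engine. Every class and method name here is invented for illustration; none of it is Intel's actual SDK.

```python
# Hypothetical engine-side abstraction over the two XeSS versions. All names
# are invented for illustration and do not come from Intel's SDK.
from abc import ABC, abstractmethod


class Upscaler(ABC):
    @abstractmethod
    def upscale(self, color, motion_vectors, output_res):
        """Reconstruct an output-resolution frame from internal-resolution inputs."""


class XessArcUpscaler(Upscaler):
    def upscale(self, color, motion_vectors, output_res):
        # Would call the Arc-specific, hardware-accelerated SDK here.
        return color


class XessGeneralUpscaler(Upscaler):
    def upscale(self, color, motion_vectors, output_res):
        # Would call the general-purpose SDK here.
        return color


def create_upscaler(gpu_has_dedicated_hardware: bool) -> Upscaler:
    # The rest of the engine only talks to the Upscaler interface, so supporting
    # the second SDK means adding one more backend, not a second integration.
    return XessArcUpscaler() if gpu_has_dedicated_hardware else XessGeneralUpscaler()
```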

Ray tracing

Godfall screenshot with simulated FSR effect.

Nvidia usually bundles DLSS with ray tracing, and it’s easy to see why. Supersampling features like DLSS are absolutely great for low-end to midrange hardware that struggles to run new games at high resolutions and stable frame rates. High-end hardware doesn’t need the feature as much, especially since a lot of recent AAA games run great on recent flagship cards.

Enter ray tracing, the complex and realistic lighting calculation that wants nothing more than to bring your high-end GPU to its knees. Ray tracing alone is too demanding, and supersampling alone doesn’t mean much across all hardware in all games. To hit the most users, you need to bundle both together.

With high-end cards, the choice becomes either running the game at native resolution or turning on ray tracing with supersampling enabled. Intel has confirmed that its upcoming graphics cards will support hardware-accelerated ray tracing. However, that only makes a difference if it arrives alongside XeSS.

Ideally, Intel will go after titles that already support DLSS, as well as upcoming titles that plan to use ray tracing. Regardless, the two features should always arrive together. DLSS combined with ray tracing is greater than the sum of its parts, and that's something AMD hasn't caught on to with FSR. Intel can't afford to make the same mistake.

The multibillion-dollar underdog

LEDs forming a graphics card.

Intel is a massive company — it generates far more revenue than AMD and Nvidia. In the world of discrete graphics cards, however, it’s starting at zero. Even if Alchemist cards come out and perform better than their competition (preferably at a lower price), Intel has a long road ahead to establish itself against AMD and Nvidia. It will take several years, and that’s assuming everything goes according to plan.

Performance isn’t enough to enter a market that’s been dominated by two brands for decades. XeSS looks like a feature to separate Intel from the competition, offering supersampling that functions a lot like DLSS without requiring proprietary hardware. With resolutions pushing higher and visual glitter like ray tracing becoming more common, it’s the feature that will help Intel stand apart.

Existing isn’t enough, though. Wide adoption, consistently high quality, and smart feature pairing will make the difference for XeSS. And if Nvidia continues to rest on its laurels, Intel has a shot to establish its supersampling feature as the go-to option.
