With features like ray tracing and increasingly complex visuals, graphics cards need a supersampling solution to maintain playable frame rates. Nvidia pioneered the concept with Deep Learning Super Sampling (DLSS), and Intel's upcoming graphics cards will use a similar feature called XeSS.
In short, XeSS uses machine learning to upscale an image from a smaller internal resolution to a higher output resolution. DLSS does the same thing, but it requires a recent Nvidia RTX graphics card with dedicated Tensor cores. XeSS doesn't. Instead, Intel is making two versions available: one that will leverage the dedicated hardware inside its upcoming Arc Alchemist graphics cards, and another that will serve as a general-purpose solution for a wide range of hardware.
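In practice, that two-version approach boils down to a capability check: if the GPU has Intel's dedicated acceleration hardware, XeSS can run its network there; otherwise it falls back to the general-purpose path. The snippet below is a minimal sketch of that idea with made-up names, not Intel's actual SDK.

```cpp
#include <cstdio>

// Made-up names for illustration; Intel hasn't published the real XeSS API.
enum class XessPath {
    ArcAccelerated,   // uses the dedicated ML hardware in Arc Alchemist cards
    GeneralPurpose    // runs on ordinary shader hardware from any vendor
};

// A real implementation would query the driver; here the capability is just a flag.
XessPath SelectPath(bool gpuHasDedicatedMlHardware) {
    return gpuHasDedicatedMlHardware ? XessPath::ArcAccelerated
                                     : XessPath::GeneralPurpose;
}

int main() {
    XessPath path = SelectPath(/*gpuHasDedicatedMlHardware=*/false);
    std::printf("Using the %s path\n",
                path == XessPath::ArcAccelerated ? "Arc-accelerated" : "general-purpose");
    return 0;
}
```

The important difference from DLSS is that the fallback path exists at all; DLSS only ships the Tensor-core path.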
That alone gives XeSS a leg up over DLSS, but it’s not enough. Here are three things XeSS needs to take the supersampling crown from Nvidia and cement Intel’s place in the graphics card market.
Quality and performance
XeSS really doesn’t matter if it can’t hit the performance and quality marks set by DLSS. Intel hasn’t demoed the feature running in any games, only in a 4K demonstration that provided vague performance hints. From that, we have an idea of what XeSS is capable of, but Intel still needs to show more.
Right now, Intel says XeSS can provide up to a 2x performance improvement over native 4K, and that it can upscale 1080p to an effective 4K with virtually no quality loss. That matches DLSS 2.0, but Nvidia has verifiable performance numbers and comparative screenshots in games you can play.
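For a rough sense of why upscaling from 1080p is so attractive: a 4K frame contains four times as many pixels as a 1080p frame, so shading work drops by roughly 4x before the upscale pass claws some of that time back. The numbers below are simple arithmetic, not measured XeSS results.

```cpp
#include <cstdio>

int main() {
    const long long native4K = 3840LL * 2160;      // 8,294,400 pixels shaded per frame
    const long long internal1080p = 1920LL * 1080; // 2,073,600 pixels shaded per frame

    // Shading cost scales roughly with pixel count, so rendering internally at
    // 1080p cuts per-pixel work by about 4x. The upscale pass has its own cost,
    // which is why Intel's claimed end-to-end gain is "up to 2x", not 4x.
    std::printf("Pixel ratio: %.1fx\n",
                static_cast<double>(native4K) / internal1080p);  // prints 4.0x
    return 0;
}
```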
Claiming "virtually no quality loss" doesn't mean much on its own. Nvidia's first DLSS implementation, for example, was abhorrent in terms of quality, smudging out too much detail to justify the performance gains. Intel can't afford to release XeSS in a similar state, so it needs to nail the performance gains while maintaining as much quality as possible.
To be clear, a 2x performance improvement with virtually no quality loss is the minimum that Intel needs to achieve. That’s the bar DLSS set, and Intel has made it clear that XeSS is squarely targeting Nvidia’s tech. In an ideal world, Intel would push XeSS even further.
XeSS also needs multiple quality modes. DLSS offers up to four modes that let you tweak the balance between image quality and performance. Lower-quality modes shrink the internal render resolution further, essentially giving the upscaling algorithm less information to work with.
Quality modes help DLSS work across a wide range of hardware. A flagship GPU might turn to the Quality mode for a performance boost, but the Performance mode is there for low-end options without a lot of power. Intel hasn’t said if XeSS will support multiple quality modes, but it needs to in order to compete with DLSS.
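For reference, DLSS's existing modes give a sense of what a comparable set of XeSS modes might look like. The per-axis render scales below are the commonly cited DLSS 2.0 values; whether Intel mirrors them is an open question.

```cpp
#include <cmath>
#include <cstdio>

struct Mode { const char* name; double renderScale; };

int main() {
    // Commonly cited DLSS 2.0 per-axis render scales; XeSS equivalents are unconfirmed.
    const Mode modes[] = {
        { "Quality",           2.0 / 3.0 },  // ~67% of output resolution per axis
        { "Balanced",          0.58 },
        { "Performance",       0.50 },
        { "Ultra Performance", 1.0 / 3.0 },  // ~33% per axis
    };
    const int outW = 3840, outH = 2160;  // 4K output

    for (const Mode& m : modes) {
        // Lower modes shrink the internal render target further, trading detail for speed.
        std::printf("%-17s -> internal %4ld x %4ld\n", m.name,
                    std::lround(outW * m.renderScale),
                    std::lround(outH * m.renderScale));
    }
    return 0;
}
```

At 4K output, Quality mode renders internally at 2560 x 1440, Performance mode at 1920 x 1080, and Ultra Performance mode at just 1280 x 720, which is why the lowest modes are the ones that rescue weaker GPUs.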
Easy implementation
One of the benefits of AMD's FidelityFX Super Resolution (FSR) is how easy it is to add to games. Following its launch, the developer of Edge of Eternity said it took only "a few hours" to add FSR to the game, contrasting that with the lengthy process DLSS required.
Intel already seems to understand this point. Nvidia only opened DLSS up to all developers after FSR launched; previously, developers had to apply and be approved before adding the feature to their games. Intel, by contrast, is launching XeSS with the software development kit (SDK) freely available from the start, which alone is a big deal.
The open questions are how much work developers need to do to get XeSS up and running in their games, and how long it takes the A.I. model to produce quality results. The easier XeSS is to add to a game, the more games will support it. We've already seen that with FSR, which saw rapid adoption following its launch.
Game support is what matters for the long-term success of XeSS, and game support comes through a simple, easy-to-integrate SDK. That's all the more important for XeSS because Intel is offering two SDKs. If one SDK is hard to implement, good luck getting developers to add two.
As mentioned, XeSS comes in two forms, each of which requires its own SDK. Ideally, developers will be able to port an implementation from one SDK to the other with little extra work, and Intel needs to make adding both easy. If developers are forced to choose between them, that defeats XeSS' main claim to fame: support for a wide range of hardware.
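Intel hasn't published the SDK's interface yet, but temporal upscalers of this kind generally plug in at the same point in the frame: after the main render pass and before post-processing and UI. The sketch below shows the rough shape of that per-frame hand-off; every name in it is hypothetical, not Intel's actual API.

```cpp
// Hypothetical integration sketch: none of these types or functions are Intel's
// real XeSS SDK. They only illustrate the inputs a temporal upscaler typically
// needs from a game engine each frame.
struct GpuTexture;  // placeholder for an engine-side texture handle

struct UpscaleInputs {
    GpuTexture* color;          // frame rendered at the reduced internal resolution
    GpuTexture* motionVectors;  // per-pixel motion, so previous frames can be reprojected
    GpuTexture* depth;          // depth buffer, used to reject stale history samples
    float jitterX, jitterY;     // sub-pixel camera jitter applied this frame
};

struct Upscaler {
    // Consumes this frame's inputs and writes a full-resolution output image.
    void Evaluate(const UpscaleInputs& in, GpuTexture* fullResOutput) {
        (void)in; (void)fullResOutput;  // a real SDK would dispatch the ML pass here
    }
};

void RenderFrame(Upscaler& xess, const UpscaleInputs& inputs, GpuTexture* output) {
    // 1. The engine renders the scene at the internal resolution (not shown).
    // 2. The upscaler reconstructs the full-resolution frame from those inputs.
    xess.Evaluate(inputs, output);
    // 3. Post-processing and UI are then drawn at full resolution (not shown).
}
```

Most of the integration effort in upscalers like this goes into producing accurate motion vectors and camera jitter, which is roughly what DLSS asks for as well. If Intel keeps its required inputs that similar across both of its SDKs, moving between them, or from an existing DLSS integration, should be manageable.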
Ray tracing
Nvidia usually bundles DLSS with ray tracing, and it's easy to see why. Supersampling features like DLSS are great for low-end to midrange hardware that struggles to run new games at high resolutions and stable frame rates. High-end hardware doesn't need the feature as much, especially since most recent AAA games already run well on flagship cards.
Enter ray tracing, the complex and realistic lighting calculation that wants nothing more than to bring your high-end GPU to its knees. Ray tracing alone is too demanding, and supersampling alone doesn’t mean much across all hardware in all games. To hit the most users, you need to bundle both together.
On high-end cards, the choice becomes running the game at native resolution without ray tracing, or turning ray tracing on and using supersampling to win back the lost performance. Intel has confirmed that its upcoming graphics cards will support hardware-accelerated ray tracing, but that support only makes a difference if it arrives alongside XeSS.
Ideally, Intel will go after titles that already support DLSS, as well as upcoming titles that plan to use ray tracing. Regardless, the two features should always arrive together. DLSS combined with ray tracing is greater than the sum of its parts, and that's something AMD hasn't caught on to with FSR. Intel can't afford to make the same mistake.
The multibillion dollar underdog
Intel is a massive company — it generates far more revenue than AMD and Nvidia. In the world of discrete graphics cards, however, it’s starting at zero. Even if Alchemist cards come out and perform better than their competition (preferably at a lower price), Intel has a long road ahead to establish itself against AMD and Nvidia. It will take several years, and that’s assuming everything goes according to plan.
Performance isn’t enough to enter a market that’s been dominated by two brands for decades. XeSS looks like a feature to separate Intel from the competition, offering supersampling that functions a lot like DLSS without requiring proprietary hardware. With resolutions pushing higher and visual glitter like ray tracing becoming more common, it’s the feature that will help Intel stand apart.
Existing isn’t enough, though. Wide adoption, consistently high quality, and smart feature pairing will make the difference for XeSS. And if Nvidia continues to rest on its laurels, Intel has a shot to establish its supersampling feature as the go-to option.