
Intel Xe-HPG: Everything you need to know about Intel’s first gaming GPU

With Xe, Intel has numerous integrated and discrete graphics options, but none explicitly designed for PC gaming. Soon, however, Intel will launch its first discrete gaming GPU, code-named Xe-HPG. With a new DG2 graphics architecture, Intel’s graphics cards could finally have the power to appeal to enthusiast gamers.

While details about the card are still scarce, Intel’s gaming-focused GPU represents another premium option in the graphics card market, which, if successful, could help alleviate some of the shortages we’ve been seeing in 2021.


Here’s everything we know about Intel’s graphics journey to court PC gamers.

Pricing and availability


During its earnings call in October 2020, Intel provided some guidance on its graphics endeavors. At the time, the chipmaker revealed that it had begun volume shipment of its DG1 silicon and started work on its DG2 GPU, which will power the high-performance gaming graphics card.

“Our first discrete GPU DG1 is shipping now and will be in systems for multiple OEMs later in [the fourth quarter]. We also powered on our next-generation GPU for client DG2,” Former Intel CEO Bob Swan said during the call with investors and analysts, according to a transcript provided by Seeking Alpha. “Based on our Xe high-performance gaming architecture, this product will take our discrete graphics capability up the stack into the enthusiast segment.”

At the time of the call, the DG2 architecture was said to still be in alpha form. Intel is expected to ship its DG2, and by extension its Xe-HPG graphics card, sometime this year, though an exact date was not given. Given that the GPU industry is currently being ravaged by shortages and extended shipping delays, an early 2022 launch could also be possible, according to Reuters.

We expect Intel to price its gaming GPU in the $300 to $600 range to make it competitive against offerings from AMD and Nvidia, though with performance details about the Xe-HPG card still unknown, any pricing prediction is difficult. For reference, Nvidia’s RTX 3000 series starts at $329 for the entry-level GeForce RTX 3060, while the flagship GeForce RTX 3080 is priced at $699. At the ultra-premium end of the spectrum, the GeForce RTX 3090, which replaced the previous generation’s Titan RTX, is priced at $1,499.

Architecture and performance


Intel’s new DG2 architecture will not only succeed last year’s DG1 but also bring new capabilities and enhancements to make it appealing to enthusiast gamers. Features that Intel has confirmed for the new Xe-HPG card include hardware-based ray tracing and mesh shading. Ray tracing will make the HPG GPU competitive against Nvidia’s GeForce RTX 3000 series and AMD’s latest Radeon RX 6000 series, while mesh shading means the card will support Microsoft’s DirectX 12 Ultimate framework.

Early rumors suggest that the Xe-HPG GPU will come with 512 execution units, up from a paltry 80 EUs on the DG1. If the rumors are accurate, that works out to 4,096 cores. And like AMD’s cards and Nvidia’s midrange offerings, the DG2 will use GDDR6 memory, which means GDDR6X will remain exclusive to Nvidia’s flagship cards for some time longer.
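The rumored core count follows from how Intel counts shader hardware: each Xe execution unit contains eight ALUs, which is how 512 EUs translate into the 4,096-core figure. A quick sketch of that arithmetic:

```python
# Rumored shader-core math for Intel's DG2 versus DG1.
# Each Xe execution unit (EU) packs 8 ALUs, so the 512-EU rumor
# translates into a 4,096-core figure.
ALUS_PER_EU = 8

def shader_cores(execution_units: int) -> int:
    """Total ALU ('core') count for a given EU count."""
    return execution_units * ALUS_PER_EU

dg1_cores = shader_cores(80)    # DG1 ships with 80 EUs
dg2_cores = shader_cores(512)   # DG2 (rumored) with 512 EUs
print(dg1_cores, dg2_cores)     # 640 4096
```

By this count, the rumored DG2 would have more than six times the shader hardware of the DG1.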

With DG2, Intel may be moving to a 7nm process, which should make it competitive against the Radeon RX 6000 cards from AMD, and the company will likely rely on an outside foundry for manufacturing. According to Reuters, Taiwan Semiconductor Manufacturing Company has been tapped, and DG2 will be based on an “enhanced version of its 7-nanometer process,” the publication reported, citing two unnamed people familiar with Intel’s plans.

This would make the DG2 more advanced than the 8nm Samsung node used by rival Nvidia for its RTX 3000 series.

Most recently, Raja Koduri, senior vice president of Intel’s architecture, software and graphics division and a graphics veteran who worked at AMD, teased that DG2 will have strong performance when it comes to mesh shading.

“Xe HPG mesh shading in action, with the UL 3DMark Mesh Shader Feature test that is coming out soon,” Koduri tweeted alongside a fully rendered game image with complicated details.

The mesh shaders that Koduri referenced are part of Microsoft’s DirectX 12 Ultimate APIs, and they should help systems render complex scenes more efficiently by making geometry processing act more like compute shaders, according to Microsoft’s description of the technology. In practice, that means gamers should be able to experience more detailed and dynamic worlds with less of a performance penalty.
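The compute-shader-like model works because mesh shaders consume geometry in small fixed-size clusters, often called meshlets, with each GPU workgroup processing one cluster in parallel. A minimal sketch of how a triangle list might be split into such clusters (the size limit and names here are illustrative, not taken from Intel’s or Microsoft’s implementations):

```python
# Illustrative sketch: splitting a triangle index list into fixed-size
# "meshlets", the small geometry clusters a mesh shader workgroup
# processes in parallel. The limit below is arbitrary for the example;
# real implementations cap both vertices and triangles per meshlet.
MAX_TRIS_PER_MESHLET = 4

def build_meshlets(triangles):
    """Group triangles into meshlets of at most MAX_TRIS_PER_MESHLET each."""
    return [
        triangles[i:i + MAX_TRIS_PER_MESHLET]
        for i in range(0, len(triangles), MAX_TRIS_PER_MESHLET)
    ]

# 10 triangles split into meshlets of 4, 4, and 2 triangles.
tris = [(3 * t, 3 * t + 1, 3 * t + 2) for t in range(10)]
print([len(m) for m in build_meshlets(tris)])  # [4, 4, 2]
```

Because each meshlet is independent, the GPU can cull or shade whole clusters at once instead of pushing every vertex through a fixed pipeline.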

Koduri didn’t reveal the 3DMark benchmark results, so it’s still too early to say how Xe-HPG’s performance stacks up against competing GPUs.

Ray tracing onboard


Intel previously detailed that Xe-HPG will come with native support for real-time ray tracing, but it’s unclear how the company will implement this feature. Nvidia uses a feature called DLSS to improve speeds and performance when ray tracing is enabled in a game. DLSS uses a deep-learning network trained on game imagery, allowing the GPU to conserve resources by rendering internally at a lower resolution, such as 1080p, before upscaling the image to 4K on your monitor with minimal visible loss in fidelity.
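The resource savings from this kind of upscaling come down to simple pixel arithmetic: rendering at 1080p and upscaling to 4K means the GPU shades only a quarter of the final pixels, freeing headroom for ray tracing. A quick sketch of that math:

```python
# Rough arithmetic behind DLSS-style upscaling: rendering internally at
# 1080p and outputting at 4K means the GPU shades only a fraction of
# the displayed pixels each frame.
def pixels(width: int, height: int) -> int:
    """Total pixel count for a given resolution."""
    return width * height

render_px = pixels(1920, 1080)   # internal render resolution (1080p)
target_px = pixels(3840, 2160)   # output resolution (4K)
print(target_px / render_px)     # 4.0 -- the GPU shades 1/4 of the pixels
```

That 4x reduction in shaded pixels is where an upscaler recovers the frame rate that ray tracing would otherwise consume.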

With the second-generation ray tracing cores on its RTX 3000 series cards, Nvidia improved ray tracing performance even further.


Reviewers noted that although the AMD Radeon RX 6000 cards were strong competitors, their ray tracing performance was nowhere near as good as Nvidia’s competing offerings, in part because AMD lacked its own implementation of DLSS. Given Intel’s recent bets on artificial intelligence — baked-in A.I. processing was a key part of Intel’s most recent generations of processors and integrated graphics — we can infer that A.I. could be implemented on Xe-HPG in some meaningful manner to help speed up ray tracing.

If Intel does ship some sort of A.I.-powered DLSS alternative, gamers could experience ray-traced games without significant drops in frame rates.

When it launches, Intel’s Xe-HPG will have to take on not only AMD and Nvidia but also Apple’s custom silicon, which is powering a whole new generation of Macs. The M1 chip in the M1-powered MacBook Pro comes with Apple’s very capable integrated graphics solution.

Chuong Nguyen
Silicon Valley-based technology reporter and Giants baseball fan who splits his time between Northern California and Southern…