
Intel Xe graphics: Everything you need to know about Intel’s dedicated GPUs

Intel is back in the graphics game after a hiatus of more than 20 years. Originally announced for a 2020 release, Intel's Xe graphics cards will be the first add-in GPUs the company has produced since 1998, and we expect to see them this year.

The technology behind this GPU and its potential performance remain shrouded in mystery, but as time goes on, details are coming to light. If it proves a viable alternative to Nvidia and AMD, this will be one of the most important events in the graphics card industry this century.

Here’s what we know so far.

Pricing and availability


Intel originally stated that we could expect a summer 2020 debut and that it was targeting a broad selection of graphics options for the mainstream, gamers, professionals, and data centers. 2020 has come and gone, though, so Intel is now targeting a 2021 release.

Intel revealed the first of its business-centric GPUs, called Ponte Vecchio, during the Intel HPC Developer Conference in November 2019. During his keynote, Intel executive Raja Koduri said that Ponte Vecchio will first appear in the Aurora supercomputer at Argonne National Laboratory. However, the Ponte Vecchio GPU won't arrive until 2021, given that it will be built on a second generation of Xe graphics.

In December 2019, however, rumors began to circulate that Xe development was not progressing fast enough. A user on the Chiphell forums (via WCCFTech) claimed that development was not going well and that we might not see any Intel GPUs before the end of 2020 (and sure enough, we didn't). They also claimed that this would push Ponte Vecchio back to 2022 at the earliest.

Right now, it seems both GPUs are targeting a 2021 release, though Ponte Vecchio may still slip to 2022.

We will set our graphics free. #SIGGRAPH2018 pic.twitter.com/vAoSe4WgZX

— Intel Graphics (@IntelGraphics) August 15, 2018

Intel has been making progress, though. Its Tiger Lake processors, which use onboard Iris Xe graphics, launched in 2020. Intel also released its first dedicated GPU in decades, the Iris Xe Max, though the slightly underpowered graphics unit has only shown up in a few laptops so far: the Acer Swift 3, Asus VivoBook Flip TP470, and Dell Inspiron 15 7000.

Intel is now starting to launch this same Iris Xe Max chip, known as Discrete Graphics 1, or DG1, as an OEM-only part for desktops. Rumors suggested that the DG1 would be a low-powered discrete GPU for laptops, not an add-in card for desktops. It now appears to be both.

An Asus DG1 model.

DG1 cards will offer support for up to three 4K monitors, video acceleration, and adaptive sync (FreeSync). They won't be powerhouses for gaming, but they could offer performance comparable to Tiger Lake's onboard Xe GPUs, which should make them fine for entry-level gaming. If priced well, they could also be a good way to upgrade old business PCs for some gaming in the off hours.

Other cards are coming, too, though Intel has yet to announce anything concrete. The Xe HPG range will take “discrete graphics capability up the stack into the enthusiast segment,” according to a 2020 earnings call. Intel also called out the DG2 GPU, which is rumored to launch with 16GB of GDDR6 memory, targeting performance similar to the RTX 3070.

The DG2 is in the “power on” phase, according to the earnings call. That’s one of Intel’s last internal steps before sending the card off for external validation. Given that, we expect a late 2021 or early 2022 launch, though Intel hasn’t announced anything yet.

Architecture and performance


When Intel made its official announcement about the new graphics card technology it was developing, the company made clear that it was building a discrete GPU for add-in graphics cards. This would be its first discrete graphics card since the i740 AGP-based card launched in 1998. That card used Intel's first-generation GPU architecture, a technology that moved to chipsets for the next three generations before being integrated into the CPU starting with the fifth generation.

While the announcement of a new discrete graphics card suggests Intel is building something separate from its existing integrated graphics, that's not entirely the case. The new cards will be based on the same 12th-generation architecture at the core of the integrated graphics in CPU generations like Tiger Lake, just scaled appropriately.

Intel previously announced that there will be three distinct micro-architectures as part of the Xe range:

  • Xe HPC – High-performance GPUs designed for data centers and supercomputing, like Ponte Vecchio.
  • Xe HP – High-performance GPUs targeting PC gamers, enthusiasts, workstations, and professionals.
  • Xe LP – Low-power GPUs for entry-level cards and integrated graphics.

In each case, Intel will use the "Xe" branding for all future graphics endeavors, whether onboard or discrete, though there will be significant performance differences between them.

The best look at the architecture behind these new cards came in our reporting on an obtained Intel slide that indicated the TDPs and “tile” (or “slice”) architecture of these first Xe cards.

Intel ATS Specs

Intel's GPU architecture uses a tile-based design, though documents also refer to slices. Among other components, each tile contains a set number of "sub-tiles" (or sub-slices), each packing eight execution units (EUs). This base design isn't anything new.

For instance, the 11th-generation GT2 features one tile with eight sub-tiles at eight EUs each, totaling 64 EUs. Intel's GT1.5 design totals 48 EUs, while the low-end GT1 totals 32 EUs. The biggest count thus far is the ninth-generation GT3, with two tiles totaling 96 EUs. Translated into core counts, the ninth-gen GT3 has 768 cores and the 11th-gen GT2 has 512 cores.

What is new, however, is the introduction of a four-tile design in Gen12. Up until now, we've only seen two at most, and only in integrated graphics. A driver leak from July 2019 also suggests that Intel's Gen12 design will double the EU count of each tile to 128. What's more, the tiles could be interconnected with Intel's embedded multi-die interconnect bridge (EMIB).
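
To put those numbers in context, Intel's EU counts translate to "cores" (individual FP32 ALUs) at a rate of roughly eight per EU for these generations. The quick sketch below is just back-of-the-envelope math under that assumption, using the leaked Gen12 figures rather than anything Intel has confirmed.

    # Back-of-the-envelope math for Intel's tile/EU hierarchy (Python).
    # Assumes the commonly cited figure of 8 FP32 ALUs ("cores") per EU;
    # the four-tile, 128-EU-per-tile Gen12 configuration is from the leak
    # discussed above, not a confirmed spec.
    ALUS_PER_EU = 8

    def total_cores(tiles: int, eus_per_tile: int) -> int:
        """Total shader 'cores' for a GPU built from identical tiles."""
        return tiles * eus_per_tile * ALUS_PER_EU

    print(total_cores(tiles=1, eus_per_tile=64))   # 11th-gen GT2: 512 cores
    print(total_cores(tiles=4, eus_per_tile=128))  # rumored Gen12 part: 4,096 cores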

Elsewhere in the presentation, Intel claimed to be developing graphics cards across the entire price and performance spectrum, with designs ranging from a thermal design power (TDP) of 75 watts up to 500 watts. At the high end, that's around 200 watts more than Nvidia's RTX 3080. The chart above also indicates a 48-volt input, which makes the 500-watt design almost certainly a data center card.

It's possible that this 500-watt, four-tile card is the Ponte Vecchio GPU revealed in November 2019. Although Ponte Vecchio is not set to debut until 2021, Intel called it the "first exascale GPU" and stated that it would use a Compute Express Link (CXL) interface running over a PCI-Express 5.0 connection.

According to the chart, other single- and dual-tile GPUs will have TDPs of 75, 150, and 300 watts. While the latter is still high, those are much more typical TDPs that could well equate to gaming graphics cards for mainstream desktops. Comparable cards that already exist at those levels include the GTX 1650 at 75 watts, the RTX 3060 at 170 watts, and the RTX 3080 at 320 watts.

However, keep in mind that the numbers provided on the chart could change before the official Xe launch. They're based on reference motherboards (Reference Validation Platforms) and special systems (Software Development Vehicles) supplied to OEMs, system integrators, and software developers.

The only card released so far is the DG1, which doesn't seem to match any of the cards in the chart above with its 30-watt TDP. We don't have benchmarks for the upcoming desktop DG1 yet, but gamers shouldn't hold their breath. Leaked benchmarks aren't great, and Intel has already been clear that this GPU targets general use and light gaming, falling in line with low-end dedicated GPUs like Nvidia's GT 1010.

The DG1 sports 4GB of video memory and 80 EUs, and it comes with HDR support, AV1 decode support, and adaptive sync.

Iris Xe mobile graphics


Although add-in Xe cards aren’t here yet, we’ve had a chance to test Intel’s mobile Iris Xe graphics, featured in Tiger Lake chips for thin and light laptops. Unlike many previous on-board solutions, Iris Xe can actually run modern games with stable framerates.

During our testing, we were able to run Civilization VI, Battlefield V, and Fortnite at Medium settings at 1080p with relatively smooth framerates. Civilization VI averaged 45 frames per second (fps), while Battlefield V pushed even further to 51 fps. Performance isn't always consistent, however: In Fortnite, we averaged just 34 fps at Medium settings.

You can pick up laptops with Iris Xe graphics now. However, this design is built for thin and light laptops, so performance is secondary to thermals and noise.

Intel also has a few laptops with a dedicated Iris Xe Max GPU, but the Iris Xe integrated graphics actually perform better (by between 10% and 30%). At 720p, the integrated Iris Xe GPU maintains a lead over the Iris Xe Max discrete GPU in most AAA games, according to testing done by PCMag. More demanding games, such as Rise of the Tomb Raider and Far Cry 5, show less of a difference at 1080p as both GPUs become the bottleneck.

In e-sports titles like Counter-Strike: Global Offensive, the integrated GPU still outperforms the dedicated one at 1080p. Between a low-powered dedicated GPU and an impressive integrated GPU targeting thin and light laptops, Intel isn’t going after the gaming crowd with Iris Xe Max right now.

NotebookCheck’s benchmarks tell a similar story. Mobile GPUs targeting a similar price bracket, such as the Nvidia GTX 1050 mobile and MX350, perform better than the Xe Max GPU on average. However, these benchmarks show the Xe Max GPU outperforming the integrated Tiger Lake GPU in games like Final Fantasy XV and The Witcher 3 at 1080p. The Iris Xe Max card is only hitting around 15-20 fps in these games at 1080p, but it’s still technically a lead. Even so, the integrated Iris Xe GPU shoots ahead at 720p (and even maintains playable framerates).

You don’t need to worry about that right now, though. The Iris Xe Max dedicated GPU is only available alongside a Tiger Lake processor with an integrated GPU. According to PCMag, Intel says certain games will automatically switch to the integrated GPU. The list includes Far Cry 5, Assassin’s Creed Odyssey, and Bioshock Infinite, all of which should perform better on the integrated graphics.

That’s not to say the dedicated Iris Xe Max GPU is pointless. In some games, such as Metro Exodus, the dedicated card actually performs better than the integrated one. There’s also Intel Deep Link, which can leverage the GPU for other, non-gaming tasks like video encoding. The tech is still developing, but it’s intended as a way to add high-end GPU acceleration to thin and light laptops.

HBM and PCI-Express 4.0

In a (since removed) interview, Raja Koduri suggested that these new graphics cards might not use the more typical GDDR6 memory. Instead, he indicated that these GPUs could use high-bandwidth memory (HBM), which is far more expensive. This shouldn't be surprising, given that Koduri previously ran AMD's Radeon division, which used HBM2 memory in cards like the Radeon VII.

Since then, Intel has downplayed and mostly debunked the suggestion, but the Ponte Vecchio announcement did show HBM memory in the design. The internal documents Digital Trends obtained in February 2020 also pointed to the use of high-bandwidth memory in Xe GPUs, specifically HBM2e, the most efficient and fastest of the HBM generations. It is reportedly 3D-stacked directly onto the GPU itself using Intel's Foveros technology.

That doesn't mean we'll see such expensive memory in consumer graphics cards, but it's worth noting that whatever Intel is working on must be able to leverage HBM.

Most consumer graphics cards stick with GDDR6, including cards as high-end as Nvidia's RTX 3070. AMD's more recent RX 5700 graphics cards ditched HBM2 in favor of GDDR6, as did its RX 6800 cards. The only exceptions are Nvidia's new RTX 3080 and 3090, both of which use slightly faster GDDR6X memory.
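
For a rough sense of why HBM is attractive despite the cost, the sketch below compares peak theoretical bandwidth using typical published per-pin speeds. These figures are illustrative assumptions, not Intel-confirmed Xe memory specs, and data center parts typically combine several HBM stacks.

    # Peak theoretical bandwidth (GB/s) = bus width (bits) x data rate (Gbps) / 8.
    # Per-pin speeds below are typical published figures, not Xe specifications.
    def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
        return bus_width_bits * data_rate_gbps / 8

    # GDDR6 at 14 Gbps on a 256-bit bus (RTX 3070-class cards)
    print(peak_bandwidth_gbs(256, 14.0))   # 448.0 GB/s
    # A single HBM2e stack: 1,024-bit interface at roughly 3.2 Gbps per pin
    print(peak_bandwidth_gbs(1024, 3.2))   # 409.6 GB/s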

Another note in the internal Intel documents we've seen was a mention of PCI Express 4.0 support. Although current graphics cards don't come close to saturating a PCI Express 3.0 x16 slot, PCIe 4.0 support hints at the performance potential of the new Intel cards. It could also be that they are better optimized to require fewer lanes, opening up more lanes for storage drives and other add-in cards.

We will say, however, that the latest cards from Nvidia and AMD support PCIe 4.0, too. Although there's definitely a generational performance improvement, even the latest cards struggle to saturate a PCIe 3.0 connection.
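
To make "saturating" concrete, here is the same kind of quick math for a x16 slot, using the commonly quoted per-lane throughput after encoding overhead. The numbers describe the bus itself, not any particular GPU.

    # Approximate usable bandwidth of a x16 slot per PCIe generation.
    # Per-lane figures (GB/s, after encoding overhead) are the commonly
    # quoted values for each spec, not vendor measurements.
    PER_LANE_GBS = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

    for gen, per_lane in PER_LANE_GBS.items():
        print(f"{gen} x16: ~{per_lane * 16:.1f} GB/s")
    # PCIe 3.0 x16: ~15.8 GB/s
    # PCIe 4.0 x16: ~31.5 GB/s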

Ray tracing?


Nvidia has gone all-in on real-time ray tracing, making it a key feature of its current RTX graphics cards. Despite a slow start, the technology has the potential to become the most important new feature in computer graphics over the coming years. The problem is that the increase in lifelike lighting and shadows can be costly in terms of performance. AMD has been more hesitant about diving into ray tracing for that exact reason, though it has started to dabble, especially on consoles like the PlayStation 5.

Well before the launch of Xe, Intel came out of the gate touting support for ray tracing in its future GPUs. Jim Jeffers, Intel's senior principal engineer and senior director of advanced rendering and visualization, made the following statement on the matter:

Intel Xe architecture roadmap for data center optimized rendering includes ray-tracing hardware acceleration support for the Intel Rendering Framework family of APIs and libraries.

We don’t yet know what that statement means for ray tracing in games, but if the hardware acceleration has been implemented, we’d be surprised if Intel didn’t also bring that to gamers.

Drivers and software

Both Nvidia and AMD have their respective driver software suites that do more than just help the GPU communicate properly with the system at large. Features like image sharpening, ReShade filters, game recording, lower-latency inputs, and dynamic resolution adjustment have all improved the offerings of the major GPU manufacturers. Intel will want to do something similar when it launches its Xe graphics cards, and it has already begun to lay the groundwork.

In March 2019, Intel debuted its Graphics Command Center. It only works with Intel's onboard graphics solutions at this time, but it includes options for launching games, optimizing them, and tweaking global GPU settings across all applications.

Iris Xe Max graphics users can take advantage of Intel Deep Link, which is designed to dynamically adjust power routing between the CPU and GPU to boost performance based on what you're doing. Intel is also working with companies like Blender, HandBrake, and Topaz Labs to accelerate content-creation tasks like video editing and encoding using AI. We're still waiting on third-party testing to assess the effect Deep Link has on real-world performance.
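
As a rough illustration of the kind of workload Deep Link is aimed at, the sketch below shells out to HandBrake's command-line tool and requests its Quick Sync (QSV) H.265 encoder, which runs on Intel graphics hardware. The file names are placeholders, the qsv_h265 encoder only shows up if the HandBrake build includes QSV support and an Intel GPU is present, and Deep Link itself is something partner applications enable internally rather than an option you pass on the command line.

    # Minimal sketch: a hardware-accelerated HEVC encode on Intel graphics
    # via HandBrakeCLI's Quick Sync (QSV) encoder. Paths are placeholders;
    # availability of qsv_h265 depends on the HandBrake build and hardware.
    import subprocess

    subprocess.run(
        [
            "HandBrakeCLI",
            "-i", "input.mp4",   # source clip (placeholder)
            "-o", "output.mp4",  # encoded output (placeholder)
            "-e", "qsv_h265",    # Intel Quick Sync H.265/HEVC encoder
            "-q", "24",          # constant-quality target
        ],
        check=True,
    )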

Alongside hardware development, Intel is reportedly putting a lot of time and energy into its drivers, with some reports suggesting they need heavy optimization before the cards see the light of day.

AMD alumni are helping to make it

Intel Raja Koduri

Intel hasn't released a dedicated graphics card in 22 years. It did develop what became a co-processor, Larrabee, in the late 2000s, but that proved far from competitive with modern graphics cards, even if it found some intriguing use cases in its own right.

Intel hired several industry experts to develop its graphics architecture into something worthy of a dedicated graphics card, most notably Raja Koduri from AMD.

Koduri assisted in developing Apple's Retina displays and oversaw the development of AMD's Navi and Vega architectures as chief architect of the Radeon Technologies Group, all before joining Intel over two and a half years ago.

Koduri was joined by another familiar face from AMD and Apple when Intel decided to hire a hundred more graphics designers in mid-2018. Jim Keller also moved to Intel; he had previously worked as the lead architect on AMD's Zen architecture.

In his previous roles, Keller helped design AMD's Athlon processors as well as Apple's A4 and A5 chips. According to Intel itself, Keller now leads Intel's silicon development and will help "change the way [Intel] builds silicon." With Keller on board, Intel may also be hoping to get its 10nm production on track.

Chris Hook is another former AMD employee recently hired by Intel. Hook previously worked at AMD for 17 years as director of global product marketing and will take on a similar marketing role at Intel.

Along with Hook, Intel hired another AMD employee, Darren McPhee, in a marketing role for discrete graphics. McPhee has over 18 years of experience in product management and in launching global campaigns.
