Intel Xe graphics: Everything you need to know about Intel’s dedicated GPUs

Intel is finally back in the graphics game, with the blue team suggesting that it will debut its first Xe graphics solutions in 2020 across a broad selection of platforms, ecosystems, and prices. This will be Intel’s first new add-in graphics card in over twenty years.

The technology behind this GPU and its potential performance remain shrouded in mystery, but as time goes on details are coming to light. If it proves a viable alternative to Nvidia and AMD, this will be one of the most important events in the graphics card industry this century.

Here’s what we know so far.

Pricing and availability

Intel originally stated that we could expect a summer 2020 debut, targeting a broad selection of graphics options for the mainstream, gamers, professionals, and data centers.

In mid-2019, Raja Koduri tweeted a picture of his Tesla with a "ThinkXE" license plate and a registration tag reading June 2020. This could be teasing a June 2020 release for consumer Xe graphics cards, perhaps in time for Computex.

Meanwhile, Intel revealed the first of its business-centric GPUs at the Intel HPC Developer Conference in November 2019. During his keynote, Intel executive Raja Koduri said that Ponte Vecchio will first appear in the Aurora supercomputer at Argonne National Laboratory. However, the Ponte Vecchio GPU won't arrive until 2021, as it will be built on a second generation of the Xe architecture.

In December 2019, however, rumors began to circulate that Xe development was not progressing as fast as hoped. A user on the Chiphell forums (via WCCFTech) claimed development was going poorly and that we may not see any Intel GPUs before the end of 2020. They also claimed this would push Ponte Vecchio back to 2022 at the earliest.

If true, this would be problematic for many reasons, most immediately because Xe graphics are slated to feature in Intel's upcoming Tiger Lake CPUs, currently scheduled for a 2020 release. Tiger Lake will likely be the priority for Intel, which could result in dedicated GPU shortages if the two launch at a similar time.

There is also a rumor that Intel will be the only company selling its GPUs at launch, as it has reportedly struggled to establish relationships with add-in board partners that would create custom and overclocked variants.

As for the types of cards we can expect when they do launch, an Intel driver leak in summer 2019 referenced four different discrete graphics cards. For gamers and hardware fans, that suggests there will be a relatively broad selection of graphics cards available.

Finally, a recent slide displayed during a presentation for the Exascale Computing Project indicates that Xe DG1 — Intel’s official name for its discrete GPU for consumers — will not be an add-in card for the desktop. Instead, it will be a low-power discrete GPU for laptops. The add-in card version only appears in the Software Development Vehicle kit supplied to software developers.

Although recent tweets from Raja Koduri show actual hardware, he later clarified that all images are based on GPUs targeting the data center. These images include the Xe-HPC Ponte Vecchio GPU and one based on Intel’s Xe-HP design.

At this point, there’s no indication that Xe will appear as an add-in card during 2020. Instead, we’ll likely see Intel follow its CPU launch pattern and introduce Xe in laptops first. Since we already know Xe will launch for data centers in 2021, assuming that desktop GPUs will appear in the same timeframe is a safe bet.

To drive that idea home, Raja Koduri has specifically called out Intel's current focus on integrated graphics over add-in cards.

Architecture and performance

When Intel made its official announcement about the new graphics technology it was developing, the company made clear that it was building a discrete GPU for add-in graphics cards. This would be its first discrete graphics card since the i740, an AGP-based card launched in 1998. The i740 used Intel's first-generation GPU architecture (Gen1), a technology that moved into chipsets for the next three generations and then into the CPU itself starting with Gen5.

While the announcement of a new discrete graphics card suggests Intel is building something separate from its existing integrated graphics, that's not entirely the case. The discrete cards will be based on the same 12th-generation (Gen12) architecture at the core of the integrated graphics in upcoming CPUs like Tiger Lake, just scaled up appropriately.

Intel previously announced that there will be three distinct micro-architectures as part of the Xe range:

  • Xe HPC – High-performance computing GPUs designed for data centers and supercomputers, like Ponte Vecchio.
  • Xe HP – High-performance GPUs targeting PC gamers, enthusiasts, workstations, and professionals.
  • Xe LP – Low-power GPUs for entry-level cards and integrated graphics.

In each case, Intel will use the "Xe" branding for all future graphics endeavors, whether onboard or discrete, though there will be significant performance differences between the tiers.

The best look at the architecture behind these new cards came from our reporting on an Intel slide we obtained, which indicated the TDPs and "tile" (or "slice") design of these first Xe cards.

Intel ATS Specs

Intel’s GPU architecture uses a tile-based design, though documents also refer to slices. Among other components, each tile contains a set number of “sub-tiles” (or sub-slices) packing eight execution units each. This base design isn’t anything new.

For instance, the 11th-generation GT2 features one tile with eight sub-tiles at eight EUs each, totaling 64 EUs. Intel's GT1.5 design totals 48 EUs, while the low-end GT1 totals 32 EUs. The biggest count thus far is the ninth-generation GT3, which pairs two tiles for 96 EUs total. Translated into core counts at eight ALUs per EU, the 9th Gen GT3 has 768 cores and the 11th Gen GT2 has 512 cores.

What is new, however, is the introduction of a four-tile design in Gen12. Until now, we've seen two tiles at the most, and only in integrated graphics. A driver leak from July 2019 also suggests that Intel's Gen12 design will double the EU count of each tile to 128. What's more, the tiles could be interconnected with Intel's embedded multi-die interconnect bridge (EMIB).
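
To make the tile arithmetic concrete, here's a quick Python sketch. The eight-cores-per-EU ratio is inferred from the core counts above, and the four-tile, 128-EU configuration comes from the leaked, unconfirmed driver figures:

```python
# Rough shader-core math for the Intel GPU configurations discussed above,
# assuming 8 ALUs ("cores") per execution unit, consistent with the
# 512-core Gen11 GT2 (64 EUs) and 768-core Gen9 GT3 (96 EUs) figures.
ALUS_PER_EU = 8

def shader_cores(tiles: int, eus_per_tile: int) -> int:
    """Total shader cores for a given tile configuration."""
    return tiles * eus_per_tile * ALUS_PER_EU

print(shader_cores(tiles=1, eus_per_tile=64))   # Gen11 GT2: 512 cores
print(shader_cores(tiles=2, eus_per_tile=48))   # Gen9 GT3: 768 cores

# Hypothetical Gen12 discrete part per the July 2019 driver leak:
# four tiles at 128 EUs each (leaked, unconfirmed figures).
print(shader_cores(tiles=4, eus_per_tile=128))  # 4,096 cores
```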

Elsewhere in the presentation, Intel claimed to be developing graphics cards across the entire price and performance spectrum, with designs ranging from a thermal design power (TDP) of 75 watts up to 500 watts. At the high end, that's double the TDP of Nvidia's most powerful gaming graphics card, the RTX 2080 Ti. And with the above chart confirming a 48-volt input for the 500-watt design, a convention of data center power delivery rather than consumer PCs, that card is almost certainly destined for the data center.
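
The 48-volt detail matters because higher voltage means proportionally lower current for the same wattage, which is why data centers favor it. A quick back-of-the-envelope calculation shows the difference versus the 12-volt rails in consumer PCs:

```python
# Current draw for a given power budget: I = P / V.
tdp_watts = 500

for volts in (12, 48):
    amps = tdp_watts / volts
    print(f"{tdp_watts} W at {volts} V draws {amps:.1f} A")

# 500 W at 12 V draws 41.7 A (far beyond standard PC power connectors)
# 500 W at 48 V draws 10.4 A (manageable with data center power delivery)
```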

It’s possible that this 500-watt, four-tile card is the Ponte Vecchio GPU revealed in November 2019. Although the Ponte Vecchio is not set to debut until 2021, Intel called it the “first exascale GPU” and stated that it would use a Compute eXpress Link (CXL) interface running over a PCI-Express 5.0 connection.

According to the chart, other single and dual tile GPUs will have TDPs of 75, 150, and 300 watts. While the latter is still high, those are much more typical TDPs that could well equate to gaming graphics cards for mainstream desktops. Comparable cards that already exist at those levels include the GTX 1650 at 75 watts, the RX 5600 XT at 160 watts, and the Titan RTX at 280 watts.

However, keep in mind that the numbers provided on the chart could change before the official Xe launch. They’re based on motherboards (Reference Validation Platform) and special systems (Software Development Vehicle) supplied to OEMs, system integrators, and software developers.

Finally, a recent benchmark suggests that DG1 is based on Intel's Xe-LP design, packing only 96 EUs (768 shader cores), which roughly translates to 2.3 teraflops. That puts it in league with Nvidia's GTX 1050 Ti. The entry, from the SiSoftware Sandra benchmark database, also lists 3GB of VRAM and 1MB of L2 cache.
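
That 2.3-teraflop figure lines up with the standard peak-FP32 formula: shader cores, times two operations per clock (for fused multiply-add), times clock speed. Working backward, it implies a clock around 1.5GHz, though that clock is our inference rather than a listed spec:

```python
# Peak FP32 throughput = shader cores * 2 ops/clock (FMA) * clock speed.
shader_cores = 96 * 8   # 96 EUs at 8 ALUs each = 768 cores
clock_hz = 1.5e9        # ~1.5 GHz, inferred rather than confirmed

tflops = shader_cores * 2 * clock_hz / 1e12
print(f"{tflops:.2f} TFLOPS")  # ~2.30, matching the leaked figure
```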

HBM and PCI-Express 4.0

In a (since removed) interview, Raja Koduri suggested that these new graphics cards might not use the typical GDDR6 memory. Instead, he indicated that these GPUs could use high-bandwidth memory (HBM), which is far more expensive. That shouldn't be surprising, given that Koduri previously ran AMD's Radeon division, which uses HBM2 memory in cards like the Radeon VII.

Since then, Intel has downplayed and mostly debunked the suggestion, but the Ponte Vecchio announcement did show HBM memory in the design. The internal documents Digital Trends obtained in February 2020 also pointed to the use of high-bandwidth memory in its Xe GPUs, specifically HBM2e, the fastest and most efficient of the HBM generations. It is reportedly 3D-stacked directly onto the GPU itself using Intel's Foveros technology.

That doesn't mean we'll see such expensive memory in consumer graphics cards, but it does tell us that whatever Intel is building can leverage HBM. AMD's next-generation "big Navi" GPU is rumored to offer HBM alongside GDDR6, so Intel may do something similar, giving its higher-end options HBM and its lower-end options GDDR6.

Most consumer graphics cards stick with GDDR6, including cards as high-end as Nvidia's RTX 2080 Ti. The only exceptions in recent years have been AMD's Fury X, Vega 56 and 64, and Radeon VII, and AMD's more recent RX 5700 cards ditched HBM2 in favor of GDDR6.
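
For a sense of why HBM keeps coming up, memory bandwidth is simply bus width times per-pin data rate. The figures below are typical HBM2e and GDDR6 configurations, used here for illustration rather than anything Intel has confirmed:

```python
# Peak memory bandwidth in GB/s: bus width (bits) * data rate (Gbps) / 8.
def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

# A single HBM2e stack: 1,024-bit interface at ~3.2 Gbps per pin.
print(bandwidth_gbs(1024, 3.2))  # ~410 GB/s from one stack

# A typical GDDR6 card: 256-bit bus at 14 Gbps per pin.
print(bandwidth_gbs(256, 14.0))  # ~448 GB/s across eight chips
```

The point isn't the raw numbers (a 256-bit GDDR6 bus keeps pace with a single stack) but that HBM reaches that bandwidth in a fraction of the board space and power, and scales up with additional stacks.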

Another note in the internal Intel documents we've seen was a mention of PCI Express 4.0 support. Current graphics cards don't come close to saturating a PCI Express 3.0 x16 slot, so PCIe 4.0 support hints at the performance potential of the new Intel cards. It could also mean the cards are designed to run on fewer lanes, freeing up lanes for storage drives and other add-in cards, as the sketch below shows.
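
The lane math supports that idea: PCIe 4.0 doubles the per-lane transfer rate from 8 to 16 gigatransfers per second, with both generations using 128b/130b encoding. So eight 4.0 lanes carry the same bandwidth as sixteen 3.0 lanes:

```python
# Usable PCIe bandwidth per direction (GB/s):
# lanes * transfer rate (GT/s) * 128/130 encoding efficiency / 8 bits.
def pcie_gbs(lanes: int, gt_per_sec: float) -> float:
    return lanes * gt_per_sec * (128 / 130) / 8

print(f"PCIe 3.0 x16: {pcie_gbs(16, 8):.2f} GB/s")   # ~15.75
print(f"PCIe 4.0 x8:  {pcie_gbs(8, 16):.2f} GB/s")   # ~15.75
print(f"PCIe 4.0 x16: {pcie_gbs(16, 16):.2f} GB/s")  # ~31.51
```

In other words, a PCIe 4.0 card running on eight lanes gets the same bandwidth as a PCIe 3.0 card using all sixteen.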

Ray tracing?

Nvidia has gone all-in on real-time ray tracing, making it a key feature of its current RTX graphics cards. Despite its slow start, the technology has the potential to become the most important new feature in computer graphics over the coming years. The problem is that the increase in lifelike lighting and shadows can be costly in terms of performance. AMD has been more hesitant about diving into ray tracing for that exact reason, though it plans to support it in the future, including on consoles like the PlayStation 5.

Well before the launch of Xe, Intel came out of the gate touting support for ray tracing in its future GPUs. Jim Jeffers, Intel's senior principal engineer and senior director of advanced rendering and visualization, made the following statement on the matter:

Intel Xe architecture roadmap for data center optimized rendering includes ray-tracing hardware acceleration support for the Intel Rendering Framework family of APIs and libraries.

We don’t yet know what that statement means for ray tracing in games, but if the hardware acceleration has been implemented, we’d be surprised if Intel didn’t also bring that to gamers.

Drivers and software

Both Nvidia and AMD have their respective driver software suites, which do more than just help the GPU communicate properly with the rest of the system. Features like image sharpening, ReShade filters, game recording, lower-latency input, and dynamic resolution adjustment have all improved the offerings of the major GPU manufacturers. Intel will want to do something similar when it launches its Xe graphics cards, and it has already begun to lay the groundwork.

In March 2019, Intel debuted its Graphics Command Center. It only works with Intel's onboard graphics for now, but it includes options for launching games, optimizing game settings, and tweaking global GPU options across all applications. It's fairly barebones, but the foundation has been laid for a more comprehensive Intel GPU software suite when Xe debuts.

Alongside hardware development, Intel is reportedly putting significant time and energy into driver development, with some reports suggesting the drivers need heavy optimization before they see the light of day.

AMD alumni are helping to make it

Intel hasn't released a dedicated graphics card in 22 years. It did develop Larrabee in the late 2000s, a project that eventually became a co-processor, but it proved far from competitive with modern graphics cards, even if it found some intriguing use cases in its own right.

To develop its graphics architecture into something worthy of a dedicated graphics card, Intel hired several industry experts, most notably Raja Koduri. He was hired straight from AMD, where he had spent several years as chief architect of the Radeon Technologies Group, heading up development of AMD's Vega and Navi architectures.

He's been at Intel for two and a half years and was joined in mid-2018 by Jim Keller, the lead architect of AMD's Zen architecture. Keller now leads Intel's silicon development and will, according to Intel itself, help "change the way [Intel] builds silicon." That could be considered additional evidence of Intel's push toward viable 10nm production.

Other ex-AMD employees that Intel picked up over the past year include the former director of global product marketing at AMD, Chris Hook, who spent 17 years working at the company, and Darren McPhee, who now heads up Intel’s product marketing for discrete graphics.
