Intel is back in the graphics game. The company will debut Xe graphics cards in 2020, its first new add-in graphics card since the Intel740 released back in 1998.
The technology behind this GPU and its potential performance remain shrouded in mystery, but as time goes on details are coming to light. If it proves a viable alternative to Nvidia and AMD, this will be one of the most important events in the graphics card industry this century.
Here’s what we know so far.
Pricing and availability
We got our first hint at how Intel might price its upcoming Xe graphics cards at the beginning of August. In a since-removed interview with a Russian-language YouTube channel, Intel chief architect Raja Koduri said the new cards would debut at around $200.
The translation has since been corrected by Intel, however, clarifying that “Raja was making the point that not all users will buy a $500-$600 graphics card, and that Intel strategy revolves around going for the full stack that ranges from Client to the Data Center. The $200 reference in the interview was an example of general entry pricing for Client dGPUs [discrete GPUs] – and not a confirmation of Intel dGPU pricing.”
The company has stated that it intends to target a wide range: Intel Xe will show up in every segment, from integrated graphics to top-shelf data center solutions. That includes midrange and enthusiast discrete graphics, as well.
A recent Intel driver leak referenced four different discrete graphics cards, suggesting that for gamers and hardware fans, there will be a relatively broad selection of graphics cards to pick from.
Intel has slated the graphics cards for a 2020 release and has remained firm on that, so as we edge closer to the end of 2019, we have less than a year to wait before these graphics cards launch. Intel executive Raja Koduri also recently tweeted a picture of his Tesla with the “ThinkXE” name on the license plate. Paired with a tag showing the month of June and the year 2020, many have been led to believe this could be teasing a June 2020 release for Xe graphics cards, perhaps in time for Computex.
Architecture and performance
When Intel made its official announcement about the new graphics card technology it was developing, the company made clear that it was building a dedicated graphics card. While that might suggest something entirely distinct from its existing onboard GPU ventures, these cards will be based on the same 12th-generation architecture at the core of the integrated graphics in its upcoming CPU generations.
The driver leak in late July 2019 suggested that different models of discrete Xe graphics cards would feature varying numbers of “execution units,” as per Tom’s Hardware. The most modest model would sport 128, with two more impressive cards sporting 256 and 512. This suggests these cards will target the midrange of graphics performance and above, but we’ll need to learn more before we can make an educated guess about actual performance.
Intel will transition to using the “Xe” branding for all future graphics endeavors, whether onboard or discrete, though there will be significant performance differences between them.
To give that some context, Intel’s 9th-gen graphics architecture includes GPU cores like its UHD Graphics 630, which can be found in everything from entry-level Pentium Gold G5500 CPUs to the fantastically powerful Core i7-9700K. Intel’s more recent 11th-generation (it skipped the 10th generation) is found in the graphics cores in its new Ice Lake mobile CPUs.
Gen11 isn’t Intel Xe, but it already makes improvements that put Intel on the right path. It targets a teraflop of power, much greater than past Intel IGPs. It also adds support for HDR and Adaptive Sync, two popular features found on discrete cards from AMD and Nvidia. But we’re told that the 12th-generation will be an even grander leap in graphical performance, to put Intel in competition with AMD and Nvidia.
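To put that teraflop target, and the leaked 128/256/512 EU counts, in perspective, you can do some rough napkin math. The sketch below assumes (as in Intel’s Gen9 and Gen11 designs, and not confirmed for Xe) that each execution unit performs 8 FP32 fused multiply-adds per clock; the 1.5GHz clock for the hypothetical Xe cards is purely an illustrative guess.

```python
# Back-of-the-envelope FP32 throughput estimate for Intel GPUs.
# Assumption (not confirmed for Xe): each EU executes 8 FP32 FMA
# operations per clock, and an FMA counts as 2 FLOPs.
FLOPS_PER_EU_PER_CLOCK = 8 * 2

def estimated_tflops(eu_count, clock_ghz):
    """Theoretical peak FP32 TFLOPS for a given EU count and clock speed."""
    return eu_count * FLOPS_PER_EU_PER_CLOCK * clock_ghz / 1000

# Gen11 (Ice Lake G7): 64 EUs at ~1.1 GHz lands near the 1-teraflop target.
print(round(estimated_tflops(64, 1.1), 2))  # prints 1.13

# Leaked Xe EU counts at a purely hypothetical 1.5 GHz clock:
for eus in (128, 256, 512):
    print(eus, "EUs ->", round(estimated_tflops(eus, 1.5), 2), "TFLOPS")
```

Even under these rough assumptions, the 512-EU model would land well into discrete-card territory, which is why the leak fueled speculation about enthusiast-class parts.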
A rumored move to HBM
From the same (now-removed) interview mentioned above, Koduri also suggested that these new graphics cards might not use the more typical GDDR6 memory. Instead, it was suggested that these GPUs could use higher-spec high-bandwidth memory (HBM), a more expensive and less common type of memory.
That too was discredited once Intel issued a clarification on the interview. The actual translation instead says this: “So the strategy we’re taking is we’re not really worried about the performance range, the cost range, and all because eventually our architecture as I’ve publicly said, has to hit from mainstream, which starts even around $100, all the way to Data Center-class graphics with HBM memories and all, which will be expensive.”
In other words, no, it doesn’t sound like HBM will be used on entry-level graphics cards. Seeing that sort of memory pop up in data center-class cards is a bit more what we’d expect.
Most consumer graphics cards stick with GDDR6, including everything as high-end as Nvidia’s RTX 2080 Ti. The only recent exception has been AMD, which has used HBM on its high-end cards, most recently HBM2 on the Radeon VII, though that memory was ditched on its newer, midrange Radeon cards. Entry-level Xe cards with HBM2 memory would have been an interesting proposition, but per Intel’s clarification, that memory looks reserved for non-gaming, data center use cases that require the high bandwidth.
Ray tracing support
Nvidia has gone all-in on real-time ray tracing, making it a key feature of its current RTX graphics cards. Despite its slow start, the technology has the potential to become the most important new feature in computer graphics over the coming years. The problem is that the increase in lifelike lighting and shadows can be costly in terms of performance. AMD has been more hesitant about diving into the world of ray tracing for that exact reason, though it has plans to support it in the future, especially on consoles like the PlayStation 5.
Well before the launch of Xe, Intel has already come out of the gate touting support for ray tracing in its future GPUs. Jim Jeffers, Intel’s senior principal engineer (and senior director of advanced rendering and visualization), made the following statement on the matter: “Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of APIs and libraries.” We don’t yet know what that statement means for ray tracing in games, but if the hardware acceleration has been implemented, we’d be surprised if Intel didn’t also bring it to gamers.
AMD alumni are helping to make it
Intel hasn’t released a dedicated graphics card in 20 years. It did develop what became a co-processor, Larrabee, in the late 2000s, but that proved to be far from competitive with modern graphics cards, even if it found some intriguing use cases in its own right. To develop its graphics architecture into something worthy of a dedicated graphics card, Intel hired some industry experts, most notably Raja Koduri. He was hired straight from AMD, where he had spent several years as chief architect of the Radeon Technologies Group, heading up development on AMD’s Vega and Navi architectures.
He’s been there for over a year, and in mid-2018 he was joined by Jim Keller, the lead architect of AMD’s Zen architecture. Keller is heading up Intel’s silicon development and will, according to Intel itself, help “change the way [Intel] builds silicon.” That could be considered additional evidence of Intel’s push toward viable 10nm production.
Other ex-AMD employees that Intel has picked up over the past few months include former director of global product marketing at AMD, Chris Hook, who spent 17 years working at the company, and Darren McPhee, who now heads up Intel’s product marketing for discrete graphics.