How Intel could use AI to tackle a massive issue in PC gaming

Intel is making a big push into the future of graphics. The company is presenting seven new research papers at Siggraph 2023, an annual graphics conference, one of which tries to address the VRAM limitations of modern GPUs with neural rendering.

The paper aims to make real-time path tracing possible with neural rendering. No, Intel isn’t introducing a DLSS 3 rival, but it is looking to leverage AI to render complex scenes. Intel says the “limited amount of onboard memory [on GPUs] can limit practical rendering of complex scenes.” Intel is introducing a neural level of detail representation of objects, and it says it can achieve compression rates of 70% to 95% compared to “classic source representations, while also improving quality over previous work.”
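
To get a rough sense of where figures like that come from, here's a minimal sketch in Python. Every number in it is a placeholder of our own choosing, since the article doesn't describe Intel's network architecture or test assets, but it shows the basic accounting: a small set of network weights can stand in for a much larger texture-plus-geometry representation.

```python
# Illustrative only: Intel's paper isn't reproduced here, so the network and
# asset sizes below are made up. The point is the accounting, not the values.

def mlp_parameter_count(layer_sizes):
    """Total weights and biases in a fully connected network."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical network: (x, y, z, lod) in, (r, g, b, opacity) out.
layer_sizes = [4, 256, 256, 256, 4]
network_bytes = mlp_parameter_count(layer_sizes) * 2   # assume half-precision weights

# Hypothetical source asset: a 512 x 512 RGBA texture plus ~100k float3 vertices.
asset_bytes = 512 * 512 * 4 + 100_000 * 3 * 4

compression = 1 - network_bytes / asset_bytes
print(f"network: {network_bytes / 1024:.0f} KiB, asset: {asset_bytes / 1024:.0f} KiB")
print(f"compression rate: {compression:.0%}")  # about 88% for these made-up sizes
```

For these arbitrary sizes the script reports a compression rate of about 88%, which happens to fall inside the 70% to 95% band Intel quotes; real results would depend entirely on the network and assets involved.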

Ellie holds a gun in The Last of Us Part I.

It's not dissimilar to Nvidia's Neural Texture Compression, which Nvidia also introduced through a paper submitted to Siggraph. Intel's paper, however, looks to tackle complex 3D objects, such as vegetation and hair. It's applied as a level of detail (LoD) technique for objects, allowing them to retain realistic detail from farther away. As we've seen recently in games like Redfall, VRAM limitations can cause even close objects to show up with muddy textures and little detail as you pass them.
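
For context, a traditional LoD system simply swaps in a coarser representation as an object gets farther from the camera. The sketch below shows that selection step with arbitrary, hypothetical distance thresholds; in Intel's approach, the levels would presumably be increasingly compact neural representations rather than hand-authored meshes.

```python
# Conventional distance-based LoD selection, shown only for context. The
# thresholds are arbitrary placeholders, not values from Intel's paper.
def select_lod(distance_m, thresholds=(10.0, 30.0, 80.0)):
    """Return 0 (full detail) up to len(thresholds) (coarsest) for a camera distance."""
    for level, limit in enumerate(thresholds):
        if distance_m < limit:
            return level
    return len(thresholds)

for d in (5, 25, 60, 200):
    print(f"{d} m away -> LoD {select_lod(d)}")
```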

In addition to this technique, Intel is also introducing an efficient path-tracing algorithm that, it says, will eventually make complex path tracing possible on mid-range GPUs and even integrated graphics.

Path tracing is essentially the hard way of doing ray tracing, and we've already seen it used to great effect in games like Cyberpunk 2077 and Portal RTX. As impressive as path tracing is, though, it's extremely demanding. You'd need a flagship GPU like the RTX 4080 or RTX 4090 to even run these games at higher resolutions, and that's with Nvidia's DLSS Frame Generation trickery enabled.
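
To illustrate why it's so demanding, here is a toy Monte Carlo path tracer in Python. It is not Intel's algorithm or anything resembling a production renderer, just a sketch of the structure: every pixel needs many randomly bounced ray paths, and every bounce costs an intersection test and a material sample, so the work multiplies quickly.

```python
# Toy path tracer: one diffuse sphere under a uniform sky. Purely a sketch of
# why path tracing is expensive, not Intel's method or a production renderer.
import numpy as np

rng = np.random.default_rng(0)
SPHERE_CENTER = np.array([0.0, 0.0, -3.0])
SPHERE_RADIUS, ALBEDO, SKY = 1.0, 0.7, 1.0

def hit_sphere(origin, direction):
    """Distance along the ray to the sphere, or None on a miss."""
    oc = origin - SPHERE_CENTER
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - SPHERE_RADIUS**2)
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-4 else None

def sample_hemisphere(normal):
    """Uniform random direction on the hemisphere around the surface normal."""
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    return v if np.dot(v, normal) > 0 else -v

def radiance(origin, direction, bounces=4):
    """Trace one random path and return the light it carries back."""
    throughput = 1.0
    for _ in range(bounces):
        t = hit_sphere(origin, direction)
        if t is None:
            return throughput * SKY               # path escaped to the sky
        hit = origin + t * direction
        normal = (hit - SPHERE_CENTER) / SPHERE_RADIUS
        direction = sample_hemisphere(normal)     # random diffuse bounce
        throughput *= ALBEDO * 2.0 * np.dot(direction, normal)  # uniform-sampling weight
        origin = hit
    return 0.0                                     # path cut off before reaching any light

# One pixel, 256 random paths. A 4K frame has roughly 8.3 million pixels, which
# is where the cost of real-time path tracing comes from.
cam, ray = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, -1.0])
print("estimated pixel radiance:", np.mean([radiance(cam, ray) for _ in range(256)]))
```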

Intel’s paper introduces a way to make that process more efficient: a new algorithm that is “simpler than the state-of-the-art and leads to faster performance,” according to Intel. The company is building on the GGX distribution, a mathematical model of rough surfaces that Intel says is “used in every CGI movie and video game.” The algorithm reduces this distribution to a hemispherical mirror that is “extremely simple to simulate on a computer.”

Screenshot of full ray tracing in Cyberpunk 2077. Image: Nvidia

The idea behind GGX is that surfaces are made up of microfacets that reflect and transmit light in different directions. This is expensive to calculate, so Intel’s algorithm essentially reduces the GGX distribution to a simple-to-calculate slope based on the angle of the camera, making real-time rendering possible.
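
The article doesn't reproduce Intel's new formulation, but the GGX distribution it simplifies is a standard, published formula, shown below: alpha is the material's roughness, and the function describes how tightly the microfacet normals cluster around the surface normal, which in turn controls how sharp or blurry a reflection looks.

```python
# Standard GGX (Trowbridge-Reitz) normal distribution function. This is the
# classic term Intel is building on, not the simplified algorithm from its paper.
import math

def ggx_ndf(cos_theta_h, alpha):
    """Density of microfacet normals tilted by theta_h from the surface normal.

    cos_theta_h: dot(n, h), where h is the half-vector between view and light.
    alpha: roughness (small = mirror-like, close to 1 = very rough).
    """
    a2 = alpha * alpha
    denom = cos_theta_h * cos_theta_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# Smooth surfaces concentrate the distribution in a sharp peak; rough ones spread it out.
for alpha in (0.1, 0.5, 0.9):
    print(f"alpha={alpha}: D(0 deg)={ggx_ndf(1.0, alpha):.2f}, "
          f"D(30 deg)={ggx_ndf(math.cos(math.radians(30)), alpha):.2f}")
```

Evaluating this kind of distribution at every bounce of every path is where the cost piles up, and that per-bounce work is what Intel's slope-based simplification is aimed at.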

Based on Intel’s internal benchmarks, the new algorithm leads to upwards of a 7.5% speedup in rendering path-traced scenes. That may seem like a minor bump, but Intel seems confident that more efficient algorithms could make all the difference. In a blog post, the company says it will demonstrate at Siggraph how real-time path tracing can be “practical even on mid-range and integrated GPUs in the future.”

As for when that future arrives, it’s tough to say. Keep in mind this is a research paper right now, so it might be some time before we see this algorithm widely deployed in games. It would certainly do Intel some favors. Although the company’s Arc graphics cards have become excellent over the past several months, Intel is still focused on mid-range GPUs and integrated graphics, where path tracing isn’t currently practical.

We don’t expect you’ll see these techniques in action any time soon, though. The good news is that we’re seeing new techniques to push visual quality and performance in real-time rendering, which means these techniques should, eventually, show up in games.

Jacob Roach
Senior Staff Writer, Computing