
It’s time to stop ignoring the CPU in your gaming PC

A hand holding an AMD Ryzen CPU.
Jacob Roach / Digital Trends
This story is part of Jacob Roach's ReSpec series, covering the world of PC gaming and hardware.

There’s one thing that will strike fear into the heart of any PC gamer: a CPU bottleneck. The thought of not getting the full power out of your GPU, which is often the most expensive component in your rig, is hard to stomach. And although knowledge of what CPU bottlenecks are and how to avoid them is common, we’re in a new era of bottlenecks in 2024.

It’s time to reexamine the role your CPU plays in your gaming PC, not only so that you can get the most performance out of your rig but also to understand why the processor has left the gaming conversation over the last few years. It’s easy to focus all of your attention on a big graphics card, but ignore your CPU and you’ll pay a performance price.


The common knowledge

Intel's 14900K CPU socketed in a motherboard.
Jacob Roach / Digital Trends

Let’s start with a high-level definition. A CPU bottleneck happens when your processor becomes the limiting factor in performance. When playing games, your CPU and GPU work together to render the final frame you see. The CPU handles some minor tasks like audio processing, but it mainly gets work ready for your GPU. Your GPU executes that work, and then it takes a new batch from your CPU. If you ever reach a point where your GPU is waiting on more work, you have a CPU bottleneck.

The solution is to increase the load on your GPU. Running at higher graphics settings or playing at a higher resolution means your GPU will take longer to render each frame, and in most cases, that means it won’t be waiting on your CPU for more work. You can see that in action in Cyberpunk 2077 below. The Ryzen 9 7950X is 7% faster than the Ryzen 5 7600X at 1080p, but when your GPU is forced to render the huge number of pixels required for 4K, there’s a negligible 2% difference in performance.

Note: I’m using percentages throughout the charts in this article for the sake of illustration. You can find the full performance results in a table farther down the page. 
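
For reference, here’s a minimal sketch of how those percentages fall out of the raw frame rates, using the 1080p and 4K Cyberpunk 2077 numbers from the results table farther down; the helper function is just for illustration.

```python
def percent_gap(slower_fps: float, faster_fps: float) -> float:
    """Return how much faster the second result is, as a percentage."""
    return (faster_fps / slower_fps - 1) * 100

# Cyberpunk 2077, Ryzen 5 7600X vs. Ryzen 9 7950X (values from the table below)
print(f"1080p: {percent_gap(169, 180):.0f}% faster")  # roughly 7%
print(f"4K:    {percent_gap(55, 56):.0f}% faster")    # roughly 2%
```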

CPU performance in Cyberpunk 2077 at different resolutions.
Jacob Roach / Digital Trends

That’s where the conversation usually ends, but that can lead to some misconceptions about how to apply the knowledge. You might think, for example, that you don’t need some monstrous processor if you plan on playing games at 4K. After all, why would you spend $550 on a Ryzen 9 7950X when the $200 Ryzen 5 7600X offers basically identical performance at 4K?

The idea behind a CPU bottleneck is the same as it has always been, but the practical application gets messy quickly. With modern games, you’re often not rendering the game at your output resolution, and you have to contend with large, open worlds that put a lot more strain on your processor. The decades-old lessons about CPU bottlenecks don’t hold up when you apply them wholesale to modern games.

Challenging the status quo

I wanted to start with Cyberpunk 2077 because it’s very heavy on your graphics card. The Red Engine that the game uses is remarkably well-optimized for CPUs, easily scaling down to six cores like you find on the Ryzen 5 7600X and up to 16 on the Ryzen 9 7950X. In nearly every situation, you’ll be GPU-limited in the game, which is exactly where you want to be for the highest frame rates.

CPU performance in Spider-Man Miles Morales at different resolutions.
Jacob Roach / Digital Trends

Let’s look at the flip side. Spider-Man Miles Morales is very intensive on your CPU, so it shouldn’t come as a surprise that there’s a 36% jump in performance with the Ryzen 9 7950X at 1080p. However, there’s still a 17% jump all the way up at 4K. That’s enough of a performance gap to make a case for spending $350 more on the Ryzen 9 7950X.

CPU performance in Cyberpunk 2077 with ray tracing and DLSS.
Jacob Roach / Digital Trends

It’s not just Spider-Man Miles Morales, either. In Cyberpunk 2077, if we flip on the Ultra RT preset and set DLSS to Performance mode — a way you might actually play the game — there’s a performance gap of around 26% at both 1080p and 1440p. That gap disappears at 4K, but the difference compared to native resolution is staggering.

When you play a demanding game in 2024, you often won’t be playing at native resolution. You’ll turn on the highest graphics settings your GPU can muster and flip on upscaling either through DLSS or AMD’s FSR. The result is that you’re generally putting more strain on your CPU. By rendering the game at a lower resolution and upscaling, your graphics card is able to generate frames faster. And, in the process, it’s left waiting on more work from your CPU.
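
To make the resolution math concrete, here’s a rough sketch of what Performance-mode upscaling does to the GPU’s workload. It assumes the 50%-per-axis render scale that DLSS Performance mode typically uses; the exact scale varies by upscaler and quality mode.

```python
def internal_resolution(out_w: int, out_h: int, render_scale: float = 0.5):
    """Per-axis render scale -> the resolution the GPU actually draws."""
    return int(out_w * render_scale), int(out_h * render_scale)

w, h = internal_resolution(3840, 2160)  # Performance mode at a 4K output
print(f"Internal render: {w}x{h}")                              # 1920x1080
print(f"Pixel load: {w * h / (3840 * 2160):.0%} of native 4K")  # 25%
```

The GPU only has to shade a quarter of the pixels, so it finishes each frame much sooner, and the CPU has far less headroom to hide behind.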

CPU performance in Spider-Man Miles Morales with DLSS and ray tracing turned on.
Jacob Roach / Digital Trends

This can lead to some very strange situations, and Cyberpunk 2077 already shows some of that behavior. The performance gap is almost identical at 1080p and 1440p, but it completely disappears at 4K. In Spider-Man Miles Morales, the performance gap with DLSS and all of the ray tracing sliders turned up stays consistent.

From 1080p up to 4K, the Ryzen 9 7950X is around 25% faster than the Ryzen 5 7600X. In this situation, we’re completely constrained by the CPU all the way up to 4K with the most demanding graphics settings the game is capable of. If that doesn’t convince you that your CPU is important for gaming performance, I’m not sure what will.

A balancing act

The bottom of the AMD Ryzen 9 7950X3D
Jacob Roach / Digital Trends

The idea behind a CPU bottleneck holds up today, but tools like upscaling challenge PC gamers to think more critically about the role their CPU plays in gaming performance. Every system is different and every game is different, but you can dig deeper into your system’s performance to understand how it reacts to different games.

First, let’s look at the circle of life between your GPU and CPU. Instead of thinking about one component waiting on the other, you can think of them as individual pieces of your PC with a certain capacity for performance. You could say that your CPU and GPU are both capable of a certain frame rate, or that they both take a certain amount of time to complete their work.
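
One way to picture that is as a simple frame-time budget: whichever component takes longer per frame sets the final frame rate. The sketch below uses made-up millisecond figures purely for illustration.

```python
def effective_fps(cpu_ms: float, gpu_ms: float) -> float:
    """Whichever component takes longer per frame sets the frame rate."""
    return 1000 / max(cpu_ms, gpu_ms)

# GPU-limited (e.g., native 4K): the GPU takes longer, so it sets the pace.
print(round(effective_fps(cpu_ms=8.0, gpu_ms=18.0)))  # ~56 fps

# CPU-limited (e.g., upscaling on): the GPU finishes early and waits.
print(round(effective_fps(cpu_ms=10.0, gpu_ms=6.0)))  # 100 fps
```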

Benchmark                                  Ryzen 5 7600X   Ryzen 9 7950X
Cyberpunk 2077 1080p                       169 fps         180 fps
Cyberpunk 2077 1440p                       119 fps         122 fps
Cyberpunk 2077 4K                          55 fps          56 fps
Cyberpunk 2077 RT 1080p w/ DLSS            100 fps         127 fps
Cyberpunk 2077 RT 1440p w/ DLSS            100 fps         126 fps
Cyberpunk 2077 RT 4K w/ DLSS               84 fps          85 fps
Spider-Man Miles Morales 1080p             112 fps         152.6 fps
Spider-Man Miles Morales 1440p             112.6 fps       143.2 fps
Spider-Man Miles Morales 4K                112.7 fps       132.2 fps
Spider-Man Miles Morales 1080p w/ DLSS     86.1 fps        106.7 fps
Spider-Man Miles Morales 1440p w/ DLSS     84.8 fps        104.7 fps
Spider-Man Miles Morales 4K w/ DLSS        82.9 fps        103.3 fps

Thinking about bottlenecks this way is helpful for identifying them within your own system, and my full results above show why. In Spider-Man Miles Morales with DLSS on, you can see that the Ryzen 5 7600X is only capable of about 85 frames per second (fps), while the Ryzen 9 7950X is capable of about 105 fps. And we can confidently tie the performance to those parts because there’s virtually no change in performance from 1080p up to 4K.

Cyberpunk 2077 with ray tracing and DLSS provides an even clearer example. The Ryzen 5 7600X is capable of about 100 fps, while the Ryzen 9 7950X is capable of about 127 fps. In both cases, the GPU is able to render more frames than the CPU can prepare. It’s only at 4K that the GPU drops below what the CPU is capable of, eliminating the CPU bottleneck.
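
That reasoning boils down to a simple rule of thumb: if frame rates barely move as resolution climbs, the CPU is the limiter. Here’s a rough sketch of that check using the Ryzen 5 7600X numbers from the table; the 5% tolerance is an arbitrary threshold chosen for illustration.

```python
def looks_cpu_limited(fps_by_resolution: list[float], tolerance: float = 0.05) -> bool:
    """True if results stay within ~5% of each other as resolution climbs."""
    spread = max(fps_by_resolution) - min(fps_by_resolution)
    return spread / max(fps_by_resolution) <= tolerance

spiderman_dlss = [86.1, 84.8, 82.9]  # 1080p, 1440p, 4K with DLSS on
cyberpunk_native = [169, 119, 55]    # 1080p, 1440p, 4K at native resolution

print(looks_cpu_limited(spiderman_dlss))    # True: the CPU sets the pace
print(looks_cpu_limited(cyberpunk_native))  # False: the GPU takes over as resolution rises
```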

The latency widget in Special K.
Jacob Roach / Digital Trends

You don’t need a bunch of spare hardware and a spreadsheet to monitor performance in your own games. I like to use Special K to see how my CPU and GPU are interacting and what frame rate they’re producing. This app, which I’ve written about previously, includes a latency widget that sits on top of your games while you play. It shows, in real time, how long your CPU and GPU are taking to render each frame, and it even has a line showing whether you’re CPU-limited.

I like to turn on the widget whenever I’m playing a new game. I’ll turn it off eventually, but it’s remarkably helpful for understanding how the game works with the hardware I have. That knowledge can guide you a long way while you’re tweaking settings. In a game like Spider-Man Miles Morales, for example, I can immediately tell that turning on DLSS to Performance mode at 4K will net me basically no improvement in frame rate, so I could either turn upscaling off or opt for a higher quality mode.

Between CPU-heavy games like Spider-Man Miles Morales and Dragon’s Dogma 2, upscaling features, and broader ray tracing support, it’s important to examine what role your CPU is actually playing in performance. There are plenty of cases where an old, underpowered CPU is perfectly fine if you’re playing at a high enough resolution, but there are just as many cases where the opposite is true, especially with how you’ll play most games in 2024.

Jacob Roach
Former Lead Reporter, PC Hardware