
Why multi-chip GPUs are the future of graphics power

Multi-chip module (MCM) graphics cards just might be the future, bringing upcoming generations of GPUs to a whole new level. Combining multiple dies instead of relying on a single monolithic chip could deliver better performance without being hindered by the practical limits on how large and power-hungry a single die can get.

According to recent leaks, both AMD and Intel may currently be working on MCM graphics cards. AMD’s rendition of this technology may be just around the corner — in the upcoming RDNA 3 GPUs.

Patent: Position-based rendering apparatus and method for multi-die/GPU graphics processing – Intel

Intel MCM GPUs is coming…

More details: https://t.co/GIkfwrXGzV pic.twitter.com/sXGt9nbJ1S

— Underfox (@Underfox3) February 3, 2022


The term “multi-chip module” describes the architecture used to assemble electronic components, such as semiconductor dies or other types of chips. MCM involves integrating multiple GPU modules into a single package. This could potentially result in a lot more GPU power without drastically increasing the size of the chip, thus hopefully improving manufacturability and power consumption.
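
To see why smaller dies can help manufacturability, here is a rough back-of-the-envelope sketch (our own illustration with made-up numbers, not anything AMD or Intel has published), using a simple Poisson defect-yield model in which the chance that a die escapes fatal defects shrinks exponentially with its area:

import math

DEFECT_DENSITY = 0.1  # hypothetical defects per square centimeter

def die_yield(area_cm2, defect_density=DEFECT_DENSITY):
    # Poisson model: probability that a die of this area has no fatal defect.
    return math.exp(-defect_density * area_cm2)

# One large monolithic die vs. the half-size dies that would go into an MCM package.
monolithic = die_yield(6.0)   # ~55% of large dies usable
per_chiplet = die_yield(3.0)  # ~74% of small dies usable

print(f"monolithic 6 cm^2 die yield: {monolithic:.0%}")
print(f"3 cm^2 chiplet yield:        {per_chiplet:.0%}")

In this toy model, a defect only ruins one small die rather than an entire oversized chip, so far more of each wafer ends up in sellable silicon, which is the basic manufacturability argument behind MCM designs.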

Today’s leaks come from two different sources, and both point (somewhat strongly) toward AMD and Intel exploring MCM architecture for their upcoming consumer-level graphics cards. A new Intel patent, uncovered by Underfox on Twitter and later shared by Wccftech, describes using several GPU dies combined via MCM for image rendering, as well as the benefits this approach would provide.

Intel’s patent describes using tile-based checkerboard rendering to achieve far more efficient scaling on multi-chip graphics cards. Although it’s unlikely (if not impossible) that this technology will appear in the first-generation Intel Arc Alchemist graphics cards, we may see MCM GPUs from Intel in one of its next-gen discrete graphics cards.
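
Beyond the patent itself, implementation details aren't public, but the general idea of checkerboard tile assignment is simple to illustrate. The sketch below (our own simplification, with a hypothetical tile size rather than anything from the patent) splits a frame into tiles and alternates them between two dies so that neighboring tiles land on different chips and the workload stays roughly balanced:

TILE_SIZE = 64  # hypothetical tile edge, in pixels

def assign_tiles(width, height, num_dies=2):
    # Map each screen tile to a die index in a checkerboard pattern,
    # so adjacent tiles are rendered by different dies.
    assignment = {}
    for ty in range(0, height, TILE_SIZE):
        for tx in range(0, width, TILE_SIZE):
            die = ((tx // TILE_SIZE) + (ty // TILE_SIZE)) % num_dies
            assignment[(tx, ty)] = die
    return assignment

tiles = assign_tiles(1920, 1080, num_dies=2)
per_die = [sum(1 for d in tiles.values() if d == i) for i in range(2)]
print("tiles per die:", per_die)  # roughly an even split

Splitting work at tile granularity like this is what would let a multi-die card behave as a single GPU from the game's point of view; the hard part in practice is keeping the dies in sync without cross-die traffic eating into the scaling gains.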

The leak regarding AMD (first shared by PCGamer) comes from an unsuspecting AMD engineer who listed their current projects on their LinkedIn profile, which was spotted by blueisviolet on Twitter. The engineer is a principal member of technical staff working on Infinity Fabric, and the information pulled from the profile strongly indicates that some of the next-gen AMD RDNA 3 graphics cards will feature an MCM design.

this why on some my previous twit
i said rdna3 will probably start with 5nm
amd pssst linkedin pic.twitter.com/ZfdfrvgwTO

— blue nugroho (@blueisviolet) February 4, 2022

Multi-chip module GPU technology is something that Intel and AMD have previously dabbled in, but Nvidia is no stranger to the architecture either. As far back as 2017, Nvidia published a paper titled “Multi-Chip-Module GPUs for Continued Performance Scalability.” Since then, rumors have emerged indicating that Nvidia’s next-gen Hopper graphics cards will feature multi-chip GPU designs.

AMD has already released an MCM graphics card: the monstrous Instinct MI200 HPC GPU, designed for high-performance applications such as data centers. The card offers up to 3.2TB/s of bandwidth, 128GB of HBM2e memory, and up to 14,080 stream processors. Now, however, MCM technology is likely to make its way into the consumer market with AMD’s upcoming RDNA 3 graphics cards. Intel also has a data center MCM GPU in the works, dubbed Ponte Vecchio, which has not yet been released.

MCM architecture certainly has a lot to offer, and even if we’re not quite there yet, it seems that we may soon start seeing its benefits on the consumer market as part of the best graphics cards. AMD RDNA 3 graphics cards are set to launch later this year. The same can be said of Intel Arc Alchemist, and although we don’t have any certain release dates for either, Intel is rumored to launch the first Alchemist GPUs before the end of this quarter. Meanwhile, AMD is also rumored to release a refresh of previous RDNA 2 cards within the next few months.
