
The future of fast PC graphics? Connecting directly to SSDs

Performance boosts are expected with each new generation of the best graphics cards, but it seems that Nvidia and IBM have their sights set on greater changes.

The companies teamed up to work on Big accelerator Memory (BaM), a technology that involves connecting graphics cards directly to superfast SSDs. This could result in larger GPU memory capacity and faster bandwidth while limiting the involvement of the CPU.

A chart breaks down Nvidia and IBM's BaM technology. Image source: Arxiv

Similar technology has been explored before. Microsoft’s DirectStorage application programming interface (API) works in a somewhat similar way, improving data transfers between the GPU and the SSD. However, DirectStorage relies on external software, applies only to games, and works only on Windows. Nvidia and IBM researchers are working together on a solution that removes the need for a proprietary API while still connecting GPUs to SSDs.

The method, amusingly referred to as BaM, is described in a paper written by the team that designed it. Connecting a GPU directly to an SSD could deliver a meaningful performance boost, especially for resource-heavy tasks such as machine learning. As such, it would mostly be used in professional high-performance computing (HPC) scenarios.


The technology that is currently available for processing such heavy workloads requires the graphics card to rely on large amounts of special-purpose memory, such as HBM2, or to be provided with efficient access to SSD storage. Considering that datasets are only growing in size, it’s important to optimize the connection between the GPU and storage in order to allow for efficient data transfers. This is where BaM comes in.

“BaM mitigates the I/O traffic amplification by enabling the GPU threads to read or write small amounts of data on-demand, as determined by the compute,” said the researchers in their paper, first cited by The Register. “The goal of BaM is to extend GPU memory capacity and enhance the effective storage access bandwidth while providing high-level abstractions for the GPU threads to easily make on-demand, fine-grain access to massive data structures in the extended memory hierarchy.”
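BaM's hardware and software stack isn't public yet, but the I/O-amplification point the researchers make can be illustrated with a small simulation. The sketch below (hypothetical file names and block sizes; a conceptual model, not Nvidia's implementation) contrasts the traditional path, where the CPU stages an entire dataset into GPU memory up front, with a BaM-style path, where each "thread" reads only the fine-grain blocks it actually needs from storage on demand:

```python
import mmap
import os
import tempfile

BLOCK = 4096      # assumed fine-grain access size (4KB)
N_BLOCKS = 1024   # size of the simulated SSD-resident dataset

# A file standing in for a dataset living on the SSD.
path = os.path.join(tempfile.mkdtemp(), "dataset.bin")
with open(path, "wb") as f:
    f.write(os.urandom(BLOCK * N_BLOCKS))

def cpu_staged_bytes(needed_blocks):
    """Traditional path: the CPU transfers the whole dataset to GPU
    memory regardless of how little of it the kernel will touch."""
    return BLOCK * N_BLOCKS

def on_demand_bytes(needed_blocks):
    """BaM-style path: each access maps storage and pulls in only the
    block it needs, so bytes moved scale with actual demand."""
    moved = 0
    with open(path, "rb") as f, \
         mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        for b in needed_blocks:
            _ = m[b * BLOCK:(b + 1) * BLOCK]  # fine-grain read
            moved += BLOCK
    return moved

# A sparse access pattern, typical of graph analytics or ML lookups.
needed = [3, 97, 511]
print(cpu_staged_bytes(needed))  # 4194304 bytes moved
print(on_demand_bytes(needed))   # 12288 bytes moved
```

With only three of 1,024 blocks actually needed, the staged transfer moves roughly 340 times more data than the on-demand one; that gap is the "I/O traffic amplification" the paper says BaM mitigates.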

An Nvidia GPU core sits on a table.
Niels Broekhuijsen / Digital Trends

For many people who don’t work directly with this subject, the details may seem complicated, but the gist of it is that Nvidia wants to rely less on the processor and connect directly to the source of the data. This would both make the process more efficient and free up the CPU, making the graphics card much more self-sufficient. The researchers claim that this design would be able to compete with DRAM-based solutions while remaining cheaper to implement.

Although Nvidia and IBM are covering new ground with their BaM technology, AMD worked in this area first: In 2016, it unveiled the Radeon Pro SSG, a workstation GPU with integrated M.2 SSDs. However, the Radeon Pro SSG was intended strictly as a graphics solution, and Nvidia is taking the idea a few steps further, aiming at complex and heavy compute workloads.

The team working on BaM plans to release the details of its software and hardware optimizations as open source, allowing others to build on its findings. There is no word on when, if ever, BaM might be implemented in future Nvidia products.

Monica J. White
This surprising new AMD GPU came out of nowhere
Biostar's AMD RX 580.

As we're all on the lookout for AMD's RDNA 4 graphics cards, I'm telling you right out of the gate: They're still not here. However, Biostar launched a new AMD GPU that's fairly ... unexpected, to say the least. In fact, I'd sooner expect AMD to just drop RDNA 4 into our laps with no warning than for Biostar to launch this GPU. Which card am I talking about? Why, the RX 580, of course -- a GPU that's now seven years old.

The new RX 580 comes in a stylish white shroud, but on the inside, it's still the same GPU that's in no danger of competing against some of the best graphics cards. The RX 580 sports 2,048 stream processors (SPs), 8GB of GDDR5 VRAM across a 256-bit bus, and a maximum clock speed of 1,750MHz. The card supports the PCIe 3.0 interface and comes with two DisplayPort 1.4a ports as well as one HDMI 2.0. Those specs are pretty outdated for 2024.

Read more
The viral ‘GPU purse’ costs $1,024 — but you can make your own for $40
A purse made out of a GT 730 GPU.

I never thought the best graphics cards would become a fashion statement, much less some of the worst, but here we are. Over the weekend, a website called GPU Purse went live with a listing for a discarded Nvidia GT 730 GPU -- a $20 used GPU -- that had been turned into a handbag. You'll just need to shell out $1,024 for the bag, which, according to the product page, fits a phone and comes complete with a long or short chain.

One look at the website sets off alarm bells, especially for a product that's over $1,000, but it appears there's some legitimacy behind it. Financial Times reports that the GPU Purse is the brainchild of Tessa Barton, a New York Times alum and current pretraining engineer at Databricks. Barton reportedly set up a Shopify store in haste after a post on X (formerly Twitter) went viral last week with over 1.4 million impressions.

Read more
I tried to settle the dumbest debate in PC gaming

Borderless or fullscreen? It's a question every PC gamer has run up against, either out of curiosity or from friends trying to get the best settings for their PC games. Following surface-level advice, such as what we lay out in our no-frills guide on borderless versus fullscreen gaming, will set you on the right path. Borderless is more convenient, but it might lead to a performance drop in some games. In theory, that's all you need to know. But the question that's plagued my existence still rings: Why? 

If you dig around online, you'll get wildly different advice about whether borderless or fullscreen is better for your performance. Some say there's no difference. Others claim huge improvements with fullscreen mode in games like PlayerUnknown's Battlegrounds. Still others say you'll get better performance with borderless in a game like Fallout 4. You don't need to follow this advice, and you probably shouldn't treat any of it as universal, but why are there so many different claims about what should be one of the simplest settings in a graphics menu?

Read more