Nvidia bringing CUDA platform to x86 processors

For years, rumors have been circulating that graphics developer Nvidia might start making its own x86-compatible systems—perhaps not for mainstream computers (at first), but for high-end graphics systems and supercomputing applications. Now, Nvidia has announced it is taking a step into the x86 world—but it’s not building its own processors. Instead, the company has announced that the Portland Group (PGI), a compiler vendor and subsidiary of STMicroelectronics, is developing a CUDA C compiler that will enable CUDA applications to run on industry-standard x86 processors—no Nvidia hardware required.

The compiler will enable developers writing CUDA applications—which tap into the massively parallel processing capabilities of Nvidia graphics hardware—to deploy those applications on standard x86 processors from the likes of Intel and AMD. Although this move probably doesn’t have any tremendous impact for gamers—who will still need graphics hardware to push all those pixels to their displays—it could have significant ramifications for programmers building parallel computing applications that need to be deployed on servers and computing clusters without Nvidia graphics hardware installed. Instead, the applications will be able to tap into multicore processors from Intel and AMD for executing parallel tasks—maybe not as much parallel oomph as Nvidia hardware, but that’s better than not being able to run at all.
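To make the idea concrete: a CUDA C program expresses its parallelism as a kernel launched across many lightweight threads, and the pitch of an x86 compiler is that this same source could be mapped onto CPU cores instead of a GPU. The vector-add sketch below is illustrative only (it is not from the announcement); it uses standard CUDA runtime calls (`cudaMalloc`, `cudaMemcpy`, the `<<<blocks, threads>>>` launch) as they existed in the CUDA toolkits of the era.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Each CUDA thread adds one pair of elements. On a GPU these threads run
// massively in parallel; an x86 backend would map them onto CPU cores.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers, filled with known values.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers, plus explicit host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The key point for the PGI announcement is that nothing in this source is tied to a particular chip: the thread-index arithmetic describes the parallel decomposition abstractly, which is what lets a compiler retarget it from GPU threads to CPU cores and vector units.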

“In less than three years, CUDA has become the most widely used massively parallel programming model,” said Nvidia GPU computing general manager Sanford Russell, in a statement. “With the CUDA for x86 CPU compiler, PGI is responding to the need of developers who want to use a single parallel programming model to target many-core GPUs and multi-core CPUs.”

CUDA has developed a significant following among supercomputing application developers: Nvidia launched the CUDA development platform in 2007 as one of its first major steps into supercomputing. The non-proprietary alternative to CUDA is the more recent OpenCL, which by most accounts still has to catch up to CUDA in technical capabilities as well as adoption by developers of high-performance applications. CUDA also faces competition from Microsoft’s DirectCompute, an API in Windows Vista and Windows 7 that enables developers to leverage the parallel processing capabilities of graphics hardware from any vendor.

Extending the CUDA platform to x86 processors not only broadens the hardware that can run CUDA applications, it also lowers the barriers to getting started writing CUDA apps: with an x86 compiler, any developer with a standard Intel or AMD processor can at least get started.

None of this rules out the notion that Nvidia might develop its own x86 processors. Intel has increasingly worked to hamstring the discrete graphics market, with its forthcoming Sandy Bridge CPUs integrating graphics controllers, making it impossible for system makers to build computers that don’t include Intel graphics. At a certain point, Nvidia may decide it’s better off making its own processors rather than being held captive to the Intels and AMDs of the world.

Geoff Duncan
Former Digital Trends Contributor