
Nvidia bringing CUDA platform to x86 processors


For years, rumors have circulated that graphics chip maker Nvidia might start making its own x86-compatible processors—perhaps not for mainstream computers (at first), but for high-end graphics systems and supercomputing applications. Now Nvidia has announced it is taking a step into the x86 world—but it’s not building its own processors. Instead, the company has announced that the Portland Group (PGI), a compiler developer owned by STMicroelectronics, is building a CUDA C compiler that will enable CUDA applications to run on industry-standard x86 processors—no Nvidia hardware required.


The compiler will enable developers writing CUDA applications—which tap into the massively parallel processing capabilities of Nvidia graphics hardware—to deploy those applications to standard x86 processors from the likes of Intel and AMD. The move probably doesn’t mean much for gamers—who will still need graphics hardware to push all those pixels to their displays—but it could have significant ramifications for programmers building parallel computing applications that must run on servers and computing clusters without Nvidia graphics hardware installed. Instead, those applications will be able to tap multicore processors from Intel and AMD for executing parallel tasks—perhaps not with as much parallel oomph as Nvidia hardware offers, but better than not being able to run at all.

“In less than three years, CUDA has become the most widely used massively parallel programming model,” said Sanford Russell, Nvidia’s general manager of GPU computing, in a statement. “With the CUDA for x86 CPU compiler, PGI is responding to the need of developers who want to use a single parallel programming model to target many-core GPUs and multi-core CPUs.”

CUDA has developed a significant following among supercomputing application developers: Nvidia launched the CUDA development platform in 2007 as one of its first major steps into supercomputing. The non-proprietary alternative to CUDA is the more recent OpenCL, which by most accounts still has to catch up to CUDA in technical capabilities as well as adoption by developers of high-performance applications. CUDA also faces competition from Microsoft’s DirectCompute, an API in Windows Vista and Windows 7 that enables developers to leverage the parallel processing capabilities of compatible graphics hardware—Nvidia’s included.

Extending the CUDA platform to x86 processors not only broadens the range of hardware that can run CUDA applications, it also lowers the barrier to entry for writing them: with an x86 compiler, any developer with a standard Intel or AMD processor can at least get started.
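For readers unfamiliar with the model, here is a minimal, textbook-style CUDA C kernel—a simple vector addition, not drawn from PGI’s materials—of the sort such a compiler would map onto CPU cores rather than GPU cores:

```cuda
#include <cuda_runtime.h>

// Each thread handles one array element -- the data-parallel style
// CUDA encourages, whether the "threads" land on GPU or CPU cores.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)          // guard against the last, partially full block
        c[i] = a[i] + b[i];
}

// Host-side launch: 256 threads per block, enough blocks to cover n.
// vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```

On a GPU, the runtime launches one lightweight thread per element; a CUDA-for-x86 compiler would instead have to distribute those thread blocks across the host machine’s CPU cores.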

None of this rules out the notion that Nvidia might eventually develop its own x86 processors. Intel has increasingly squeezed the discrete graphics market: its forthcoming Sandy Bridge CPUs integrate graphics controllers onto the processor itself, meaning system makers can no longer build Intel-based computers that don’t include Intel graphics. At some point, Nvidia may decide it’s better off making its own processors than remaining beholden to the Intels and AMDs of the world.

Geoff Duncan
Former Digital Trends Contributor