
Nvidia Workbench lets anyone train an AI model

Nvidia CEO showing the RTX 4060 Ti at Computex 2023. (Image: Nvidia)

Nvidia has just announced the AI Workbench, which promises to make creating generative AI a lot easier and more manageable. The workspace will allow developers to create and deploy such models on various Nvidia AI platforms, including PCs and workstations. Are we about to be flooded with even more AI content? Perhaps not, but it certainly sounds like the AI Workbench will make the whole process significantly more approachable.

In the announcement, Nvidia notes that there are hundreds of thousands of pretrained models currently available; however, customizing them takes time and effort. This is where the Workbench comes in, simplifying the process. Developers will now be able to customize and run generative AI with minimal effort, using the enterprise-grade models they need. The Workbench tool supports various frameworks, libraries, and SDKs from Nvidia’s own AI platform, as well as open-source repositories on GitHub and Hugging Face.

Once customized, the models can be shared across multiple platforms with ease. Devs running a PC or workstation with an Nvidia RTX graphics card will be able to work with these generative models on their local systems and scale up to data center and cloud computing resources when necessary.

“Nvidia AI Workbench provides a simplified path for cross-organizational teams to create the AI-based applications that are increasingly becoming essential in modern business,” said Manuvir Das, Nvidia’s vice president of enterprise computing.

Nvidia has also announced the fourth iteration of its Nvidia AI Enterprise software platform, which is aimed at offering the tools required to adopt and customize generative AI. This breaks down into multiple tools, including Nvidia NeMo, a cloud-native framework that lets users build and deploy large language models (LLMs) like those behind ChatGPT or Google Bard.

A MacBook Pro on a desk with ChatGPT's website showing on its display. (Image: Hatice Baran / Unsplash)

Nvidia is tapping into the AI market more and more at just the right time, and not just with the Workbench, but also with tools like Nvidia ACE for games. With generative AI models like ChatGPT being all the rage right now, it’s safe to assume that many developers will be interested in Nvidia’s one-stop-shop solution. Whether that’s a good thing for the rest of us remains to be seen, as some people use generative AI for questionable purposes.

Let’s not forget that AI can get pretty unhinged all on its own, like in the early days of Bing Chat, and the more people who start creating and training these various models, the more instances of problematic or crazy behavior we’re going to see out in the wild. But assuming everything goes well, Nvidia’s AI Workbench could certainly simplify the process of deploying new generative AI for a lot of companies.

Monica J. White
The U.S. government is investigating Nvidia over AI dominance

Nvidia is the target of a new U.S. Department of Justice (DOJ) investigation. The DOJ is looking into Nvidia's dominance in the AI market through its graphics cards, specifically whether it has leveraged its commanding lead of over 80% of that market to lock out competitors, The Information reports.

On July 30, multiple U.S. groups and lawmakers, including Democratic Sen. Elizabeth Warren, urged the DOJ to launch an investigation into Nvidia. The letter to the DOJ cites Nvidia's command of 80% of all GPU chips in the world, and specifically its 98% dominance in the data center market. "Nvidia's size means it now holds control over the world's computing destiny, which gives it dangerous leverage over the global economy," the letter reads.

Read more
Meta’s next AI model to require nearly 10 times the power to train

Facebook parent company Meta will continue to invest heavily in its artificial intelligence research efforts, despite expecting the nascent technology to require years of work before becoming profitable, executives explained on the company's Q2 earnings call Wednesday.

Meta is "planning for the compute clusters and data we'll need for the next several years," CEO Mark Zuckerberg said on the call. Meta will need an "amount of compute… almost 10 times more than what we used to train Llama 3," he said, adding that Llama 4 will "be the most advanced [model] in the industry next year." For reference, the Llama 3 model was trained on a cluster of 16,384 Nvidia H100 80GB GPUs.

Read more
We just learned something surprising about how Apple Intelligence was trained

A new research paper from Apple reveals that the company relied on Google's Tensor Processing Units (TPUs), rather than Nvidia's more widely deployed GPUs, in training two crucial systems within its upcoming Apple Intelligence service. The paper notes that Apple used 2,048 Google TPUv5p chips to train its on-device AI models and 8,192 TPUv4 processors for its server AI models.

Nvidia's chips are highly sought after for good reason, having earned their reputation for performance and compute efficiency. Its products and systems are typically sold as standalone offerings, enabling customers to construct and operate them as they best see fit.

Read more