Skip to main content

Digital Trends may earn a commission when you buy through links on our site. Why trust us?

Nvidia’s new Guardrails tool fixes the biggest problem with AI chatbots

Nvidia is introducing its new NeMo Guardrails tool for AI developers, and it promises to make AI chatbots like ChatGPT just a little less insane. The open-source software is available to developers now, and it focuses on three areas to make AI chatbots more useful and less unsettling.

The tool sits between the user and the Large Language Model (LLM) they’re interacting with. It’s a safety for chatbots, intercepting responses before they ever reach the language model to either stop the model from responding or to give it specific instructions about how to respond.

Bing Chat saying it wants to be human.
Jacob Roach / Digital Trends

Nvidia says NeMo Guardrails is focused on topical, safety, and security boundaries. The topical focus seems to be the most useful, as it forces the LLM to stay in a particular range of responses. Nvidia demoed Guardrails by showing a chatbot trained on the company’s HR database. When asked a question about Nvidia’s finances, it gave a canned response that was programmed with NeMo Guardrails.

Recommended Videos

This is important due to the many so-called hallucinations we’ve seen out of AI chatbots. Microsoft’s Bing Chat, for example, provided us with several bizarre and factually incorrect responses in our first demo. When faced with a question the LLM doesn’t understand, it will often make up a response in an attempt to satisfy the query. NeMo Guardrails aims to put a stop to those made-up responses.

Get your weekly teardown of the tech behind PC gaming
Check your inbox!

The safety and security tenets focus on filtering out unwanted responses from the LLM and preventing it from being toyed with by users. As we’ve already seen, you can jailbreak ChatGPT and other AI chatbots. NeMo Guardrails will take those queries and block them from ever reaching the LLM.

A diagram of Nvidia's NeMo Guardrails tool.
Image used with permission by copyright holder

Although NeMo Guardrails to built to keep chatbots on-topic and accurate, it isn’t a catch-all solution. Nvidia says it works best as a second line of defense, and that companies developing and deploying chatbots should still train the model on a set of safeguards.

Developers need to customize the tool to fit their applications, too. This allows NeoMo Guardrails to sit on top of middleware that AI models already use, such as LangChain, which already provides a framework for how AI chatbots are supposed to interact with users.

In addition to being open-source, Nvidia is also offering NeMo Guardrails as part of its AI Foundations service. This package provides several pre-trained models and frameworks for companies that don’t have the time or resources to train and maintain their own models.

Jacob Roach
Former Digital Trends Contributor
Jacob Roach is the lead reporter for PC hardware at Digital Trends. In addition to covering the latest PC components, from…
Chatbots are going to Washington with ChatGPT Gov
glasses and chatgpt

In an X post Monday commenting on DeepSeek's sudden success, OpenAI CEO Sam Altman promised to "pull up some releases" and it appears he has done so. OpenAI unveiled its newest product on Tuesday, a "tailored version of ChatGPT designed to provide U.S. government agencies with an additional way to access OpenAI’s frontier models," per the announcement post. ChatGPT Gov will reportedly offer even tighter data security measures than ChatGPT Enterprise, but how will it handle the hallucinations that plague the company's other models?

According to OpenAI, more than 90,000 federal, state, and local government employees across 3,500 agencies have queried ChatGPT more than 18 million times since the start of 2024. The new platform will enable government agencies to enter “non-public, sensitive information” into ChatGPT while it runs within their secure hosting environments -- specifically, the Microsoft Azure commercial cloud or Azure Government community cloud -- and cybersecurity frameworks like IL5 or CJIS. This enables each agency to "manage their own security, privacy and compliance requirements,” Felipe Millon, Government Sales lead at OpenAI told reporters on the press call Tuesday.

Read more
How DeepSeek flipped the tech world on its head overnight
The DeepSeek website.

DeepSeek, the chatbot made by a Chinese startup that seemingly dethroned ChatGPT, is taking the world by storm. It's currently the number one topic all over the news, and a lot has happened in the past 24 hours. Among other highlights, Nvidia's stock plummeted as a response to DeepSeek; President Donald Trump commented on the new AI; Mark Zuckerberg is assembling a team to find an answer to DeepSeek. Below, we'll cover all the latest news you need to know about DeepSeek.
Nvidia gets hit by the rise of DeepSeek

Although ChatGPT is the chatbot that quickly lost its public favorite status with the rise of DeepSeek, Nvidia is the company that suffered the greatest losses. In fact, Nvidia's market loss following the launch of DeepSeek's large language model (LLM) marks the greatest one-day stock market drop in history, says Forbes. Nvidia lost nearly $600 billion as a result of the Chinese company behind DeepSeek revealing just how cheap the new LLM is to develop in comparison to rivals from Anthropic, Meta, or OpenAI.

Read more
OpenAI’s big, new Operator AI already has problems
OpenAI logo on a white board

OpenAI has announced its AI agent tool, called Operator, as a research preview as of Thursday, but the launch isn’t without its minor hiccups.

The artificial intelligence brand showcased features of the new tool in an online demo, explaining that Operator is a Computer Using Agent (CUA) based on the GPT-4o model, which enables multi-modal functions, such as the ability to search the web and being able to understand the reasoning of the search results.

Read more