How to use Nvidia’s new Chat with RTX AI bot

Ever since the earth-shattering release of ChatGPT, the computing world has been waiting for a local AI chatbot that can run disconnected from the cloud. Nvidia now has an answer with Chat with RTX, a local AI chatbot that lets you harness an AI model to skim through your offline data.

Difficulty

Easy

Duration

30 minutes

What You Need

  • Nvidia RTX 30-series or 40-series GPU

  • At least 100GB of disk space

In this guide, we'll show you how to set up and use Chat with RTX. This is just a demo, so expect some bugs as you work with the tool. But hopefully it will open the door to more local AI chatbots and other local AI tools.

How to download Chat with RTX

The first step is to download and configure Chat with RTX, which is actually a bit more complicated than you might expect. All you need to do is run an installer, but the installer is prone to fail, and you'll need to satisfy some minimum system requirements.

You need an RTX 40-series or 30-series GPU with at least 8GB of VRAM, along with 16GB of system RAM, 100GB of disk space, and Windows 11.
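Before downloading, you can verify the free-space requirement from a terminal. The sketch below is our own hypothetical helper (not part of Chat with RTX) using Python's standard library; on Windows, pass the drive letter you plan to install to, such as "C:\\".

```python
import shutil

REQUIRED_GB = 100  # Chat with RTX's stated disk-space requirement

def has_enough_space(path, required_gb=REQUIRED_GB):
    """Return True if the drive holding `path` has at least `required_gb` GB free."""
    free_gb = shutil.disk_usage(path).free / (1024 ** 3)
    return free_gb >= required_gb

# Example: check the current drive. On Windows, use has_enough_space("C:\\").
print(has_enough_space("."))
```

If this prints False for your chosen drive, free up space or pick another location before running the installer.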

Step 1: Download the Chat with RTX installer from Nvidia's website. This compressed folder is 35GB, so it may take a while to download.

Step 2: Once it's finished downloading, right-click the folder and select Extract all.

Extracting the Chat with RTX installation folder.
Jacob Roach / Digital Trends

Step 3: In the folder, you'll find a couple files and folders. Choose setup.exe and walk through the installer.

Step 4: Before installation begins, the installer will ask where you want to store Chat with RTX. Make sure you have at least 100GB of disk space in the location you select, as Chat with RTX downloads the AI models to that location during setup.

Chat with RTX installation.

Step 5: The installer can take upwards of 45 minutes to complete, so don't worry if you see it hanging briefly. It can also slow down your PC, especially while configuring the AI models, so we recommend stepping away for a moment while the installation finishes.

Step 6: The installation may fail. If it does, simply rerun the installer, choosing the same location for the data as before. The installer will resume where it left off.

Step 7: Once the installer is finished, you'll get a shortcut to Chat with RTX on your desktop and the app will open in a browser window.

How to use Chat with RTX with your data

The big draw of Chat with RTX is that you can use your own data. It uses a technique called retrieval-augmented generation, or RAG, to flip through documents and give you answers based on those documents. Rather than answering any question, Chat with RTX is good at answering specific questions about a particular set of data.
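To make the RAG idea concrete, here is a toy sketch of the retrieval step: score each document by word overlap with the question, then hand the best match to the model as context. This is an illustration only — Chat with RTX uses vector embeddings under the hood, not simple word overlap, and the function name and sample documents are our own.

```python
def retrieve(question, documents):
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

docs = [
    "The quarterly report shows revenue grew 12 percent.",
    "Meeting notes: the launch is scheduled for March.",
]
context = retrieve("When is the launch scheduled?", docs)
# An LLM would then answer using `context` rather than its training data alone.
print(context)
```

This is why specific questions work better than general ones: the retriever has a clearer signal to rank the documents against.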

Nvidia includes some sample data so you can try out the tool, but you need to add your own data to unlock the full potential of Chat with RTX.

Step 1: Create a folder where you'll store your dataset. Note the location, as you'll need to point Chat with RTX toward that folder. Currently, Chat with RTX supports .txt, .pdf, and .doc files.
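If you want to double-check which files in that folder Chat with RTX will actually index, a quick sketch (the extension list mirrors the supported types above; the helper name is our own):

```python
from pathlib import Path

# File types Chat with RTX can index, per Nvidia's documentation.
SUPPORTED = {".txt", ".pdf", ".doc"}

def dataset_files(folder):
    """List files in `folder` with a supported extension, sorted by name."""
    return sorted(p.name for p in Path(folder).iterdir()
                  if p.suffix.lower() in SUPPORTED)
```

Anything the helper skips (images, spreadsheets, and so on) will be ignored by the tool as well.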

Step 2: Open Chat with RTX and select the pen icon in the Dataset section.

The icon to select data in Chat with RTX.

Step 3: Navigate to the folder where you stored your data and select it.

Step 4: In Chat with RTX, select the refresh icon in the Dataset section. This will regenerate the model based on the new data. You'll want to refresh the model each time you add new data to the folder or select a different dataset.

The refresh icon in Chat with RTX.

Step 5: With your data added, select the model you want to use in the AI model section. Chat with RTX includes Llama 2 and Mistral, with the latter being the default. Experiment with both, but for new users, Mistral is best.

Step 6: From there, you can start asking questions. Nvidia notes that Chat with RTX doesn't take context into account, so previous responses don't influence future responses. In addition, specific questions will generally yield better results than general questions. Finally, Nvidia notes that Chat with RTX will sometimes reference the wrong data when providing a response, so keep that in mind.

Step 7: If Chat with RTX stops working and a restart doesn't fix it, Nvidia says you can delete the preferences.json file to solve the problem. It's located at C:\Users\\AppData\Local\NVIDIA\ChatWithRTX\RAG\trt-llm-rag-windows-main\config\preferences.json.
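If you'd rather script that reset, a hypothetical sketch follows. It builds the path via the %LOCALAPPDATA% environment variable, which on Windows resolves to the Local folder under your user profile; the function name is our own.

```python
import os
from pathlib import Path

def reset_preferences():
    """Delete Chat with RTX's preferences.json, if it exists, and return its path.

    %LOCALAPPDATA% points at the AppData\\Local folder of the current user.
    """
    prefs = (Path(os.environ["LOCALAPPDATA"]) / "NVIDIA" / "ChatWithRTX"
             / "RAG" / "trt-llm-rag-windows-main" / "config" / "preferences.json")
    prefs.unlink(missing_ok=True)  # no error if the file is already gone
    return prefs
```

Run it, then relaunch Chat with RTX; the app should regenerate its preferences on startup.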

How to use Chat with RTX with YouTube

In addition to your own data, you can use Chat with RTX with YouTube videos. The AI model works from a YouTube video's transcript, so there are some natural limitations.

First, the AI model doesn't see anything not included in the transcript. You can't ask, for example, what someone looks like in a video. In addition, YouTube transcripts aren't always perfect. In videos with messy transcripts, you may not get the responses you want.

Step 1: Open Chat with RTX, and in the Dataset section, select the dropdown and choose YouTube.

Step 2: In the field below, paste a link to a YouTube video or playlist. Next to this field is a counter where you set the maximum number of transcripts to download.

A field in Chat with RTX to download transcripts.

Step 3: Select the download button next to this field and wait until the transcripts have finished downloading. When they're done, click on the refresh button.

Step 4: Once the transcripts are downloaded, you can chat just like you did with your own data. Specific questions are better than general questions, and if you're chatting about multiple videos, Chat with RTX may get the reference wrong if your question is too general.

Step 5: If you want to chat about a new set of videos, you'll need to manually delete the old transcripts. You'll find a button to open an Explorer window next to the refresh button. Head there and delete the transcripts if you want to chat about other videos.
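If you prefer to script that cleanup, here is a hypothetical helper. It assumes the transcripts are stored as plain .txt files in the folder Chat with RTX opens via its Explorer button — confirm both the path and the extension in that window before pointing anything at it.

```python
from pathlib import Path

def clear_transcripts(folder):
    """Delete downloaded .txt transcripts in `folder`; return the removed names."""
    removed = []
    for f in Path(folder).glob("*.txt"):
        f.unlink()
        removed.append(f.name)
    return removed
```

After clearing the folder, download the new transcripts and hit the refresh button so the model picks up the change.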

Jacob Roach
Senior Staff Writer, Computing
Jacob Roach is a writer covering computing and gaming at Digital Trends.