Nvidia’s supercomputer may bring on a new era of ChatGPT

Nvidia's CEO showing off the company's Grace Hopper computer.
Nvidia

Nvidia has just announced a new supercomputer that may change the future of AI. The DGX GH200, equipped with nearly 500 times more memory than the systems we’re familiar with now, will soon fall into the hands of Google, Meta, and Microsoft.

The goal? Revolutionizing generative AI, recommender systems, and data processing on a scale we’ve never seen before. Are language models like GPT going to benefit, and what will that mean for regular users?

Describing Nvidia’s DGX GH200 requires the use of terms most users never have to deal with. “Exaflop,” for example, because the supercomputer provides 1 exaflop of performance and 144 terabytes of shared memory. Nvidia notes that this means nearly 500 times more memory than in a single Nvidia DGX A100 system.

Let’s circle back to that 1 exaflop figure and break it down a little. One exaflop equals a quintillion floating-point operations per second (FLOPS). For comparison, Nvidia’s RTX 4090 can hit around 100 teraflops (TFLOPS) when overclocked, where a teraflop is one trillion floating-point operations per second. The difference is staggering, but of course, the RTX 4090 is not a data center GPU. The DGX GH200, on the other hand, integrates a substantial number of high-performance GPUs that don’t belong anywhere near a consumer PC.
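To put those prefixes side by side, here is a quick back-of-the-envelope calculation using only the two figures quoted above (the comparison is rough, since data center FLOPS are often quoted at lower numeric precision than consumer gaming benchmarks):

```python
# Rough scale comparison: DGX GH200's quoted 1 exaflop vs. an
# overclocked RTX 4090's roughly 100 teraflops (figures from the article).

EXAFLOP = 10**18   # one quintillion floating-point operations per second
TERAFLOP = 10**12  # one trillion floating-point operations per second

dgx_gh200 = 1 * EXAFLOP
rtx_4090 = 100 * TERAFLOP

ratio = dgx_gh200 / rtx_4090
print(f"DGX GH200 ≈ {ratio:,.0f}x an RTX 4090")  # → DGX GH200 ≈ 10,000x an RTX 4090
```

In other words, by these headline numbers alone, one DGX GH200 delivers on the order of ten thousand times the throughput of the fastest consumer card.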

Nvidia's Grace Hopper superchip.
Nvidia

The computer is powered by Nvidia’s GH200 Grace Hopper superchips. There are 256 of them in total, which, thanks to Nvidia’s NVLink interconnect technology, are all able to work together as a unified system, essentially creating one massive GPU.

The GH200 superchips used here also don’t need a traditional PCIe connection between the CPU and the GPU. Nvidia says that they’re already equipped with an Arm-based Nvidia Grace CPU, as well as an H100 Tensor Core GPU. Nvidia’s got some fancy chip interconnects going on here too, this time using NVLink-C2C. As a result, the bandwidth between the processor and the graphics card is said to be significantly improved (up to 7 times) and more power-efficient (up to 5 times).

Packing 256 of these chips into a single powerhouse of a supercomputer is impressive enough, but it gets even better when you consider that, previously, only eight GPUs could be joined with NVLink at a time. A leap from eight to 256 chips certainly gives Nvidia some bragging rights.

Now, where will the DGX GH200 end up, and what can it offer to the world? Nvidia is building its own Helios supercomputer as a means of advancing its AI research and development. It will encompass four DGX GH200 systems, all interconnected with Nvidia’s Quantum-2 InfiniBand, and Nvidia expects it to come online by the end of the year.

Nvidia is also sharing its new development with the world, starting with Google Cloud, Meta, and Microsoft. The purpose is much the same — exploring generative AI workloads.

When it comes to Google and Microsoft, it’s hard not to imagine that the DGX GH200 could power improvements in Bard, ChatGPT, and Bing Chat.

Nvidia CEO showing the company's Hopper computer.
Nvidia

The significant computational power provided by a single DGX GH200 system makes it well-suited to advancing the training of sophisticated language models. It’s hard to say what exactly that could mean without comment from one of the interested parties, but we can speculate a little.

More power means larger models, meaning more nuanced and accurate text and a wider range of data for them to be trained on. We might see better cultural understanding, more knowledge of context, and greater coherency. Specialized AI chatbots could also begin popping up, further replacing humans in fields such as technology.

Should we be concerned about potential job displacement, or should we be excited about the advancements these supercomputers could bring? The answer is not straightforward. One thing is for sure — Nvidia’s DGX GH200 might shake things up in the world of AI, and Nvidia has just furthered its AI lead over AMD yet again.

Monica J. White
Monica is a computing writer at Digital Trends, focusing on PC hardware. Since joining the team in 2021, Monica has written…