
How to watch Nvidia’s AI announcement at GTC

NVIDIA CEO Jensen Huang on stage. Image: Nvidia

Nvidia’s GPU Technology Conference is in full swing, but it’s CEO Jensen Huang’s keynote address that has everyone waiting with bated breath. He’s expected to make major announcements about Nvidia’s future AI developments and how Nvidia GPUs will power them. Following the launch and subsequent explosion in coverage of the ChatGPT chatbot, Nvidia is looking to recapture the conversation around AI and show why its developments, more so than its competition’s, are what we should be excited about.

But how do you watch Nvidia’s AI announcement at GTC? There’s an official stream that you can tap into right here when it’s time.


How to watch Nvidia’s GTC keynote

The easiest way to watch Jensen Huang’s keynote address at GTC is using the official Nvidia YouTube stream. You can view it embedded below, or visit the Nvidia YouTube channel to watch it directly there.

GTC 2023 Keynote with NVIDIA CEO Jensen Huang

The address begins Tuesday, March 21, at 8 a.m. PT (11 a.m. ET).

What to expect from Nvidia’s GTC keynote

Nvidia’s GTC shows are always packed full of exciting announcements and debuts. The last one, in September 2022, featured the launch of the RTX 4090 and 4080, as well as showcases of DLSS 3 and new Nvidia in-car entertainment systems. GTC 2023, however, is said to focus much more on artificial intelligence.

The secret’s out. Thanks to ChatGPT, everyone knows about the power of modern #AI.

To find out what’s coming next, tune in to NVIDIA founder and CEO Jensen Huang’s keynote at #GTC23 on Tuesday, March 21, at 8 a.m. PDT. https://t.co/pVJkFMQl9D

— NVIDIA GTC (@NVIDIAGTC) March 17, 2023

But despite Nvidia’s major investments in AI hardware and software over the years, from its automated vehicle technologies to the Tensor cores in its consumer GPUs, the conversation in the AI space is moving swiftly, and Nvidia is not at the forefront of it. With ChatGPT, Google Bard, and Microsoft’s AI-augmented Bing search, Nvidia faces an uphill battle to regain some mindshare when it comes to AI.

However, it is poised to take a big swing at that lofty goal: it is expected to show off what its GPUs can do when powering localized AI, along with new software developments it has made on AI models of its own. It’s calling this year’s spring GTC the “No. 1 AI developer conference,” arguably rebranding the event away from its titular GPU focus.

We may also hear the first details about future GPU architectures, such as Blackwell, the expected successor to the existing RTX 40-series Ada Lovelace GPUs.

Jensen Huang will also talk about new metaverse and cloud technologies, as well as sustainable computing.

Nvidia released a short teaser for the keynote, which doesn’t reveal many details, but does whet our appetite for what’s to come.

NVIDIA GTC 2023 Keynote Teaser
Jon Martindale
Nvidia may finally let gamers buy some GPUs at a reasonable price
Logo on the RTX 4060 Ti graphics card.

Nvidia’s getting ready to expand its list of the best graphics cards soon, and thanks to leakers, we now have a rumored date for when these new GPUs might hit the shelves. The date isn’t the part that excites me the most, though. According to the leak, Nvidia will require its add-in card (AIC) partners to offer at least one model at the recommended list price (MSRP) -- something we desperately need right now. But how long will it last?

The scoop comes from HKEPC, a Hong Kong-based publication. According to HKEPC, Nvidia revealed the release dates for the RTX 5060 Ti 16GB, RTX 5060 Ti 8GB, and the RTX 5060 (which will likely come with 8GB of VRAM, although some sources say 12GB). Keep in mind that all of this is still a rumor until Nvidia itself confirms it, which, by the sound of it, won’t happen for a while.

I tested the future of AI image generation. It’s astoundingly fast.
Imagery generated by HART.

One of the core problems with AI is its notoriously high power and compute demand, especially for tasks such as media generation. On mobile phones, only a handful of pricey devices with powerful silicon can run such features natively. Even when implemented at scale in the cloud, it’s a pricey affair.
Nvidia may have quietly addressed that challenge in partnership with the folks over at the Massachusetts Institute of Technology and Tsinghua University. The team created a hybrid AI image generation tool called HART (hybrid autoregressive transformer) that essentially combines two of the most widely used AI image creation techniques. The result is a blazing-fast tool with dramatically lower compute requirements.
To give you an idea of just how fast it is, I asked it to create an image of a parrot playing a bass guitar. It returned the following picture in about a second; I could barely even follow the progress bar. When I gave the same prompt to Google’s Imagen 3 model in Gemini, it took roughly 9-10 seconds on a 200 Mbps internet connection.

A massive breakthrough
When AI images first started making waves, the diffusion technique was behind it all, powering products such as OpenAI’s Dall-E image generator, Google’s Imagen, and Stable Diffusion. This method can produce images with an extremely high level of detail, but it builds each image over many successive denoising steps, which makes it slow and computationally expensive.
The second approach, which has recently gained popularity, is autoregressive models. These work in much the same fashion as chatbots, generating an image piece by piece by predicting each successive chunk of pixels from what came before. It is a faster, but more error-prone, way of creating images with AI.
On-device demo for HART: Efficient Visual Generation with Hybrid Autoregressive Transformer
The team at MIT fused both methods into a single package called HART. It relies on an autoregressive model to predict compressed image assets as discrete tokens, while a small diffusion model handles the rest, compensating for the quality loss. The overall approach reduces the number of generation steps from over two dozen to eight.
The experts behind HART claim that it can “generate images that match or exceed the quality of state-of-the-art diffusion models, but do so about nine times faster.” HART pairs an autoregressive model with roughly 700 million parameters with a small diffusion model that handles 37 million parameters.
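
If you’re curious how the two stages fit together, here is a minimal sketch of the idea in Python with PyTorch. Everything in it -- the class names, sizes, token counts, and the refinement rule -- is a toy assumption chosen to illustrate the description above; it is not the actual HART code or API.

```python
# Toy sketch of HART-style hybrid generation (illustrative assumptions only;
# this is not the real HART architecture or API).
import torch

class TokenAutoregressor(torch.nn.Module):
    """Stand-in for the large (~700M-parameter) autoregressive model:
    it drafts a coarse image as a sequence of discrete tokens."""
    def __init__(self, vocab=1024, dim=64, length=16):
        super().__init__()
        self.length = length
        self.embed = torch.nn.Embedding(vocab + 1, dim)  # +1 for a start token
        self.head = torch.nn.Linear(dim, vocab)

    @torch.no_grad()
    def generate(self):
        tokens = [self.embed.num_embeddings - 1]  # begin from the start token
        for _ in range(self.length):
            ctx = self.embed(torch.tensor(tokens)).mean(0)  # crude context summary
            probs = torch.softmax(self.head(ctx), dim=-1)
            tokens.append(int(torch.multinomial(probs, 1)))  # sample next token
        return torch.tensor(tokens[1:])

class TinyRefiner(torch.nn.Module):
    """Stand-in for the small (~37M-parameter) diffusion model: it predicts
    the residual detail missing from the coarse, token-based latent."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = torch.nn.Linear(dim, dim)

    def forward(self, x, step):
        return self.net(x)

@torch.no_grad()
def hybrid_generate(ar, refiner, codebook, steps=8):
    tokens = ar.generate()               # stage 1: coarse image as discrete tokens
    coarse = codebook[tokens]            # map tokens to continuous latents
    residual = torch.randn_like(coarse)  # stage 2: refine, starting from noise
    for step in range(steps):            # ~8 steps vs. dozens for pure diffusion
        residual = residual - refiner(coarse + residual, step) / steps
    return coarse + residual             # refined latent; a decoder would render pixels

codebook = torch.randn(1024, 64)         # toy VQ-style codebook
latent = hybrid_generate(TokenAutoregressor(), TinyRefiner(), codebook)
print(latent.shape)                      # torch.Size([16, 64])
```

The takeaway from this toy version is structural: the expensive sequential drafting happens once over a short token sequence, and the refinement loop runs only eight times instead of the dozens of denoising passes a pure diffusion model would need.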

The RTX 50-series is the worst GPU launch in recent memory
The RTX 5090 sitting on a pink background.

Nvidia has had some less-than-stellar graphics card launches over the years. Its RTX 20-series was poorly received, with little interest in the flagship features of the time, and the RTX 40-series hardly blew us away. But the RTX 50-series has been something else entirely. It’s the worst GPU launch I can remember.

If you've been following along, the latest is that the RTX 5060 and 5060 Ti are delayed again. But that's just one more straw on the camel's funeral pyre for this catastrophic GPU generation.
In the beginning, there was overhype
It all started off strong for the RTX 50-series. Nvidia CEO Jensen Huang took to the stage at CES 2025 and made some truly grandiose claims that had everyone excited. The RTX 5090 was going to double the performance of the RTX 4090. The RTX 5070 was going to offer 4090-level performance at $549. Multi-frame generation was going to give Nvidia such a lead that AMD’s cards would look ridiculous in comparison.
