Nvidia’s Jetson AGX Xavier module is designed to give robots better brains

Nvidia wants to supply the brains that help the next generation of autonomous robots do the heavy lifting. The newly announced Jetson AGX Xavier module aims to do just that.

As a system-on-a-chip, Jetson Xavier is part of Nvidia’s bet that graphics and deep-learning architectures, rather than the CPU, can overcome the computational limits of Moore’s Law. That’s according to Deepu Talla, Nvidia’s vice president and general manager of autonomous machines, speaking at a media briefing at the company’s new Endeavor headquarters in Santa Clara, California, on Wednesday evening. The company has lined up a number of partners and envisions the Xavier module powering delivery drones, autonomous vehicles, medical imaging, and other tasks that require deep learning and artificial intelligence capabilities.

Nvidia claims that the latest Xavier module is capable of delivering up to 32 trillion operations per second (TOPS). Combined with the artificial intelligence capabilities of the Tensor Cores found in Nvidia’s Volta architecture, Xavier delivers 20 times the performance of the older Jetson TX2 with 10 times better energy efficiency. This gives Xavier the power of a workstation-class server in a module that fits in the palm of your hand, Talla said.

In a DeepStream demonstration, Talla showed that while the older Jetson TX2 can process two 1080p videos, each with four deep neural networks, the company’s high-performance-computing Tesla chip increases that number to 24 videos, each at 720p resolution. Xavier takes that even further: Talla showed the chipset processing 30 videos, each at 1080p resolution.
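For a rough sense of scale, the aggregate pixel throughput implied by those demo figures can be tallied in a few lines. The frame rate is an assumption on our part (30 frames per second per stream); the presentation gave only stream counts and resolutions.

```python
# Rough pixel-throughput comparison implied by the DeepStream demo figures.
# FPS is an assumption; Nvidia's demo specified only stream counts and resolutions.

FPS = 30  # assumed frames per second per stream

def throughput(streams, width, height, fps=FPS):
    """Total pixels processed per second across all streams."""
    return streams * width * height * fps

tx2    = throughput(2,  1920, 1080)   # Jetson TX2: two 1080p streams
tesla  = throughput(24, 1280, 720)    # Tesla: 24 streams at 720p
xavier = throughput(30, 1920, 1080)   # Xavier: 30 streams at 1080p

print(f"Xavier handles {xavier / tx2:.0f}x the TX2's pixel throughput")
```

By this back-of-the-envelope measure, Xavier moves 15 times the pixels of the TX2 while also running deep neural networks on every stream.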

The Xavier module consists of an eight-core Carmel ARM64 processor, 512 CUDA Tensor Cores, dual NVDLA deep-learning accelerators, and multiple engines for video processing to help autonomous robots process images and videos locally. In a presentation, Talla claimed that the new Xavier module beats both the prior Jetson TX2 platform and an Intel Core i7 computer paired with an Nvidia GeForce GTX 1070 graphics card in AI inference performance and efficiency.

Some of Nvidia’s developer partners are still building autonomous machines on older Nvidia solutions like the TX2 platform or a GTX GPU, including self-driving delivery carts, industrial drones, and smart city solutions. However, many claim that these robots can easily be upgraded to the new Xavier platform to take advantage of its benefits.

While native on-board processing of images and videos will help autonomous machines learn faster and accelerate how AI can be used to detect diseases in medical imaging applications, it can also be used in the virtual reality space. Live Planet VR, which creates an end-to-end platform and a 16-lens camera solution to live-stream VR video, uses Nvidia’s hardware to process the graphics and clips together entirely inside the camera without requiring any file exports.

“Unlike other solutions, all the processing is done on the camera,” Live Planet community manager Jason Garcia said. Currently, the company uses Nvidia’s GTX cards to stitch the video clips from the different lenses together and reduce image distortion from the wide-angle lenses.

Talla said that videoconferencing solutions can also use AI to improve collaboration by tracking the speaker and switching cameras to highlight either the person talking or the whiteboard. Partner Slightech showed off one version of this, demonstrating how face recognition and tracking can be implemented on a telepresence robot powered by its Mynt 3D camera sensor, its AI technology, and Nvidia’s Jetson technology. Nvidia is working with more than 200,000 developers, five times the number from spring 2017, to get Jetson into more applications ranging from healthcare to manufacturing.

The Jetson AGX Xavier module is now shipping at a starting price of $1,099 per unit when purchased in 1,000-unit batches.

“Developers can use Jetson AGX Xavier to build the autonomous machines that will solve some of the world’s toughest problems, and help transform a broad range of industries,” Nvidia said in a prepared statement. “Millions are expected to come onto the market in the years ahead.”

Updated December 20: This article originally mentioned that Live Planet VR has an 18-lens system. We’ve updated our reporting to reflect that Live Planet VR uses a 16-lens configuration.

Chuong Nguyen
Silicon Valley-based technology reporter and Giants baseball fan who splits his time between Northern California and Southern…