Nvidia’s $200 Jetson Orin Nano minicomputer is 80 times faster than the previous version

Nvidia announced the upcoming release of the Jetson Orin Nano, a system-on-module (SOM) that will power the next generation of entry-level AI and robotics devices, during its GTC 2022 keynote today.

Nvidia says the new version delivers an 80x increase in performance over the $99 Jetson Nano. Released in 2019, the original has served as a bare-bones entry point into the world of AI and robotics, particularly for hobbyists and STEM students, and this successor seriously ups the power.

Nvidia's Jetson Orin Nano robotics processor.

A system-on-module (also referred to as a computer-on-module) is a single board carrying a microprocessor, memory, and input/output (I/O) interfaces, and it usually plugs into a carrier board. It’s not the same thing as a system-on-a-chip (SoC), mind you — an SOM is board-based and has room for extra components; it may even include an SoC. In short, an SOM is a ready-to-use computing building block, but it’s not a full computer.

With the technicalities out of the way, let’s talk about Nvidia’s latest development, the Jetson Orin line, arriving with six Orin-based production modules made to handle AI and robotics applications at an affordable price. Among them is the Nvidia Jetson Orin Nano.

Despite having the smallest form factor of any Jetson SOM, the Jetson Orin Nano can handle up to 40 trillion operations per second (TOPS) of AI-related tasks. The performance hits new heights with the AGX Orin, which serves up 275 TOPS to handle advanced autonomous machines.

Nvidia’s Jetson Orin comes with an Ampere-based GPU, an ARM-based CPU, and multimodal sensor support. It’s also fully compatible with Nvidia’s Orin NX modules, including full emulation support that will enable Nvidia’s customers to design around multiple Jetson modules. Other perks include support for multiple concurrent AI application pipelines, complete with fast inputs and outputs.

The Jetson Orin Nano modules will be available in two variants: one with 8GB of memory and up to 40 TOPS, and one with 4GB of memory and up to 20 TOPS. In terms of power consumption, the SOM needs next to nothing: the former draws between 7 and 15 watts, while the latter needs only 5 to 10 watts.
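To put those TOPS figures in perspective, here is a rough back-of-envelope sketch of how a compute budget translates into inference throughput. The per-frame operation count and the sustained-utilization factor below are illustrative assumptions, not Nvidia benchmarks:

```python
# Back-of-envelope estimate: how many inferences per second a given
# TOPS budget could support. All model and utilization numbers here
# are illustrative assumptions, not measured Jetson figures.

def max_inferences_per_second(tops: float, ops_per_inference: float,
                              utilization: float = 0.3) -> float:
    """Theoretical upper bound on inference throughput.

    tops: rated compute in trillions of operations per second
    ops_per_inference: operations one forward pass needs
    utilization: fraction of peak compute realistically sustained
    """
    return tops * 1e12 * utilization / ops_per_inference

# Hypothetical vision model costing ~5 billion ops per frame.
fps_8gb = max_inferences_per_second(40, 5e9)  # 40 TOPS variant
fps_4gb = max_inferences_per_second(20, 5e9)  # 20 TOPS variant
print(f"8GB module: ~{fps_8gb:.0f} frames/s")
print(f"4GB module: ~{fps_4gb:.0f} frames/s")
```

Real-world throughput depends heavily on memory bandwidth, precision (INT8 vs. FP16), and batch size, so treat this purely as a ceiling, not a prediction.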

Nvidia Jetson Orin Nano system-on-module. Image: Nvidia

Nvidia foresees that the modules will be used by a wide variety of customers, from engineers dealing with edge AI applications to robotics operating system developers. The low price point, starting at just $199, will make this technology more accessible to a wider range of users. Nvidia cites Canon, John Deere, Microsoft Azure, and more as early adopters of Jetson Orin Nano.

“With an orders-of-magnitude increase in performance for millions of edge AI and ROS developers today, Jetson Orin is the ideal platform for virtually every kind of robotics deployment imaginable,” said Deepu Talla, vice president of Nvidia’s embedded and edge computing division.

Nvidia claims that the Jetson Orin will offer an 80x increase in performance over the previous generation of Jetson SOMs. That’s a massive step up at a reasonable price. The modules will be available starting in January 2023.

Monica J. White