Boost Edge AI Performance with the New NVIDIA Jetson Orin NX 16GB System-on-Module

Building on the momentum from last year's NVIDIA Jetson edge AI expansion, the NVIDIA Jetson Orin NX 16 GB module is now available for purchase worldwide.

The Jetson Orin NX 16 GB module delivers unmatched performance and efficiency for small form factor, low-power robots, embedded applications, and autonomous machines, making it ideal for products such as drones and handheld devices.

The module supports advanced applications in industries such as manufacturing, logistics, retail, agriculture, healthcare, and life sciences, all in a compact, power-efficient package.

It is the smallest Jetson form factor, delivering up to 100 TOPS of AI performance with power configurable between 10 W and 25 W. It gives developers 3x the performance of the NVIDIA Jetson AGX Xavier and 5x the performance of the NVIDIA Jetson Xavier NX.

The system-on-module supports multiple AI application pipelines with an NVIDIA Ampere architecture GPU, next-generation deep learning and vision accelerators, high-speed I/O, and fast memory bandwidth. You can develop solutions using your largest and most complex AI models in natural language understanding, 3D perception, and multi-sensor fusion.

To showcase this leap in performance, NVIDIA ran computer vision benchmarks using NVIDIA JetPack 5.1. Testing included dense INT8 and FP16 pre-trained models from NGC; the same models were also run on Jetson Xavier NX for comparison.

Following is the complete list of benchmarks:

  • NVIDIA PeopleNet v2.5 for the highest accuracy in people detection.
  • NVIDIA ActionRecognitionNet for 2D and 3D models.
  • NVIDIA LPRNet for license plate recognition.
  • NVIDIA DashCamNet for object detection and labeling.
  • NVIDIA BodyPoseNet for multiperson human pose estimation.

Taking the geomean of these benchmarks, Jetson Orin NX shows a 2.1x performance increase compared to Jetson Xavier NX. With future software optimizations, this is expected to approach 3.1x for dense benchmarks. Other Jetson devices have increased performance by 1.5x since their first supporting software release, and a similar gain is anticipated for the Jetson Orin NX 16 GB.
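The aggregate figure above is a geometric mean across the individual model speedups. As a minimal sketch of how such a number is computed, the snippet below takes the geomean of a set of per-model speedups; the values shown are hypothetical placeholders for illustration only, since the article reports only the 2.1x aggregate, not per-model results.

```python
from math import prod

# Hypothetical per-model speedups (Orin NX vs. Xavier NX); illustrative only,
# NOT the measured values behind the published 2.1x geomean.
speedups = {
    "PeopleNet v2.5": 2.4,
    "ActionRecognitionNet 2D": 1.9,
    "ActionRecognitionNet 3D": 2.2,
    "LPRNet": 2.0,
    "DashCamNet": 2.1,
    "BodyPoseNet": 2.0,
}

def geomean(values):
    """Geometric mean: the nth root of the product of n values."""
    vals = list(values)
    return prod(vals) ** (1.0 / len(vals))

print(f"Geomean speedup: {geomean(speedups.values()):.2f}x")
```

The geometric mean is the standard way to aggregate benchmark ratios, because it is insensitive to which platform is chosen as the baseline.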

Jetson Orin NX also brings support for sparsity, which will enable even greater performance. With sparsity, you can take advantage of the fine-grained structured sparsity in deep learning networks to increase the throughput for Tensor Core operations.
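The fine-grained structured sparsity that Ampere-generation Tensor Cores accelerate is a 2:4 pattern: in every contiguous group of four weights, at most two are nonzero. The sketch below only illustrates that pattern with NumPy, using a hypothetical `prune_2_of_4` helper; in practice, pruning and sparse inference are handled by NVIDIA's tooling (for example, TensorRT), not hand-rolled code like this.

```python
import numpy as np

def prune_2_of_4(weights: np.ndarray) -> np.ndarray:
    """Zero the 2 smallest-magnitude weights in each group of 4.

    Illustrates the 2:4 structured sparsity pattern; assumes the weight
    count is a multiple of 4.
    """
    w = weights.reshape(-1, 4).copy()
    # Indices of the 2 smallest-magnitude entries in each group of 4
    drop = np.argsort(np.abs(w), axis=1)[:, :2]
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

w = np.array([0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.8, 0.1])
print(prune_2_of_4(w))  # each group of 4 keeps its 2 largest-magnitude values
```

Because exactly half the weights in each group are zero at known positions, the hardware can skip those multiplications, which is what doubles Tensor Core throughput on sparse layers.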
