Jetson Nano Developer Kit

Benchmarks for a future article on Phoronix.com.

HTML result view exported from: https://openbenchmarking.org/result/1908193-HV-1903186HV68.
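For anyone wanting to re-run this comparison locally, the Phoronix Test Suite can normally replay a public OpenBenchmarking.org result file by its ID, e.g. "phoronix-test-suite benchmark 1908193-HV-1903186HV68" (assuming the constituent test profiles are still available for download).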

System Details

(Where a field is not listed for a system, it matches the system above it in the original comparison table.)

Jetson TX1 Max-P:
  Processor: ARMv8 rev 1 @ 1.73GHz (4 Cores)
  Motherboard: jetson_tx1
  Memory: 4096MB
  Disk: 16GB 016G32
  Graphics: NVIDIA Tegra X1
  Monitor: VE228
  OS: Ubuntu 16.04
  Kernel: 4.4.38-tegra (aarch64)
  Desktop: Unity 7.4.5
  Display Server: X Server 1.18.4
  Display Driver: NVIDIA 28.1.0
  OpenGL: 4.5.0
  Vulkan: 1.0.8
  Compiler: GCC 5.4.0 20160609
  File-System: ext4
  Screen Resolution: 1920x1080

Jetson TX2 Max-Q:
  Processor: ARMv8 rev 3 @ 1.27GHz (4 Cores / 6 Threads)
  Motherboard: quill
  Memory: 8192MB
  Disk: 31GB 032G34
  Graphics: NVIDIA TEGRA
  Desktop: Unity 7.4.0
  Display Driver: NVIDIA 28.2.1
  Compiler: GCC 5.4.0 20160609 + CUDA 9.0

Jetson TX2 Max-P:
  Processor: ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads)

Jetson AGX Xavier:
  Processor: ARMv8 rev 0 @ 2.27GHz (8 Cores)
  Motherboard: jetson-xavier
  Memory: 16384MB
  Disk: 31GB HBG4a2
  Graphics: NVIDIA Tegra Xavier
  OS: Ubuntu 18.04
  Kernel: 4.9.108-tegra (aarch64)
  Desktop: Unity 7.5.0
  Display Server: X Server 1.19.6
  Display Driver: NVIDIA 31.0.2
  OpenGL: 4.6.0
  Vulkan: 1.1.76
  Compiler: GCC 7.3.0 + CUDA 10.0

Jetson Nano:
  Processor: ARMv8 rev 1 @ 1.43GHz (4 Cores)
  Motherboard: jetson-nano
  Memory: 4096MB
  Disk: 32GB GB1QT
  Graphics: NVIDIA TEGRA
  Network: Realtek RTL8111/8168/8411
  Kernel: 4.9.140-tegra (aarch64)
  Display Driver: NVIDIA 1.0.0
  Vulkan: 1.1.85

Raspberry Pi 3 Model B+:
  Processor: ARMv7 rev 4 @ 1.40GHz (4 Cores)
  Motherboard: BCM2835 Raspberry Pi 3 Model B Plus Rev 1.3
  Memory: 926MB
  Disk: 32GB GB2MW
  Graphics: BCM2708
  OS: Raspbian 9.6
  Kernel: 4.19.23-v7+ (armv7l)
  Desktop: LXDE
  Display Server: X Server 1.19.2
  Compiler: GCC 6.3.0 20170516
  Screen Resolution: 656x416

ASUS TinkerBoard:
  Processor: ARMv7 rev 1 @ 1.80GHz (4 Cores)
  Motherboard: Rockchip (Device Tree)
  Memory: 2048MB
  Disk: 32GB GB1QT
  OS: Debian 9.0
  Kernel: 4.4.16-00006-g4431f98-dirty (armv7l)
  Display Server: X Server 1.18.4
  Screen Resolution: 1024x768

ODROID-XU4:
  Processor: ARMv7 rev 3 @ 1.50GHz (8 Cores)
  Motherboard: ODROID-XU4 Hardkernel Odroid XU4
  Disk: 16GB AJTD4R
  Graphics: llvmpipe 2GB
  Monitor: VE228
  OS: Ubuntu 18.04
  Kernel: 4.14.37-135 (armv7l)
  Display Server: X Server 1.19.6
  OpenGL: 3.3 Mesa 18.0.0-rc5 (LLVM 6.0 128 bits)
  Compiler: GCC 7.3.0
  Screen Resolution: 1920x1080

AGX Xavier 32.2:
  Processor: ARMv8 rev 0 @ 2.27GHz (8 Cores)
  Motherboard: Jetson-AGX
  Memory: 16384MB
  Disk: 8GB FLASH DRIVE + 31GB HBG4a2 + 31GB SD32G
  Graphics: NVIDIA Tegra Xavier
  Monitor: PHL 247E6
  Kernel: 4.9.140-tegra (aarch64)
  Desktop: GNOME Shell 3.28.4
  Display Driver: NVIDIA 32.2.0
  OpenGL: 4.6.0
  Vulkan: 1.1.85
  Compiler: GCC 7.4.0 + CUDA 10.0

Compiler Details
- Jetson TX1 Max-P: --build=aarch64-linux-gnu --disable-browser-plugin --disable-libquadmath --disable-werror --enable-checking=release --enable-clocale=gnu --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --target=aarch64-linux-gnu --with-arch-directory=aarch64 --with-default-libstdcxx-abi=new -v
- Jetson TX2 Max-Q: (same configuration as Jetson TX1 Max-P)
- Jetson TX2 Max-P: (same configuration as Jetson TX1 Max-P)
- Jetson AGX Xavier: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
- Jetson Nano: (same configuration as Jetson AGX Xavier)
- Raspberry Pi 3 Model B+: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv6 --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfp --with-target-system-zlib -v
- ASUS TinkerBoard: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-mode=thumb --with-target-system-zlib -v
- ODROID-XU4: --build=arm-linux-gnueabihf --disable-libitm --disable-libquadmath --disable-libquadmath-support --disable-sjlj-exceptions --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-gcc-major-version-only --with-mode=thumb --with-target-system-zlib -v
- AGX Xavier 32.2: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v

Processor Details
- Jetson TX1 Max-P: Scaling Governor: tegra-cpufreq interactive
- Jetson TX2 Max-Q: Scaling Governor: tegra_cpufreq schedutil
- Jetson TX2 Max-P: Scaling Governor: tegra_cpufreq schedutil
- Jetson AGX Xavier: Scaling Governor: tegra_cpufreq schedutil
- Jetson Nano: Scaling Governor: tegra-cpufreq schedutil
- Raspberry Pi 3 Model B+: Scaling Governor: BCM2835 Freq ondemand
- ASUS TinkerBoard: Scaling Governor: cpufreq-dt interactive
- ODROID-XU4: Scaling Governor: cpufreq-dt ondemand
- AGX Xavier 32.2: Scaling Governor: tegra_cpufreq schedutil

Python Details
- Jetson TX1 Max-P: Python 2.7.12 + Python 3.5.2
- Jetson TX2 Max-Q: Python 2.7.12 + Python 3.5.2
- Jetson TX2 Max-P: Python 2.7.12 + Python 3.5.2
- Jetson AGX Xavier: Python 2.7.15rc1 + Python 3.6.7
- Jetson Nano: Python 2.7.15rc1 + Python 3.6.7
- Raspberry Pi 3 Model B+: Python 2.7.13 + Python 3.5.3
- ASUS TinkerBoard: Python 2.7.13 + Python 3.5.3
- ODROID-XU4: Python 2.7.15rc1 + Python 3.6.7
- AGX Xavier 32.2: Python 2.7.15+ + Python 3.6.8

Kernel Details
- ODROID-XU4: usbhid.quirks=0x0eef:0x0005:0x0004

Graphics Details
- ODROID-XU4: EXA
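The scaling governors listed under Processor Details control how each board's CPU clocks respond to load (interactive, schedutil, and ondemand are all demand-based Linux cpufreq governors), which can materially affect the results below; on Linux the active governor can be read from /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor.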

Results overview: the detailed tables below cover CUDA Mini-Nbody, GLmark2, NVIDIA TensorRT Inference (VGG16, VGG19, AlexNet, GoogleNet, ResNet50, and ResNet152 at FP16 and INT8 precision with batch sizes of 4 and 32), LeelaChessZero, TTSIOD 3D Renderer, 7-Zip Compression, C-Ray, Rust Prime Benchmark, Zstd Compression, FLAC Audio Encoding, OpenCV Benchmark, PyBench, and Tesseract OCR across the Jetson TX1 Max-P, Jetson TX2 Max-Q, Jetson TX2 Max-P, Jetson AGX Xavier, Jetson Nano, Raspberry Pi 3 Model B+, ASUS TinkerBoard, ODROID-XU4, and AGX Xavier 32.2 configurations.

CUDA Mini-Nbody

Test: Original

OpenBenchmarking.org - (NBody^2)/s, More Is Better
CUDA Mini-Nbody 2015-11-10 - Test: Original

System                     Result   SE +/-    N
Jetson TX2 Max-Q             6.77     0.03    3
Jetson TX2 Max-P             8.24     0.01    3
Jetson AGX Xavier           47.13     0.00    3
Jetson Nano                  4.07     0.01    3
AGX Xavier 32.2             36.36     1.40   12
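Throughout this result file, "SE +/-" is presumably the standard error of the mean over the N recorded runs of each test, i.e. with $s$ the sample standard deviation across runs,

    $\mathrm{SE} = s / \sqrt{N}$

so the Jetson TX2 Max-Q row above (SE 0.03, N = 3) corresponds to a run-to-run standard deviation of roughly $0.03 \times \sqrt{3} \approx 0.05$ (NBody^2)/s.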

GLmark2

Resolution: 1920 x 1080

OpenBenchmarking.org - Score, More Is Better
GLmark2 - Resolution: 1920 x 1080

System                     Score
Jetson AGX Xavier           2876
Jetson Nano                  646
AGX Xavier 32.2             2394

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            25.99     0.13    3
Jetson TX2 Max-P            32.64     0.50    4
Jetson AGX Xavier          208.76     0.10    3
Jetson Nano                 14.35     0.02    2
AGX Xavier 32.2            203.81     0.33    3

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            14.24     0.20    5
Jetson TX2 Max-P            17.56     0.25    6
Jetson AGX Xavier          303.78     0.46    3
AGX Xavier 32.2            309.75     0.28    3

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            21.04     0.34    3
Jetson TX2 Max-P            26.56     0.38    3
Jetson AGX Xavier          172.50     0.50    3
Jetson Nano                 11.59     0.05    2
AGX Xavier 32.2            169.75     0.26    3

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            11.45     0.23    3
Jetson TX2 Max-P            14.32     0.25    4
Jetson AGX Xavier          265.81     0.20    3
AGX Xavier 32.2            271.65     0.23    3

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            29.83     0.18    3
Jetson TX2 Max-P            36.87     0.31    3
Jetson AGX Xavier          247.95     0.12    3
AGX Xavier 32.2            164.81     2.57   15

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            15.79     0.01    3
Jetson TX2 Max-P            19.91     0.05    3
Jetson AGX Xavier          475.08     0.10    3
AGX Xavier 32.2            367.91     4.28    6

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            23.94     0.07    3
Jetson TX2 Max-P            29.83     0.05    3
Jetson AGX Xavier          203.96     0.04    3
AGX Xavier 32.2            113.59     1.61   15

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            12.59     0.03    3
Jetson TX2 Max-P            15.92     0.06    3
Jetson AGX Xavier          394.66     0.23    3
AGX Xavier 32.2            405.46     4.10   12

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q              216     3.03    6
Jetson TX2 Max-P              264     7.77   12
Jetson AGX Xavier            1200     1.82    3
Jetson Nano                   118     2.12   12
AGX Xavier 32.2              1195     0.95    3

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q           148.00     0.91    3
Jetson TX2 Max-P           184.00     2.79    5
Jetson AGX Xavier         1143.00     2.59    3
Jetson Nano                 84.10     0.72    3
AGX Xavier 32.2           1138.17     4.56    3

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q              374     2.82    3
Jetson TX2 Max-P              462     7.68   12
Jetson AGX Xavier            2038     2.07    3
Jetson Nano                   201     1.59    3
AGX Xavier 32.2              2029     2.15    3

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q              237     1.39    3
Jetson TX2 Max-P              301     0.52    3
Jetson AGX Xavier            3143     1.06    3
Jetson Nano                   128     0.06    3
AGX Xavier 32.2              3166     1.15    3

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            72.01     1.10   12
Jetson TX2 Max-P            92.28     1.32   12
Jetson AGX Xavier          547.50     0.03    3
Jetson Nano                 41.04     0.25    3
AGX Xavier 32.2            589.98     0.48    3

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            39.15     0.64    3
Jetson TX2 Max-P            49.97     0.79    4
Jetson AGX Xavier          902.78     1.86    3
Jetson Nano                 20.96     0.36    3
AGX Xavier 32.2            927.67     6.25    3

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q           156.00     1.90   12
Jetson TX2 Max-P           197.00     2.27    3
Jetson AGX Xavier          796.00     2.48    3
Jetson Nano                 83.37     0.70    3
AGX Xavier 32.2            864.75     2.77    3

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            88.88     1.32    3
Jetson TX2 Max-P           113.00     1.65    3
Jetson AGX Xavier         1146.00     4.31    3
Jetson Nano                 47.82     0.60    3
AGX Xavier 32.2           1197.35     3.40    3

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            27.34     0.34    3
Jetson TX2 Max-P            35.11     0.36    3
Jetson AGX Xavier          224.19     0.22    3
Jetson Nano                 15.76     0.04    3
AGX Xavier 32.2            231.88     0.38    3

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            14.50     0.15    3
Jetson TX2 Max-P            18.29     0.14    3
Jetson AGX Xavier          372.73     1.59    3
Jetson Nano                  7.76     0.03    3
AGX Xavier 32.2            382.28     0.84    3

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            86.08     0.86    3
Jetson TX2 Max-P           111.00     1.22    3
Jetson AGX Xavier          636.00     1.23    3
Jetson Nano                 46.51     0.02    3
AGX Xavier 32.2            699.89     0.23    3

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            47.15     0.08    3
Jetson TX2 Max-P            59.69     0.04    3
Jetson AGX Xavier         1215.08     0.25    3
Jetson Nano                 25.08     0.06    3
AGX Xavier 32.2           1239.96     7.21    3

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q           179.00     2.17    8
Jetson TX2 Max-P           233.00     4.50    3
Jetson AGX Xavier         1006.00     0.21    3
Jetson Nano                 98.93     0.19    3
AGX Xavier 32.2           1143.01    18.13    3

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q           104.00     0.07    3
Jetson TX2 Max-P           130.00     0.74    3
Jetson AGX Xavier         1693.00     8.72    3
Jetson Nano                 55.66     0.18    3
AGX Xavier 32.2           1740.52     0.38    3

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            32.67     0.10    3
Jetson TX2 Max-P            41.91     0.07    3
Jetson AGX Xavier          259.82     0.26    3
Jetson Nano                 17.38     0.01    3
AGX Xavier 32.2            270.54     0.13    3

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second, More Is Better
NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   SE +/-    N
Jetson TX2 Max-Q            17.36     0.00    3
Jetson TX2 Max-P            22.07     0.03    3
Jetson AGX Xavier          493.22     0.81    3
AGX Xavier 32.2            512.19     0.24    3

LeelaChessZero

Backend: BLAS

OpenBenchmarking.org - Nodes Per Second, More Is Better
LeelaChessZero 0.20.1 - Backend: BLAS

System                     Result   SE +/-    N
Jetson AGX Xavier           47.62     0.62    7
Jetson Nano                 15.37     0.03    3
AGX Xavier 32.2             53.67     0.17    3

1. (CXX) g++ options: -lpthread -lz

LeelaChessZero

Backend: CUDA + cuDNN

OpenBenchmarking.org - Nodes Per Second, More Is Better
LeelaChessZero 0.20.1 - Backend: CUDA + cuDNN

System                     Result   SE +/-    N
Jetson AGX Xavier             953     6.14    3
Jetson Nano                   140     0.26    3
AGX Xavier 32.2               972     3.18    3

1. (CXX) g++ options: -lpthread -lz

LeelaChessZero

Backend: CUDA + cuDNN FP16

OpenBenchmarking.org - Nodes Per Second, More Is Better
LeelaChessZero 0.20.1 - Backend: CUDA + cuDNN FP16

System                     Result   SE +/-    N
Jetson AGX Xavier            2515     7.60    3
AGX Xavier 32.2              2519    33.41    3

1. (CXX) g++ options: -lpthread -lz

TTSIOD 3D Renderer

Phong Rendering With Soft-Shadow Mapping

OpenBenchmarking.org - FPS, More Is Better
TTSIOD 3D Renderer 2.3b - Phong Rendering With Soft-Shadow Mapping

System                     Result   SE +/-    N
Jetson TX1 Max-P            45.09     0.04    3
Jetson TX2 Max-Q            28.85     0.46    4
Jetson TX2 Max-P            49.26     0.15    3
Jetson AGX Xavier          133.00     1.63   12
Jetson Nano                 40.94     0.11    3
Raspberry Pi 3 Model B+     17.66     0.16    3
ASUS TinkerBoard            21.22     0.27    9
ODROID-XU4                  41.96     0.97    9
AGX Xavier 32.2            147.16     2.09    4

1. (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -lSDL -fopenmp -fwhole-program -lstdc++

7-Zip Compression

Compress Speed Test

OpenBenchmarking.org - MIPS, More Is Better
7-Zip Compression 16.02 - Compress Speed Test

System                     Result   SE +/-    N
Jetson TX1 Max-P             4508    13.43    3
Jetson TX2 Max-Q             3294    13.05    3
Jetson TX2 Max-P             5593    20.85    3
Jetson AGX Xavier           19212   274.18   12
Jetson Nano                  4049    18.00    3
Raspberry Pi 3 Model B+      2013    23.74   11
ASUS TinkerBoard             2836    34.93    3
ODROID-XU4                   4120    89.16   12
AGX Xavier 32.2             21019   180.27   12

1. (CXX) g++ options: -pipe -lpthread

C-Ray

Total Time - 4K, 16 Rays Per Pixel

OpenBenchmarking.org - Seconds, Fewer Is Better
C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel

System                     Result   SE +/-    N
Jetson TX1 Max-P              753    10.23    3
Jetson TX2 Max-Q              869     1.44    3
Jetson TX2 Max-P              585    49.09    9
Jetson AGX Xavier             355     7.17    9
Jetson Nano                   921     0.35    3
Raspberry Pi 3 Model B+      2030     2.46    3
ASUS TinkerBoard             1718    22.09    3
ODROID-XU4                    827    29.65    9
AGX Xavier 32.2               163     0.77    3

1. (CC) gcc options: -lm -lpthread -O3

Rust Prime Benchmark

Prime Number Test To 200,000,000

OpenBenchmarking.org - Seconds, Fewer Is Better
Rust Prime Benchmark - Prime Number Test To 200,000,000

System                     Result   SE +/-    N
Jetson TX1 Max-P           128.45     0.77    3
Jetson TX2 Max-Q           170.25     0.09    3
Jetson TX2 Max-P           104.96     0.04    3
Jetson AGX Xavier           32.37     0.00    3
Jetson Nano                150.19     0.22    3
Raspberry Pi 3 Model B+   1097.69     1.55    3
ASUS TinkerBoard          1821.05   187.90    6
ODROID-XU4                 574.11     0.37    3
AGX Xavier 32.2             33.31     0.04    3

1. (CC) gcc options: -pie -nodefaultlibs -ldl -lrt -lpthread -lgcc_s -lc -lm -lutil

Zstd Compression

Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

OpenBenchmarking.org - Seconds, Fewer Is Better
Zstd Compression 1.3.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

System                     Result   SE +/-    N
Jetson TX1 Max-P           145.80     0.42    3
Jetson TX2 Max-Q           253.80     1.02    3
Jetson TX2 Max-P           144.97     0.29    3
Jetson AGX Xavier           80.06     0.91    3
Jetson Nano                129.87     0.23    3
Raspberry Pi 3 Model B+    342.23     1.03    3
ASUS TinkerBoard           496.62     2.16    3
AGX Xavier 32.2             55.70     0.37    3

1. (CC) gcc options: -O3 -pthread -lz -llzma

FLAC Audio Encoding

WAV To FLAC

OpenBenchmarking.org - Seconds, Fewer Is Better
FLAC Audio Encoding 1.3.2 - WAV To FLAC

System                     Result   SE +/-    N
Jetson TX1 Max-P            79.20     0.74    5
Jetson TX2 Max-Q           104.28     0.18    5
Jetson TX2 Max-P            65.07     0.15    5
Jetson AGX Xavier           54.47     0.61    5
Jetson Nano                104.77     0.83    5
Raspberry Pi 3 Model B+    339.53     0.98    5
ASUS TinkerBoard           279.05     2.51    5
ODROID-XU4                  97.03     0.31    5
AGX Xavier 32.2             50.25     0.39    5

1. (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm

OpenCV Benchmark

OpenBenchmarking.org - Seconds, Fewer Is Better
OpenCV Benchmark 3.3.0

System                     Result   SE +/-    N
Jetson TX2 Max-Q           493.00     5.74    3
Jetson TX2 Max-P           296.00     0.27    3
Jetson AGX Xavier          128.00     1.57    3
Jetson Nano                271.04     4.66    9
Raspberry Pi 3 Model B+      2.74        -    -
ODROID-XU4                 520.70     5.31    3
AGX Xavier 32.2            119.35     0.47    3

1. (CXX) g++ options: -std=c++11 -rdynamic

PyBench

Total For Average Test Times

OpenBenchmarking.org - Milliseconds, Fewer Is Better
PyBench 2018-02-16 - Total For Average Test Times

System                     Result   SE +/-    N
Jetson TX1 Max-P             6339    18.55    3
Jetson TX2 Max-Q             8735    42.52    3
Jetson TX2 Max-P             5408    33.86    3
Jetson AGX Xavier            3007     4.67    3
Jetson Nano                  7084    37.23    3
Raspberry Pi 3 Model B+     20913    43.80    3
ASUS TinkerBoard            11502   854.75    9
ODROID-XU4                   5009    30.99    3
AGX Xavier 32.2              2978    17.35    3

Tesseract OCR

Time To OCR 7 Images

OpenBenchmarking.org - Seconds, Fewer Is Better
Tesseract OCR 4.0.0-beta.1 - Time To OCR 7 Images

System                     Result   SE +/-    N
Jetson AGX Xavier           71.94     0.89    3
Jetson Nano                132.67     1.50    3
ODROID-XU4                 180.66     1.38    3
AGX Xavier 32.2             90.14     0.09    3

TTSIOD 3D Renderer

Performance / Cost - Phong Rendering With Soft-Shadow Mapping

OpenBenchmarking.org - FPS Per Dollar, More Is Better
TTSIOD 3D Renderer 2.3b - Performance / Cost - Phong Rendering With Soft-Shadow Mapping

System                     Result   Reported Cost
Jetson TX1 Max-P             0.09   $499
Jetson TX2 Max-Q             0.05   $599
Jetson TX2 Max-P             0.08   $599
Jetson AGX Xavier            0.10   $1299
Jetson Nano                  0.41   $99
Raspberry Pi 3 Model B+      0.50   $35
ASUS TinkerBoard             0.32   $66
ODROID-XU4                   0.68   $62
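The per-dollar figures in this and the following sections follow directly from the raw results and the reported costs above: "More Is Better" metrics are divided by the board price, while "Fewer Is Better" metrics (seconds) are multiplied by it ("Seconds x Dollar"). A minimal sketch of that arithmetic, using values from the tables in this report (the helper name below is illustrative, not part of the Phoronix Test Suite):

    # Reproduce the performance-per-dollar figures from raw results.
    REPORTED_COST = {  # USD, as footnoted by OpenBenchmarking.org
        "Jetson TX1 Max-P": 499,
        "Jetson Nano": 99,
        "ODROID-XU4": 62,
    }

    def per_dollar(result: float, system: str, lower_is_better: bool) -> float:
        """More-is-better metrics are divided by cost; fewer-is-better
        metrics (e.g. seconds) are multiplied by it."""
        cost = REPORTED_COST[system]
        return result * cost if lower_is_better else result / cost

    # TTSIOD 3D Renderer, FPS (more is better):
    print(round(per_dollar(45.09, "Jetson TX1 Max-P", False), 2))  # 0.09
    # C-Ray total time, seconds (fewer is better):
    print(round(per_dollar(753, "Jetson TX1 Max-P", True)))        # 375747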

7-Zip Compression

Performance / Cost - Compress Speed Test

OpenBenchmarking.org - MIPS Per Dollar, More Is Better
7-Zip Compression 16.02 - Performance / Cost - Compress Speed Test

System                     Result   Reported Cost
Jetson TX1 Max-P             9.03   $499
Jetson TX2 Max-Q             5.50   $599
Jetson TX2 Max-P             9.34   $599
Jetson AGX Xavier           14.79   $1299
Jetson Nano                 40.90   $99
Raspberry Pi 3 Model B+     57.51   $35
ASUS TinkerBoard            42.97   $66
ODROID-XU4                  66.45   $62

C-Ray

Performance / Cost - Total Time - 4K, 16 Rays Per Pixel

OpenBenchmarking.org - Seconds x Dollar, Fewer Is Better
C-Ray 1.1 - Performance / Cost - Total Time - 4K, 16 Rays Per Pixel

System                      Result   Reported Cost
Jetson TX1 Max-P         375747.00   $499
Jetson TX2 Max-Q         520531.00   $599
Jetson TX2 Max-P         350415.00   $599
Jetson AGX Xavier        461145.00   $1299
Jetson Nano               91179.00   $99
Raspberry Pi 3 Model B+   71050.00   $35
ASUS TinkerBoard         113388.00   $66
ODROID-XU4                51274.00   $62

Rust Prime Benchmark

Performance / Cost - Prime Number Test To 200,000,000

OpenBenchmarking.org - Seconds x Dollar, Fewer Is Better
Rust Prime Benchmark - Performance / Cost - Prime Number Test To 200,000,000

System                      Result   Reported Cost
Jetson TX1 Max-P          64096.55   $499
Jetson TX2 Max-Q         101979.75   $599
Jetson TX2 Max-P          62871.04   $599
Jetson AGX Xavier         42048.63   $1299
Jetson Nano               14868.81   $99
Raspberry Pi 3 Model B+   38419.15   $35
ASUS TinkerBoard         120189.30   $66
ODROID-XU4                35594.82   $62

Zstd Compression

Performance / Cost - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

OpenBenchmarking.org - Seconds x Dollar, Fewer Is Better
Zstd Compression 1.3.4 - Performance / Cost - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

System                      Result   Reported Cost
Jetson TX1 Max-P          72754.20   $499
Jetson TX2 Max-Q         152026.20   $599
Jetson TX2 Max-P          86837.03   $599
Jetson AGX Xavier        103997.94   $1299
Jetson Nano               12857.13   $99
Raspberry Pi 3 Model B+   11978.05   $35
ASUS TinkerBoard          32776.92   $66

FLAC Audio Encoding

Performance / Cost - WAV To FLAC

OpenBenchmarking.org - Seconds x Dollar, Fewer Is Better
FLAC Audio Encoding 1.3.2 - Performance / Cost - WAV To FLAC

System                      Result   Reported Cost
Jetson TX1 Max-P          39520.80   $499
Jetson TX2 Max-Q          62463.72   $599
Jetson TX2 Max-P          38976.93   $599
Jetson AGX Xavier         70756.53   $1299
Jetson Nano               10372.23   $99
Raspberry Pi 3 Model B+   11883.55   $35
ASUS TinkerBoard          18417.30   $66
ODROID-XU4                 6015.86   $62

PyBench

Performance / Cost - Total For Average Test Times

OpenBenchmarking.org - Milliseconds x Dollar, Fewer Is Better
PyBench 2018-02-16 - Performance / Cost - Total For Average Test Times

System                      Result   Reported Cost
Jetson TX1 Max-P        3163161.00   $499
Jetson TX2 Max-Q        5232265.00   $599
Jetson TX2 Max-P        3239392.00   $599
Jetson AGX Xavier       3906093.00   $1299
Jetson Nano              701316.00   $99
Raspberry Pi 3 Model B+  731955.00   $35
ASUS TinkerBoard         759132.00   $66
ODROID-XU4               310558.00   $62

CUDA Mini-Nbody

Performance / Cost - Test: Original

OpenBenchmarking.org - (NBody^2)/s Per Dollar, More Is Better
CUDA Mini-Nbody 2015-11-10 - Performance / Cost - Test: Original

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.01   $599
Jetson TX2 Max-P             0.01   $599
Jetson AGX Xavier            0.04   $1299
Jetson Nano                  0.04   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.04   $599
Jetson TX2 Max-P             0.05   $599
Jetson AGX Xavier            0.16   $1299
Jetson Nano                  0.14   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.02   $599
Jetson TX2 Max-P             0.03   $599
Jetson AGX Xavier            0.23   $1299

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.04   $599
Jetson TX2 Max-P             0.04   $599
Jetson AGX Xavier            0.13   $1299
Jetson Nano                  0.12   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.02   $599
Jetson TX2 Max-P             0.02   $599
Jetson AGX Xavier            0.20   $1299

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.05   $599
Jetson TX2 Max-P             0.06   $599
Jetson AGX Xavier            0.19   $1299

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.03   $599
Jetson TX2 Max-P             0.03   $599
Jetson AGX Xavier            0.37   $1299

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.04   $599
Jetson TX2 Max-P             0.05   $599
Jetson AGX Xavier            0.16   $1299

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.02   $599
Jetson TX2 Max-P             0.03   $599
Jetson AGX Xavier            0.30   $1299

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.36   $599
Jetson TX2 Max-P             0.44   $599
Jetson AGX Xavier            0.92   $1299
Jetson Nano                  1.19   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.25   $599
Jetson TX2 Max-P             0.31   $599
Jetson AGX Xavier            0.88   $1299
Jetson Nano                  0.85   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.62   $599
Jetson TX2 Max-P             0.77   $599
Jetson AGX Xavier            1.57   $1299
Jetson Nano                  2.03   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.40   $599
Jetson TX2 Max-P             0.50   $599
Jetson AGX Xavier            2.42   $1299
Jetson Nano                  1.29   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.12   $599
Jetson TX2 Max-P             0.15   $599
Jetson AGX Xavier            0.42   $1299
Jetson Nano                  0.41   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.07   $599
Jetson TX2 Max-P             0.08   $599
Jetson AGX Xavier            0.69   $1299
Jetson Nano                  0.21   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.26   $599
Jetson TX2 Max-P             0.33   $599
Jetson AGX Xavier            0.61   $1299
Jetson Nano                  0.84   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.15   $599
Jetson TX2 Max-P             0.19   $599
Jetson AGX Xavier            0.88   $1299
Jetson Nano                  0.48   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.05   $599
Jetson TX2 Max-P             0.06   $599
Jetson AGX Xavier            0.17   $1299
Jetson Nano                  0.16   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.02   $599
Jetson TX2 Max-P             0.03   $599
Jetson AGX Xavier            0.29   $1299
Jetson Nano                  0.08   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.14   $599
Jetson TX2 Max-P             0.19   $599
Jetson AGX Xavier            0.49   $1299
Jetson Nano                  0.47   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.08   $599
Jetson TX2 Max-P             0.10   $599
Jetson AGX Xavier            0.94   $1299
Jetson Nano                  0.25   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.30   $599
Jetson TX2 Max-P             0.39   $599
Jetson AGX Xavier            0.77   $1299
Jetson Nano                  1.00   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.17   $599
Jetson TX2 Max-P             0.22   $599
Jetson AGX Xavier            1.30   $1299
Jetson Nano                  0.56   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.05   $599
Jetson TX2 Max-P             0.07   $599
Jetson AGX Xavier            0.20   $1299
Jetson Nano                  0.18   $99

NVIDIA TensorRT Inference

Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org - Images Per Second Per Dollar, More Is Better
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

System                     Result   Reported Cost
Jetson TX2 Max-Q             0.03   $599
Jetson TX2 Max-P             0.04   $599
Jetson AGX Xavier            0.38   $1299

OpenCV Benchmark

Performance / Cost

OpenBenchmarking.org - Seconds x Dollar, Fewer Is Better
OpenCV Benchmark 3.3.0 - Performance / Cost

System                      Result   Reported Cost
Jetson TX2 Max-Q         295307.00   $599
Jetson TX2 Max-P         177304.00   $599
Jetson AGX Xavier        166272.00   $1299
Jetson Nano               26832.96   $99
Raspberry Pi 3 Model B+      95.90   $35
ODROID-XU4                32283.40   $62

GLmark2

Performance / Cost - Resolution: 1920 x 1080

OpenBenchmarking.org - Score Per Dollar, More Is Better
GLmark2 - Performance / Cost - Resolution: 1920 x 1080

System                     Result   Reported Cost
Jetson AGX Xavier            2.21   $1299
Jetson Nano                  6.53   $99

LeelaChessZero

Performance / Cost - Backend: BLAS

OpenBenchmarking.org - Nodes Per Second Per Dollar, More Is Better
LeelaChessZero 0.20.1 - Performance / Cost - Backend: BLAS

System                     Result   Reported Cost
Jetson AGX Xavier            0.04   $1299
Jetson Nano                  0.16   $99

LeelaChessZero

Performance / Cost - Backend: CUDA + cuDNN

OpenBenchmarking.org - Nodes Per Second Per Dollar, More Is Better
LeelaChessZero 0.20.1 - Performance / Cost - Backend: CUDA + cuDNN

System                     Result   Reported Cost
Jetson AGX Xavier            0.73   $1299
Jetson Nano                  1.41   $99

LeelaChessZero

Performance / Cost - Backend: CUDA + cuDNN FP16

OpenBenchmarking.org - Nodes Per Second Per Dollar, More Is Better
LeelaChessZero 0.20.1 - Performance / Cost - Backend: CUDA + cuDNN FP16

System                     Result   Reported Cost
Jetson AGX Xavier            1.94   $1299

Tesseract OCR

Performance / Cost - Time To OCR 7 Images

OpenBenchmarking.org - Seconds x Dollar, Fewer Is Better
Tesseract OCR 4.0.0-beta.1 - Performance / Cost - Time To OCR 7 Images

System                      Result   Reported Cost
Jetson AGX Xavier         93450.06   $1299
Jetson Nano               13134.33   $99
ODROID-XU4                11200.92   $62


Phoronix Test Suite v10.8.4