ODROID-N2 benchmarks for a future article.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 1904251-HV-ODROIDN2760
HTML result view exported from: https://openbenchmarking.org/result/1904251-HV-ODROIDN2760&rdt&gru.
ODROID-N2 Benchmark Comparison - Systems Tested

- Jetson AGX Xavier: ARMv8 rev 0 @ 2.27GHz (8 Cores); jetson-xavier motherboard; 16384MB memory; 31GB HBG4a2 disk; NVIDIA Tegra Xavier graphics; VE228 monitor; Ubuntu 18.04; 4.9.108-tegra (aarch64) kernel; Unity 7.5.0; X Server 1.19.6; NVIDIA 31.0.2 display driver; OpenGL 4.6.0; Vulkan 1.1.76; GCC 7.3.0 + CUDA 10.0; ext4 file-system; 1920x1080
- Jetson TX2 Max-P: ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads); quill motherboard; 8192MB memory; 31GB 032G34 disk; NVIDIA TEGRA graphics; Ubuntu 16.04; 4.4.38-tegra (aarch64) kernel; Unity 7.4.0; X Server 1.18.4; NVIDIA 28.2.1 display driver; OpenGL 4.5.0; GCC 5.4.0 20160609 + CUDA 9.0
- Jetson TX2 Max-Q: ARMv8 rev 3 @ 1.27GHz (4 Cores / 6 Threads); remaining configuration shared with Jetson TX2 Max-P
- Raspberry Pi 3 Model B+: ARMv7 rev 4 @ 1.40GHz (4 Cores) BCM2835; Raspberry Pi 3 Model B Plus Rev 1.3 motherboard; 926MB memory; 32GB GB2MW disk; BCM2708 graphics; Raspbian 9.6; 4.19.23-v7+ (armv7l) kernel; LXDE; X Server 1.19.2; GCC 6.3.0 20170516; 656x416
- ASUS TinkerBoard: ARMv7 rev 1 @ 1.80GHz (4 Cores); Rockchip (Device Tree) motherboard; 2048MB memory; 32GB GB1QT disk; Debian 9.0; 4.4.16-00006-g4431f98-dirty (armv7l) kernel; X Server 1.18.4; 1024x768
- Jetson TX1 Max-P: ARMv8 rev 1 @ 1.73GHz (4 Cores); jetson_tx1 motherboard; 4096MB memory; 16GB 016G32 disk; NVIDIA Tegra X1 graphics; VE228 monitor; Ubuntu 16.04; 4.4.38-tegra (aarch64) kernel; Unity 7.4.5; NVIDIA 28.1.0 display driver; OpenGL 4.5.0; Vulkan 1.0.8; GCC 5.4.0 20160609; 1920x1080
- ODROID-XU4: ARMv7 rev 3 @ 1.50GHz (8 Cores); Hardkernel Odroid XU4 motherboard; 2048MB memory; 16GB AJTD4R disk; llvmpipe 2GB graphics; Ubuntu 18.04; 4.14.37-135 (armv7l) kernel; X Server 1.19.6; OpenGL 3.3 Mesa 18.0.0-rc5 (LLVM 6.0 128 bits); GCC 7.3.0
- Jetson Nano: ARMv8 rev 1 @ 1.43GHz (4 Cores); jetson-nano motherboard; 4096MB memory; 32GB GB1QT disk; NVIDIA TEGRA graphics; Realtek RTL8111/8168/8411 network; 4.9.140-tegra (aarch64) kernel; Unity 7.5.0; NVIDIA 1.0.0 display driver; Vulkan 1.1.85; GCC 7.3.0 + CUDA 10.0
- ODROID-N2: ARMv8 Cortex-A73 @ 1.90GHz (6 Cores); Hardkernel ODROID-N2 motherboard; 16GB AJTD4R disk; OSD graphics; 4.9.156-14 (aarch64) kernel; GCC 7.3.0; 1920x2160
- ODROID-C2: Amlogic ARMv8 Cortex-A53 @ 1.54GHz (4 Cores); ODROID-C2 motherboard; 2048MB memory; 32GB GB1QT disk; 3.16.57-20 (aarch64) kernel; X Server 1.19.6; 1920x1080

Compiler Details (via OpenBenchmarking.org) - Jetson AGX Xavier:
--build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
- Jetson TX2 Max-P: --build=aarch64-linux-gnu --disable-browser-plugin --disable-libquadmath --disable-werror --enable-checking=release --enable-clocale=gnu --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --target=aarch64-linux-gnu --with-arch-directory=aarch64 --with-default-libstdcxx-abi=new -v
- Jetson TX2 Max-Q: identical configure flags to Jetson TX2 Max-P
- Raspberry Pi 3 Model B+: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv6 --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfp --with-target-system-zlib -v
- ASUS TinkerBoard: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-mode=thumb --with-target-system-zlib -v
- Jetson TX1 Max-P: identical configure flags to Jetson TX2 Max-P
- ODROID-XU4: --build=arm-linux-gnueabihf --disable-libitm --disable-libquadmath --disable-libquadmath-support --disable-sjlj-exceptions --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-gcc-major-version-only --with-mode=thumb --with-target-system-zlib -v
- Jetson Nano: identical configure flags to Jetson AGX Xavier
- ODROID-N2: identical configure flags to Jetson AGX Xavier
- ODROID-C2: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-as=/usr/bin/aarch64-linux-gnu-as --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-ld=/usr/bin/aarch64-linux-gnu-ld -v

Processor Details (scaling governors):
- Jetson AGX Xavier: tegra_cpufreq schedutil
- Jetson TX2 Max-P: tegra_cpufreq schedutil
- Jetson TX2 Max-Q: tegra_cpufreq schedutil
- Raspberry Pi 3 Model B+: BCM2835 Freq ondemand
- ASUS TinkerBoard: cpufreq-dt interactive
- Jetson TX1 Max-P: tegra-cpufreq interactive
- ODROID-XU4: cpufreq-dt ondemand
- Jetson Nano: tegra-cpufreq schedutil
- ODROID-N2: arm-big-little performance
- ODROID-C2: meson_cpufreq interactive

Python Details:
- Jetson AGX Xavier, ODROID-XU4, Jetson Nano, ODROID-N2, ODROID-C2: Python 2.7.15rc1 + Python 3.6.7
- Jetson TX2 Max-P, Jetson TX2 Max-Q, Jetson TX1 Max-P: Python 2.7.12 + Python 3.5.2
- Raspberry Pi 3 Model B+, ASUS TinkerBoard: Python 2.7.13 + Python 3.5.3

Kernel Details:
- ODROID-XU4: usbhid.quirks=0x0eef:0x0005:0x0004

Graphics Details:
- ODROID-XU4: EXA
ODROID-N2 Benchmark Comparison - Tests Included

- cuda-mini-nbody: Original
- ttsiod-renderer: Phong Rendering With Soft-Shadow Mapping
- tensorrt-inference: VGG16, VGG19, AlexNet, ResNet50, ResNet152, and GoogleNet at FP16 and INT8 precision, batch sizes 4 and 32, DLA cores disabled
- compress-7zip: Compress Speed Test
- lczero: BLAS; CUDA + cuDNN; CUDA + cuDNN FP16
- glmark2: 1920 x 1080
- pybench: Total For Average Test Times
- c-ray: Total Time - 4K, 16 Rays Per Pixel
- rust-prime: Prime Number Test To 200,000,000
- compress-zstd: Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19
- encode-flac: WAV To FLAC
- opencv-bench
- tesseract-ocr: Time To OCR 7 Images
CUDA Mini-Nbody 2015-11-10 - Performance / Cost - Test: Original ((NBody^2)/s Per Dollar, more is better)
- Jetson AGX Xavier: 0.04 ($1299 reported cost)
- Jetson TX2 Max-P: 0.01 ($599 reported cost)
- Jetson TX2 Max-Q: 0.01 ($599 reported cost)
- Jetson Nano: 0.04 ($99 reported cost)
CUDA Mini-Nbody 2015-11-10 - Test: Original ((NBody^2)/s, more is better)
- Jetson AGX Xavier: 47.13 (SE +/- 0.00, N = 3)
- Jetson TX2 Max-P: 8.24 (SE +/- 0.01, N = 3)
- Jetson TX2 Max-Q: 6.77 (SE +/- 0.03, N = 3)
- Jetson Nano: 4.07 (SE +/- 0.01, N = 3)
TTSIOD 3D Renderer 2.3b - Performance / Cost - Phong Rendering With Soft-Shadow Mapping (FPS Per Dollar, more is better)
- Jetson AGX Xavier: 0.10 ($1299 reported cost)
- Jetson TX2 Max-P: 0.08 ($599 reported cost)
- Jetson TX2 Max-Q: 0.05 ($599 reported cost)
- Raspberry Pi 3 Model B+: 0.50 ($35 reported cost)
- ASUS TinkerBoard: 0.32 ($66 reported cost)
- Jetson TX1 Max-P: 0.09 ($499 reported cost)
- ODROID-XU4: 0.68 ($62 reported cost)
- Jetson Nano: 0.41 ($99 reported cost)
- ODROID-N2: 0.88 ($64.95 reported cost)
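The Performance / Cost figures throughout this result file are simply the raw benchmark result divided by the board's reported cost. A minimal sketch of that arithmetic, assuming the values are rounded to two decimals as shown in the charts (the figures below are taken from the TTSIOD renderer results):

```python
# Performance-per-dollar as shown on OpenBenchmarking.org result pages:
# raw result divided by the reported board cost. Rounding to two
# decimals is an assumption inferred from the displayed values.
def perf_per_dollar(result: float, cost: float) -> float:
    return round(result / cost, 2)

# (FPS, reported cost in USD) from the TTSIOD 3D Renderer chart
boards = {
    "ODROID-N2": (57.42, 64.95),
    "Jetson Nano": (40.94, 99.00),
    "Raspberry Pi 3 Model B+": (17.66, 35.00),
}

for name, (fps, cost) in boards.items():
    print(f"{name}: {perf_per_dollar(fps, cost)} FPS per dollar")
# ODROID-N2 works out to 0.88, matching the chart above.
```

This is why the $64.95 ODROID-N2 leads the value chart despite the $1299 Jetson AGX Xavier posting the highest absolute FPS.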
TTSIOD 3D Renderer 2.3b - Phong Rendering With Soft-Shadow Mapping (FPS, more is better)
- Jetson AGX Xavier: 133.00 (SE +/- 1.63, N = 12)
- Jetson TX2 Max-P: 49.26 (SE +/- 0.15, N = 3)
- Jetson TX2 Max-Q: 28.85 (SE +/- 0.46, N = 4)
- Raspberry Pi 3 Model B+: 17.66 (SE +/- 0.16, N = 3)
- ASUS TinkerBoard: 21.22 (SE +/- 0.27, N = 9)
- Jetson TX1 Max-P: 45.09 (SE +/- 0.04, N = 3)
- ODROID-XU4: 41.96 (SE +/- 0.97, N = 9)
- Jetson Nano: 40.94 (SE +/- 0.11, N = 3)
- ODROID-N2: 57.42 (SE +/- 0.05, N = 3)
- ODROID-C2: 22.10 (SE +/- 0.08, N = 3)
(CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -lSDL -fopenmp -fwhole-program -lstdc++
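Each result above is a mean over N runs with a "SE +/- x" annotation, i.e. the standard error of the mean: the sample standard deviation divided by the square root of N. A small sketch of how such an annotation is computed (the three run values are hypothetical, for illustration only):

```python
import statistics

# Standard error of the mean, as in the "SE +/- x, N = y" annotations:
# sample standard deviation divided by sqrt(N).
def standard_error(runs: list[float]) -> float:
    return statistics.stdev(runs) / len(runs) ** 0.5

runs = [57.35, 57.42, 57.49]  # hypothetical FPS samples from three runs
mean = statistics.fmean(runs)
se = standard_error(runs)
print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(runs)})")
```

A small SE relative to the mean (as in most charts here) indicates the runs were consistent and the reported average is stable.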
NVIDIA TensorRT Inference - Neural Network: VGG16, Precision: FP16, Batch Size: 4, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 208.76 (SE +/- 0.10, N = 3)
- Jetson TX2 Max-P: 32.64 (SE +/- 0.50, N = 4)
- Jetson TX2 Max-Q: 25.99 (SE +/- 0.13, N = 3)
- Jetson Nano: 14.35 (SE +/- 0.02, N = 2)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: ResNet50, Precision: INT8, Batch Size: 4, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 0.69 ($1299 reported cost)
- Jetson TX2 Max-P: 0.08 ($599 reported cost)
- Jetson TX2 Max-Q: 0.07 ($599 reported cost)
- Jetson Nano: 0.21 ($99 reported cost)
NVIDIA TensorRT Inference - Neural Network: VGG16, Precision: INT8, Batch Size: 4, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 303.78 (SE +/- 0.46, N = 3)
- Jetson TX2 Max-P: 17.56 (SE +/- 0.25, N = 6)
- Jetson TX2 Max-Q: 14.24 (SE +/- 0.20, N = 5)
NVIDIA TensorRT Inference - Neural Network: VGG19, Precision: INT8, Batch Size: 32, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 394.66 (SE +/- 0.23, N = 3)
- Jetson TX2 Max-P: 15.92 (SE +/- 0.06, N = 3)
- Jetson TX2 Max-Q: 12.59 (SE +/- 0.03, N = 3)
NVIDIA TensorRT Inference - Neural Network: AlexNet, Precision: FP16, Batch Size: 4, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 1200 (SE +/- 1.82, N = 3)
- Jetson TX2 Max-P: 264 (SE +/- 7.77, N = 12)
- Jetson TX2 Max-Q: 216 (SE +/- 3.03, N = 6)
- Jetson Nano: 118 (SE +/- 2.12, N = 12)
NVIDIA TensorRT Inference - Neural Network: AlexNet, Precision: INT8, Batch Size: 4, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 1143.00 (SE +/- 2.59, N = 3)
- Jetson TX2 Max-P: 184.00 (SE +/- 2.79, N = 5)
- Jetson TX2 Max-Q: 148.00 (SE +/- 0.91, N = 3)
- Jetson Nano: 84.10 (SE +/- 0.72, N = 3)
NVIDIA TensorRT Inference - Neural Network: AlexNet, Precision: FP16, Batch Size: 32, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 2038 (SE +/- 2.07, N = 3)
- Jetson TX2 Max-P: 462 (SE +/- 7.68, N = 12)
- Jetson TX2 Max-Q: 374 (SE +/- 2.82, N = 3)
- Jetson Nano: 201 (SE +/- 1.59, N = 3)
NVIDIA TensorRT Inference - Neural Network: AlexNet, Precision: INT8, Batch Size: 32, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 3143 (SE +/- 1.06, N = 3)
- Jetson TX2 Max-P: 301 (SE +/- 0.52, N = 3)
- Jetson TX2 Max-Q: 237 (SE +/- 1.39, N = 3)
- Jetson Nano: 128 (SE +/- 0.06, N = 3)
NVIDIA TensorRT Inference - Neural Network: ResNet50, Precision: FP16, Batch Size: 4, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 547.50 (SE +/- 0.03, N = 3)
- Jetson TX2 Max-P: 92.28 (SE +/- 1.32, N = 12)
- Jetson TX2 Max-Q: 72.01 (SE +/- 1.10, N = 12)
- Jetson Nano: 41.04 (SE +/- 0.25, N = 3)
NVIDIA TensorRT Inference - Neural Network: ResNet50, Precision: INT8, Batch Size: 4, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 902.78 (SE +/- 1.86, N = 3)
- Jetson TX2 Max-P: 49.97 (SE +/- 0.79, N = 4)
- Jetson TX2 Max-Q: 39.15 (SE +/- 0.64, N = 3)
- Jetson Nano: 20.96 (SE +/- 0.36, N = 3)
NVIDIA TensorRT Inference - Neural Network: GoogleNet, Precision: FP16, Batch Size: 4, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 796.00 (SE +/- 2.48, N = 3)
- Jetson TX2 Max-P: 197.00 (SE +/- 2.27, N = 3)
- Jetson TX2 Max-Q: 156.00 (SE +/- 1.90, N = 12)
- Jetson Nano: 83.37 (SE +/- 0.70, N = 3)
NVIDIA TensorRT Inference - Neural Network: GoogleNet, Precision: INT8, Batch Size: 4, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 1146.00 (SE +/- 4.31, N = 3)
- Jetson TX2 Max-P: 113.00 (SE +/- 1.65, N = 3)
- Jetson TX2 Max-Q: 88.88 (SE +/- 1.32, N = 3)
- Jetson Nano: 47.82 (SE +/- 0.60, N = 3)
NVIDIA TensorRT Inference - Neural Network: ResNet152, Precision: FP16, Batch Size: 4, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 224.19 (SE +/- 0.22, N = 3)
- Jetson TX2 Max-P: 35.11 (SE +/- 0.36, N = 3)
- Jetson TX2 Max-Q: 27.34 (SE +/- 0.34, N = 3)
- Jetson Nano: 15.76 (SE +/- 0.04, N = 3)
NVIDIA TensorRT Inference - Neural Network: ResNet152, Precision: INT8, Batch Size: 4, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 372.73 (SE +/- 1.59, N = 3)
- Jetson TX2 Max-P: 18.29 (SE +/- 0.14, N = 3)
- Jetson TX2 Max-Q: 14.50 (SE +/- 0.15, N = 3)
- Jetson Nano: 7.76 (SE +/- 0.03, N = 3)
NVIDIA TensorRT Inference - Neural Network: ResNet50, Precision: FP16, Batch Size: 32, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 636.00 (SE +/- 1.23, N = 3)
- Jetson TX2 Max-P: 111.00 (SE +/- 1.22, N = 3)
- Jetson TX2 Max-Q: 86.08 (SE +/- 0.86, N = 3)
- Jetson Nano: 46.51 (SE +/- 0.02, N = 3)
NVIDIA TensorRT Inference - Neural Network: ResNet50, Precision: INT8, Batch Size: 32, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 1215.08 (SE +/- 0.25, N = 3)
- Jetson TX2 Max-P: 59.69 (SE +/- 0.04, N = 3)
- Jetson TX2 Max-Q: 47.15 (SE +/- 0.08, N = 3)
- Jetson Nano: 25.08 (SE +/- 0.06, N = 3)
NVIDIA TensorRT Inference - Neural Network: GoogleNet, Precision: FP16, Batch Size: 32, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 1006.00 (SE +/- 0.21, N = 3)
- Jetson TX2 Max-P: 233.00 (SE +/- 4.50, N = 3)
- Jetson TX2 Max-Q: 179.00 (SE +/- 2.17, N = 8)
- Jetson Nano: 98.93 (SE +/- 0.19, N = 3)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16, Precision: FP16, Batch Size: 4, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 0.16 ($1299 reported cost)
- Jetson TX2 Max-P: 0.05 ($599 reported cost)
- Jetson TX2 Max-Q: 0.04 ($599 reported cost)
- Jetson Nano: 0.14 ($99 reported cost)
NVIDIA TensorRT Inference - Neural Network: GoogleNet, Precision: INT8, Batch Size: 32, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 1693.00 (SE +/- 8.72, N = 3)
- Jetson TX2 Max-P: 130.00 (SE +/- 0.74, N = 3)
- Jetson TX2 Max-Q: 104.00 (SE +/- 0.07, N = 3)
- Jetson Nano: 55.66 (SE +/- 0.18, N = 3)
NVIDIA TensorRT Inference - Neural Network: ResNet152, Precision: FP16, Batch Size: 32, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 259.82 (SE +/- 0.26, N = 3)
- Jetson TX2 Max-P: 41.91 (SE +/- 0.07, N = 3)
- Jetson TX2 Max-Q: 32.67 (SE +/- 0.10, N = 3)
- Jetson Nano: 17.38 (SE +/- 0.01, N = 3)
NVIDIA TensorRT Inference - Neural Network: ResNet152, Precision: INT8, Batch Size: 32, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 493.22 (SE +/- 0.81, N = 3)
- Jetson TX2 Max-P: 22.07 (SE +/- 0.03, N = 3)
- Jetson TX2 Max-Q: 17.36 (SE +/- 0.00, N = 3)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16, Precision: INT8, Batch Size: 4, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 0.23 ($1299 reported cost)
- Jetson TX2 Max-P: 0.03 ($599 reported cost)
- Jetson TX2 Max-Q: 0.02 ($599 reported cost)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19, Precision: FP16, Batch Size: 4, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 0.13 ($1299 reported cost)
- Jetson TX2 Max-P: 0.04 ($599 reported cost)
- Jetson TX2 Max-Q: 0.04 ($599 reported cost)
- Jetson Nano: 0.12 ($99 reported cost)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19, Precision: INT8, Batch Size: 4, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 0.20 ($1299 reported cost)
- Jetson TX2 Max-P: 0.02 ($599 reported cost)
- Jetson TX2 Max-Q: 0.02 ($599 reported cost)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16, Precision: FP16, Batch Size: 32, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 0.19 ($1299 reported cost)
- Jetson TX2 Max-P: 0.06 ($599 reported cost)
- Jetson TX2 Max-Q: 0.05 ($599 reported cost)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG16, Precision: INT8, Batch Size: 32, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 0.37 ($1299 reported cost)
- Jetson TX2 Max-P: 0.03 ($599 reported cost)
- Jetson TX2 Max-Q: 0.03 ($599 reported cost)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19, Precision: FP16, Batch Size: 32, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 0.16 ($1299 reported cost)
- Jetson TX2 Max-P: 0.05 ($599 reported cost)
- Jetson TX2 Max-Q: 0.04 ($599 reported cost)
NVIDIA TensorRT Inference - Neural Network: VGG16, Precision: FP16, Batch Size: 32, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 247.95 (SE +/- 0.12, N = 3)
- Jetson TX2 Max-P: 36.87 (SE +/- 0.31, N = 3)
- Jetson TX2 Max-Q: 29.83 (SE +/- 0.18, N = 3)
NVIDIA TensorRT Inference - Neural Network: VGG19, Precision: FP16, Batch Size: 32, DLA Cores: Disabled (Images Per Second, more is better)
- Jetson AGX Xavier: 203.96 (SE +/- 0.04, N = 3)
- Jetson TX2 Max-P: 29.83 (SE +/- 0.05, N = 3)
- Jetson TX2 Max-Q: 23.94 (SE +/- 0.07, N = 3)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: VGG19, Precision: INT8, Batch Size: 32, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 0.30 ($1299 reported cost)
- Jetson TX2 Max-P: 0.03 ($599 reported cost)
- Jetson TX2 Max-Q: 0.02 ($599 reported cost)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: AlexNet, Precision: FP16, Batch Size: 4, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 0.92 ($1299 reported cost)
- Jetson TX2 Max-P: 0.44 ($599 reported cost)
- Jetson TX2 Max-Q: 0.36 ($599 reported cost)
- Jetson Nano: 1.19 ($99 reported cost)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: AlexNet, Precision: INT8, Batch Size: 4, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 0.88 ($1299 reported cost)
- Jetson TX2 Max-P: 0.31 ($599 reported cost)
- Jetson TX2 Max-Q: 0.25 ($599 reported cost)
- Jetson Nano: 0.85 ($99 reported cost)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: AlexNet, Precision: FP16, Batch Size: 32, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 1.57 ($1299 reported cost)
- Jetson TX2 Max-P: 0.77 ($599 reported cost)
- Jetson TX2 Max-Q: 0.62 ($599 reported cost)
- Jetson Nano: 2.03 ($99 reported cost)
NVIDIA TensorRT Inference - Performance / Cost - Neural Network: AlexNet, Precision: INT8, Batch Size: 32, DLA Cores: Disabled (Images Per Second Per Dollar, more is better)
- Jetson AGX Xavier: 2.42 ($1299 reported cost)
- Jetson TX2 Max-P: 0.50 ($599 reported cost)
- Jetson TX2 Max-Q: 0.40 ($599 reported cost)
- Jetson Nano: 1.29 ($99 reported cost)
NVIDIA TensorRT Inference Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
Images Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 0.42
  Jetson TX2 Max-P ($599 reported): 0.15
  Jetson TX2 Max-Q ($599 reported): 0.12
  Jetson Nano ($99 reported): 0.41
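The "Performance / Cost" figures above appear to be the raw throughput divided by each board's reported cost — an assumption inferred from the chart units ("Images Per Second Per Dollar"); the actual Phoronix Test Suite internals are not shown in this export. A minimal sketch:

```python
# Hypothetical reconstruction of the "Performance / Cost" metric: raw
# throughput divided by the board's reported cost. The ~117.8 images/sec
# figure below is back-derived from the chart, not a measured value.

def images_per_sec_per_dollar(images_per_sec: float, cost_usd: float) -> float:
    """Higher is better: throughput normalized by board price."""
    return images_per_sec / cost_usd

# e.g. a $99 Jetson Nano pushing ~117.8 images/sec in AlexNet FP16 batch-4
# would score about 1.19 images/sec per dollar, matching the chart above.
print(round(images_per_sec_per_dollar(117.8, 99.0), 2))
```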
NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
Images Per Second, More Is Better
  Jetson AGX Xavier: 172.50 (SE +/- 0.50, N = 3)
  Jetson TX2 Max-P: 26.56 (SE +/- 0.38, N = 3)
  Jetson TX2 Max-Q: 21.04 (SE +/- 0.34, N = 3)
  Jetson Nano: 11.59 (SE +/- 0.05, N = 2)
NVIDIA TensorRT Inference Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
Images Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 0.61
  Jetson TX2 Max-P ($599 reported): 0.33
  Jetson TX2 Max-Q ($599 reported): 0.26
  Jetson Nano ($99 reported): 0.84
NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
Images Per Second, More Is Better
  Jetson AGX Xavier: 265.81 (SE +/- 0.20, N = 3)
  Jetson TX2 Max-P: 14.32 (SE +/- 0.25, N = 4)
  Jetson TX2 Max-Q: 11.45 (SE +/- 0.23, N = 3)
NVIDIA TensorRT Inference Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
Images Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 0.88
  Jetson TX2 Max-P ($599 reported): 0.19
  Jetson TX2 Max-Q ($599 reported): 0.15
  Jetson Nano ($99 reported): 0.48
NVIDIA TensorRT Inference Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
Images Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 0.17
  Jetson TX2 Max-P ($599 reported): 0.06
  Jetson TX2 Max-Q ($599 reported): 0.05
  Jetson Nano ($99 reported): 0.16
NVIDIA TensorRT Inference Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
Images Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 0.29
  Jetson TX2 Max-P ($599 reported): 0.03
  Jetson TX2 Max-Q ($599 reported): 0.02
  Jetson Nano ($99 reported): 0.08
NVIDIA TensorRT Inference Performance / Cost - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
Images Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 0.49
  Jetson TX2 Max-P ($599 reported): 0.19
  Jetson TX2 Max-Q ($599 reported): 0.14
  Jetson Nano ($99 reported): 0.47
NVIDIA TensorRT Inference Performance / Cost - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
Images Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 0.94
  Jetson TX2 Max-P ($599 reported): 0.10
  Jetson TX2 Max-Q ($599 reported): 0.08
  Jetson Nano ($99 reported): 0.25
NVIDIA TensorRT Inference Performance / Cost - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
Images Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 0.77
  Jetson TX2 Max-P ($599 reported): 0.39
  Jetson TX2 Max-Q ($599 reported): 0.30
  Jetson Nano ($99 reported): 1.00
NVIDIA TensorRT Inference Performance / Cost - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
Images Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 1.30
  Jetson TX2 Max-P ($599 reported): 0.22
  Jetson TX2 Max-Q ($599 reported): 0.17
  Jetson Nano ($99 reported): 0.56
NVIDIA TensorRT Inference Performance / Cost - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
Images Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 0.20
  Jetson TX2 Max-P ($599 reported): 0.07
  Jetson TX2 Max-Q ($599 reported): 0.05
  Jetson Nano ($99 reported): 0.18
NVIDIA TensorRT Inference Performance / Cost - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
Images Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 0.38
  Jetson TX2 Max-P ($599 reported): 0.04
  Jetson TX2 Max-Q ($599 reported): 0.03
NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
Images Per Second, More Is Better
  Jetson AGX Xavier: 475.08 (SE +/- 0.10, N = 3)
  Jetson TX2 Max-P: 19.91 (SE +/- 0.05, N = 3)
  Jetson TX2 Max-Q: 15.79 (SE +/- 0.01, N = 3)
7-Zip Compression 16.02 Performance / Cost - Compress Speed Test
MIPS Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 14.79
  Jetson TX2 Max-P ($599 reported): 9.34
  Jetson TX2 Max-Q ($599 reported): 5.50
  Raspberry Pi 3 Model B+ ($35 reported): 57.51
  ASUS TinkerBoard ($66 reported): 42.97
  Jetson TX1 Max-P ($499 reported): 9.03
  ODROID-XU4 ($62 reported): 66.45
  Jetson Nano ($99 reported): 40.90
  ODROID-N2 ($64.95 reported): 91.92
7-Zip Compression 16.02 - Compress Speed Test
MIPS, More Is Better
  Jetson AGX Xavier: 19212 (SE +/- 274.18, N = 12)
  Jetson TX2 Max-P: 5593 (SE +/- 20.85, N = 3)
  Jetson TX2 Max-Q: 3294 (SE +/- 13.05, N = 3)
  Raspberry Pi 3 Model B+: 2013 (SE +/- 23.74, N = 11)
  ASUS TinkerBoard: 2836 (SE +/- 34.93, N = 3)
  Jetson TX1 Max-P: 4508 (SE +/- 13.43, N = 3)
  ODROID-XU4: 4120 (SE +/- 89.16, N = 12)
  Jetson Nano: 4049 (SE +/- 18.00, N = 3)
  ODROID-N2: 5970 (SE +/- 2.40, N = 3)
  ODROID-C2: 2121 (SE +/- 7.36, N = 3)
(CXX) g++ options: -pipe -lpthread
LeelaChessZero 0.20.1 - Backend: BLAS
Nodes Per Second, More Is Better
  Jetson AGX Xavier: 47.62 (SE +/- 0.62, N = 7)
  Jetson Nano: 15.37 (SE +/- 0.03, N = 3)
  ODROID-N2: 24.39 (SE +/- 0.10, N = 3)
  ODROID-C2: 7.33 (SE +/- 0.09, N = 7)
(CXX) g++ options: -lpthread -lz
LeelaChessZero 0.20.1 - Backend: CUDA + cuDNN
Nodes Per Second, More Is Better
  Jetson AGX Xavier: 953 (SE +/- 6.14, N = 3)
  Jetson Nano: 140 (SE +/- 0.26, N = 3)
(CXX) g++ options: -lpthread -lz
LeelaChessZero 0.20.1 - Backend: CUDA + cuDNN FP16
Nodes Per Second, More Is Better
  Jetson AGX Xavier: 2515.01 (SE +/- 7.60, N = 3)
(CXX) g++ options: -lpthread -lz
LeelaChessZero 0.20.1 Performance / Cost - Backend: BLAS
Nodes Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 0.04
  Jetson Nano ($99 reported): 0.16
  ODROID-N2 ($64.95 reported): 0.38
LeelaChessZero 0.20.1 Performance / Cost - Backend: CUDA + cuDNN
Nodes Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 0.73
  Jetson Nano ($99 reported): 1.41
LeelaChessZero 0.20.1 Performance / Cost - Backend: CUDA + cuDNN FP16
Nodes Per Second Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 1.94
Meta Performance Per Dollar
Performance Per Dollar, More Is Better
  ODROID-N2 ($64.95 reported): 19.17 (average result value: 1244.91)
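The meta figure above is consistent with the average of the board's result values divided by its reported cost (1244.91 / 64.95 ≈ 19.17). A hedged sketch of that arithmetic — an assumption checked against the numbers in this export, not taken from the Phoronix Test Suite source:

```python
# Assumed derivation of the meta "Performance Per Dollar" figure: the average
# result value divided by the reported board cost. Matches this export's
# numbers (1244.91 / $64.95 ~= 19.17), but the exact PTS formula is not shown.

def meta_perf_per_dollar(average_value: float, cost_usd: float) -> float:
    """Higher is better: averaged score normalized by board price."""
    return average_value / cost_usd

print(round(meta_perf_per_dollar(1244.91, 64.95), 2))
```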
GLmark2 - Resolution: 1920 x 1080
Score, More Is Better
  Jetson AGX Xavier: 2876
  Jetson Nano: 646
GLmark2 Performance / Cost - Resolution: 1920 x 1080
Score Per Dollar, More Is Better
  Jetson AGX Xavier ($1299 reported): 2.21
  Jetson Nano ($99 reported): 6.53
PyBench 2018-02-16 - Total For Average Test Times
Milliseconds, Fewer Is Better
  Jetson AGX Xavier: 3007 (SE +/- 4.67, N = 3)
  Jetson TX2 Max-P: 5408 (SE +/- 33.86, N = 3)
  Jetson TX2 Max-Q: 8735 (SE +/- 42.52, N = 3)
  Raspberry Pi 3 Model B+: 20913 (SE +/- 43.80, N = 3)
  ASUS TinkerBoard: 11502 (SE +/- 854.75, N = 9)
  Jetson TX1 Max-P: 6339 (SE +/- 18.55, N = 3)
  ODROID-XU4: 5009 (SE +/- 30.99, N = 3)
  Jetson Nano: 7084 (SE +/- 37.23, N = 3)
  ODROID-N2: 5231 (SE +/- 9.24, N = 3)
  ODROID-C2: 12184 (SE +/- 28.15, N = 3)
PyBench 2018-02-16 Performance / Cost - Total For Average Test Times
Milliseconds x Dollar, Fewer Is Better
  Jetson AGX Xavier ($1299 reported): 3906093.00
  Jetson TX2 Max-P ($599 reported): 3239392.00
  Jetson TX2 Max-Q ($599 reported): 5232265.00
  Raspberry Pi 3 Model B+ ($35 reported): 731955.00
  ASUS TinkerBoard ($66 reported): 759132.00
  Jetson TX1 Max-P ($499 reported): 3163161.00
  ODROID-XU4 ($62 reported): 310558.00
  Jetson Nano ($99 reported): 701316.00
  ODROID-N2 ($64.95 reported): 339753.45
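For the "Fewer Is Better" timed tests, the cost metric is inverted: elapsed time multiplied by reported cost, so a lower product wins and a cheap board can beat a faster, pricier one. This assumed formula checks out exactly against the ODROID-N2's PyBench entry above (5231 ms × $64.95 = 339753.45):

```python
# Assumed derivation of the "Milliseconds x Dollar" metric for timed tests:
# elapsed time multiplied by the reported board cost; lower is better.

def time_times_cost(elapsed: float, cost_usd: float) -> float:
    """Lower is better: run time weighted by board price."""
    return elapsed * cost_usd

# ODROID-N2 PyBench: 5231 ms at a reported $64.95
print(time_times_cost(5231, 64.95))
```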
C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel
Seconds, Fewer Is Better
  Jetson AGX Xavier: 355 (SE +/- 7.17, N = 9)
  Jetson TX2 Max-P: 585 (SE +/- 49.09, N = 9)
  Jetson TX2 Max-Q: 869 (SE +/- 1.44, N = 3)
  Raspberry Pi 3 Model B+: 2030 (SE +/- 2.46, N = 3)
  ASUS TinkerBoard: 1718 (SE +/- 22.09, N = 3)
  Jetson TX1 Max-P: 753 (SE +/- 10.23, N = 3)
  ODROID-XU4: 827 (SE +/- 29.65, N = 9)
  Jetson Nano: 921 (SE +/- 0.35, N = 3)
  ODROID-N2: 492 (SE +/- 0.25, N = 3)
  ODROID-C2: 1535 (SE +/- 0.16, N = 3)
(CC) gcc options: -lm -lpthread -O3
Rust Prime Benchmark - Prime Number Test To 200,000,000
Seconds, Fewer Is Better
  Jetson AGX Xavier: 32.37 (SE +/- 0.00, N = 3)
  Jetson TX2 Max-P: 104.96 (SE +/- 0.04, N = 3)
  Jetson TX2 Max-Q: 170.25 (SE +/- 0.09, N = 3)
  Raspberry Pi 3 Model B+: 1097.69 (SE +/- 1.55, N = 3)
  ASUS TinkerBoard: 1821.05 (SE +/- 187.90, N = 6)
  Jetson TX1 Max-P: 128.45 (SE +/- 0.77, N = 3)
  ODROID-XU4: 574.11 (SE +/- 0.37, N = 3)
  Jetson Nano: 150.19 (SE +/- 0.22, N = 3)
  ODROID-N2: 73.11 (SE +/- 0.02, N = 3)
  ODROID-C2: 125.81 (SE +/- 0.30, N = 3)
(CC) gcc options: -pie -nodefaultlibs -ldl -lrt -lpthread -lgcc_s -lc -lm -lutil
Zstd Compression 1.3.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19
Seconds, Fewer Is Better
  Jetson AGX Xavier: 80.06 (SE +/- 0.91, N = 3)
  Jetson TX2 Max-P: 144.97 (SE +/- 0.29, N = 3)
  Jetson TX2 Max-Q: 253.80 (SE +/- 1.02, N = 3)
  Raspberry Pi 3 Model B+: 342.23 (SE +/- 1.03, N = 3)
  ASUS TinkerBoard: 496.62 (SE +/- 2.16, N = 3)
  Jetson TX1 Max-P: 145.80 (SE +/- 0.42, N = 3)
  Jetson Nano: 129.87 (SE +/- 0.23, N = 3)
  ODROID-N2: 152.04 (SE +/- 1.77, N = 3)
  ODROID-C2: 314.33 (SE +/- 1.41, N = 3)
(CC) gcc options: -O3 -pthread -lz -llzma
FLAC Audio Encoding 1.3.2 - WAV To FLAC
Seconds, Fewer Is Better
  Jetson AGX Xavier: 54.47 (SE +/- 0.61, N = 5)
  Jetson TX2 Max-P: 65.07 (SE +/- 0.15, N = 5)
  Jetson TX2 Max-Q: 104.28 (SE +/- 0.18, N = 5)
  Raspberry Pi 3 Model B+: 339.53 (SE +/- 0.98, N = 5)
  ASUS TinkerBoard: 279.05 (SE +/- 2.51, N = 5)
  Jetson TX1 Max-P: 79.20 (SE +/- 0.74, N = 5)
  ODROID-XU4: 97.03 (SE +/- 0.31, N = 5)
  Jetson Nano: 104.77 (SE +/- 0.83, N = 5)
  ODROID-N2: 95.59 (SE +/- 0.27, N = 5)
  ODROID-C2: 262.31 (SE +/- 1.49, N = 5)
(CXX) g++ options: -O2 -fvisibility=hidden -logg -lm
OpenCV Benchmark 3.3.0
Seconds, Fewer Is Better
  Jetson AGX Xavier: 128.00 (SE +/- 1.57, N = 3)
  Jetson TX2 Max-P: 296.00 (SE +/- 0.27, N = 3)
  Jetson TX2 Max-Q: 493.00 (SE +/- 5.74, N = 3)
  Raspberry Pi 3 Model B+: 2.74 (SE +/- 5.31, N = 3)
  ODROID-XU4: 520.70 (SE +/- 4.66, N = 9)
  Jetson Nano: 271.04 (SE +/- 0.26, N = 3)
  ODROID-N2: 243.05 (SE +/- 3.48, N = 3)
  ODROID-C2: 474.35
(CXX) g++ options: -std=c++11 -rdynamic
Tesseract OCR 4.0.0-beta.1 - Time To OCR 7 Images
Seconds, Fewer Is Better
  Jetson AGX Xavier: 71.94 (SE +/- 0.89, N = 3)
  ODROID-XU4: 180.66 (SE +/- 1.38, N = 3)
  Jetson Nano: 132.67 (SE +/- 1.50, N = 3)
  ODROID-N2: 110.73 (SE +/- 0.05, N = 3)
  ODROID-C2: 220.44 (SE +/- 0.86, N = 3)
C-Ray 1.1 Performance / Cost - Total Time - 4K, 16 Rays Per Pixel
Seconds x Dollar, Fewer Is Better
  Jetson AGX Xavier ($1299 reported): 461145.00
  Jetson TX2 Max-P ($599 reported): 350415.00
  Jetson TX2 Max-Q ($599 reported): 520531.00
  Raspberry Pi 3 Model B+ ($35 reported): 71050.00
  ASUS TinkerBoard ($66 reported): 113388.00
  Jetson TX1 Max-P ($499 reported): 375747.00
  ODROID-XU4 ($62 reported): 51274.00
  Jetson Nano ($99 reported): 91179.00
  ODROID-N2 ($64.95 reported): 31940.46
Rust Prime Benchmark Performance / Cost - Prime Number Test To 200,000,000
Seconds x Dollar, Fewer Is Better
  Jetson AGX Xavier ($1299 reported): 42048.63
  Jetson TX2 Max-P ($599 reported): 62871.04
  Jetson TX2 Max-Q ($599 reported): 101979.75
  Raspberry Pi 3 Model B+ ($35 reported): 38419.15
  ASUS TinkerBoard ($66 reported): 120189.30
  Jetson TX1 Max-P ($499 reported): 64096.55
  ODROID-XU4 ($62 reported): 35594.82
  Jetson Nano ($99 reported): 14868.81
  ODROID-N2 ($64.95 reported): 4748.49
Zstd Compression 1.3.4 Performance / Cost - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19
Seconds x Dollar, Fewer Is Better
  Jetson AGX Xavier ($1299 reported): 103997.94
  Jetson TX2 Max-P ($599 reported): 86837.03
  Jetson TX2 Max-Q ($599 reported): 152026.20
  Raspberry Pi 3 Model B+ ($35 reported): 11978.05
  ASUS TinkerBoard ($66 reported): 32776.92
  Jetson TX1 Max-P ($499 reported): 72754.20
  Jetson Nano ($99 reported): 12857.13
  ODROID-N2 ($64.95 reported): 9875.00
FLAC Audio Encoding 1.3.2 Performance / Cost - WAV To FLAC
Seconds x Dollar, Fewer Is Better
  Jetson AGX Xavier ($1299 reported): 70756.53
  Jetson TX2 Max-P ($599 reported): 38976.93
  Jetson TX2 Max-Q ($599 reported): 62463.72
  Raspberry Pi 3 Model B+ ($35 reported): 11883.55
  ASUS TinkerBoard ($66 reported): 18417.30
  Jetson TX1 Max-P ($499 reported): 39520.80
  ODROID-XU4 ($62 reported): 6015.86
  Jetson Nano ($99 reported): 10372.23
  ODROID-N2 ($64.95 reported): 6208.57
OpenCV Benchmark 3.3.0 Performance / Cost
Seconds x Dollar, Fewer Is Better
  Jetson AGX Xavier ($1299 reported): 166272.00
  Jetson TX2 Max-P ($599 reported): 177304.00
  Jetson TX2 Max-Q ($599 reported): 295307.00
  Raspberry Pi 3 Model B+ ($35 reported): 95.90
  ODROID-XU4 ($62 reported): 32283.40
  Jetson Nano ($99 reported): 26832.96
  ODROID-N2 ($64.95 reported): 15786.10
Tesseract OCR 4.0.0-beta.1 Performance / Cost - Time To OCR 7 Images
Seconds x Dollar, Fewer Is Better
  Jetson AGX Xavier ($1299 reported): 93450.06
  ODROID-XU4 ($62 reported): 11200.92
  Jetson Nano ($99 reported): 13134.33
  ODROID-N2 ($64.95 reported): 7191.91
Phoronix Test Suite v10.8.4