ODROID-N2 benchmarks for a future article.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 1904251-HV-ODROIDN2760
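If you want to reproduce the comparison on your own board, the Phoronix Test Suite is packaged for most distributions; the steps below are an illustrative sketch for a Debian/Ubuntu-based image (the suite can equally be installed from the upstream .deb or a Git checkout), ending with the command given above:

  # install the test suite (Debian/Ubuntu package shown; other install routes exist)
  sudo apt-get install phoronix-test-suite
  # run the same tests and merge your results into this comparison
  phoronix-test-suite benchmark 1904251-HV-ODROIDN2760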
HTML result view exported from: https://openbenchmarking.org/result/1904251-HV-ODROIDN2760&grr&sro .
ODROID-N2 Benchmark Comparison - Systems Tested
(Fields reported, where available: Processor, Motherboard, Memory, Disk, Graphics, Monitor, Network, OS, Kernel, Desktop, Display Server, Display Driver, OpenGL, Vulkan, Compiler, File-System, Screen Resolution. Each entry lists only the values reported for that system.)
Jetson TX1 Max-P: ARMv8 rev 1 @ 1.73GHz (4 Cores), jetson_tx1, 4096MB, 16GB 016G32, NVIDIA Tegra X1, VE228, Ubuntu 16.04, 4.4.38-tegra (aarch64), Unity 7.4.5, X Server 1.18.4, NVIDIA 28.1.0, OpenGL 4.5.0, Vulkan 1.0.8, GCC 5.4.0 20160609, ext4, 1920x1080
Jetson TX2 Max-Q: ARMv8 rev 3 @ 1.27GHz (4 Cores / 6 Threads), quill, 8192MB, 31GB 032G34, NVIDIA TEGRA, Unity 7.4.0, NVIDIA 28.2.1, GCC 5.4.0 20160609 + CUDA 9.0
Jetson TX2 Max-P: ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads)
Jetson AGX Xavier: ARMv8 rev 0 @ 2.27GHz (8 Cores), jetson-xavier, 16384MB, 31GB HBG4a2, NVIDIA Tegra Xavier, Ubuntu 18.04, 4.9.108-tegra (aarch64), Unity 7.5.0, X Server 1.19.6, NVIDIA 31.0.2, OpenGL 4.6.0, Vulkan 1.1.76, GCC 7.3.0 + CUDA 10.0
Jetson Nano: ARMv8 rev 1 @ 1.43GHz (4 Cores), jetson-nano, 4096MB, 32GB GB1QT, NVIDIA TEGRA, Realtek RTL8111/8168/8411, 4.9.140-tegra (aarch64), NVIDIA 1.0.0, Vulkan 1.1.85
Raspberry Pi 3 Model B+: ARMv7 rev 4 @ 1.40GHz (4 Cores), BCM2835 Raspberry Pi 3 Model B Plus Rev 1.3, 926MB, 32GB GB2MW, BCM2708, Raspbian 9.6, 4.19.23-v7+ (armv7l), LXDE, X Server 1.19.2, GCC 6.3.0 20170516, 656x416
ASUS TinkerBoard: ARMv7 rev 1 @ 1.80GHz (4 Cores), Rockchip (Device Tree), 2048MB, 32GB GB1QT, Debian 9.0, 4.4.16-00006-g4431f98-dirty (armv7l), X Server 1.18.4, 1024x768
ODROID-XU4: ARMv7 rev 3 @ 1.50GHz (8 Cores), Hardkernel Odroid XU4, 16GB AJTD4R, llvmpipe 2GB, VE228, Ubuntu 18.04, 4.14.37-135 (armv7l), X Server 1.19.6, OpenGL 3.3 Mesa 18.0.0-rc5 (LLVM 6.0 128 bits), GCC 7.3.0, 1920x1080
ODROID-N2: ARMv8 Cortex-A73 @ 1.90GHz (6 Cores), Hardkernel ODROID-N2, 4096MB, OSD, 4.9.156-14 (aarch64), 1920x2160
ODROID-C2: Amlogic ARMv8 Cortex-A53 @ 1.54GHz (4 Cores), ODROID-C2, 2048MB, 32GB GB1QT, 3.16.57-20 (aarch64), X Server 1.19.6, 1920x1080

Compiler Details
- Jetson TX1 Max-P, Jetson TX2 Max-Q, Jetson TX2 Max-P: --build=aarch64-linux-gnu --disable-browser-plugin --disable-libquadmath --disable-werror --enable-checking=release --enable-clocale=gnu --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --target=aarch64-linux-gnu --with-arch-directory=aarch64 --with-default-libstdcxx-abi=new -v
- Jetson AGX Xavier, Jetson Nano, ODROID-N2: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
- Raspberry Pi 3 Model B+: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv6 --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfp --with-target-system-zlib -v
- ASUS TinkerBoard: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-mode=thumb --with-target-system-zlib -v
- ODROID-XU4: --build=arm-linux-gnueabihf --disable-libitm --disable-libquadmath --disable-libquadmath-support --disable-sjlj-exceptions --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-gcc-major-version-only --with-mode=thumb --with-target-system-zlib -v
- ODROID-C2: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-as=/usr/bin/aarch64-linux-gnu-as --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-ld=/usr/bin/aarch64-linux-gnu-ld -v

Processor Details
- Jetson TX1 Max-P: Scaling Governor: tegra-cpufreq interactive
- Jetson TX2 Max-Q: Scaling Governor: tegra_cpufreq schedutil
- Jetson TX2 Max-P: Scaling Governor: tegra_cpufreq schedutil
- Jetson AGX Xavier: Scaling Governor: tegra_cpufreq schedutil
- Jetson Nano: Scaling Governor: tegra-cpufreq schedutil
- Raspberry Pi 3 Model B+: Scaling Governor: BCM2835 Freq ondemand
- ASUS TinkerBoard: Scaling Governor: cpufreq-dt interactive
- ODROID-XU4: Scaling Governor: cpufreq-dt ondemand
- ODROID-N2: Scaling Governor: arm-big-little performance
- ODROID-C2: Scaling Governor: meson_cpufreq interactive

Python Details
- Jetson TX1 Max-P: Python 2.7.12 + Python 3.5.2
- Jetson TX2 Max-Q: Python 2.7.12 + Python 3.5.2
- Jetson TX2 Max-P: Python 2.7.12 + Python 3.5.2
- Jetson AGX Xavier: Python 2.7.15rc1 + Python 3.6.7
- Jetson Nano: Python 2.7.15rc1 + Python 3.6.7
- Raspberry Pi 3 Model B+: Python 2.7.13 + Python 3.5.3
- ASUS TinkerBoard: Python 2.7.13 + Python 3.5.3
- ODROID-XU4: Python 2.7.15rc1 + Python 3.6.7
- ODROID-N2: Python 2.7.15rc1 + Python 3.6.7
- ODROID-C2: Python 2.7.15rc1 + Python 3.6.7

Kernel Details
- ODROID-XU4: usbhid.quirks=0x0eef:0x0005:0x0004

Graphics Details
- ODROID-XU4: EXA
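The Compiler Details and Scaling Governor entries above are taken from each system's logs. To confirm the equivalent details on your own board before comparing, the standard commands below work on most Linux systems; the governor write is optional, needs root, and simply mirrors the performance governor that the ODROID-N2 run used (the other boards used ondemand, interactive, or schedutil):

  # GCC version plus the 'Configured with:' line matching the Compiler Details above
  gcc -v
  # current cpufreq scaling governor for CPU0
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  # optionally pin every core to the performance governor
  echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor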
ODROID-N2 Benchmark Comparison - tests included in this result file (per-test results follow below): LeelaChessZero 0.20.1 (BLAS, CUDA + cuDNN, CUDA + cuDNN FP16), C-Ray 1.1, Rust Prime Benchmark, TTSIOD 3D Renderer 2.3b, CUDA Mini-Nbody, OpenCV Benchmark 3.3.0, NVIDIA TensorRT Inference (AlexNet, GoogleNet, ResNet50, ResNet152, VGG16, and VGG19 at FP16 and INT8, batch sizes 4 and 32, DLA cores disabled), PyBench, FLAC Audio Encoding 1.3.2, Zstd Compression 1.3.4, Tesseract OCR 4.0.0-beta.1, 7-Zip Compression 16.02, and GLmark2 (1920 x 1080).
LeelaChessZero 0.20.1 - Backend: BLAS (Nodes Per Second, More Is Better)
Jetson AGX Xavier: 47.62 (SE +/- 0.62, N = 7); Jetson Nano: 15.37 (SE +/- 0.03, N = 3); ODROID-C2: 7.33 (SE +/- 0.09, N = 7); ODROID-N2: 24.39 (SE +/- 0.10, N = 3)
1. (CXX) g++ options: -lpthread -lz
C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better)
ASUS TinkerBoard: 1718 (SE +/- 22.09, N = 3); Jetson AGX Xavier: 355 (SE +/- 7.17, N = 9); Jetson Nano: 921 (SE +/- 0.35, N = 3); Jetson TX1 Max-P: 753 (SE +/- 10.23, N = 3); Jetson TX2 Max-P: 585 (SE +/- 49.09, N = 9)
Jetson TX2 Max-Q: 869 (SE +/- 1.44, N = 3); ODROID-C2: 1535 (SE +/- 0.16, N = 3); ODROID-N2: 492 (SE +/- 0.25, N = 3); ODROID-XU4: 827 (SE +/- 29.65, N = 9); Raspberry Pi 3 Model B+: 2030 (SE +/- 2.46, N = 3)
1. (CC) gcc options: -lm -lpthread -O3
Rust Prime Benchmark - Prime Number Test To 200,000,000 (Seconds, Fewer Is Better)
ASUS TinkerBoard: 1821.05 (SE +/- 187.90, N = 6); Jetson AGX Xavier: 32.37 (SE +/- 0.00, N = 3); Jetson Nano: 150.19 (SE +/- 0.22, N = 3); Jetson TX1 Max-P: 128.45 (SE +/- 0.77, N = 3); Jetson TX2 Max-P: 104.96 (SE +/- 0.04, N = 3)
Jetson TX2 Max-Q: 170.25 (SE +/- 0.09, N = 3); ODROID-C2: 125.81 (SE +/- 0.30, N = 3); ODROID-N2: 73.11 (SE +/- 0.02, N = 3); ODROID-XU4: 574.11 (SE +/- 0.37, N = 3); Raspberry Pi 3 Model B+: 1097.69 (SE +/- 1.55, N = 3)
1. (CC) gcc options: -pie -nodefaultlibs -ldl -lrt -lpthread -lgcc_s -lc -lm -lutil
TTSIOD 3D Renderer 2.3b - Phong Rendering With Soft-Shadow Mapping (FPS, More Is Better)
ASUS TinkerBoard: 21.22 (SE +/- 0.27, N = 9); Jetson AGX Xavier: 133.00 (SE +/- 1.63, N = 12); Jetson Nano: 40.94 (SE +/- 0.11, N = 3); Jetson TX1 Max-P: 45.09 (SE +/- 0.04, N = 3); Jetson TX2 Max-P: 49.26 (SE +/- 0.15, N = 3)
Jetson TX2 Max-Q: 28.85 (SE +/- 0.46, N = 4); ODROID-C2: 22.10 (SE +/- 0.08, N = 3); ODROID-N2: 57.42 (SE +/- 0.05, N = 3); ODROID-XU4: 41.96 (SE +/- 0.97, N = 9); Raspberry Pi 3 Model B+: 17.66 (SE +/- 0.16, N = 3)
1. (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -lSDL -fopenmp -fwhole-program -lstdc++
CUDA Mini-Nbody 2015-11-10 - Test: Original ((NBody^2)/s, More Is Better)
Jetson AGX Xavier: 47.13 (SE +/- 0.00, N = 3); Jetson Nano: 4.07 (SE +/- 0.01, N = 3); Jetson TX2 Max-P: 8.24 (SE +/- 0.01, N = 3); Jetson TX2 Max-Q: 6.77 (SE +/- 0.03, N = 3)
OpenCV Benchmark 3.3.0 (Seconds, Fewer Is Better)
Jetson AGX Xavier: 128.00 (SE +/- 1.57, N = 3); Jetson Nano: 271.04 (SE +/- 4.66, N = 9); Jetson TX2 Max-P: 296.00 (SE +/- 0.27, N = 3); Jetson TX2 Max-Q: 493.00 (SE +/- 5.74, N = 3); ODROID-C2: 474.35 (SE +/- 3.48, N = 3); ODROID-N2: 243.05 (SE +/- 0.26, N = 3); ODROID-XU4: 520.70 (SE +/- 5.31, N = 3); Raspberry Pi 3 Model B+: 2.74
1. (CXX) g++ options: -std=c++11 -rdynamic
NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 259.82 (SE +/- 0.26, N = 3); Jetson Nano: 17.38 (SE +/- 0.01, N = 3); Jetson TX2 Max-P: 41.91 (SE +/- 0.07, N = 3); Jetson TX2 Max-Q: 32.67 (SE +/- 0.10, N = 3)
NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 493.22 (SE +/- 0.81, N = 3); Jetson TX2 Max-P: 22.07 (SE +/- 0.03, N = 3); Jetson TX2 Max-Q: 17.36 (SE +/- 0.00, N = 3)
NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 394.66 (SE +/- 0.23, N = 3); Jetson TX2 Max-P: 15.92 (SE +/- 0.06, N = 3); Jetson TX2 Max-Q: 12.59 (SE +/- 0.03, N = 3)
NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 203.96 (SE +/- 0.04, N = 3); Jetson TX2 Max-P: 29.83 (SE +/- 0.05, N = 3); Jetson TX2 Max-Q: 23.94 (SE +/- 0.07, N = 3)
NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 372.73 (SE +/- 1.59, N = 3); Jetson Nano: 7.76 (SE +/- 0.03, N = 3); Jetson TX2 Max-P: 18.29 (SE +/- 0.14, N = 3); Jetson TX2 Max-Q: 14.50 (SE +/- 0.15, N = 3)
NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 475.08 (SE +/- 0.10, N = 3); Jetson TX2 Max-P: 19.91 (SE +/- 0.05, N = 3); Jetson TX2 Max-Q: 15.79 (SE +/- 0.01, N = 3)
PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
ASUS TinkerBoard: 11502 (SE +/- 854.75, N = 9); Jetson AGX Xavier: 3007 (SE +/- 4.67, N = 3); Jetson Nano: 7084 (SE +/- 37.23, N = 3); Jetson TX1 Max-P: 6339 (SE +/- 18.55, N = 3); Jetson TX2 Max-P: 5408 (SE +/- 33.86, N = 3)
Jetson TX2 Max-Q: 8735 (SE +/- 42.52, N = 3); ODROID-C2: 12184 (SE +/- 28.15, N = 3); ODROID-N2: 5231 (SE +/- 9.24, N = 3); ODROID-XU4: 5009 (SE +/- 30.99, N = 3); Raspberry Pi 3 Model B+: 20913 (SE +/- 43.80, N = 3)
FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, Fewer Is Better)
ASUS TinkerBoard: 279.05 (SE +/- 2.51, N = 5); Jetson AGX Xavier: 54.47 (SE +/- 0.61, N = 5); Jetson Nano: 104.77 (SE +/- 0.83, N = 5); Jetson TX1 Max-P: 79.20 (SE +/- 0.74, N = 5); Jetson TX2 Max-P: 65.07 (SE +/- 0.15, N = 5)
Jetson TX2 Max-Q: 104.28 (SE +/- 0.18, N = 5); ODROID-C2: 262.31 (SE +/- 1.49, N = 5); ODROID-N2: 95.59 (SE +/- 0.27, N = 5); ODROID-XU4: 97.03 (SE +/- 0.31, N = 5); Raspberry Pi 3 Model B+: 339.53 (SE +/- 0.98, N = 5)
1. (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm
NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 247.95 (SE +/- 0.12, N = 3); Jetson TX2 Max-P: 36.87 (SE +/- 0.31, N = 3); Jetson TX2 Max-Q: 29.83 (SE +/- 0.18, N = 3)
Zstd Compression 1.3.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19 (Seconds, Fewer Is Better)
ASUS TinkerBoard: 496.62 (SE +/- 2.16, N = 3); Jetson AGX Xavier: 80.06 (SE +/- 0.91, N = 3); Jetson Nano: 129.87 (SE +/- 0.23, N = 3); Jetson TX1 Max-P: 145.80 (SE +/- 0.42, N = 3); Jetson TX2 Max-P: 144.97 (SE +/- 0.29, N = 3)
Jetson TX2 Max-Q: 253.80 (SE +/- 1.02, N = 3); ODROID-C2: 314.33 (SE +/- 1.41, N = 3); ODROID-N2: 152.04 (SE +/- 1.77, N = 3); Raspberry Pi 3 Model B+: 342.23 (SE +/- 1.03, N = 3)
1. (CC) gcc options: -O3 -pthread -lz -llzma
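For context, level 19 is the highest standard zstd compression level, so this test is heavily CPU-bound. Timing a level-19 compression of the same image by hand gives a comparable single-run number; the exact invocation used by the test profile is not recorded in this result file, so the command below is only an illustrative sketch:

  # -19 selects compression level 19, -k keeps the input file
  time zstd -19 -k ubuntu-16.04.3-server-i386.img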
NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 224.19 (SE +/- 0.22, N = 3); Jetson Nano: 15.76 (SE +/- 0.04, N = 3); Jetson TX2 Max-P: 35.11 (SE +/- 0.36, N = 3); Jetson TX2 Max-Q: 27.34 (SE +/- 0.34, N = 3)
NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 547.50 (SE +/- 0.03, N = 3); Jetson Nano: 41.04 (SE +/- 0.25, N = 3); Jetson TX2 Max-P: 92.28 (SE +/- 1.32, N = 12); Jetson TX2 Max-Q: 72.01 (SE +/- 1.10, N = 12)
NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 303.78 (SE +/- 0.46, N = 3); Jetson TX2 Max-P: 17.56 (SE +/- 0.25, N = 6); Jetson TX2 Max-Q: 14.24 (SE +/- 0.20, N = 5)
NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 1215.08 (SE +/- 0.25, N = 3); Jetson Nano: 25.08 (SE +/- 0.06, N = 3); Jetson TX2 Max-P: 59.69 (SE +/- 0.04, N = 3); Jetson TX2 Max-Q: 47.15 (SE +/- 0.08, N = 3)
Tesseract OCR 4.0.0-beta.1 - Time To OCR 7 Images (Seconds, Fewer Is Better)
Jetson AGX Xavier: 71.94 (SE +/- 0.89, N = 3); Jetson Nano: 132.67 (SE +/- 1.50, N = 3); ODROID-C2: 220.44 (SE +/- 0.86, N = 3); ODROID-N2: 110.73 (SE +/- 0.05, N = 3); ODROID-XU4: 180.66 (SE +/- 1.38, N = 3)
NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 172.50 (SE +/- 0.50, N = 3); Jetson Nano: 11.59 (SE +/- 0.05, N = 2); Jetson TX2 Max-P: 26.56 (SE +/- 0.38, N = 3); Jetson TX2 Max-Q: 21.04 (SE +/- 0.34, N = 3)
NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 208.76 (SE +/- 0.10, N = 3); Jetson Nano: 14.35 (SE +/- 0.02, N = 2); Jetson TX2 Max-P: 32.64 (SE +/- 0.50, N = 4); Jetson TX2 Max-Q: 25.99 (SE +/- 0.13, N = 3)
NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 636.00 (SE +/- 1.23, N = 3); Jetson Nano: 46.51 (SE +/- 0.02, N = 3); Jetson TX2 Max-P: 111.00 (SE +/- 1.22, N = 3); Jetson TX2 Max-Q: 86.08 (SE +/- 0.86, N = 3)
7-Zip Compression 16.02 - Compress Speed Test (MIPS, More Is Better)
ASUS TinkerBoard: 2836 (SE +/- 34.93, N = 3); Jetson AGX Xavier: 19212 (SE +/- 274.18, N = 12); Jetson Nano: 4049 (SE +/- 18.00, N = 3); Jetson TX1 Max-P: 4508 (SE +/- 13.43, N = 3); Jetson TX2 Max-P: 5593 (SE +/- 20.85, N = 3)
Jetson TX2 Max-Q: 3294 (SE +/- 13.05, N = 3); ODROID-C2: 2121 (SE +/- 7.36, N = 3); ODROID-N2: 5970 (SE +/- 2.40, N = 3); ODROID-XU4: 4120 (SE +/- 89.16, N = 12); Raspberry Pi 3 Model B+: 2013 (SE +/- 23.74, N = 11)
1. (CXX) g++ options: -pipe -lpthread
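The MIPS figures above come from 7-Zip's built-in benchmark mode. A comparable number can be produced manually with the p7zip benchmark command shown below (a sketch only; the binary may be named 7z, 7za, or 7zr depending on the package, and the test profile may pass additional options such as a fixed dictionary size):

  # run 7-Zip's built-in compression/decompression benchmark
  7z b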
NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 265.81 (SE +/- 0.20, N = 3); Jetson TX2 Max-P: 14.32 (SE +/- 0.25, N = 4); Jetson TX2 Max-Q: 11.45 (SE +/- 0.23, N = 3)
NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 1006.00 (SE +/- 0.21, N = 3); Jetson Nano: 98.93 (SE +/- 0.19, N = 3); Jetson TX2 Max-P: 233.00 (SE +/- 4.50, N = 3); Jetson TX2 Max-Q: 179.00 (SE +/- 2.17, N = 8)
NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 1693.00 (SE +/- 8.72, N = 3); Jetson Nano: 55.66 (SE +/- 0.18, N = 3); Jetson TX2 Max-P: 130.00 (SE +/- 0.74, N = 3); Jetson TX2 Max-Q: 104.00 (SE +/- 0.07, N = 3)
NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 1200 (SE +/- 1.82, N = 3); Jetson Nano: 118 (SE +/- 2.12, N = 12); Jetson TX2 Max-P: 264 (SE +/- 7.77, N = 12); Jetson TX2 Max-Q: 216 (SE +/- 3.03, N = 6)
GLmark2 - Resolution: 1920 x 1080 (Score, More Is Better)
Jetson AGX Xavier: 2876; Jetson Nano: 646
NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 796.00 (SE +/- 2.48, N = 3); Jetson Nano: 83.37 (SE +/- 0.70, N = 3); Jetson TX2 Max-P: 197.00 (SE +/- 2.27, N = 3); Jetson TX2 Max-Q: 156.00 (SE +/- 1.90, N = 12)
NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 902.78 (SE +/- 1.86, N = 3); Jetson Nano: 20.96 (SE +/- 0.36, N = 3); Jetson TX2 Max-P: 49.97 (SE +/- 0.79, N = 4); Jetson TX2 Max-Q: 39.15 (SE +/- 0.64, N = 3)
NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 2038 (SE +/- 2.07, N = 3); Jetson Nano: 201 (SE +/- 1.59, N = 3); Jetson TX2 Max-P: 462 (SE +/- 7.68, N = 12); Jetson TX2 Max-Q: 374 (SE +/- 2.82, N = 3)
NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 1146.00 (SE +/- 4.31, N = 3); Jetson Nano: 47.82 (SE +/- 0.60, N = 3); Jetson TX2 Max-P: 113.00 (SE +/- 1.65, N = 3); Jetson TX2 Max-Q: 88.88 (SE +/- 1.32, N = 3)
LeelaChessZero 0.20.1 - Backend: CUDA + cuDNN (Nodes Per Second, More Is Better)
Jetson AGX Xavier: 953 (SE +/- 6.14, N = 3); Jetson Nano: 140 (SE +/- 0.26, N = 3)
1. (CXX) g++ options: -lpthread -lz
NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 3143 (SE +/- 1.06, N = 3); Jetson Nano: 128 (SE +/- 0.06, N = 3); Jetson TX2 Max-P: 301 (SE +/- 0.52, N = 3); Jetson TX2 Max-Q: 237 (SE +/- 1.39, N = 3)
NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, More Is Better)
Jetson AGX Xavier: 1143.00 (SE +/- 2.59, N = 3); Jetson Nano: 84.10 (SE +/- 0.72, N = 3); Jetson TX2 Max-P: 184.00 (SE +/- 2.79, N = 5); Jetson TX2 Max-Q: 148.00 (SE +/- 0.91, N = 3)
LeelaChessZero 0.20.1 - Backend: CUDA + cuDNN FP16 (Nodes Per Second, More Is Better)
Jetson AGX Xavier: 2515.01 (SE +/- 7.60, N = 3)
1. (CXX) g++ options: -lpthread -lz
Meta Performance Per Dollar (Performance Per Dollar, More Is Better)
ODROID-N2: 19.17 (based on the $64.95 reported cost and an average result value of 1244.91)
Reported costs used for all Performance / Cost figures in this section: ASUS TinkerBoard $66, Jetson AGX Xavier $1299, Jetson Nano $99, Jetson TX1 Max-P $499, Jetson TX2 Max-P $599, Jetson TX2 Max-Q $599, ODROID-N2 $64.95, ODROID-XU4 $62, Raspberry Pi 3 Model B+ $35.
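The Performance / Cost figures in this section appear to be derived directly from the raw results above and the reported board costs: "More Is Better" results are divided by cost, "Fewer Is Better" results (seconds) are multiplied by cost, and the meta figure divides the average raw value by cost. A quick spot check with bc, using ODROID-N2 numbers from this file:

  echo "scale=4; 5970 / 64.95" | bc      # 7-Zip: ~91.92 MIPS per dollar
  echo "scale=4; 110.73 * 64.95" | bc    # Tesseract OCR: ~7191.91 seconds x dollar
  echo "scale=4; 1244.91 / 64.95" | bc   # meta figure: average value / cost, ~19.17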
Tesseract OCR 4.0.0-beta.1 - Performance / Cost - Time To OCR 7 Images (Seconds x Dollar, Fewer Is Better)
Jetson AGX Xavier: 93450.06; Jetson Nano: 13134.33; ODROID-N2: 7191.91; ODROID-XU4: 11200.92
LeelaChessZero 0.20.1 - Performance / Cost - Backend: CUDA + cuDNN FP16 (Nodes Per Second Per Dollar, More Is Better)
Jetson AGX Xavier: 1.94
LeelaChessZero 0.20.1 - Performance / Cost - Backend: CUDA + cuDNN (Nodes Per Second Per Dollar, More Is Better)
Jetson AGX Xavier: 0.73; Jetson Nano: 1.41
LeelaChessZero 0.20.1 - Performance / Cost - Backend: BLAS (Nodes Per Second Per Dollar, More Is Better)
Jetson AGX Xavier: 0.04; Jetson Nano: 0.16; ODROID-N2: 0.38
GLmark2 - Performance / Cost - Resolution: 1920 x 1080 (Score Per Dollar, More Is Better)
Jetson AGX Xavier: 2.21; Jetson Nano: 6.53
OpenCV Benchmark 3.3.0 - Performance / Cost (Seconds x Dollar, Fewer Is Better)
Jetson AGX Xavier: 166272.00; Jetson Nano: 26832.96; Jetson TX2 Max-P: 177304.00; Jetson TX2 Max-Q: 295307.00; ODROID-N2: 15786.10; ODROID-XU4: 32283.40; Raspberry Pi 3 Model B+: 95.90
NVIDIA TensorRT Inference - Performance / Cost (Images Per Second Per Dollar, More Is Better; DLA Cores: Disabled)
Neural Network - Precision - Batch Size: Jetson AGX Xavier; Jetson Nano; Jetson TX2 Max-P; Jetson TX2 Max-Q
ResNet152 - INT8 - 32: 0.38; n/a; 0.04; 0.03
ResNet152 - FP16 - 32: 0.20; 0.18; 0.07; 0.05
GoogleNet - INT8 - 32: 1.30; 0.56; 0.22; 0.17
GoogleNet - FP16 - 32: 0.77; 1.00; 0.39; 0.30
ResNet50 - INT8 - 32: 0.94; 0.25; 0.10; 0.08
ResNet50 - FP16 - 32: 0.49; 0.47; 0.19; 0.14
ResNet152 - INT8 - 4: 0.29; 0.08; 0.03; 0.02
ResNet152 - FP16 - 4: 0.17; 0.16; 0.06; 0.05
GoogleNet - INT8 - 4: 0.88; 0.48; 0.19; 0.15
GoogleNet - FP16 - 4: 0.61; 0.84; 0.33; 0.26
ResNet50 - INT8 - 4: 0.69; 0.21; 0.08; 0.07
ResNet50 - FP16 - 4: 0.42; 0.41; 0.15; 0.12
AlexNet - INT8 - 32: 2.42; 1.29; 0.50; 0.40
AlexNet - FP16 - 32: 1.57; 2.03; 0.77; 0.62
AlexNet - INT8 - 4: 0.88; 0.85; 0.31; 0.25
AlexNet - FP16 - 4: 0.92; 1.19; 0.44; 0.36
VGG19 - INT8 - 32: 0.30; n/a; 0.03; 0.02
VGG19 - FP16 - 32: 0.16; n/a; 0.05; 0.04
VGG16 - INT8 - 32: 0.37; n/a; 0.03; 0.03
VGG16 - FP16 - 32: 0.19; n/a; 0.06; 0.05
VGG19 - INT8 - 4: 0.20; n/a; 0.02; 0.02
VGG19 - FP16 - 4: 0.13; 0.12; 0.04; 0.04
VGG16 - INT8 - 4: 0.23; n/a; 0.03; 0.02
VGG16 - FP16 - 4: 0.16; 0.14; 0.05; 0.04
CUDA Mini-Nbody 2015-11-10 - Performance / Cost - Test: Original ((NBody^2)/s Per Dollar, More Is Better)
Jetson AGX Xavier: 0.04; Jetson Nano: 0.04; Jetson TX2 Max-P: 0.01; Jetson TX2 Max-Q: 0.01
PyBench 2018-02-16 - Performance / Cost - Total For Average Test Times (Milliseconds x Dollar, Fewer Is Better)
ASUS TinkerBoard: 759132.00; Jetson AGX Xavier: 3906093.00; Jetson Nano: 701316.00; Jetson TX1 Max-P: 3163161.00; Jetson TX2 Max-P: 3239392.00; Jetson TX2 Max-Q: 5232265.00; ODROID-N2: 339753.45; ODROID-XU4: 310558.00; Raspberry Pi 3 Model B+: 731955.00
FLAC Audio Encoding 1.3.2 - Performance / Cost - WAV To FLAC (Seconds x Dollar, Fewer Is Better)
ASUS TinkerBoard: 18417.30; Jetson AGX Xavier: 70756.53; Jetson Nano: 10372.23; Jetson TX1 Max-P: 39520.80; Jetson TX2 Max-P: 38976.93; Jetson TX2 Max-Q: 62463.72; ODROID-N2: 6208.57; ODROID-XU4: 6015.86; Raspberry Pi 3 Model B+: 11883.55
Zstd Compression 1.3.4 - Performance / Cost - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19 (Seconds x Dollar, Fewer Is Better)
ASUS TinkerBoard: 32776.92; Jetson AGX Xavier: 103997.94; Jetson Nano: 12857.13; Jetson TX1 Max-P: 72754.20; Jetson TX2 Max-P: 86837.03; Jetson TX2 Max-Q: 152026.20; ODROID-N2: 9875.00; Raspberry Pi 3 Model B+: 11978.05
Rust Prime Benchmark - Performance / Cost - Prime Number Test To 200,000,000 (Seconds x Dollar, Fewer Is Better)
ASUS TinkerBoard: 120189.30; Jetson AGX Xavier: 42048.63; Jetson Nano: 14868.81; Jetson TX1 Max-P: 64096.55; Jetson TX2 Max-P: 62871.04; Jetson TX2 Max-Q: 101979.75; ODROID-N2: 4748.49; ODROID-XU4: 35594.82; Raspberry Pi 3 Model B+: 38419.15
C-Ray 1.1 - Performance / Cost - Total Time - 4K, 16 Rays Per Pixel (Seconds x Dollar, Fewer Is Better)
ASUS TinkerBoard: 113388.00; Jetson AGX Xavier: 461145.00; Jetson Nano: 91179.00; Jetson TX1 Max-P: 375747.00; Jetson TX2 Max-P: 350415.00; Jetson TX2 Max-Q: 520531.00; ODROID-N2: 31940.46; ODROID-XU4: 51274.00; Raspberry Pi 3 Model B+: 71050.00
7-Zip Compression 16.02 - Performance / Cost - Compress Speed Test (MIPS Per Dollar, More Is Better)
ASUS TinkerBoard: 42.97; Jetson AGX Xavier: 14.79; Jetson Nano: 40.90; Jetson TX1 Max-P: 9.03; Jetson TX2 Max-P: 9.34; Jetson TX2 Max-Q: 5.50; ODROID-N2: 91.92; ODROID-XU4: 66.45; Raspberry Pi 3 Model B+: 57.51
TTSIOD 3D Renderer 2.3b - Performance / Cost - Phong Rendering With Soft-Shadow Mapping (FPS Per Dollar, More Is Better)
ASUS TinkerBoard: 0.32; Jetson AGX Xavier: 0.10; Jetson Nano: 0.41; Jetson TX1 Max-P: 0.09; Jetson TX2 Max-P: 0.08; Jetson TX2 Max-Q: 0.05; ODROID-N2: 0.88; ODROID-XU4: 0.68; Raspberry Pi 3 Model B+: 0.50
Phoronix Test Suite v10.8.4