Jetson Nano Developer Kit

Benchmarks for a future article on Phoronix.com.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1903186-HV-JETSONNAN05
Test Runs

  Result Identifier          Date Run          Test Duration
  Jetson TX1 Max-P           March 17 2019     1 Hour, 20 Minutes
  Jetson TX2 Max-Q           March 16 2019     7 Hours, 23 Minutes
  Jetson TX2 Max-P           March 15 2019     6 Hours, 25 Minutes
  Jetson AGX Xavier          March 15 2019     4 Hours, 1 Minute
  Jetson Nano                March 17 2019     7 Hours, 18 Minutes
  Raspberry Pi 3 Model B+    March 16 2019     4 Hours, 32 Minutes
  ASUS TinkerBoard           March 16 2019     7 Hours, 20 Minutes
  ODROID-XU4                 March 17 2019     4 Hours, 21 Minutes
  Average                                      5 Hours, 20 Minutes


Systems Under Test

Jetson TX1 Max-P:
  Processor: ARMv8 rev 1 @ 1.73GHz (4 Cores); Motherboard: jetson_tx1; Memory: 4096MB; Disk: 16GB 016G32; Graphics: NVIDIA Tegra X1; Monitor: VE228; OS: Ubuntu 16.04; Kernel: 4.4.38-tegra (aarch64); Desktop: Unity 7.4.5; Display Server: X Server 1.18.4; Display Driver: NVIDIA 28.1.0; OpenGL: 4.5.0; Vulkan: 1.0.8; Compiler: GCC 5.4.0 20160609; File-System: ext4; Screen Resolution: 1920x1080

Jetson TX2 Max-Q:
  Processor: ARMv8 rev 3 @ 1.27GHz (4 Cores / 6 Threads); Motherboard: quill; Memory: 8192MB; Disk: 31GB 032G34; Graphics: NVIDIA TEGRA; Desktop: Unity 7.4.0; Display Driver: NVIDIA 28.2.1; Compiler: GCC 5.4.0 20160609 + CUDA 9.0; remaining fields as reported for the Jetson TX1 Max-P

Jetson TX2 Max-P:
  Processor: ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads); remaining fields as reported for the Jetson TX2 Max-Q

Jetson AGX Xavier:
  Processor: ARMv8 rev 0 @ 2.27GHz (8 Cores); Motherboard: jetson-xavier; Memory: 16384MB; Disk: 31GB HBG4a2; Graphics: NVIDIA Tegra Xavier; OS: Ubuntu 18.04; Kernel: 4.9.108-tegra (aarch64); Desktop: Unity 7.5.0; Display Server: X Server 1.19.6; Display Driver: NVIDIA 31.0.2; OpenGL: 4.6.0; Vulkan: 1.1.76; Compiler: GCC 7.3.0 + CUDA 10.0

Jetson Nano:
  Processor: ARMv8 rev 1 @ 1.43GHz (4 Cores); Motherboard: jetson-nano; Memory: 4096MB; Disk: 32GB GB1QT; Graphics: NVIDIA TEGRA; Network: Realtek RTL8111/8168/8411; Kernel: 4.9.140-tegra (aarch64); Display Driver: NVIDIA 1.0.0; Vulkan: 1.1.85

Raspberry Pi 3 Model B+:
  Processor: ARMv7 rev 4 @ 1.40GHz (4 Cores); Motherboard: BCM2835 Raspberry Pi 3 Model B Plus Rev 1.3; Memory: 926MB; Disk: 32GB GB2MW; Graphics: BCM2708; OS: Raspbian 9.6; Kernel: 4.19.23-v7+ (armv7l); Desktop: LXDE; Display Server: X Server 1.19.2; Compiler: GCC 6.3.0 20170516; Screen Resolution: 656x416

ASUS TinkerBoard:
  Processor: ARMv7 rev 1 @ 1.80GHz (4 Cores); Motherboard: Rockchip (Device Tree); Memory: 2048MB; Disk: 32GB GB1QT; OS: Debian 9.0; Kernel: 4.4.16-00006-g4431f98-dirty (armv7l); Display Server: X Server 1.18.4; Screen Resolution: 1024x768

ODROID-XU4:
  Processor: ARMv7 rev 3 @ 1.50GHz (8 Cores); Motherboard: ODROID-XU4 Hardkernel Odroid XU4; Disk: 16GB AJTD4R; Graphics: llvmpipe 2GB; Monitor: VE228; OS: Ubuntu 18.04; Kernel: 4.14.37-135 (armv7l); Display Server: X Server 1.19.6; OpenGL: 3.3 Mesa 18.0.0-rc5 (LLVM 6.0 128 bits); Compiler: GCC 7.3.0; Screen Resolution: 1920x1080

Compiler Details
- Jetson TX1 Max-P: --build=aarch64-linux-gnu --disable-browser-plugin --disable-libquadmath --disable-werror --enable-checking=release --enable-clocale=gnu --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --target=aarch64-linux-gnu --with-arch-directory=aarch64 --with-default-libstdcxx-abi=new -v
- Jetson TX2 Max-Q: same configuration as the Jetson TX1 Max-P
- Jetson TX2 Max-P: same configuration as the Jetson TX1 Max-P
- Jetson AGX Xavier: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
- Jetson Nano: same configuration as the Jetson AGX Xavier
- Raspberry Pi 3 Model B+: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv6 --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfp --with-target-system-zlib -v
- ASUS TinkerBoard: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-mode=thumb --with-target-system-zlib -v
- ODROID-XU4: --build=arm-linux-gnueabihf --disable-libitm --disable-libquadmath --disable-libquadmath-support --disable-sjlj-exceptions --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-gcc-major-version-only --with-mode=thumb --with-target-system-zlib -v

Processor Details
- Jetson TX1 Max-P: Scaling Governor: tegra-cpufreq interactive
- Jetson TX2 Max-Q: Scaling Governor: tegra_cpufreq schedutil
- Jetson TX2 Max-P: Scaling Governor: tegra_cpufreq schedutil
- Jetson AGX Xavier: Scaling Governor: tegra_cpufreq schedutil
- Jetson Nano: Scaling Governor: tegra-cpufreq schedutil
- Raspberry Pi 3 Model B+: Scaling Governor: BCM2835 Freq ondemand
- ASUS TinkerBoard: Scaling Governor: cpufreq-dt interactive
- ODROID-XU4: Scaling Governor: cpufreq-dt ondemand

Python Details
- Jetson TX1 Max-P: Python 2.7.12 + Python 3.5.2
- Jetson TX2 Max-Q: Python 2.7.12 + Python 3.5.2
- Jetson TX2 Max-P: Python 2.7.12 + Python 3.5.2
- Jetson AGX Xavier: Python 2.7.15rc1 + Python 3.6.7
- Jetson Nano: Python 2.7.15rc1 + Python 3.6.7
- Raspberry Pi 3 Model B+: Python 2.7.13 + Python 3.5.3
- ASUS TinkerBoard: Python 2.7.13 + Python 3.5.3
- ODROID-XU4: Python 2.7.15rc1 + Python 3.6.7

Kernel Details
- ODROID-XU4: usbhid.quirks=0x0eef:0x0005:0x0004

Graphics Details
- ODROID-XU4: EXA

[Figure: logarithmic result overview across all eight systems, covering 7-Zip Compression, TTSIOD 3D Renderer, PyBench, FLAC Audio Encoding, Rust Prime Benchmark, and C-Ray.]

[Table: side-by-side results for all eight systems across cuda-mini-nbody (Original), glmark2 (1920 x 1080), tensorrt-inference (VGG16, VGG19, AlexNet, ResNet50, GoogleNet, ResNet152 at FP16 and INT8, batch sizes 4 and 32, DLA cores disabled), lczero (BLAS, CUDA + cuDNN, CUDA + cuDNN FP16), ttsiod-renderer (Phong Rendering With Soft-Shadow Mapping), compress-7zip, c-ray (Total Time, 4K, 16 Rays Per Pixel), rust-prime (Prime Number Test To 200,000,000), compress-zstd (Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19), encode-flac (WAV To FLAC), opencv-bench, pybench (Total For Average Test Times), and tesseract-ocr (Time To OCR 7 Images). Individual per-test results follow.]
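OpenBenchmarking.org overview graphs like the logarithmic one above summarize many heterogeneous tests per system with a geometric mean, so that no single test with large absolute numbers dominates the composite. A minimal sketch of that aggregation (the scores below are made-up normalized values, not data from this result file):

```python
import math

def geometric_mean(values):
    """Geometric mean: the n-th root of the product of n positive values.
    Computed in log space to avoid overflow on long lists of scores."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical normalized per-test scores for one board
scores = [2.0, 8.0, 4.0]
composite = geometric_mean(scores)  # ~4.0
```

A geometric mean is also why the overview is drawn on a logarithmic axis: equal ratios, not equal differences, appear equally spaced.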

CUDA Mini-Nbody

CUDA Mini-Nbody 2015-11-10 - Test: Original ((NBody^2)/s, more is better)

  Jetson AGX Xavier   47.13  (SE +/- 0.00, N = 3; min 47.12, max 47.14)
  Jetson Nano          4.07  (SE +/- 0.01, N = 3; min 4.07, max 4.09)
  Jetson TX2 Max-P     8.24  (SE +/- 0.01, N = 3; min 8.23, max 8.25)
  Jetson TX2 Max-Q     6.77  (SE +/- 0.03, N = 3; min 6.71, max 6.8)
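Throughout these tables, "SE +/- x, N = n" is the standard error of the mean over n runs of the test. A small sketch of how that figure is derived (the sample values are illustrative; the per-run data behind each result is not published here):

```python
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / len(samples) ** 0.5

runs = [6.71, 6.80, 6.80]          # illustrative run times, N = 3
avg = statistics.mean(runs)        # the value shown on the bar
se = standard_error(runs)          # the "SE +/-" value shown beside it
```

A small SE relative to the mean indicates the runs were consistent; Phoronix Test Suite automatically re-runs tests whose deviation is too high.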

GLmark2

This is a test of any system-installed GLmark2 OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 - Resolution: 1920 x 1080 (Score, more is better)

  Jetson AGX Xavier   2876
  Jetson Nano          646

NVIDIA TensorRT Inference

This test profile uses any existing system installation of NVIDIA TensorRT for carrying out inference benchmarks with various neural networks. Learn more via the OpenBenchmarking.org test page.
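The TensorRT results below are reported as images per second at a fixed batch size (4 or 32); throughput is simply the batch size divided by the mean time to infer one batch, which is why large batches usually score higher despite longer per-batch latency. A sketch of that conversion (the latency figure is hypothetical, not measured data):

```python
def images_per_second(batch_size, batch_latency_s):
    """Throughput in images/s given the mean wall time to infer one batch."""
    return batch_size / batch_latency_s

# Hypothetical: a 4-image batch completing in 19.2 ms end to end
throughput = images_per_second(4, 0.0192)  # ~208.3 images/s
```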

Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   208.76  (SE +/- 0.10, N = 3; min 208.63, max 208.96)
  Jetson Nano          14.35  (SE +/- 0.02, N = 2; min 14.33, max 14.37)
  Jetson TX2 Max-P     32.64  (SE +/- 0.50, N = 4; min 31.7, max 33.98)
  Jetson TX2 Max-Q     25.99  (SE +/- 0.13, N = 3; min 25.73, max 26.17)

Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   303.78  (SE +/- 0.46, N = 3; min 303.01, max 304.61)
  Jetson TX2 Max-P     17.56  (SE +/- 0.25, N = 6; min 16.36, max 18.11)
  Jetson TX2 Max-Q     14.24  (SE +/- 0.20, N = 5; min 13.46, max 14.6)

Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   172.50  (SE +/- 0.50, N = 3; min 171.71, max 173.43)
  Jetson Nano          11.59  (SE +/- 0.05, N = 2; min 11.54, max 11.64)
  Jetson TX2 Max-P     26.56  (SE +/- 0.38, N = 3; min 25.87, max 27.2)
  Jetson TX2 Max-Q     21.04  (SE +/- 0.34, N = 3; min 20.54, max 21.7)

Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   265.81  (SE +/- 0.20, N = 3; min 265.44, max 266.11)
  Jetson TX2 Max-P     14.32  (SE +/- 0.25, N = 4; min 13.63, max 14.8)
  Jetson TX2 Max-Q     11.45  (SE +/- 0.23, N = 3; min 10.99, max 11.69)

Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   247.95  (SE +/- 0.12, N = 3; min 247.73, max 248.14)
  Jetson TX2 Max-P     36.87  (SE +/- 0.31, N = 3; min 36.25, max 37.19)
  Jetson TX2 Max-Q     29.83  (SE +/- 0.18, N = 3; min 29.48, max 30.12)

Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   475.08  (SE +/- 0.10, N = 3; min 474.88, max 475.2)
  Jetson TX2 Max-P     19.91  (SE +/- 0.05, N = 3; min 19.83, max 19.99)
  Jetson TX2 Max-Q     15.79  (SE +/- 0.01, N = 3; min 15.76, max 15.81)

Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   203.96  (SE +/- 0.04, N = 3; min 203.92, max 204.03)
  Jetson TX2 Max-P     29.83  (SE +/- 0.05, N = 3; min 29.76, max 29.94)
  Jetson TX2 Max-Q     23.94  (SE +/- 0.07, N = 3; min 23.79, max 24.04)

Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   394.66  (SE +/- 0.23, N = 3; min 394.4, max 395.11)
  Jetson TX2 Max-P     15.92  (SE +/- 0.06, N = 3; min 15.81, max 16)
  Jetson TX2 Max-Q     12.59  (SE +/- 0.03, N = 3; min 12.53, max 12.63)

Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   1200  (SE +/- 1.82, N = 3; min 1196.24, max 1201.91)
  Jetson Nano          118  (SE +/- 2.12, N = 12; min 104.75, max 127.58)
  Jetson TX2 Max-P     264  (SE +/- 7.77, N = 12; min 222.34, max 304.01)
  Jetson TX2 Max-Q     216  (SE +/- 3.03, N = 6; min 202.59, max 224.23)

Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   1143.00  (SE +/- 2.59, N = 3; min 1138.12, max 1147.08)
  Jetson Nano           84.10  (SE +/- 0.72, N = 3; min 82.88, max 85.38)
  Jetson TX2 Max-P     184.00  (SE +/- 2.79, N = 5; min 175.71, max 192.25)
  Jetson TX2 Max-Q     148.00  (SE +/- 0.91, N = 3; min 146.67, max 149.83)

Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   2038  (SE +/- 2.07, N = 3; min 2035.4, max 2042.31)
  Jetson Nano          201  (SE +/- 1.59, N = 3; min 197.66, max 202.54)
  Jetson TX2 Max-P     462  (SE +/- 7.68, N = 12; min 418.29, max 493.49)
  Jetson TX2 Max-Q     374  (SE +/- 2.82, N = 3; min 368.3, max 377.97)

Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   3143  (SE +/- 1.06, N = 3; min 3140.95, max 3144.59)
  Jetson Nano          128  (SE +/- 0.06, N = 3; min 127.47, max 127.67)
  Jetson TX2 Max-P     301  (SE +/- 0.52, N = 3; min 300.06, max 301.86)
  Jetson TX2 Max-Q     237  (SE +/- 1.39, N = 3; min 235.62, max 239.84)

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   547.50  (SE +/- 0.03, N = 3; min 547.46, max 547.56)
  Jetson Nano          41.04  (SE +/- 0.25, N = 3; min 40.75, max 41.54)
  Jetson TX2 Max-P     92.28  (SE +/- 1.32, N = 12; min 85.37, max 100.02)
  Jetson TX2 Max-Q     72.01  (SE +/- 1.10, N = 12; min 65.28, max 77.6)

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   902.78  (SE +/- 1.86, N = 3; min 899.76, max 906.18)
  Jetson Nano          20.96  (SE +/- 0.36, N = 3; min 20.29, max 21.53)
  Jetson TX2 Max-P     49.97  (SE +/- 0.79, N = 4; min 47.92, max 51.79)
  Jetson TX2 Max-Q     39.15  (SE +/- 0.64, N = 3; min 37.88, max 39.9)

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   796.00  (SE +/- 2.48, N = 3; min 790.77, max 798.88)
  Jetson Nano          83.37  (SE +/- 0.70, N = 3; min 82.6, max 84.77)
  Jetson TX2 Max-P    197.00  (SE +/- 2.27, N = 3; min 193.49, max 201.35)
  Jetson TX2 Max-Q    156.00  (SE +/- 1.90, N = 12; min 146.96, max 165.88)

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   1146.00  (SE +/- 4.31, N = 3; min 1137.32, max 1151.7)
  Jetson Nano           47.82  (SE +/- 0.60, N = 3; min 46.83, max 48.89)
  Jetson TX2 Max-P     113.00  (SE +/- 1.65, N = 3; min 110.9, max 116.36)
  Jetson TX2 Max-Q      88.88  (SE +/- 1.32, N = 3; min 86.29, max 90.62)

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   224.19  (SE +/- 0.22, N = 3; min 223.75, max 224.48)
  Jetson Nano          15.76  (SE +/- 0.04, N = 3; min 15.71, max 15.83)
  Jetson TX2 Max-P     35.11  (SE +/- 0.36, N = 3; min 34.41, max 35.61)
  Jetson TX2 Max-Q     27.34  (SE +/- 0.34, N = 3; min 26.85, max 27.98)

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   372.73  (SE +/- 1.59, N = 3; min 370.1, max 375.59)
  Jetson Nano           7.76  (SE +/- 0.03, N = 3; min 7.71, max 7.8)
  Jetson TX2 Max-P     18.29  (SE +/- 0.14, N = 3; min 18.02, max 18.46)
  Jetson TX2 Max-Q     14.50  (SE +/- 0.15, N = 3; min 14.29, max 14.8)

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   636.00  (SE +/- 1.23, N = 3; min 633.64, max 637.77)
  Jetson Nano          46.51  (SE +/- 0.02, N = 3; min 46.48, max 46.55)
  Jetson TX2 Max-P    111.00  (SE +/- 1.22, N = 3; min 108.57, max 112.8)
  Jetson TX2 Max-Q     86.08  (SE +/- 0.86, N = 3; min 84.39, max 87.21)

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   1215.08  (SE +/- 0.25, N = 3; min 1214.64, max 1215.5)
  Jetson Nano           25.08  (SE +/- 0.06, N = 3; min 24.98, max 25.19)
  Jetson TX2 Max-P      59.69  (SE +/- 0.04, N = 3; min 59.62, max 59.74)
  Jetson TX2 Max-Q      47.15  (SE +/- 0.08, N = 3; min 47.01, max 47.28)

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   1006.00  (SE +/- 0.21, N = 3; min 1005.65, max 1006.32)
  Jetson Nano           98.93  (SE +/- 0.19, N = 3; min 98.55, max 99.16)
  Jetson TX2 Max-P     233.00  (SE +/- 4.50, N = 3; min 223.96, max 237.67)
  Jetson TX2 Max-Q     179.00  (SE +/- 2.17, N = 8; min 171.57, max 186.27)

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   1693.00  (SE +/- 8.72, N = 3; min 1675.82, max 1703.42)
  Jetson Nano           55.66  (SE +/- 0.18, N = 3; min 55.3, max 55.89)
  Jetson TX2 Max-P     130.00  (SE +/- 0.74, N = 3; min 128.85, max 131.41)
  Jetson TX2 Max-Q     104.00  (SE +/- 0.07, N = 3; min 103.51, max 103.76)

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   259.82  (SE +/- 0.26, N = 3; min 259.31, max 260.15)
  Jetson Nano          17.38  (SE +/- 0.01, N = 3; min 17.37, max 17.4)
  Jetson TX2 Max-P     41.91  (SE +/- 0.07, N = 3; min 41.81, max 42.04)
  Jetson TX2 Max-Q     32.67  (SE +/- 0.10, N = 3; min 32.48, max 32.81)

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled (Images Per Second, more is better)

  Jetson AGX Xavier   493.22  (SE +/- 0.81, N = 3; min 491.63, max 494.31)
  Jetson TX2 Max-P     22.07  (SE +/- 0.03, N = 3; min 22.03, max 22.12)
  Jetson TX2 Max-Q     17.36  (SE +/- 0.00, N = 3; min 17.35, max 17.37)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.20.1 - Backend: BLAS (Nodes Per Second, more is better)

  Jetson AGX Xavier   47.62  (SE +/- 0.62, N = 7; min 46.47, max 51.3)
  Jetson Nano         15.37  (SE +/- 0.03, N = 3; min 15.31, max 15.4)

  (CXX) g++ options: -lpthread -lz

LeelaChessZero 0.20.1 - Backend: CUDA + cuDNN (Nodes Per Second, more is better)

  Jetson AGX Xavier   953  (SE +/- 6.14, N = 3; min 940.76, max 960.65)
  Jetson Nano         140  (SE +/- 0.26, N = 3; min 139.47, max 140.31)

  (CXX) g++ options: -lpthread -lz

LeelaChessZero 0.20.1 - Backend: CUDA + cuDNN FP16 (Nodes Per Second, more is better)

  Jetson AGX Xavier   2515.01  (SE +/- 7.60, N = 3)

  (CXX) g++ options: -lpthread -lz

TTSIOD 3D Renderer

A portable GPL 3D software renderer that supports OpenMP and Intel Threading Building Blocks with many different rendering modes. This version does not use OpenGL but is entirely CPU/software based. Learn more via the OpenBenchmarking.org test page.

TTSIOD 3D Renderer 2.3b - Phong Rendering With Soft-Shadow Mapping (FPS, more is better)

  ASUS TinkerBoard           21.22  (SE +/- 0.27, N = 9; min 20.85, max 23.35)
  Jetson AGX Xavier         133.00  (SE +/- 1.63, N = 12; min 130.64, max 150.98)
  Jetson Nano                40.94  (SE +/- 0.11, N = 3; min 40.77, max 41.15)
  Jetson TX1 Max-P           45.09  (SE +/- 0.04, N = 3; min 45.04, max 45.16)
  Jetson TX2 Max-P           49.26  (SE +/- 0.15, N = 3; min 48.96, max 49.42)
  Jetson TX2 Max-Q           28.85  (SE +/- 0.46, N = 4; min 27.49, max 29.39)
  ODROID-XU4                 41.96  (SE +/- 0.97, N = 9; min 39.77, max 49.13)
  Raspberry Pi 3 Model B+    17.66  (SE +/- 0.16, N = 3; min 17.5, max 17.98)

  (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -lSDL -fopenmp -fwhole-program -lstdc++

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - 7-Zip Compression 16.02 - Compress Speed Test (MIPS, More Is Better)
  ASUS TinkerBoard: 2836 (SE +/- 34.93, N = 3; Min: 2766 / Avg: 2835.67 / Max: 2875)
  Jetson AGX Xavier: 19212 (SE +/- 274.18, N = 12; Min: 16806 / Avg: 19211.83 / Max: 19780)
  Jetson Nano: 4049 (SE +/- 18.00, N = 3; Min: 4025 / Avg: 4048.67 / Max: 4084)
  Jetson TX1 Max-P: 4508 (SE +/- 13.43, N = 3; Min: 4483 / Avg: 4508 / Max: 4529)
  Jetson TX2 Max-P: 5593 (SE +/- 20.85, N = 3; Min: 5571 / Avg: 5593.33 / Max: 5635)
  Jetson TX2 Max-Q: 3294 (SE +/- 13.05, N = 3; Min: 3269 / Avg: 3294 / Max: 3313)
  ODROID-XU4: 4120 (SE +/- 89.16, N = 12; Min: 3798 / Avg: 4120.25 / Max: 4934)
  Raspberry Pi 3 Model B+: 2013 (SE +/- 23.74, N = 11; Min: 1778 / Avg: 2012.64 / Max: 2056)
  1. (CXX) g++ options: -pipe -lpthread

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. The test is multi-threaded (16 threads per core), shoots 8 rays per pixel for anti-aliasing, and generates a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better)
  ASUS TinkerBoard: 1718 (SE +/- 22.09, N = 3; Min: 1673.94 / Avg: 1718.09 / Max: 1741.49)
  Jetson AGX Xavier: 355 (SE +/- 7.17, N = 9; Min: 300.33 / Avg: 354.78 / Max: 364.69)
  Jetson Nano: 921 (SE +/- 0.35, N = 3; Min: 920.71 / Avg: 921.21 / Max: 921.88)
  Jetson TX1 Max-P: 753 (SE +/- 10.23, N = 3; Min: 741.18 / Avg: 752.56 / Max: 772.97)
  Jetson TX2 Max-P: 585 (SE +/- 49.09, N = 9; Min: 531.98 / Avg: 585.26 / Max: 977.76)
  Jetson TX2 Max-Q: 869 (SE +/- 1.44, N = 3; Min: 865.79 / Avg: 868.66 / Max: 870.17)
  ODROID-XU4: 827 (SE +/- 29.65, N = 9; Min: 747.78 / Avg: 827.04 / Max: 1020.68)
  Raspberry Pi 3 Model B+: 2030 (SE +/- 2.46, N = 3; Min: 2025.91 / Avg: 2029.72 / Max: 2034.33)
  1. (CC) gcc options: -lm -lpthread -O3

Rust Prime Benchmark

Based on petehunt/rust-benchmark, this is a multi-threaded prime number benchmark written in Rust. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Rust Prime Benchmark - Prime Number Test To 200,000,000 (Seconds, Fewer Is Better)
  ASUS TinkerBoard: 1821.05 (SE +/- 187.90, N = 6; Min: 1407.01 / Avg: 1821.05 / Max: 2575.57)
  Jetson AGX Xavier: 32.37 (SE +/- 0.00, N = 3; Min: 32.36 / Avg: 32.37 / Max: 32.37)
  Jetson Nano: 150.19 (SE +/- 0.22, N = 3; Min: 149.76 / Avg: 150.19 / Max: 150.43)
  Jetson TX1 Max-P: 128.45 (SE +/- 0.77, N = 3; Min: 127 / Avg: 128.45 / Max: 129.6)
  Jetson TX2 Max-P: 104.96 (SE +/- 0.04, N = 3; Min: 104.9 / Avg: 104.96 / Max: 105.04)
  Jetson TX2 Max-Q: 170.25 (SE +/- 0.09, N = 3; Min: 170.09 / Avg: 170.25 / Max: 170.41)
  ODROID-XU4: 574.11 (SE +/- 0.37, N = 3; Min: 573.68 / Avg: 574.11 / Max: 574.84)
  Raspberry Pi 3 Model B+: 1097.69 (SE +/- 1.55, N = 3; Min: 1095.85 / Avg: 1097.69 / Max: 1100.76)
  1. (CC) gcc options: -pie -nodefaultlibs -ldl -lrt -lpthread -lgcc_s -lc -lm -lutil
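The Rust source itself is not reproduced here; as a rough illustration of the workload (a hypothetical Python sketch, not petehunt/rust-benchmark), the range is split into chunks and each chunk is scanned by trial division on a worker thread. Note that CPython's GIL keeps these threads from running truly in parallel, unlike the Rust original:

```python
from concurrent.futures import ThreadPoolExecutor

def is_prime(n):
    """Trial-division primality check, the classic workload of simple prime benchmarks."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def count_primes(limit, workers=4):
    """Count primes below `limit`, splitting the range across worker threads."""
    chunk = limit // workers + 1
    ranges = [(t * chunk, min((t + 1) * chunk, limit)) for t in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        counts = pool.map(lambda r: sum(1 for n in range(*r) if is_prime(n)), ranges)
    return sum(counts)

print(count_primes(100))  # -> 25
```

The benchmark run above uses a limit of 200,000,000, which is why the slower boards take twenty minutes or more.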

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Zstd Compression 1.3.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19 (Seconds, Fewer Is Better)
  ASUS TinkerBoard: 496.62 (SE +/- 2.16, N = 3; Min: 493.98 / Avg: 496.62 / Max: 500.9)
  Jetson AGX Xavier: 80.06 (SE +/- 0.91, N = 3; Min: 78.26 / Avg: 80.06 / Max: 81.18)
  Jetson Nano: 129.87 (SE +/- 0.23, N = 3; Min: 129.55 / Avg: 129.87 / Max: 130.3)
  Jetson TX1 Max-P: 145.80 (SE +/- 0.42, N = 3; Min: 144.97 / Avg: 145.8 / Max: 146.33)
  Jetson TX2 Max-P: 144.97 (SE +/- 0.29, N = 3; Min: 144.4 / Avg: 144.97 / Max: 145.39)
  Jetson TX2 Max-Q: 253.80 (SE +/- 1.02, N = 3; Min: 252.57 / Avg: 253.8 / Max: 255.83)
  Raspberry Pi 3 Model B+: 342.23 (SE +/- 1.03, N = 3; Min: 340.32 / Avg: 342.23 / Max: 343.87)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, Fewer Is Better)
  ASUS TinkerBoard: 279.05 (SE +/- 2.51, N = 5; Min: 271.5 / Avg: 279.05 / Max: 286.25)
  Jetson AGX Xavier: 54.47 (SE +/- 0.61, N = 5; Min: 53.06 / Avg: 54.47 / Max: 56.73)
  Jetson Nano: 104.77 (SE +/- 0.83, N = 5; Min: 103.14 / Avg: 104.77 / Max: 107.75)
  Jetson TX1 Max-P: 79.20 (SE +/- 0.74, N = 5; Min: 76.88 / Avg: 79.2 / Max: 80.64)
  Jetson TX2 Max-P: 65.07 (SE +/- 0.15, N = 5; Min: 64.8 / Avg: 65.07 / Max: 65.64)
  Jetson TX2 Max-Q: 104.28 (SE +/- 0.18, N = 5; Min: 103.77 / Avg: 104.28 / Max: 104.76)
  ODROID-XU4: 97.03 (SE +/- 0.31, N = 5; Min: 96.22 / Avg: 97.03 / Max: 98.12)
  Raspberry Pi 3 Model B+: 339.53 (SE +/- 0.98, N = 5; Min: 337.11 / Avg: 339.53 / Max: 342.57)
  1. (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm

OpenCV Benchmark

A stress benchmark that measures the time consumed by the installed OpenCV libraries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OpenCV Benchmark 3.3.0 (Seconds, Fewer Is Better)
  Jetson AGX Xavier: 128.00 (SE +/- 1.57, N = 3; Min: 125.26 / Avg: 127.83 / Max: 130.69)
  Jetson Nano: 271.04 (SE +/- 4.66, N = 9; Min: 261.63 / Avg: 271.04 / Max: 304.63)
  Jetson TX2 Max-P: 296.00 (SE +/- 0.27, N = 3; Min: 295.74 / Avg: 296.1 / Max: 296.62)
  Jetson TX2 Max-Q: 493.00 (SE +/- 5.74, N = 3; Min: 486.66 / Avg: 492.74 / Max: 504.22)
  ODROID-XU4: 520.70 (SE +/- 5.31, N = 3; Min: 514.99 / Avg: 520.7 / Max: 531.3)
  Raspberry Pi 3 Model B+: 2.74 (no error data reported)
  1. (CXX) g++ options: -std=c++11 -rdynamic

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
  ASUS TinkerBoard: 11502 (SE +/- 854.75, N = 9; Min: 9624 / Avg: 11502.33 / Max: 16297)
  Jetson AGX Xavier: 3007 (SE +/- 4.67, N = 3; Min: 2998 / Avg: 3006.67 / Max: 3014)
  Jetson Nano: 7084 (SE +/- 37.23, N = 3; Min: 7031 / Avg: 7084.33 / Max: 7156)
  Jetson TX1 Max-P: 6339 (SE +/- 18.55, N = 3; Min: 6303 / Avg: 6339.33 / Max: 6364)
  Jetson TX2 Max-P: 5408 (SE +/- 33.86, N = 3; Min: 5366 / Avg: 5408 / Max: 5475)
  Jetson TX2 Max-Q: 8735 (SE +/- 42.52, N = 3; Min: 8690 / Avg: 8735 / Max: 8820)
  ODROID-XU4: 5009 (SE +/- 30.99, N = 3; Min: 4968 / Avg: 5009.33 / Max: 5070)
  Raspberry Pi 3 Model B+: 20913 (SE +/- 43.80, N = 3; Min: 20826 / Avg: 20912.67 / Max: 20967)
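The round-and-average approach PyBench takes can be sketched as follows (a simplified, hypothetical harness; names like `time_round` are illustrative, not PyBench's API):

```python
import time

def time_round(func, calls=100000):
    """Time one round of a micro-benchmark: many repetitions of a tiny operation."""
    start = time.perf_counter()
    for _ in range(calls):
        func()
    return (time.perf_counter() - start) * 1000.0  # milliseconds

def average_time(func, rounds=20, calls=100000):
    """PyBench-style: run several rounds of a test and average their times."""
    times = [time_round(func, calls) for _ in range(rounds)]
    return sum(times) / len(times)

# Two hypothetical micro-tests in the spirit of BuiltinFunctionCalls / NestedForLoops;
# the reported score is the sum of the per-test averages.
tests = [lambda: len("abc"), lambda: [i for i in range(10)]]
total_ms = sum(average_time(t, rounds=3, calls=1000) for t in tests)
```

Because each test is averaged over 20 rounds, the totals above mostly reflect single-thread interpreter speed rather than run-to-run noise.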

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Tesseract OCR 4.0.0-beta.1 - Time To OCR 7 Images (Seconds, Fewer Is Better)
  Jetson AGX Xavier: 71.94 (SE +/- 0.89, N = 3; Min: 70.52 / Avg: 71.94 / Max: 73.57)
  Jetson Nano: 132.67 (SE +/- 1.50, N = 3; Min: 130.29 / Avg: 132.67 / Max: 135.45)
  ODROID-XU4: 180.66 (SE +/- 1.38, N = 3; Min: 178.32 / Avg: 180.66 / Max: 183.09)

TTSIOD 3D Renderer

OpenBenchmarking.org - TTSIOD 3D Renderer 2.3b - Performance / Cost - Phong Rendering With Soft-Shadow Mapping (FPS Per Dollar, More Is Better)
  ASUS TinkerBoard ($66 reported cost): 0.32
  Jetson AGX Xavier ($1299 reported cost): 0.10
  Jetson Nano ($99 reported cost): 0.41
  Jetson TX1 Max-P ($499 reported cost): 0.09
  Jetson TX2 Max-P ($599 reported cost): 0.08
  Jetson TX2 Max-Q ($599 reported cost): 0.05
  ODROID-XU4 ($62 reported cost): 0.68
  Raspberry Pi 3 Model B+ ($35 reported cost): 0.50
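These Performance / Cost results are simple arithmetic on the raw scores and the reported board costs: "more is better" scores are divided by the price, while "fewer is better" times (seconds) are multiplied by it. A minimal sketch that reproduces two of the reported numbers:

```python
def perf_per_dollar(score, cost, more_is_better=True):
    """Phoronix-style performance/cost: score divided by price for throughput
    metrics, score multiplied by price for time-based (seconds) metrics."""
    return score / cost if more_is_better else score * cost

# 7-Zip on the Jetson AGX Xavier: 19212 MIPS at a $1299 reported cost
mips_per_dollar = perf_per_dollar(19212, 1299)                      # ~14.79
# C-Ray on the ASUS TinkerBoard: 1718 seconds at a $66 reported cost
seconds_x_dollar = perf_per_dollar(1718, 66, more_is_better=False)  # 113388
```

This is why the cheap boards dominate the per-dollar charts even when their absolute scores trail the Jetson AGX Xavier by a wide margin.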

7-Zip Compression

OpenBenchmarking.org - 7-Zip Compression 16.02 - Performance / Cost - Compress Speed Test (MIPS Per Dollar, More Is Better)
  ASUS TinkerBoard ($66 reported cost): 42.97
  Jetson AGX Xavier ($1299 reported cost): 14.79
  Jetson Nano ($99 reported cost): 40.90
  Jetson TX1 Max-P ($499 reported cost): 9.03
  Jetson TX2 Max-P ($599 reported cost): 9.34
  Jetson TX2 Max-Q ($599 reported cost): 5.50
  ODROID-XU4 ($62 reported cost): 66.45
  Raspberry Pi 3 Model B+ ($35 reported cost): 57.51

C-Ray

OpenBenchmarking.org - C-Ray 1.1 - Performance / Cost - Total Time - 4K, 16 Rays Per Pixel (Seconds x Dollar, Fewer Is Better)
  ASUS TinkerBoard ($66 reported cost): 113388.00
  Jetson AGX Xavier ($1299 reported cost): 461145.00
  Jetson Nano ($99 reported cost): 91179.00
  Jetson TX1 Max-P ($499 reported cost): 375747.00
  Jetson TX2 Max-P ($599 reported cost): 350415.00
  Jetson TX2 Max-Q ($599 reported cost): 520531.00
  ODROID-XU4 ($62 reported cost): 51274.00
  Raspberry Pi 3 Model B+ ($35 reported cost): 71050.00

Rust Prime Benchmark

OpenBenchmarking.org - Rust Prime Benchmark - Performance / Cost - Prime Number Test To 200,000,000 (Seconds x Dollar, Fewer Is Better)
  ASUS TinkerBoard ($66 reported cost): 120189.30
  Jetson AGX Xavier ($1299 reported cost): 42048.63
  Jetson Nano ($99 reported cost): 14868.81
  Jetson TX1 Max-P ($499 reported cost): 64096.55
  Jetson TX2 Max-P ($599 reported cost): 62871.04
  Jetson TX2 Max-Q ($599 reported cost): 101979.75
  ODROID-XU4 ($62 reported cost): 35594.82
  Raspberry Pi 3 Model B+ ($35 reported cost): 38419.15

Zstd Compression

OpenBenchmarking.org - Zstd Compression 1.3.4 - Performance / Cost - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19 (Seconds x Dollar, Fewer Is Better)
  ASUS TinkerBoard ($66 reported cost): 32776.92
  Jetson AGX Xavier ($1299 reported cost): 103997.94
  Jetson Nano ($99 reported cost): 12857.13
  Jetson TX1 Max-P ($499 reported cost): 72754.20
  Jetson TX2 Max-P ($599 reported cost): 86837.03
  Jetson TX2 Max-Q ($599 reported cost): 152026.20
  Raspberry Pi 3 Model B+ ($35 reported cost): 11978.05

FLAC Audio Encoding

OpenBenchmarking.org - FLAC Audio Encoding 1.3.2 - Performance / Cost - WAV To FLAC (Seconds x Dollar, Fewer Is Better)
  ASUS TinkerBoard ($66 reported cost): 18417.30
  Jetson AGX Xavier ($1299 reported cost): 70756.53
  Jetson Nano ($99 reported cost): 10372.23
  Jetson TX1 Max-P ($499 reported cost): 39520.80
  Jetson TX2 Max-P ($599 reported cost): 38976.93
  Jetson TX2 Max-Q ($599 reported cost): 62463.72
  ODROID-XU4 ($62 reported cost): 6015.86
  Raspberry Pi 3 Model B+ ($35 reported cost): 11883.55

PyBench

OpenBenchmarking.org - PyBench 2018-02-16 - Performance / Cost - Total For Average Test Times (Milliseconds x Dollar, Fewer Is Better)
  ASUS TinkerBoard ($66 reported cost): 759132.00
  Jetson AGX Xavier ($1299 reported cost): 3906093.00
  Jetson Nano ($99 reported cost): 701316.00
  Jetson TX1 Max-P ($499 reported cost): 3163161.00
  Jetson TX2 Max-P ($599 reported cost): 3239392.00
  Jetson TX2 Max-Q ($599 reported cost): 5232265.00
  ODROID-XU4 ($62 reported cost): 310558.00
  Raspberry Pi 3 Model B+ ($35 reported cost): 731955.00

CUDA Mini-Nbody

OpenBenchmarking.org - CUDA Mini-Nbody 2015-11-10 - Performance / Cost - Test: Original ((NBody^2)/s Per Dollar, More Is Better)
  Jetson AGX Xavier ($1299 reported cost): 0.04
  Jetson Nano ($99 reported cost): 0.04
  Jetson TX2 Max-P ($599 reported cost): 0.01
  Jetson TX2 Max-Q ($599 reported cost): 0.01

NVIDIA TensorRT Inference

OpenBenchmarking.org - NVIDIA TensorRT Inference - Performance / Cost (Images Per Second Per Dollar, More Is Better; Batch Size as listed, DLA Cores: Disabled)
Reported costs: Jetson AGX Xavier $1299, Jetson Nano $99, Jetson TX2 Max-P $599, Jetson TX2 Max-Q $599. "n/a" marks configurations without a Jetson Nano result.

Neural Network  Precision  Batch  AGX Xavier  Nano  TX2 Max-P  TX2 Max-Q
VGG16           FP16       4      0.16        0.14  0.05       0.04
VGG16           INT8       4      0.23        n/a   0.03       0.02
VGG19           FP16       4      0.13        0.12  0.04       0.04
VGG19           INT8       4      0.20        n/a   0.02       0.02
VGG16           FP16       32     0.19        n/a   0.06       0.05
VGG16           INT8       32     0.37        n/a   0.03       0.03
VGG19           FP16       32     0.16        n/a   0.05       0.04
VGG19           INT8       32     0.30        n/a   0.03       0.02
AlexNet         FP16       4      0.92        1.19  0.44       0.36
AlexNet         INT8       4      0.88        0.85  0.31       0.25
AlexNet         FP16       32     1.57        2.03  0.77       0.62
AlexNet         INT8       32     2.42        1.29  0.50       0.40
ResNet50        FP16       4      0.42        0.41  0.15       0.12
ResNet50        INT8       4      0.69        0.21  0.08       0.07
GoogleNet       FP16       4      0.61        0.84  0.33       0.26
GoogleNet       INT8       4      0.88        0.48  0.19       0.15
ResNet152       FP16       4      0.17        0.16  0.06       0.05
ResNet152       INT8       4      0.29        0.08  0.03       0.02
ResNet50        FP16       32     0.49        0.47  0.19       0.14
ResNet50        INT8       32     0.94        0.25  0.10       0.08
GoogleNet       FP16       32     0.77        1.00  0.39       0.30
GoogleNet       INT8       32     1.30        0.56  0.22       0.17
ResNet152       FP16       32     0.20        0.18  0.07       0.05
ResNet152       INT8       32     0.38        n/a   0.04       0.03

OpenCV Benchmark

OpenBenchmarking.org - OpenCV Benchmark 3.3.0 - Performance / Cost (Seconds x Dollar, Fewer Is Better)
  Jetson AGX Xavier ($1299 reported cost): 166272.00
  Jetson Nano ($99 reported cost): 26832.96
  Jetson TX2 Max-P ($599 reported cost): 177304.00
  Jetson TX2 Max-Q ($599 reported cost): 295307.00
  ODROID-XU4 ($62 reported cost): 32283.40
  Raspberry Pi 3 Model B+ ($35 reported cost): 95.90

GLmark2

OpenBenchmarking.org - GLmark2 - Performance / Cost - Resolution: 1920 x 1080 (Score Per Dollar, More Is Better)
  Jetson AGX Xavier ($1299 reported cost): 2.21
  Jetson Nano ($99 reported cost): 6.53

LeelaChessZero

OpenBenchmarking.org - LeelaChessZero 0.20.1 - Performance / Cost - Backend: BLAS (Nodes Per Second Per Dollar, More Is Better)
  Jetson AGX Xavier ($1299 reported cost): 0.04
  Jetson Nano ($99 reported cost): 0.16

OpenBenchmarking.org - LeelaChessZero 0.20.1 - Performance / Cost - Backend: CUDA + cuDNN (Nodes Per Second Per Dollar, More Is Better)
  Jetson AGX Xavier ($1299 reported cost): 0.73
  Jetson Nano ($99 reported cost): 1.41

OpenBenchmarking.org - LeelaChessZero 0.20.1 - Performance / Cost - Backend: CUDA + cuDNN FP16 (Nodes Per Second Per Dollar, More Is Better)
  Jetson AGX Xavier ($1299 reported cost): 1.94

Tesseract OCR

OpenBenchmarking.org - Tesseract OCR 4.0.0-beta.1 - Performance / Cost - Time To OCR 7 Images (Seconds x Dollar, Fewer Is Better)
  Jetson AGX Xavier ($1299 reported cost): 93450.06
  Jetson Nano ($99 reported cost): 13134.33
  ODROID-XU4 ($62 reported cost): 11200.92

76 Results Shown

CUDA Mini-Nbody
GLmark2
NVIDIA TensorRT Inference:
  VGG16 - FP16 - 4 - Disabled
  VGG16 - INT8 - 4 - Disabled
  VGG19 - FP16 - 4 - Disabled
  VGG19 - INT8 - 4 - Disabled
  VGG16 - FP16 - 32 - Disabled
  VGG16 - INT8 - 32 - Disabled
  VGG19 - FP16 - 32 - Disabled
  VGG19 - INT8 - 32 - Disabled
  AlexNet - FP16 - 4 - Disabled
  AlexNet - INT8 - 4 - Disabled
  AlexNet - FP16 - 32 - Disabled
  AlexNet - INT8 - 32 - Disabled
  ResNet50 - FP16 - 4 - Disabled
  ResNet50 - INT8 - 4 - Disabled
  GoogleNet - FP16 - 4 - Disabled
  GoogleNet - INT8 - 4 - Disabled
  ResNet152 - FP16 - 4 - Disabled
  ResNet152 - INT8 - 4 - Disabled
  ResNet50 - FP16 - 32 - Disabled
  ResNet50 - INT8 - 32 - Disabled
  GoogleNet - FP16 - 32 - Disabled
  GoogleNet - INT8 - 32 - Disabled
  ResNet152 - FP16 - 32 - Disabled
  ResNet152 - INT8 - 32 - Disabled
LeelaChessZero:
  BLAS
  CUDA + cuDNN
  CUDA + cuDNN FP16
TTSIOD 3D Renderer
7-Zip Compression
C-Ray
Rust Prime Benchmark
Zstd Compression
FLAC Audio Encoding
OpenCV Benchmark
PyBench
Tesseract OCR
TTSIOD 3D Renderer:
  Performance / Cost - Phong Rendering With Soft-Shadow Mapping
7-Zip Compression:
  Performance / Cost - Compress Speed Test
C-Ray:
  Performance / Cost - Total Time - 4K, 16 Rays Per Pixel
Rust Prime Benchmark:
  Performance / Cost - Prime Number Test To 200,000,000
Zstd Compression:
  Performance / Cost - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19
FLAC Audio Encoding:
  Performance / Cost - WAV To FLAC
PyBench:
  Performance / Cost - Total For Average Test Times
CUDA Mini-Nbody:
  Performance / Cost - Original
NVIDIA TensorRT Inference:
  Performance / Cost - VGG16 - FP16 - 4 - Disabled
  Performance / Cost - VGG16 - INT8 - 4 - Disabled
  Performance / Cost - VGG19 - FP16 - 4 - Disabled
  Performance / Cost - VGG19 - INT8 - 4 - Disabled
  Performance / Cost - VGG16 - FP16 - 32 - Disabled
  Performance / Cost - VGG16 - INT8 - 32 - Disabled
  Performance / Cost - VGG19 - FP16 - 32 - Disabled
  Performance / Cost - VGG19 - INT8 - 32 - Disabled
  Performance / Cost - AlexNet - FP16 - 4 - Disabled
  Performance / Cost - AlexNet - INT8 - 4 - Disabled
  Performance / Cost - AlexNet - FP16 - 32 - Disabled
  Performance / Cost - AlexNet - INT8 - 32 - Disabled
  Performance / Cost - ResNet50 - FP16 - 4 - Disabled
  Performance / Cost - ResNet50 - INT8 - 4 - Disabled
  Performance / Cost - GoogleNet - FP16 - 4 - Disabled
  Performance / Cost - GoogleNet - INT8 - 4 - Disabled
  Performance / Cost - ResNet152 - FP16 - 4 - Disabled
  Performance / Cost - ResNet152 - INT8 - 4 - Disabled
  Performance / Cost - ResNet50 - FP16 - 32 - Disabled
  Performance / Cost - ResNet50 - INT8 - 32 - Disabled
  Performance / Cost - GoogleNet - FP16 - 32 - Disabled
  Performance / Cost - GoogleNet - INT8 - 32 - Disabled
  Performance / Cost - ResNet152 - FP16 - 32 - Disabled
  Performance / Cost - ResNet152 - INT8 - 32 - Disabled
OpenCV Benchmark:
  Performance / Cost -
GLmark2:
  Performance / Cost - 1920 x 1080
LeelaChessZero:
  Performance / Cost - BLAS
  Performance / Cost - CUDA + cuDNN
  Performance / Cost - CUDA + cuDNN FP16
Tesseract OCR:
  Performance / Cost - Time To OCR 7 Images