Jetson Prep

Benchmarks of the NVIDIA Jetson AGX Xavier, Jetson TX2 (Max-P and Max-Q modes), Jetson TX1 Max-P, Raspberry Pi 3 Model B+, and ASUS TinkerBoard via the Phoronix Test Suite. (The result file itself is titled after the jetson_tx1 system: ARMv8 rev 1 with NVIDIA Tegra X1 on Ubuntu 16.04.)

HTML result view exported from: https://openbenchmarking.org/result/1903178-SP-1903167SP11&grw&sor.

System Details

Jetson AGX Xavier:
  Processor: ARMv8 rev 0 @ 2.27GHz (8 Cores)
  Motherboard: jetson-xavier
  Memory: 16384MB
  Disk: 31GB HBG4a2
  Graphics: NVIDIA Tegra Xavier
  Monitor: VE228
  OS: Ubuntu 18.04
  Kernel: 4.9.108-tegra (aarch64)
  Desktop: Unity 7.5.0
  Display Server: X Server 1.19.6
  Display Driver: NVIDIA 31.0.2
  OpenGL: 4.6.0
  Vulkan: 1.1.76
  Compiler: GCC 7.3.0 + CUDA 10.0
  File-System: ext4
  Screen Resolution: 1920x1080

Jetson TX2 Max-P:
  Processor: ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads)
  Motherboard: quill
  Memory: 8192MB
  Disk: 31GB 032G34
  Graphics: NVIDIA TEGRA
  OS: Ubuntu 16.04
  Kernel: 4.4.38-tegra (aarch64)
  Desktop: Unity 7.4.0
  Display Server: X Server 1.18.4
  Display Driver: NVIDIA 28.2.1
  OpenGL: 4.5.0
  Compiler: GCC 5.4.0 20160609 + CUDA 9.0

Jetson TX2 Max-Q:
  Processor: ARMv8 rev 3 @ 1.27GHz (4 Cores / 6 Threads)
  All other reported details match the Jetson TX2 Max-P.

Raspberry Pi 3 Model B+:
  Processor: ARMv7 rev 4 @ 1.40GHz (4 Cores)
  Motherboard: BCM2835 Raspberry Pi 3 Model B Plus Rev 1.3
  Memory: 926MB
  Disk: 32GB GB2MW
  Graphics: BCM2708
  OS: Raspbian 9.6
  Kernel: 4.19.23-v7+ (armv7l)
  Desktop: LXDE
  Display Server: X Server 1.19.2
  Compiler: GCC 6.3.0 20170516
  Screen Resolution: 656x416

ASUS TinkerBoard:
  Processor: ARMv7 rev 1 @ 1.80GHz (4 Cores)
  Motherboard: Rockchip (Device Tree)
  Memory: 2048MB
  Disk: 32GB GB1QT
  OS: Debian 9.0
  Kernel: 4.4.16-00006-g4431f98-dirty (armv7l)
  Display Server: X Server 1.18.4
  Screen Resolution: 1024x768

Jetson TX1 Max-P:
  Processor: ARMv8 rev 1 @ 1.73GHz (4 Cores)
  Motherboard: jetson_tx1
  Memory: 4096MB
  Disk: 16GB 016G32
  Graphics: NVIDIA Tegra X1
  Monitor: VE228
  OS: Ubuntu 16.04
  Kernel: 4.4.38-tegra (aarch64)
  Desktop: Unity 7.4.5
  Display Driver: NVIDIA 28.1.0
  OpenGL: 4.5.0
  Vulkan: 1.0.8
  Compiler: GCC 5.4.0 20160609
  Screen Resolution: 1920x1080

Compiler Details
- Jetson AGX Xavier: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
- Jetson TX2 Max-P: --build=aarch64-linux-gnu --disable-browser-plugin --disable-libquadmath --disable-werror --enable-checking=release --enable-clocale=gnu --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --target=aarch64-linux-gnu --with-arch-directory=aarch64 --with-default-libstdcxx-abi=new -v
- Jetson TX2 Max-Q: --build=aarch64-linux-gnu --disable-browser-plugin --disable-libquadmath --disable-werror --enable-checking=release --enable-clocale=gnu --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --target=aarch64-linux-gnu --with-arch-directory=aarch64 --with-default-libstdcxx-abi=new -v
- Raspberry Pi 3 Model B+: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv6 --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfp --with-target-system-zlib -v
- ASUS TinkerBoard: --build=arm-linux-gnueabihf --disable-browser-plugin --disable-libitm --disable-libquadmath --disable-sjlj-exceptions --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-plugin --enable-shared --enable-threads=posix --host=arm-linux-gnueabihf --program-prefix=arm-linux-gnueabihf- --target=arm-linux-gnueabihf --with-arch-directory=arm --with-arch=armv7-a --with-default-libstdcxx-abi=new --with-float=hard --with-fpu=vfpv3-d16 --with-mode=thumb --with-target-system-zlib -v
- Jetson TX1 Max-P: --build=aarch64-linux-gnu --disable-browser-plugin --disable-libquadmath --disable-werror --enable-checking=release --enable-clocale=gnu --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-gtk-cairo --enable-java-awt=gtk --enable-java-home --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --target=aarch64-linux-gnu --with-arch-directory=aarch64 --with-default-libstdcxx-abi=new -v

Processor Details
- Jetson AGX Xavier: Scaling Governor: tegra_cpufreq schedutil
- Jetson TX2 Max-P: Scaling Governor: tegra_cpufreq schedutil
- Jetson TX2 Max-Q: Scaling Governor: tegra_cpufreq schedutil
- Raspberry Pi 3 Model B+: Scaling Governor: BCM2835 Freq ondemand
- ASUS TinkerBoard: Scaling Governor: cpufreq-dt interactive
- Jetson TX1 Max-P: Scaling Governor: tegra-cpufreq interactive

Python Details
- Jetson AGX Xavier: Python 2.7.15rc1 + Python 3.6.7
- Jetson TX2 Max-P: Python 2.7.12 + Python 3.5.2
- Jetson TX2 Max-Q: Python 2.7.12 + Python 3.5.2
- Raspberry Pi 3 Model B+: Python 2.7.13 + Python 3.5.3
- ASUS TinkerBoard: Python 2.7.13 + Python 3.5.3
- Jetson TX1 Max-P: Python 2.7.12 + Python 3.5.2

Results Overview

Benchmarks run: NVIDIA TensorRT Inference (AlexNet, GoogleNet, ResNet50, ResNet152, VGG16, and VGG19 at FP16 and INT8 precision, batch sizes 4 and 32, DLA cores disabled), FLAC audio encoding (WAV to FLAC), GLmark2 (1920 x 1080), Tesseract OCR, LeelaChessZero (BLAS, CUDA + cuDNN, and CUDA + cuDNN FP16 backends), CUDA Mini-Nbody, Rust Prime Benchmark, 7-Zip compression, Zstd compression (level 19), C-Ray, TTSIOD 3D Renderer, OpenCV Benchmark, and PyBench. Per-system values for every test appear in the detailed result charts below.
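The per-test figures in this report can be reduced to relative speedups by simple division. A minimal sketch using two of the "more is better" TensorRT results (images per second) reported in the detailed charts below:

```python
# Relative speedup of the Jetson AGX Xavier over the Jetson TX2 Max-P,
# using values taken directly from the result charts in this report.
results = {
    "ResNet50 INT8, batch 4": {"AGX Xavier": 902.78, "TX2 Max-P": 49.97},
    "AlexNet FP16, batch 32": {"AGX Xavier": 2038.0, "TX2 Max-P": 462.0},
}

for test, by_system in results.items():
    speedup = by_system["AGX Xavier"] / by_system["TX2 Max-P"]
    print(f"{test}: {speedup:.1f}x")  # ~18.1x and ~4.4x respectively
```

For "fewer is better" tests (times in seconds), the division is inverted: slower-system time divided by faster-system time.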

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 902.78 (SE +/- 1.86, N = 3)
Jetson TX2 Max-P: 49.97 (SE +/- 0.79, N = 4)
Jetson TX2 Max-Q: 39.15 (SE +/- 0.64, N = 3)
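Each result in these charts carries a standard error (SE) over N runs. Assuming the usual definition (sample standard deviation divided by the square root of N), it can be computed as below; the run times are hypothetical, since the export does not include raw samples:

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample stdev / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical per-run results (images/sec), not actual raw data.
runs = [900.5, 903.1, 904.7]
print(round(statistics.mean(runs), 2), round(standard_error(runs), 2))
```

A small SE relative to the mean (as in most charts here) indicates the runs were tightly clustered.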

FLAC Audio Encoding

WAV To FLAC

FLAC Audio Encoding 1.3.2 (OpenBenchmarking.org), Seconds, Fewer Is Better
Jetson AGX Xavier: 54.47 (SE +/- 0.61, N = 5)
Jetson TX2 Max-P: 65.07 (SE +/- 0.15, N = 5)
Jetson TX1 Max-P: 79.20 (SE +/- 0.74, N = 5)
Jetson TX2 Max-Q: 104.28 (SE +/- 0.18, N = 5)
ASUS TinkerBoard: 279.05 (SE +/- 2.51, N = 5)
Raspberry Pi 3 Model B+: 339.53 (SE +/- 0.98, N = 5)
1. (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 547.50 (SE +/- 0.03, N = 3)
Jetson TX2 Max-P: 92.28 (SE +/- 1.32, N = 12)
Jetson TX2 Max-Q: 72.01 (SE +/- 1.10, N = 12)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 1006 (SE +/- 0.21, N = 3)
Jetson TX2 Max-P: 233 (SE +/- 4.50, N = 3)
Jetson TX2 Max-Q: 179 (SE +/- 2.17, N = 8)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 259.82 (SE +/- 0.26, N = 3)
Jetson TX2 Max-P: 41.91 (SE +/- 0.07, N = 3)
Jetson TX2 Max-Q: 32.67 (SE +/- 0.10, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 224.19 (SE +/- 0.22, N = 3)
Jetson TX2 Max-P: 35.11 (SE +/- 0.36, N = 3)
Jetson TX2 Max-Q: 27.34 (SE +/- 0.34, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 636.00 (SE +/- 1.23, N = 3)
Jetson TX2 Max-P: 111.00 (SE +/- 1.22, N = 3)
Jetson TX2 Max-Q: 86.08 (SE +/- 0.86, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 475.08 (SE +/- 0.10, N = 3)
Jetson TX2 Max-P: 19.91 (SE +/- 0.05, N = 3)
Jetson TX2 Max-Q: 15.79 (SE +/- 0.01, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 203.96 (SE +/- 0.04, N = 3)
Jetson TX2 Max-P: 29.83 (SE +/- 0.05, N = 3)
Jetson TX2 Max-Q: 23.94 (SE +/- 0.07, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 265.81 (SE +/- 0.20, N = 3)
Jetson TX2 Max-P: 14.32 (SE +/- 0.25, N = 4)
Jetson TX2 Max-Q: 11.45 (SE +/- 0.23, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 247.95 (SE +/- 0.12, N = 3)
Jetson TX2 Max-P: 36.87 (SE +/- 0.31, N = 3)
Jetson TX2 Max-Q: 29.83 (SE +/- 0.18, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 172.50 (SE +/- 0.50, N = 3)
Jetson TX2 Max-P: 26.56 (SE +/- 0.38, N = 3)
Jetson TX2 Max-Q: 21.04 (SE +/- 0.34, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 303.78 (SE +/- 0.46, N = 3)
Jetson TX2 Max-P: 17.56 (SE +/- 0.25, N = 6)
Jetson TX2 Max-Q: 14.24 (SE +/- 0.20, N = 5)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 208.76 (SE +/- 0.10, N = 3)
Jetson TX2 Max-P: 32.64 (SE +/- 0.50, N = 4)
Jetson TX2 Max-Q: 25.99 (SE +/- 0.13, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 796 (SE +/- 2.48, N = 3)
Jetson TX2 Max-P: 197 (SE +/- 2.27, N = 3)
Jetson TX2 Max-Q: 156 (SE +/- 1.90, N = 12)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 394.66 (SE +/- 0.23, N = 3)
Jetson TX2 Max-P: 15.92 (SE +/- 0.06, N = 3)
Jetson TX2 Max-Q: 12.59 (SE +/- 0.03, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 1146.00 (SE +/- 4.31, N = 3)
Jetson TX2 Max-P: 113.00 (SE +/- 1.65, N = 3)
Jetson TX2 Max-Q: 88.88 (SE +/- 1.32, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 1200 (SE +/- 1.82, N = 3)
Jetson TX2 Max-P: 264 (SE +/- 7.77, N = 12)
Jetson TX2 Max-Q: 216 (SE +/- 3.03, N = 6)

GLmark2

Resolution: 1920 x 1080

GLmark2 (OpenBenchmarking.org), Score, More Is Better
Jetson AGX Xavier: 2876

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 1143 (SE +/- 2.59, N = 3)
Jetson TX2 Max-P: 184 (SE +/- 2.79, N = 5)
Jetson TX2 Max-Q: 148 (SE +/- 0.91, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 1215.08 (SE +/- 0.25, N = 3)
Jetson TX2 Max-P: 59.69 (SE +/- 0.04, N = 3)
Jetson TX2 Max-Q: 47.15 (SE +/- 0.08, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 2038 (SE +/- 2.07, N = 3)
Jetson TX2 Max-P: 462 (SE +/- 7.68, N = 12)
Jetson TX2 Max-Q: 374 (SE +/- 2.82, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 1693 (SE +/- 8.72, N = 3)
Jetson TX2 Max-P: 130 (SE +/- 0.74, N = 3)
Jetson TX2 Max-Q: 104 (SE +/- 0.07, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 3143 (SE +/- 1.06, N = 3)
Jetson TX2 Max-P: 301 (SE +/- 0.52, N = 3)
Jetson TX2 Max-Q: 237 (SE +/- 1.39, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 493.22 (SE +/- 0.81, N = 3)
Jetson TX2 Max-P: 22.07 (SE +/- 0.03, N = 3)
Jetson TX2 Max-Q: 17.36 (SE +/- 0.00, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (OpenBenchmarking.org), Images Per Second, More Is Better
Jetson AGX Xavier: 372.73 (SE +/- 1.59, N = 3)
Jetson TX2 Max-P: 18.29 (SE +/- 0.14, N = 3)
Jetson TX2 Max-Q: 14.50 (SE +/- 0.15, N = 3)

Tesseract OCR

Time To OCR 7 Images

Tesseract OCR 4.0.0-beta.1 (OpenBenchmarking.org), Seconds, Fewer Is Better
Jetson AGX Xavier: 71.94 (SE +/- 0.89, N = 3)

LeelaChessZero

Backend: BLAS

LeelaChessZero 0.20.1 (OpenBenchmarking.org), Nodes Per Second, More Is Better
Jetson AGX Xavier: 47.62 (SE +/- 0.62, N = 7)
1. (CXX) g++ options: -lpthread -lz

CUDA Mini-Nbody

Test: Original

CUDA Mini-Nbody 2015-11-10 (OpenBenchmarking.org), (NBody^2)/s, More Is Better
Jetson AGX Xavier: 47.13 (SE +/- 0.00, N = 3)
Jetson TX2 Max-P: 8.24 (SE +/- 0.01, N = 3)
Jetson TX2 Max-Q: 6.77 (SE +/- 0.03, N = 3)
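The (NBody^2)/s unit reflects that an all-pairs N-body step evaluates N*N body-body interactions. A sketch of how such a throughput figure can be derived; the exact body count, scaling, and units used by CUDA Mini-Nbody are assumptions here, not taken from the benchmark source:

```python
def interactions_per_second(n_bodies, seconds_per_step):
    """All-pairs N-body throughput: N*N interactions per simulation step,
    divided by the wall-clock time of one step."""
    return (n_bodies ** 2) / seconds_per_step

# Hypothetical example: 30720 bodies stepped in 0.02 s.
print(f"{interactions_per_second(30720, 0.02):.3e}")
```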

LeelaChessZero

Backend: CUDA + cuDNN

LeelaChessZero 0.20.1 (OpenBenchmarking.org), Nodes Per Second, More Is Better
Jetson AGX Xavier: 952.89 (SE +/- 6.14, N = 3)
1. (CXX) g++ options: -lpthread -lz

LeelaChessZero

Backend: CUDA + cuDNN FP16

LeelaChessZero 0.20.1 (OpenBenchmarking.org), Nodes Per Second, More Is Better
Jetson AGX Xavier: 2515.01 (SE +/- 7.60, N = 3)
1. (CXX) g++ options: -lpthread -lz

Rust Prime Benchmark

Prime Number Test To 200,000,000

Rust Prime Benchmark (OpenBenchmarking.org), Seconds, Fewer Is Better
Jetson AGX Xavier: 32.37 (SE +/- 0.00, N = 3)
Jetson TX2 Max-P: 104.96 (SE +/- 0.04, N = 3)
Jetson TX1 Max-P: 128.45 (SE +/- 0.77, N = 3)
Jetson TX2 Max-Q: 170.25 (SE +/- 0.09, N = 3)
Raspberry Pi 3 Model B+: 1097.69 (SE +/- 1.55, N = 3)
ASUS TinkerBoard: 1821.05 (SE +/- 187.90, N = 6)
1. (CC) gcc options: -pie -nodefaultlibs

7-Zip Compression

Compress Speed Test

7-Zip Compression 16.02 (OpenBenchmarking.org), MIPS, More Is Better
Jetson AGX Xavier: 19212 (SE +/- 274.18, N = 12)
Jetson TX2 Max-P: 5593 (SE +/- 20.85, N = 3)
Jetson TX1 Max-P: 4508 (SE +/- 13.43, N = 3)
Jetson TX2 Max-Q: 3294 (SE +/- 13.05, N = 3)
ASUS TinkerBoard: 2836 (SE +/- 34.93, N = 3)
Raspberry Pi 3 Model B+: 2013 (SE +/- 23.74, N = 11)
1. (CXX) g++ options: -pipe -lpthread

Zstd Compression

Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

Zstd Compression 1.3.4 (OpenBenchmarking.org), Seconds, Fewer Is Better
Jetson AGX Xavier: 80.06 (SE +/- 0.91, N = 3)
Jetson TX2 Max-P: 144.97 (SE +/- 0.29, N = 3)
Jetson TX1 Max-P: 145.80 (SE +/- 0.42, N = 3)
Jetson TX2 Max-Q: 253.80 (SE +/- 1.02, N = 3)
Raspberry Pi 3 Model B+: 342.23 (SE +/- 1.03, N = 3)
ASUS TinkerBoard: 496.62 (SE +/- 2.16, N = 3)
1. (CC) gcc options: -O3 -pthread -lz -llzma

C-Ray

Total Time - 4K, 16 Rays Per Pixel

C-Ray 1.1 (OpenBenchmarking.org), Seconds, Fewer Is Better
Jetson AGX Xavier: 355 (SE +/- 7.17, N = 9)
Jetson TX2 Max-P: 585 (SE +/- 49.09, N = 9)
Jetson TX1 Max-P: 753 (SE +/- 10.23, N = 3)
Jetson TX2 Max-Q: 869 (SE +/- 1.44, N = 3)
ASUS TinkerBoard: 1718 (SE +/- 22.09, N = 3)
Raspberry Pi 3 Model B+: 2030 (SE +/- 2.46, N = 3)
1. (CC) gcc options: -lm -lpthread -O3

TTSIOD 3D Renderer

Phong Rendering With Soft-Shadow Mapping

TTSIOD 3D Renderer 2.3b (OpenBenchmarking.org), FPS, More Is Better
Jetson AGX Xavier: 133.00 (SE +/- 1.63, N = 12)
Jetson TX2 Max-P: 49.26 (SE +/- 0.15, N = 3)
Jetson TX1 Max-P: 45.09 (SE +/- 0.04, N = 3)
Jetson TX2 Max-Q: 28.85 (SE +/- 0.46, N = 4)
ASUS TinkerBoard: 21.22 (SE +/- 0.27, N = 9)
Raspberry Pi 3 Model B+: 17.66 (SE +/- 0.16, N = 3)
1. (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -lSDL -fopenmp -fwhole-program -lstdc++

OpenCV Benchmark

OpenCV Benchmark 3.3.0 (OpenBenchmarking.org), Seconds, Fewer Is Better
Raspberry Pi 3 Model B+: 2.74 (no SE reported in the export)
Jetson AGX Xavier: 128.00 (SE +/- 1.57, N = 3)
Jetson TX2 Max-P: 296.00 (SE +/- 0.27, N = 3)
Jetson TX2 Max-Q: 493.00 (SE +/- 5.74, N = 3)
1. (CXX) g++ options: -std=c++11 -rdynamic

PyBench

Total For Average Test Times

PyBench 2018-02-16 (OpenBenchmarking.org), Milliseconds, Fewer Is Better
Jetson AGX Xavier: 3007 (SE +/- 4.67, N = 3)
Jetson TX2 Max-P: 5408 (SE +/- 33.86, N = 3)
Jetson TX1 Max-P: 6339 (SE +/- 18.55, N = 3)
Jetson TX2 Max-Q: 8735 (SE +/- 42.52, N = 3)
ASUS TinkerBoard: 11502 (SE +/- 854.75, N = 9)
Raspberry Pi 3 Model B+: 20913 (SE +/- 43.80, N = 3)
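To summarize one system's advantage over another across tests with different units, the geometric mean of per-test ratios is a common approach (a sketch of that general practice, not a figure this report publishes). The ratios below are taken from four of the "fewer is better" results above, dividing the TX2 Max-P time by the AGX Xavier time:

```python
import math

# Per-test speedup ratios: Jetson TX2 Max-P time / Jetson AGX Xavier time,
# from the result charts in this report.
ratios = [
    65.07 / 54.47,    # FLAC audio encoding
    104.96 / 32.37,   # Rust Prime Benchmark
    144.97 / 80.06,   # Zstd compression, level 19
    5408 / 3007,      # PyBench
]

geomean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(f"geometric mean speedup: {geomean:.2f}x")  # ~1.88x
```

The geometric mean is preferred over the arithmetic mean here because it is not dominated by a single outlier ratio.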


Phoronix Test Suite v10.8.5