NVIDIA Jetson Nano Benchmarks

ARMv8 rev 1 testing with a jetson-nano and NVIDIA Tegra X1 on Ubuntu 18.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/1903316-HV-NVIDIAJET61&grr.

Jetson Nano:

  Processor: ARMv8 rev 1 @ 1.43GHz (4 Cores)
  Motherboard: jetson-nano
  Memory: 4096MB
  Disk: 32GB GB1QT
  Graphics: NVIDIA Tegra X1
  Monitor: VE228
  Network: Realtek RTL8111/8168/8411
  OS: Ubuntu 18.04
  Kernel: 4.9.140-tegra (aarch64)
  Desktop: Unity 7.5.0
  Display Server: X Server 1.19.6
  Display Driver: NVIDIA 32.1.0
  OpenGL: 4.6.0
  Vulkan: 1.1.85
  Compiler: GCC 7.3.0 + CUDA 10.0
  File-System: ext4
  Screen Resolution: 1920x1080

Compiler Notes: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
Processor Notes: Scaling Governor: tegra-cpufreq schedutil
Python Notes: Python 2.7.15rc1 + Python 3.6.7

Results Overview (Jetson Nano):

build-linux-kernel: Time To Compile: 2378.69 Seconds
lczero: BLAS: 15.34 Nodes Per Second
cuda-mini-nbody: Flush Denormals To Zero: 3.66 (NBody^2)/s
cuda-mini-nbody: SOA Data Layout: 3.66 (NBody^2)/s
cuda-mini-nbody: Original: 4.09 (NBody^2)/s
tensorrt-inference: ResNet50 - FP16 - 1 - Disabled: 27.37 Images Per Second
tensorrt-inference: ResNet152 - INT8 - 1 - Disabled: 5.45 Images Per Second
tensorrt-inference: ResNet152 - FP16 - 1 - Disabled: 10.09 Images Per Second
tensorrt-inference: ResNet152 - FP16 - 32 - Disabled: 17.28 Images Per Second
tensorrt-inference: VGG16 - FP16 - 1 - Disabled: 10.29 Images Per Second
tensorrt-inference: GoogleNet - FP16 - 1 - Disabled: 65.61 Images Per Second
cuda-mini-nbody: Cache Blocking: 8.47 (NBody^2)/s
cuda-mini-nbody: Loop Unrolling: 8.93 (NBody^2)/s
tensorrt-inference: ResNet152 - FP16 - 16 - Disabled: 16.98 Images Per Second
tensorrt-inference: GoogleNet - FP16 - 4 - Disabled: 85.12 Images Per Second
j2dbench: Vector Graphics Rendering: 486283.59 Units Per Second
tensorrt-inference: ResNet152 - FP16 - 8 - Disabled: 16.42 Images Per Second
tensorrt-inference: ResNet152 - FP16 - 4 - Disabled: 15.78 Images Per Second
tensorrt-inference: VGG16 - FP16 - 4 - Disabled: 14.18 Images Per Second
tensorrt-inference: ResNet50 - INT8 - 32 - Disabled: 25.01 Images Per Second
tensorrt-inference: VGG19 - FP16 - 1 - Disabled: 8.70 Images Per Second
tensorrt-inference: ResNet50 - INT8 - 1 - Disabled: 14.61 Images Per Second
tensorrt-inference: AlexNet - FP16 - 4 - Disabled: 115.49 Images Per Second
tensorrt-inference: ResNet50 - FP16 - 32 - Disabled: 46.26 Images Per Second
tensorrt-inference: VGG19 - FP16 - 4 - Disabled: 11.61 Images Per Second
tensorrt-inference: VGG16 - FP16 - 8 - Disabled: 14.60 Images Per Second
tensorrt-inference: GoogleNet - INT8 - 1 - Disabled: 35.87 Images Per Second
tensorrt-inference: GoogleNet - INT8 - 32 - Disabled: 55.47 Images Per Second
tensorrt-inference: ResNet50 - INT8 - 16 - Disabled: 23.82 Images Per Second
tensorrt-inference: AlexNet - FP16 - 1 - Disabled: 54.86 Images Per Second
tensorrt-inference: GoogleNet - FP16 - 32 - Disabled: 98.75 Images Per Second
tensorrt-inference: GoogleNet - INT8 - 16 - Disabled: 52.19 Images Per Second
tensorrt-inference: ResNet50 - FP16 - 16 - Disabled: 44.49 Images Per Second
lczero: CUDA + cuDNN: 139 Nodes Per Second
j2dbench: Image Rendering: 897658.51 Units Per Second
tensorrt-inference: GoogleNet - INT8 - 8 - Disabled: 49.18 Images Per Second
tensorrt-inference: ResNet50 - INT8 - 8 - Disabled: 22.16 Images Per Second
compress-zstd: Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19: 127.28 Seconds
tensorrt-inference: GoogleNet - FP16 - 16 - Disabled: 93.33 Images Per Second
tensorrt-inference: GoogleNet - INT8 - 4 - Disabled: 47.83 Images Per Second
tensorrt-inference: ResNet50 - FP16 - 8 - Disabled: 42.05 Images Per Second
x264: H.264 Video Encoding: 5.12 Frames Per Second
glmark2: 800 x 600: 1915 Score
tensorrt-inference: ResNet50 - INT8 - 4 - Disabled: 20.59 Images Per Second
glmark2: 1920 x 1080: 646 Score
glmark2: 1280 x 1024: 904 Score
glmark2: 1024 x 768: 1362 Score
tensorrt-inference: GoogleNet - FP16 - 8 - Disabled: 85.92 Images Per Second
tensorrt-inference: ResNet50 - FP16 - 4 - Disabled: 40.65 Images Per Second
tensorrt-inference: AlexNet - INT8 - 32 - Disabled: 128.55 Images Per Second
tensorrt-inference: AlexNet - INT8 - 4 - Disabled: 82.30 Images Per Second
tensorrt-inference: AlexNet - FP16 - 32 - Disabled: 202.29 Images Per Second
ramspeed: Average - Integer: 7839.77 MB/s
ramspeed: Copy - Integer: 9544.18 MB/s
ramspeed: Triad - Integer: 4856.02 MB/s
ramspeed: Add - Integer: 7943.83 MB/s
ramspeed: Scale - Integer: 9141.59 MB/s
tensorrt-inference: AlexNet - INT8 - 16 - Disabled: 113.76 Images Per Second
t-test1: Threads 1: 80.31 Seconds
tensorrt-inference: AlexNet - INT8 - 8 - Disabled: 92.54 Images Per Second
tensorrt-inference: AlexNet - INT8 - 1 - Disabled: 40.48 Images Per Second
tensorrt-inference: AlexNet - FP16 - 16 - Disabled: 168.83 Images Per Second
tensorrt-inference: AlexNet - FP16 - 8 - Disabled: 133.74 Images Per Second
compress-7zip: Compress Speed Test: 4050 MIPS
compress-xz: Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9: 44.43 Seconds
t-test1: Threads 2: 27.35 Seconds
j2dbench: Text Rendering: 6226.12 Units Per Second
mbw: Memory Copy - 512 MiB: 3438.75 MiB/s
mbw: Memory Copy, Fixed Block Size - 512 MiB: 3448.76 MiB/s
mbw: Memory Copy - 128 MiB: 3420.37 MiB/s
mbw: Memory Copy, Fixed Block Size - 128 MiB: 3450.26 MiB/s

Timed Linux Kernel Compilation

Time To Compile

Timed Linux Kernel Compilation 4.18 (Seconds, Fewer Is Better)
Jetson Nano: 2378.69 (SE +/- 13.46, N = 3)

LeelaChessZero

Backend: BLAS

LeelaChessZero 0.20.1 (Nodes Per Second, More Is Better)
Jetson Nano: 15.34 (SE +/- 0.10, N = 3)
1. (CXX) g++ options: -lpthread -lz

CUDA Mini-Nbody

Test: Flush Denormals To Zero

CUDA Mini-Nbody 2015-11-10 ((NBody^2)/s, More Is Better)
Jetson Nano: 3.66 (SE +/- 0.00, N = 3)

CUDA Mini-Nbody

Test: SOA Data Layout

CUDA Mini-Nbody 2015-11-10 ((NBody^2)/s, More Is Better)
Jetson Nano: 3.66 (SE +/- 0.00, N = 3)

CUDA Mini-Nbody

Test: Original

CUDA Mini-Nbody 2015-11-10 ((NBody^2)/s, More Is Better)
Jetson Nano: 4.09 (SE +/- 0.01, N = 3)
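
The CUDA Mini-Nbody test is based on Mark Harris's mini-nbody micro-benchmark: an all-pairs n-body step, so each iteration performs N^2 body-body interactions, which is what the (NBody^2)/s metric counts. As a rough sketch of what the "Original" variant times (adapted from that code; SOFTENING and the body layout follow the upstream benchmark, but treat this as an illustration rather than the exact source):

#include <cuda_runtime.h>

#define SOFTENING 1e-9f

typedef struct { float x, y, z, vx, vy, vz; } Body;

// All-pairs force accumulation: one thread per body, n interactions per
// thread, so a full step costs n^2 interactions.
__global__ void bodyForce(Body *p, float dt, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float Fx = 0.0f, Fy = 0.0f, Fz = 0.0f;
    for (int j = 0; j < n; j++) {
        float dx = p[j].x - p[i].x;
        float dy = p[j].y - p[i].y;
        float dz = p[j].z - p[i].z;
        float distSqr = dx * dx + dy * dy + dz * dz + SOFTENING;
        float invDist = rsqrtf(distSqr);       // fast reciprocal square root
        float invDist3 = invDist * invDist * invDist;
        Fx += dx * invDist3;
        Fy += dy * invDist3;
        Fz += dz * invDist3;
    }
    p[i].vx += dt * Fx;
    p[i].vy += dt * Fy;
    p[i].vz += dt * Fz;
}

int main() {
    const int n = 4096;
    Body *p;
    cudaMallocManaged(&p, n * sizeof(Body));   // unified memory suits Tegra's shared DRAM
    for (int i = 0; i < n; i++) {
        p[i].x = 1.0f * i; p[i].y = 2.0f * i; p[i].z = 3.0f * i;
        p[i].vx = p[i].vy = p[i].vz = 0.0f;
    }
    bodyForce<<<(n + 255) / 256, 256>>>(p, 0.01f, n);
    cudaDeviceSynchronize();
    cudaFree(p);
    return 0;
}

The other configurations are variants of this same kernel: SOA Data Layout splits the struct into separate position/velocity arrays, Cache Blocking stages tiles of bodies in shared memory, Loop Unrolling unrolls the inner j loop, and Flush Denormals To Zero is, to my understanding, the same code built with nvcc's -ftz=true flag.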

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 1 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 27.37 (SE +/- 0.34, N = 9)
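
All of the TensorRT numbers in this result file are throughput figures, and the batch-size sweep (1/4/8/16/32) mostly shows throughput climbing as the GPU is given more work per launch; ResNet50 FP16, for example, rises from 27.37 images/sec at batch 1 to 46.26 at batch 32. A hedged sketch of how such a figure is derived against the TensorRT 5 C++ API of this JetPack/CUDA 10.0 era (the serialized engine blob and device-side binding buffers are assumed to exist already; this is not the PTS harness itself):

#include <NvInfer.h>
#include <chrono>
#include <cstdio>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char *msg) override {
        if (severity <= Severity::kWARNING) printf("%s\n", msg);
    }
};

double imagesPerSecond(const void *engine_blob, size_t blob_size,
                       void **bindings, int batch, int iters) {
    Logger logger;
    nvinfer1::IRuntime *runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine *engine =
        runtime->deserializeCudaEngine(engine_blob, blob_size, nullptr);
    nvinfer1::IExecutionContext *ctx = engine->createExecutionContext();

    auto t0 = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < iters; i++)
        ctx->execute(batch, bindings);         // synchronous batched inference
    auto t1 = std::chrono::high_resolution_clock::now();

    ctx->destroy();
    engine->destroy();
    runtime->destroy();
    // images/sec = total images / elapsed seconds
    return double(batch) * iters / std::chrono::duration<double>(t1 - t0).count();
}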

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 1 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 5.45 (SE +/- 0.01, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 1 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 10.09 (SE +/- 0.05, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 17.28 (SE +/- 0.01, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 1 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 10.29 (SE +/- 0.13, N = 8)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 1 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 65.61 (SE +/- 0.96, N = 9)

CUDA Mini-Nbody

Test: Cache Blocking

CUDA Mini-Nbody 2015-11-10 ((NBody^2)/s, More Is Better)
Jetson Nano: 8.47 (SE +/- 0.00, N = 3)

CUDA Mini-Nbody

Test: Loop Unrolling

CUDA Mini-Nbody 2015-11-10 ((NBody^2)/s, More Is Better)
Jetson Nano: 8.93 (SE +/- 0.03, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 16 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 16.98 (SE +/- 0.04, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 85.12 (SE +/- 1.10, N = 12)

Java 2D Microbenchmark

Rendering Test: Vector Graphics Rendering

Java 2D Microbenchmark 1.0 (Units Per Second, More Is Better)
Jetson Nano: 486283.59 (SE +/- 983.45, N = 4)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 8 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 16.42 (SE +/- 0.08, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 15.78 (SE +/- 0.07, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 14.18 (SE +/- 0.08, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 25.01 (SE +/- 0.06, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 1 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 8.70 (SE +/- 0.09, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 1 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 14.61 (SE +/- 0.12, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 115.49 (SE +/- 2.17, N = 12)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 46.26 (SE +/- 0.01, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 11.61 (SE +/- 0.08, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 8 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 14.60 (SE +/- 0.00, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 1 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 35.87 (SE +/- 0.50, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 55.47 (SE +/- 0.21, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 16 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 23.82 (SE +/- 0.03, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 1 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 54.86 (SE +/- 1.49, N = 9)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 98.75 (SE +/- 0.20, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 16 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 52.19 (SE +/- 0.34, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 16 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 44.49 (SE +/- 0.39, N = 3)

LeelaChessZero

Backend: CUDA + cuDNN

LeelaChessZero 0.20.1 (Nodes Per Second, More Is Better)
Jetson Nano: 139 (SE +/- 0.64, N = 3)
1. (CXX) g++ options: -lpthread -lz

Java 2D Microbenchmark

Rendering Test: Image Rendering

Java 2D Microbenchmark 1.0 (Units Per Second, More Is Better)
Jetson Nano: 897658.51 (SE +/- 1827.16, N = 4)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 8 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 49.18 (SE +/- 0.47, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 8 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 22.16 (SE +/- 0.15, N = 3)

Zstd Compression

Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19

Zstd Compression 1.3.4 (Seconds, Fewer Is Better)
Jetson Nano: 127.28 (SE +/- 0.22, N = 3)
1. (CC) gcc options: -O3 -pthread -lz -llzma
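
This test times zstd compressing the Ubuntu 16.04.3 server image at level 19, zstd's slowest standard level. For reference, a minimal single-shot compression at that level through libzstd's simple API looks like the following (a sketch of the call the level maps to, not the PTS test harness itself; link with -lzstd):

#include <zstd.h>
#include <stdio.h>
#include <stdlib.h>

int compress_buffer(const void *src, size_t src_size) {
    size_t bound = ZSTD_compressBound(src_size);   // worst-case compressed size
    void *dst = malloc(bound);
    if (!dst) return 1;

    // Level 19 matches this test profile's "Compression Level 19".
    size_t written = ZSTD_compress(dst, bound, src, src_size, 19);
    if (ZSTD_isError(written)) {
        fprintf(stderr, "zstd: %s\n", ZSTD_getErrorName(written));
        free(dst);
        return 1;
    }
    printf("%zu -> %zu bytes\n", src_size, written);
    free(dst);
    return 0;
}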

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 16 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 93.33 (SE +/- 1.84, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 47.83 (SE +/- 0.39, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 8 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 42.05 (SE +/- 0.20, N = 3)

x264

H.264 Video Encoding

x264 2018-09-25 (Frames Per Second, More Is Better)
Jetson Nano: 5.12 (SE +/- 0.08, N = 3)
1. (CC) gcc options: -ldl -lm -lpthread

GLmark2

Resolution: 800 x 600

GLmark2 276 (Score, More Is Better)
Jetson Nano: 1915

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 20.59 (SE +/- 0.30, N = 3)

GLmark2

Resolution: 1920 x 1080

GLmark2 276 (Score, More Is Better)
Jetson Nano: 646

GLmark2

Resolution: 1280 x 1024

GLmark2 276 (Score, More Is Better)
Jetson Nano: 904

GLmark2

Resolution: 1024 x 768

GLmark2 276 (Score, More Is Better)
Jetson Nano: 1362

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 8 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 85.92 (SE +/- 0.10, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 40.65 (SE +/- 0.26, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 128.55 (SE +/- 0.58, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 82.30 (SE +/- 1.37, N = 4)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 202.29 (SE +/- 0.75, N = 3)

RAMspeed SMP

Type: Average - Benchmark: Integer

RAMspeed SMP 3.5.0 (MB/s, More Is Better)
Jetson Nano: 7839.77
1. (CC) gcc options: -O3 -march=native

RAMspeed SMP

Type: Copy - Benchmark: Integer

RAMspeed SMP 3.5.0 (MB/s, More Is Better)
Jetson Nano: 9544.18
1. (CC) gcc options: -O3 -march=native

RAMspeed SMP

Type: Triad - Benchmark: Integer

RAMspeed SMP 3.5.0 (MB/s, More Is Better)
Jetson Nano: 4856.02
1. (CC) gcc options: -O3 -march=native

RAMspeed SMP

Type: Add - Benchmark: Integer

RAMspeed SMP 3.5.0 (MB/s, More Is Better)
Jetson Nano: 7943.83
1. (CC) gcc options: -O3 -march=native

RAMspeed SMP

Type: Scale - Benchmark: Integer

RAMspeed SMP 3.5.0 (MB/s, More Is Better)
Jetson Nano: 9141.59
1. (CC) gcc options: -O3 -march=native
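
The five RAMspeed integer results above are the classic STREAM-style access patterns: Copy (a[i] = b[i]), Scale (a[i] = m*b[i]), Add (a[i] = b[i] + c[i]), Triad (a[i] = b[i] + m*c[i]), plus their average. A sketch of those four loops (showing the access patterns, not RAMspeed's actual source):

#include <stdint.h>
#include <stddef.h>

// Bandwidth is bytes touched / elapsed time. Copy and Scale stream two
// arrays per iteration while Add and Triad stream three, which is one
// plausible reason the two-array tests score higher in this result file.
void copy_k (intptr_t *a, const intptr_t *b, size_t n)                    { for (size_t i = 0; i < n; i++) a[i] = b[i]; }
void scale_k(intptr_t *a, const intptr_t *b, intptr_t m, size_t n)        { for (size_t i = 0; i < n; i++) a[i] = m * b[i]; }
void add_k  (intptr_t *a, const intptr_t *b, const intptr_t *c, size_t n) { for (size_t i = 0; i < n; i++) a[i] = b[i] + c[i]; }
void triad_k(intptr_t *a, const intptr_t *b, const intptr_t *c,
             intptr_t m, size_t n)                                        { for (size_t i = 0; i < n; i++) a[i] = b[i] + m * c[i]; }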

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 16 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 113.76 (SE +/- 1.48, N = 3)

t-test1

Threads: 1

t-test1 2017-01-13 (Seconds, Fewer Is Better)
Jetson Nano: 80.31 (SE +/- 0.23, N = 3)
1. (CC) gcc options: -pthread
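
t-test1 is the multi-threaded malloc/free stress test from Wolfram Gloger's ptmalloc test suite, so these two results mostly reflect allocator scalability rather than raw compute speed. A much-simplified sketch of the kind of pattern it exercises (the real test uses configurable size distributions, many more rounds, and cross-thread frees; all names and constants here are illustrative):

#include <pthread.h>
#include <stdlib.h>
#include <stdint.h>

#define ROUNDS 1000000
#define SLOTS  128

static void *worker(void *arg) {
    void *slot[SLOTS] = {0};
    unsigned seed = (unsigned)(uintptr_t)arg;
    for (long r = 0; r < ROUNDS; r++) {
        int i = rand_r(&seed) % SLOTS;
        free(slot[i]);                               // free whatever was there (free(NULL) is a no-op)
        slot[i] = malloc(1 + rand_r(&seed) % 4096);  // replace it with a random-sized block
    }
    for (int i = 0; i < SLOTS; i++) free(slot[i]);
    return NULL;
}

int main(void) {
    pthread_t t[2];                                  // "Threads: 2" configuration
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)(uintptr_t)(i + 1));
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}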

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 8 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 92.54 (SE +/- 0.96, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 1 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 40.48 (SE +/- 0.71, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 16 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 168.83 (SE +/- 1.25, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 8 - DLA Cores: Disabled

NVIDIA TensorRT Inference (Images Per Second, More Is Better)
Jetson Nano: 133.74 (SE +/- 0.87, N = 3)

7-Zip Compression

Compress Speed Test

7-Zip Compression 16.02 (MIPS, More Is Better)
Jetson Nano: 4050 (SE +/- 17.21, N = 3)
1. (CXX) g++ options: -pipe -lpthread

XZ Compression

Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9

XZ Compression 5.2.4 (Seconds, Fewer Is Better)
Jetson Nano: 44.43 (SE +/- 0.86, N = 3)
1. (CC) gcc options: -pthread -fvisibility=hidden -O2
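
The XZ test is the same server image compressed at preset 9. A minimal equivalent through liblzma's single-shot easy encoder (a sketch assuming the whole input fits in memory; the benchmark itself drives the xz tool rather than this API; link with -llzma):

#include <lzma.h>
#include <stdlib.h>
#include <stdio.h>

int xz_compress(const uint8_t *in, size_t in_size) {
    size_t bound = lzma_stream_buffer_bound(in_size);  // worst-case .xz size
    uint8_t *out = (uint8_t *)malloc(bound);
    size_t out_pos = 0;
    if (!out) return 1;

    // Preset 9 matches this test profile's "Compression Level 9".
    lzma_ret rc = lzma_easy_buffer_encode(9, LZMA_CHECK_CRC64, NULL,
                                          in, in_size, out, &out_pos, bound);
    if (rc != LZMA_OK) { free(out); return 1; }
    printf("%zu -> %zu bytes\n", in_size, out_pos);
    free(out);
    return 0;
}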

t-test1

Threads: 2

t-test1 2017-01-13 (Seconds, Fewer Is Better)
Jetson Nano: 27.35 (SE +/- 0.07, N = 3)
1. (CC) gcc options: -pthread

Java 2D Microbenchmark

Rendering Test: Text Rendering

Java 2D Microbenchmark 1.0 (Units Per Second, More Is Better)
Jetson Nano: 6226.12 (SE +/- 34.48, N = 4)

MBW

Test: Memory Copy - Array Size: 512 MiB

MBW 2018-09-08 (MiB/s, More Is Better)
Jetson Nano: 3438.75 (SE +/- 12.45, N = 3)
1. (CC) gcc options: -O3 -march=native

MBW

Test: Memory Copy, Fixed Block Size - Array Size: 512 MiB

MBW 2018-09-08 (MiB/s, More Is Better)
Jetson Nano: 3448.76 (SE +/- 14.16, N = 3)
1. (CC) gcc options: -O3 -march=native

MBW

Test: Memory Copy - Array Size: 128 MiB

MBW 2018-09-08 (MiB/s, More Is Better)
Jetson Nano: 3420.37 (SE +/- 7.55, N = 3)
1. (CC) gcc options: -O3 -march=native

MBW

Test: Memory Copy, Fixed Block Size - Array Size: 128 MiB

MBW 2018-09-08 (MiB/s, More Is Better)
Jetson Nano: 3450.26 (SE +/- 7.52, N = 3)
1. (CC) gcc options: -O3 -march=native
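
MBW measures userspace memory-copy bandwidth: the plain "Memory Copy" figures time one memcpy() over the whole array per pass, while the "Fixed Block Size" figures issue the same pass as a series of fixed-size memcpy() chunks. A sketch of the two methods (MiB/s = MiB copied / elapsed seconds; the 262144-byte block size is assumed to be mbw's default):

#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <time.h>

static double seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    size_t size = 512u << 20;                 // 512 MiB, as in the test label
    size_t block = 262144;                    // fixed block size (assumed default)
    char *src = (char *)malloc(size);
    char *dst = (char *)malloc(size);
    if (!src || !dst) return 1;
    memset(src, 1, size);                     // touch pages so the copy is measured, not faults

    double t0 = seconds();
    memcpy(dst, src, size);                   // "Memory Copy" method
    double t1 = seconds();
    for (size_t off = 0; off < size; off += block)
        memcpy(dst + off, src + off, block);  // "Fixed Block Size" method
    double t2 = seconds();

    printf("memcpy:      %.1f MiB/s\n", (double)(size >> 20) / (t1 - t0));
    printf("fixed block: %.1f MiB/s\n", (double)(size >> 20) / (t2 - t1));
    free(src); free(dst);
    return 0;
}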


Phoronix Test Suite v10.8.5