NVIDIA Jetson Nano Benchmarks

ARMv8 rev 1 testing with an NVIDIA Jetson Nano Developer Kit and NVIDIA Tegra graphics on Ubuntu 18.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1908018-HV-1903316HV93
Test categories represented in this result file:

  C/C++ Compiler Tests: 4 tests
  Compression Tests: 3 tests
  CPU Massive: 9 tests
  Common Kernel Benchmarks: 2 tests
  Memory Test Suite: 3 tests
  Multi-Core: 4 tests
  Programmer / Developer System Benchmarks: 2 tests
  Server CPU Tests: 5 tests

Result Runs

  Result Identifier   Date Run         Test Duration
  Jetson Nano         March 30 2019    16 Hours, 5 Minutes
  Nano                July 31 2019     9 Hours, 50 Minutes
  nano                July 31 2019     4 Hours, 35 Minutes
  Nano 5W             August 01 2019   18 Hours, 38 Minutes



System Details

  Jetson Nano (March run):
    Processor: ARMv8 rev 1 @ 1.43GHz (4 Cores)
    Motherboard: jetson-nano
    Disk: 32GB GB1QT
    Graphics: NVIDIA Tegra X1
    Display Driver: NVIDIA 32.1.0
    Compiler: GCC 7.3.0 + CUDA 10.0
  Nano / nano (July runs):
    Processor: ARMv8 rev 1 @ 1.43GHz (4 Cores)
    Motherboard: NVIDIA Jetson Nano Developer Kit
    Disk: 64GB SN64G
    Graphics: NVIDIA TEGRA
    Display Driver: NVIDIA 1.0.0
    Compiler: GCC 7.4.0 + CUDA 10.0
  Nano 5W: as the July runs, but Processor: ARMv8 rev 1 @ 0.92GHz (2 Cores)
  Common to all runs:
    Memory: 4096MB
    Monitor: VE228
    Network: Realtek RTL8111/8168/8411
    OS: Ubuntu 18.04, Kernel: 4.9.140-tegra (aarch64)
    Desktop: Unity 7.5.0, Display Server: X Server 1.19.6
    OpenGL: 4.6.0, Vulkan: 1.1.85
    File-System: ext4, Screen Resolution: 1920x1080

  Compiler Details: --build=aarch64-linux-gnu --disable-libquadmath --disable-libquadmath-support --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-fix-cortex-a53-843419 --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-plugin --enable-shared --enable-threads=posix --host=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --target=aarch64-linux-gnu --with-default-libstdcxx-abi=new --with-gcc-major-version-only -v
  Processor Details: Scaling Governor: tegra-cpufreq schedutil
  Python Details:
    Jetson Nano: Python 2.7.15rc1 + Python 3.6.7
    Nano / nano / Nano 5W: Python 2.7.15+ + Python 3.6.8
  Java Details: Nano, nano, Nano 5W: OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.04.1)

[Results overview table omitted: combined per-test values for cuda-mini-nbody, glmark2, tensorrt-inference, j2dbench, ramspeed, mbw, t-test1, lczero, x264, compress-7zip, build-linux-kernel, compress-xz, and compress-zstd across the Jetson Nano / Nano / nano / Nano 5W runs. Individual results are listed per test below.]

CUDA Mini-Nbody

The CUDA version of Harrism's mini-nbody tests. Learn more via the OpenBenchmarking.org test page.

CUDA Mini-Nbody 2015-11-10 ((NBody^2)/s, more is better)

  Test: Original
    Jetson Nano: 4.09  (SE +/- 0.01, N = 3; Min 4.07 / Max 4.10)
    nano:        3.07  (SE +/- 0.00, N = 3; Min 3.07 / Max 3.07)
    Nano 5W:     3.06  (SE +/- 0.01, N = 3; Min 3.05 / Max 3.07)

  Test: Cache Blocking
    Jetson Nano: 8.47  (SE +/- 0.00, N = 3; Min 8.47 / Max 8.47)
    nano:        5.67  (SE +/- 0.00, N = 3; Min 5.67 / Max 5.67)
    Nano 5W:     5.67  (SE +/- 0.00, N = 3; Min 5.67 / Max 5.67)

  Test: Loop Unrolling
    Jetson Nano: 8.93  (SE +/- 0.03, N = 3; Min 8.87 / Max 8.95)
    nano:        6.00  (SE +/- 0.00, N = 3; Min 6.00 / Max 6.00)
    Nano 5W:     6.00  (SE +/- 0.00, N = 3; Min 6.00 / Max 6.00)

  Test: SOA Data Layout
    Jetson Nano: 3.66  (SE +/- 0.00, N = 3; Min 3.66 / Max 3.67)
    nano:        2.44  (SE +/- 0.00, N = 3; Min 2.44 / Max 2.44)
    Nano 5W:     2.44  (SE +/- 0.00, N = 3; Min 2.44 / Max 2.44)

  Test: Flush Denormals To Zero
    Jetson Nano: 3.66  (SE +/- 0.00, N = 3; Min 3.66 / Max 3.66)
    nano:        2.44  (SE +/- 0.00, N = 3; Min 2.44 / Max 2.44)
    Nano 5W:     2.44  (SE +/- 0.00, N = 3; Min 2.44 / Max 2.44)

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 276 (Score, more is better; Jetson Nano run only)

  Resolution 800 x 600:   1915
  Resolution 1024 x 768:  1362
  Resolution 1280 x 1024: 904
  Resolution 1920 x 1080: 646

NVIDIA TensorRT Inference

This test profile uses any existing system installation of NVIDIA TensorRT for carrying out inference benchmarks with various neural networks. Learn more via the OpenBenchmarking.org test page.

NVIDIA TensorRT Inference - VGG16, VGG19, AlexNet (Images Per Second, more is better; DLA Cores: Disabled)

  VGG16 - FP16 - Batch Size 1:
    Jetson Nano: 10.29  (SE +/- 0.13, N = 8; Min 9.81 / Max 10.75)
    Nano 5W:      9.15  (SE +/- 0.03, N = 3; Min 9.09 / Max 9.20)

  VGG16 - FP16 - Batch Size 4:
    Jetson Nano: 14.18  (SE +/- 0.08, N = 3; Min 14.03 / Max 14.31)
    Nano 5W:     10.92  (SE +/- 0.00, N = 3; Min 10.91 / Max 10.93)

  VGG16 - FP16 - Batch Size 8:
    Jetson Nano: 14.60  (SE +/- 0.00, N = 3; Min 14.59 / Max 14.60)
    Nano 5W:     10.99  (SE +/- 0.01, N = 3; Min 10.98 / Max 11.00)

  VGG19 - FP16 - Batch Size 1:
    Jetson Nano: 8.70  (SE +/- 0.09, N = 3; Min 8.52 / Max 8.84)
    Nano 5W:     7.64  (SE +/- 0.00, N = 3; Min 7.64 / Max 7.65)

  VGG19 - FP16 - Batch Size 4:
    Jetson Nano: 11.61  (SE +/- 0.08, N = 3; Min 11.46 / Max 11.70)
    Nano 5W:      8.83  (SE +/- 0.01, N = 3; Min 8.82 / Max 8.84)

  AlexNet - FP16 - Batch Size 1:
    Jetson Nano: 54.86  (SE +/- 1.49, N = 9; Min 48.05 / Max 59.48)
    Nano 5W:     57.10  (SE +/- 0.16, N = 3; Min 56.78 / Max 57.30)

  AlexNet - FP16 - Batch Size 4:
    Jetson Nano: 115  (SE +/- 2.17, N = 12; Min 100.92 / Max 124.19)
    Nano 5W:     107  (SE +/- 0.39, N = 3; Min 106.05 / Max 107.32)

  AlexNet - FP16 - Batch Size 8:
    Jetson Nano: 134  (SE +/- 0.87, N = 3; Min 132.58 / Max 135.45)
    Nano 5W:     111  (SE +/- 2.21, N = 3; Min 106.79 / Max 113.54)

  AlexNet - FP16 - Batch Size 16:
    Jetson Nano: 169  (SE +/- 1.25, N = 3; Min 166.44 / Max 170.67)
    Nano 5W:     122  (SE +/- 1.89, N = 5; Min 119.21 / Max 129.15)

  AlexNet - FP16 - Batch Size 32:
    Jetson Nano: 202  (SE +/- 0.75, N = 3; Min 201.01 / Max 203.60)
    Nano 5W:     149  (SE +/- 0.07, N = 3; Min 148.46 / Max 148.70)

  AlexNet - INT8 - Batch Size 1:
    Jetson Nano: 40.48  (SE +/- 0.71, N = 3; Min 39.13 / Max 41.54)
    Nano 5W:     38.45  (SE +/- 0.17, N = 3; Min 38.18 / Max 38.76)

  AlexNet - INT8 - Batch Size 4:
    Jetson Nano: 82.30  (SE +/- 1.37, N = 4; Min 79.93 / Max 85.85)
    Nano 5W:     69.06  (SE +/- 0.01, N = 3; Min 69.05 / Max 69.09)

  AlexNet - INT8 - Batch Size 8:
    Jetson Nano: 92.54  (SE +/- 0.96, N = 3; Min 91.54 / Max 94.46)
    Nano 5W:     71.92  (SE +/- 0.03, N = 3; Min 71.87 / Max 71.96)

  AlexNet - INT8 - Batch Size 16:
    Jetson Nano: 113.76  (SE +/- 1.48, N = 3; Min 111.98 / Max 116.71)
    Nano 5W:      83.12  (SE +/- 0.09, N = 3; Min 83.00 / Max 83.31)

  AlexNet - INT8 - Batch Size 32:
    Jetson Nano: 128.55  (SE +/- 0.58, N = 3; Min 127.90 / Max 129.70)
    Nano 5W:      89.88  (SE +/- 0.04, N = 3; Min 89.83 / Max 89.97)

NVIDIA TensorRT Inference - ResNet50, GoogleNet, ResNet152 at batch sizes 1-8 (Images Per Second, more is better; DLA Cores: Disabled)

  ResNet50 - FP16 - Batch Size 1:
    Jetson Nano: 27.37  (SE +/- 0.34, N = 9; Min 25.64 / Max 29.20)
    Nano 5W:     25.70  (SE +/- 0.11, N = 3; Min 25.49 / Max 25.85)

  ResNet50 - FP16 - Batch Size 4:
    Jetson Nano: 40.65  (SE +/- 0.26, N = 3; Min 40.37 / Max 41.18)
    Nano 5W:     30.14  (SE +/- 0.01, N = 3; Min 30.13 / Max 30.16)

  ResNet50 - FP16 - Batch Size 8:
    Jetson Nano: 42.05  (SE +/- 0.20, N = 3; Min 41.64 / Max 42.26)
    Nano 5W:     30.90  (SE +/- 0.02, N = 3; Min 30.86 / Max 30.94)

  ResNet50 - INT8 - Batch Size 1:
    Jetson Nano: 14.61  (SE +/- 0.12, N = 3; Min 14.37 / Max 14.80)
    Nano 5W:     13.79  (SE +/- 0.13, N = 3; Min 13.53 / Max 13.93)

  ResNet50 - INT8 - Batch Size 4:
    Jetson Nano: 20.59  (SE +/- 0.30, N = 3; Min 20.02 / Max 21.05)
    Nano 5W:     16.21  (SE +/- 0.05, N = 3; Min 16.12 / Max 16.28)

  ResNet50 - INT8 - Batch Size 8:
    Jetson Nano: 22.16  (SE +/- 0.15, N = 3; Min 21.96 / Max 22.47)
    Nano 5W:     16.62  (SE +/- 0.00, N = 3; Min 16.62 / Max 16.63)

  GoogleNet - FP16 - Batch Size 1:
    Jetson Nano: 65.61  (SE +/- 0.96, N = 9; Min 62.16 / Max 71.14)
    Nano 5W:     59.39  (SE +/- 0.02, N = 3; Min 59.36 / Max 59.44)

  GoogleNet - FP16 - Batch Size 4:
    Jetson Nano: 85.12  (SE +/- 1.10, N = 12; Min 77.70 / Max 90.70)
    Nano 5W:     66.29  (SE +/- 0.02, N = 3; Min 66.26 / Max 66.34)

  GoogleNet - FP16 - Batch Size 8:
    Jetson Nano: 85.92  (SE +/- 0.10, N = 3; Min 85.71 / Max 86.05)
    Nano 5W:     67.95  (SE +/- 0.03, N = 3; Min 67.90 / Max 68.01)

  GoogleNet - INT8 - Batch Size 1:
    Jetson Nano: 35.87  (SE +/- 0.50, N = 3; Min 35.02 / Max 36.75)
    Nano 5W:     33.25  (SE +/- 0.21, N = 3; Min 32.86 / Max 33.57)

  GoogleNet - INT8 - Batch Size 4:
    Jetson Nano: 47.83  (SE +/- 0.39, N = 3; Min 47.41 / Max 48.61)
    Nano 5W:     37.68  (SE +/- 0.05, N = 3; Min 37.58 / Max 37.75)

  GoogleNet - INT8 - Batch Size 8:
    Jetson Nano: 49.18  (SE +/- 0.47, N = 3; Min 48.25 / Max 49.69)
    Nano 5W:     38.70  (SE +/- 0.02, N = 3; Min 38.66 / Max 38.72)

  ResNet152 - FP16 - Batch Size 1:
    Jetson Nano: 10.09  (SE +/- 0.05, N = 3; Min 10.00 / Max 10.19)
    Nano 5W:     10.12  (SE +/- 0.00, N = 3; Min 10.11 / Max 10.12)

  ResNet152 - FP16 - Batch Size 4:
    Jetson Nano: 15.78  (SE +/- 0.07, N = 3; Min 15.68 / Max 15.92)
    Nano 5W:     11.39  (SE +/- 0.01, N = 3; Min 11.37 / Max 11.41)

  ResNet152 - FP16 - Batch Size 8:
    Jetson Nano: 16.42  (SE +/- 0.08, N = 3; Min 16.31 / Max 16.58)
    Nano 5W:     11.79  (SE +/- 0.00, N = 3; Min 11.78 / Max 11.79)

  ResNet152 - INT8 - Batch Size 1:
    Jetson Nano: 5.45  (SE +/- 0.01, N = 3; Min 5.42 / Max 5.47)
    Nano 5W:     5.32  (SE +/- 0.01, N = 3; Min 5.29 / Max 5.33)

NVIDIA TensorRT Inference - ResNet50, GoogleNet, ResNet152 at batch sizes 16 and 32 (Images Per Second, more is better; DLA Cores: Disabled)

  ResNet50 - FP16 - Batch Size 16:
    Jetson Nano: 44.49  (SE +/- 0.39, N = 3; Min 43.71 / Max 44.91)
    Nano 5W:     32.14  (SE +/- 0.01, N = 3; Min 32.12 / Max 32.15)

  ResNet50 - FP16 - Batch Size 32:
    Jetson Nano: 46.26  (SE +/- 0.01, N = 3; Min 46.24 / Max 46.29)
    Nano 5W:     32.79  (SE +/- 0.01, N = 3; Min 32.78 / Max 32.80)

  ResNet50 - INT8 - Batch Size 16:
    Jetson Nano: 23.82  (SE +/- 0.03, N = 3; Min 23.78 / Max 23.87)
    Nano 5W:     17.31  (SE +/- 0.01, N = 3; Min 17.30 / Max 17.34)

  ResNet50 - INT8 - Batch Size 32:
    Jetson Nano: 25.01  (SE +/- 0.06, N = 3; Min 24.93 / Max 25.13)
    Nano 5W:     17.65  (SE +/- 0.00, N = 3; Min 17.65 / Max 17.66)

  GoogleNet - FP16 - Batch Size 16:
    Jetson Nano: 93.33  (SE +/- 1.84, N = 3; Min 89.67 / Max 95.56)
    Nano 5W:     69.68  (SE +/- 0.03, N = 3; Min 69.63 / Max 69.71)

  GoogleNet - FP16 - Batch Size 32:
    Jetson Nano: 98.75  (SE +/- 0.20, N = 3; Min 98.38 / Max 99.06)
    Nano 5W:     70.57  (SE +/- 0.05, N = 3; Min 70.47 / Max 70.62)

  GoogleNet - INT8 - Batch Size 16:
    Jetson Nano: 52.19  (SE +/- 0.34, N = 3; Min 51.53 / Max 52.67)
    Nano 5W:     39.60  (SE +/- 0.02, N = 3; Min 39.57 / Max 39.62)

  GoogleNet - INT8 - Batch Size 32:
    Jetson Nano: 55.47  (SE +/- 0.21, N = 3; Min 55.07 / Max 55.77)
    Nano 5W:     40.12  (SE +/- 0.00, N = 3; Min 40.12 / Max 40.13)

  ResNet152 - FP16 - Batch Size 16:
    Jetson Nano: 16.98  (SE +/- 0.04, N = 3; Min 16.93 / Max 17.05)
    Nano 5W:     12.13  (SE +/- 0.01, N = 3; Min 12.12 / Max 12.14)

  ResNet152 - FP16 - Batch Size 32:
    Jetson Nano: 17.28  (SE +/- 0.01, N = 3; Min 17.27 / Max 17.29)
    Nano 5W:     12.30  (SE +/- 0.01, N = 3; Min 12.28 / Max 12.31)

Java 2D Microbenchmark

This test runs a series of microbenchmarks to check the performance of the OpenGL-based Java 2D pipeline and the underlying OpenGL drivers. Learn more via the OpenBenchmarking.org test page.

Java 2D Microbenchmark 1.0 (Units Per Second, more is better; Jetson Nano run only)

  Text Rendering:            6226.12   (SE +/- 34.48, N = 4)
  Image Rendering:           897658.51 (SE +/- 1827.16, N = 4)
  Vector Graphics Rendering: 486283.59 (SE +/- 983.45, N = 4)

RAMspeed SMP

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

RAMspeed SMP 3.5.0 (MB/s, more is better; (CC) gcc options: -O3 -march=native)

  Type: Add - Benchmark: Integer      Jetson Nano: 7944   Nano 5W: 6550
  Type: Copy - Benchmark: Integer     Jetson Nano: 9544   Nano 5W: 8048
  Type: Scale - Benchmark: Integer    Jetson Nano: 9142   Nano 5W: 7090
  Type: Triad - Benchmark: Integer    Jetson Nano: 4856   Nano 5W: 5168
  Type: Average - Benchmark: Integer  Jetson Nano: 7840   Nano 5W: 6710

MBW

This is a basic/simple memory (RAM) bandwidth benchmark for memory copy operations. Learn more via the OpenBenchmarking.org test page.

MBW 2018-09-08 - Test: Memory Copy - Array Size: 128 MiB (MiB/s, More Is Better)
  Jetson Nano: 3420 (SE +/- 7.55, N = 3; Min: 3408.9 / Avg: 3420.37 / Max: 3434.61)
  Nano 5W: 2963 (SE +/- 7.90, N = 3; Min: 2951.51 / Avg: 2963.48 / Max: 2978.39)
  1. (CC) gcc options: -O3 -march=native

MBW 2018-09-08 - Test: Memory Copy - Array Size: 512 MiB (MiB/s, More Is Better)
  Jetson Nano: 3439 (SE +/- 12.45, N = 3; Min: 3424.26 / Avg: 3438.75 / Max: 3463.53)
  Nano 5W: 2977 (SE +/- 3.31, N = 3; Min: 2971.71 / Avg: 2976.66 / Max: 2982.94)
  1. (CC) gcc options: -O3 -march=native

MBW 2018-09-08 - Test: Memory Copy, Fixed Block Size - Array Size: 128 MiB (MiB/s, More Is Better)
  Jetson Nano: 3450 (SE +/- 7.52, N = 3; Min: 3436.32 / Avg: 3450.26 / Max: 3462.11)
  Nano 5W: 2963 (SE +/- 10.41, N = 3; Min: 2950.36 / Avg: 2962.8 / Max: 2983.49)
  1. (CC) gcc options: -O3 -march=native

MBW 2018-09-08 - Test: Memory Copy, Fixed Block Size - Array Size: 512 MiB (MiB/s, More Is Better)
  Jetson Nano: 3449 (SE +/- 14.16, N = 3; Min: 3420.87 / Avg: 3448.76 / Max: 3466.99)
  Nano 5W: 2967 (SE +/- 1.52, N = 3; Min: 2964.46 / Avg: 2967.13 / Max: 2969.72)
  1. (CC) gcc options: -O3 -march=native

t-test1

This is a test of t-test1 for basic memory allocator benchmarks. Note that this test profile is currently very basic, and the overall time includes the warmup time of the custom t-test1 compilation. Improvements are welcome. Learn more via the OpenBenchmarking.org test page.
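t-test1 itself is a C program that hammers malloc/free from one or more threads. A loose Python analogue of that allocation churn, with hypothetical block sizes and iteration counts chosen only for illustration (Python's own allocator and the GIL mean this is an analogy, not a port):

```python
import random
import threading
import time

def churn(iterations=20_000, seed=0):
    """Repeatedly allocate and free randomly sized blocks,
    keeping a bounded working set of live allocations."""
    rng = random.Random(seed)
    live = []
    for _ in range(iterations):
        live.append(bytearray(rng.randrange(16, 4096)))  # allocate
        if len(live) > 64:
            live.pop(rng.randrange(len(live)))           # free a random block

def run(threads):
    workers = [threading.Thread(target=churn, args=(20_000, i))
               for i in range(threads)]
    start = time.perf_counter()
    for w in workers: w.start()
    for w in workers: w.join()
    return time.perf_counter() - start

for n in (1, 2):
    print(f"Threads: {n} -> {run(n):.2f} seconds")
```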

t-test1 2017-01-13 - Threads: 1 (Seconds, Fewer Is Better)
  Jetson Nano: 80.31 (SE +/- 0.23, N = 3; Min: 79.97 / Avg: 80.31 / Max: 80.75)
  Nano 5W: 110.70 (SE +/- 0.05, N = 3; Min: 110.62 / Avg: 110.7 / Max: 110.79)
  1. (CC) gcc options: -pthread

t-test1 2017-01-13 - Threads: 2 (Seconds, Fewer Is Better)
  Jetson Nano: 27.35 (SE +/- 0.07, N = 3; Min: 27.23 / Avg: 27.35 / Max: 27.47)
  Nano 5W: 37.54 (SE +/- 0.10, N = 3; Min: 37.41 / Avg: 37.54 / Max: 37.72)
  1. (CC) gcc options: -pthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.20.1 - Backend: BLAS (Nodes Per Second, More Is Better)
  Jetson Nano: 15.34 (SE +/- 0.10, N = 3)
  1. (CXX) g++ options: -lpthread -lz

LeelaChessZero 0.20.1 - Backend: CUDA + cuDNN (Nodes Per Second, More Is Better)
  Jetson Nano: 139 (SE +/- 0.64, N = 3)
  1. (CXX) g++ options: -lpthread -lz

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2018-09-25 - H.264 Video Encoding (Frames Per Second, More Is Better)
  Jetson Nano: 5.12 (SE +/- 0.08, N = 3; Min: 4.98 / Avg: 5.12 / Max: 5.24)
  Nano 5W: 2.31 (SE +/- 0.03, N = 3; Min: 2.25 / Avg: 2.31 / Max: 2.34)
  1. (CC) gcc options: -ldl -lm -lpthread

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02 - Compress Speed Test (MIPS, More Is Better)
  Jetson Nano: 4050 (SE +/- 17.21, N = 3; Min: 4023 / Avg: 4050 / Max: 4082)
  Nano 5W: 2032 (SE +/- 7.09, N = 3; Min: 2023 / Avg: 2032 / Max: 2046)
  1. (CXX) g++ options: -pipe -lpthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 4.18 - Time To Compile (Seconds, Fewer Is Better)
  Jetson Nano: 2379 (SE +/- 13.46, N = 3; Min: 2354.13 / Avg: 2378.69 / Max: 2400.5)
  Nano 5W: 5479 (SE +/- 2.37, N = 3; Min: 5475.42 / Avg: 5479.27 / Max: 5483.59)

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
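The shape of this test — compress one large file at XZ preset 9 and report the wall time — can be sketched with Python's standard-library `lzma` module, which wraps the same liblzma. The payload below is a small synthetic buffer, an assumption standing in for the multi-hundred-megabyte Ubuntu image the real test uses:

```python
import lzma
import os
import time

# Compress a synthetic payload at preset 9, mirroring the XZ test's
# "compress one file at level 9, time it" shape. ~4 MiB of repeating
# pseudo-random structure stands in for the Ubuntu server image.
data = os.urandom(1024) * 4096

start = time.perf_counter()
compressed = lzma.compress(data, preset=9)
elapsed = time.perf_counter() - start

ratio = len(data) / len(compressed)
print(f"Compressed {len(data)} -> {len(compressed)} bytes "
      f"({ratio:.1f}x) in {elapsed:.2f} seconds")
```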

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, Fewer Is Better)
  Jetson Nano: 44.43 (SE +/- 0.86, N = 3)
  1. (CC) gcc options: -pthread -fvisibility=hidden -O2

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.3.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19 (Seconds, Fewer Is Better)
  Jetson Nano: 127.28 (SE +/- 0.22, N = 3)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

71 Results Shown

CUDA Mini-Nbody:
  Original
  Cache Blocking
  Loop Unrolling
  SOA Data Layout
  Flush Denormals To Zero
GLmark2:
  800 x 600
  1024 x 768
  1280 x 1024
  1920 x 1080
NVIDIA TensorRT Inference:
  VGG16 - FP16 - 1 - Disabled
  VGG16 - FP16 - 4 - Disabled
  VGG16 - FP16 - 8 - Disabled
  VGG19 - FP16 - 1 - Disabled
  VGG19 - FP16 - 4 - Disabled
  AlexNet - FP16 - 1 - Disabled
  AlexNet - FP16 - 4 - Disabled
  AlexNet - FP16 - 8 - Disabled
  AlexNet - INT8 - 1 - Disabled
  AlexNet - INT8 - 4 - Disabled
  AlexNet - INT8 - 8 - Disabled
  AlexNet - FP16 - 16 - Disabled
  AlexNet - FP16 - 32 - Disabled
  AlexNet - INT8 - 16 - Disabled
  AlexNet - INT8 - 32 - Disabled
  ResNet50 - FP16 - 1 - Disabled
  ResNet50 - FP16 - 4 - Disabled
  ResNet50 - FP16 - 8 - Disabled
  ResNet50 - INT8 - 1 - Disabled
  ResNet50 - INT8 - 4 - Disabled
  ResNet50 - INT8 - 8 - Disabled
  GoogleNet - FP16 - 1 - Disabled
  GoogleNet - FP16 - 4 - Disabled
  GoogleNet - FP16 - 8 - Disabled
  GoogleNet - INT8 - 1 - Disabled
  GoogleNet - INT8 - 4 - Disabled
  GoogleNet - INT8 - 8 - Disabled
  ResNet152 - FP16 - 1 - Disabled
  ResNet152 - FP16 - 4 - Disabled
  ResNet152 - FP16 - 8 - Disabled
  ResNet152 - INT8 - 1 - Disabled
  ResNet50 - FP16 - 16 - Disabled
  ResNet50 - FP16 - 32 - Disabled
  ResNet50 - INT8 - 16 - Disabled
  ResNet50 - INT8 - 32 - Disabled
  GoogleNet - FP16 - 16 - Disabled
  GoogleNet - FP16 - 32 - Disabled
  GoogleNet - INT8 - 16 - Disabled
  GoogleNet - INT8 - 32 - Disabled
  ResNet152 - FP16 - 16 - Disabled
  ResNet152 - FP16 - 32 - Disabled
Java 2D Microbenchmark:
  Text Rendering
  Image Rendering
  Vector Graphics Rendering
RAMspeed SMP:
  Add - Integer
  Copy - Integer
  Scale - Integer
  Triad - Integer
  Average - Integer
MBW:
  Memory Copy - 128 MiB
  Memory Copy - 512 MiB
  Memory Copy, Fixed Block Size - 128 MiB
  Memory Copy, Fixed Block Size - 512 MiB
t-test1:
  1
  2
LeelaChessZero:
  BLAS
  CUDA + cuDNN
x264
7-Zip Compression
Timed Linux Kernel Compilation
XZ Compression
Zstd Compression