NVIDIA Jetson Nano Benchmarks

ARMv8 rev 1 testing with a jetson-nano and NVIDIA Tegra X1 on Ubuntu 18.04 via the Phoronix Test Suite.

Jetson Nano:
  Processor: ARMv8 rev 1 @ 1.43GHz (4 Cores), Motherboard: jetson-nano, Memory: 4096MB,
  Disk: 32GB GB1QT, Graphics: NVIDIA Tegra X1, Monitor: VE228, Network: Realtek RTL8111/8168/8411
  OS: Ubuntu 18.04, Kernel: 4.9.140-tegra (aarch64), Desktop: Unity 7.5.0, Display Server: X Server 1.19.6,
  Display Driver: NVIDIA 32.1.0, OpenGL: 4.6.0, Vulkan: 1.1.85, Compiler: GCC 7.3.0 + CUDA 10.0,
  File-System: ext4, Screen Resolution: 1920x1080

CUDA Mini-Nbody 2015-11-10
(NBody^2)/s > Higher Is Better
  Test: Original ................... 4.09
  Test: Cache Blocking ............. 8.47
  Test: Loop Unrolling ............. 8.93
  Test: SOA Data Layout ............ 3.66
  Test: Flush Denormals To Zero .... 3.66
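
A note on the metric: Mini-Nbody performs an all-pairs gravity step, so each step computes N^2 body-body interactions, and (NBody^2)/s reports how many of those interactions complete per second; the Cache Blocking, Loop Unrolling, SOA Data Layout, and Flush Denormals To Zero results are optimization variants of the same kernel. The plain-C sketch below shows the inner computation for reference only; the Body layout and SOFTENING value follow the commonly used mini-nbody reference code and are assumptions here, not details taken from this result file.

  #include <math.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define SOFTENING 1e-9f   /* assumed softening term, as in common mini-nbody reference code */

  typedef struct { float x, y, z, vx, vy, vz; } Body;

  /* All-pairs gravity: n*n interactions per call, which is what the
     (NBody^2)/s metric counts. The CUDA test variants above parallelize
     and optimize this same loop nest on the Tegra X1 GPU. */
  static void bodyForce(Body *p, float dt, int n)
  {
      for (int i = 0; i < n; i++) {
          float fx = 0.0f, fy = 0.0f, fz = 0.0f;
          for (int j = 0; j < n; j++) {
              float dx = p[j].x - p[i].x;
              float dy = p[j].y - p[i].y;
              float dz = p[j].z - p[i].z;
              float distSqr = dx * dx + dy * dy + dz * dz + SOFTENING;
              float invDist = 1.0f / sqrtf(distSqr);
              float invDist3 = invDist * invDist * invDist;
              fx += dx * invDist3;
              fy += dy * invDist3;
              fz += dz * invDist3;
          }
          p[i].vx += dt * fx;
          p[i].vy += dt * fy;
          p[i].vz += dt * fz;
      }
  }

  int main(void)
  {
      int n = 4096;                        /* illustrative body count */
      Body *p = calloc((size_t)n, sizeof(Body));
      if (!p) return 1;
      for (int i = 0; i < n; i++) { p[i].x = (float)i; p[i].y = 1.0f; p[i].z = 2.0f; }
      bodyForce(p, 0.01f, n);              /* one step = n*n interactions */
      printf("vx[0] = %f\n", p[0].vx);
      free(p);
      return 0;
  }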

GLmark2 276
Score > Higher Is Better
  Resolution: 800 x 600 ..... 1915
  Resolution: 1024 x 768 .... 1362
  Resolution: 1280 x 1024 ... 904
  Resolution: 1920 x 1080 ... 646

NVIDIA TensorRT Inference (DLA Cores: Disabled)
Images Per Second > Higher Is Better
  Neural Network   Precision   Batch 1   Batch 4   Batch 8   Batch 16   Batch 32
  VGG16            FP16          10.29     14.18     14.60          -          -
  VGG19            FP16           8.70     11.61         -          -          -
  AlexNet          FP16          54.86    115.49    133.74     168.83     202.29
  AlexNet          INT8          40.48     82.30     92.54     113.76     128.55
  ResNet50         FP16          27.37     40.65     42.05      44.49      46.26
  ResNet50         INT8          14.61     20.59     22.16      23.82      25.01
  GoogleNet        FP16          65.61     85.12     85.92      93.33      98.75
  GoogleNet        INT8          35.87     47.83     49.18      52.19      55.47
  ResNet152        FP16          10.09     15.78     16.42      16.98      17.28
  ResNet152        INT8           5.45         -         -          -          -

Java 2D Microbenchmark 1.0
Units Per Second > Higher Is Better
  Rendering Test: Text Rendering ............... 6226.12
  Rendering Test: Image Rendering .............. 897658.51
  Rendering Test: Vector Graphics Rendering .... 486283.59

RAMspeed SMP 3.5.0
MB/s > Higher Is Better
  Type: Add - Benchmark: Integer ........ 7943.83
  Type: Copy - Benchmark: Integer ....... 9544.18
  Type: Scale - Benchmark: Integer ...... 9141.59
  Type: Triad - Benchmark: Integer ...... 4856.02
  Type: Average - Benchmark: Integer .... 7839.77

MBW 2018-09-08
MiB/s > Higher Is Better
  Test: Memory Copy - Array Size: 128 MiB ...................... 3420.37
  Test: Memory Copy - Array Size: 512 MiB ...................... 3438.75
  Test: Memory Copy, Fixed Block Size - Array Size: 128 MiB .... 3450.26
  Test: Memory Copy, Fixed Block Size - Array Size: 512 MiB .... 3448.76
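
For context, RAMspeed's Copy/Scale/Add/Triad figures and MBW's Memory Copy figures come from simple streaming kernels over large arrays. The sketch below is a minimal MBW-style copy measurement, assuming a plain memcpy over a 128 MiB buffer (matching one of the array sizes above) timed with clock_gettime; it is an illustration, not MBW's or RAMspeed's actual code.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  /* MBW-style memory copy: copy a large array once and report MiB/s. */
  int main(void)
  {
      size_t bytes = 128UL * 1024 * 1024;   /* 128 MiB, as in the results above */
      char *src = malloc(bytes);
      char *dst = malloc(bytes);
      if (!src || !dst) return 1;

      memset(src, 1, bytes);   /* touch source pages so page faults are not timed */
      memset(dst, 0, bytes);   /* touch destination pages as well */

      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      memcpy(dst, src, bytes);
      clock_gettime(CLOCK_MONOTONIC, &t1);

      double secs = (double)(t1.tv_sec - t0.tv_sec) + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
      printf("%.2f MiB/s\n", (double)bytes / (1024.0 * 1024.0) / secs);

      free(src);
      free(dst);
      return 0;
  }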

t-test1 2017-01-13
Seconds < Lower Is Better
  Threads: 1 .... 80.31
  Threads: 2 .... 27.35

LeelaChessZero 0.20.1
Nodes Per Second > Higher Is Better
  Backend: BLAS ........... 15.34
  Backend: CUDA + cuDNN ... 139

x264 2018-09-25
Frames Per Second > Higher Is Better
  H.264 Video Encoding .... 5.12

7-Zip Compression 16.02
MIPS > Higher Is Better
  Compress Speed Test .... 4050

Timed Linux Kernel Compilation 4.18
Seconds < Lower Is Better
  Time To Compile .... 2378.69

XZ Compression 5.2.4
Seconds < Lower Is Better
  Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 .... 44.43

Zstd Compression 1.3.4
Seconds < Lower Is Better
  Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19 .... 127.28