NVIDIA Jetson Nano Benchmarks
ARMv8 rev 1 (NVIDIA Jetson Nano, Tegra X1 SoC) testing on Ubuntu 18.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/1903316-HV-NVIDIAJET61.
CUDA Mini-Nbody
  Test: Original
  Test: Cache Blocking
  Test: Loop Unrolling
  Test: SOA Data Layout
  Test: Flush Denormals To Zero
GLmark2
  Resolution: 800 x 600
  Resolution: 1024 x 768
  Resolution: 1280 x 1024
  Resolution: 1920 x 1080
NVIDIA TensorRT Inference
  Neural Network: VGG16 - Precision: FP16 - Batch Size: 1 - DLA Cores: Disabled
  Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
  Neural Network: VGG16 - Precision: FP16 - Batch Size: 8 - DLA Cores: Disabled
  Neural Network: VGG19 - Precision: FP16 - Batch Size: 1 - DLA Cores: Disabled
  Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
  Neural Network: AlexNet - Precision: FP16 - Batch Size: 1 - DLA Cores: Disabled
  Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
  Neural Network: AlexNet - Precision: FP16 - Batch Size: 8 - DLA Cores: Disabled
  Neural Network: AlexNet - Precision: INT8 - Batch Size: 1 - DLA Cores: Disabled
  Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
  Neural Network: AlexNet - Precision: INT8 - Batch Size: 8 - DLA Cores: Disabled
  Neural Network: AlexNet - Precision: FP16 - Batch Size: 16 - DLA Cores: Disabled
  Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
  Neural Network: AlexNet - Precision: INT8 - Batch Size: 16 - DLA Cores: Disabled
  Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
  Neural Network: ResNet50 - Precision: FP16 - Batch Size: 1 - DLA Cores: Disabled
  Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
  Neural Network: ResNet50 - Precision: FP16 - Batch Size: 8 - DLA Cores: Disabled
  Neural Network: ResNet50 - Precision: INT8 - Batch Size: 1 - DLA Cores: Disabled
  Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
  Neural Network: ResNet50 - Precision: INT8 - Batch Size: 8 - DLA Cores: Disabled
  Neural Network: GoogleNet - Precision: FP16 - Batch Size: 1 - DLA Cores: Disabled
  Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
  Neural Network: GoogleNet - Precision: FP16 - Batch Size: 8 - DLA Cores: Disabled
  Neural Network: GoogleNet - Precision: INT8 - Batch Size: 1 - DLA Cores: Disabled
  Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
  Neural Network: GoogleNet - Precision: INT8 - Batch Size: 8 - DLA Cores: Disabled
  Neural Network: ResNet152 - Precision: FP16 - Batch Size: 1 - DLA Cores: Disabled
  Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
  Neural Network: ResNet152 - Precision: FP16 - Batch Size: 8 - DLA Cores: Disabled
  Neural Network: ResNet152 - Precision: INT8 - Batch Size: 1 - DLA Cores: Disabled
  Neural Network: ResNet50 - Precision: FP16 - Batch Size: 16 - DLA Cores: Disabled
  Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
  Neural Network: ResNet50 - Precision: INT8 - Batch Size: 16 - DLA Cores: Disabled
  Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
  Neural Network: GoogleNet - Precision: FP16 - Batch Size: 16 - DLA Cores: Disabled
  Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
  Neural Network: GoogleNet - Precision: INT8 - Batch Size: 16 - DLA Cores: Disabled
  Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
  Neural Network: ResNet152 - Precision: FP16 - Batch Size: 16 - DLA Cores: Disabled
  Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
Java 2D Microbenchmark
  Rendering Test: Text Rendering
  Rendering Test: Image Rendering
  Rendering Test: Vector Graphics Rendering
RAMspeed SMP
  Type: Add - Benchmark: Integer
  Type: Copy - Benchmark: Integer
  Type: Scale - Benchmark: Integer
  Type: Triad - Benchmark: Integer
  Type: Average - Benchmark: Integer
MBW
  Test: Memory Copy - Array Size: 128 MiB
  Test: Memory Copy - Array Size: 512 MiB
  Test: Memory Copy, Fixed Block Size - Array Size: 128 MiB
  Test: Memory Copy, Fixed Block Size - Array Size: 512 MiB
t-test1
  Threads: 1
  Threads: 2
LeelaChessZero
  Backend: BLAS
  Backend: CUDA + cuDNN
x264
  H.264 Video Encoding
7-Zip Compression
  Compress Speed Test
Timed Linux Kernel Compilation
  Time To Compile
XZ Compression
  Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9
Zstd Compression
  Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19
Phoronix Test Suite v10.8.5