Jetson Prep
ARMv8 rev 1 testing on an NVIDIA Jetson TX1 (jetson_tx1, NVIDIA Tegra X1 SoC) running Ubuntu 16.04, via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/1903178-SP-1903167SP11&grs&sro.
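As a hedged sketch of how a result set like this can be reproduced or compared against locally: the Phoronix Test Suite can benchmark directly against a published OpenBenchmarking.org result ID (the ID below is the one from the URL above). The exact test-profile names installed on your system may differ; list them first rather than assuming.

```shell
# Sketch only; assumes the Phoronix Test Suite is installed.
# Test-profile availability varies by platform, so check first:
phoronix-test-suite list-available-tests

# Run the same tests and compare against the published result set,
# using the OpenBenchmarking.org result ID from the export URL above:
phoronix-test-suite benchmark 1903178-SP-1903167SP11
```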
7-Zip Compression
Compress Speed Test
NVIDIA TensorRT Inference
Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
TTSIOD 3D Renderer
Phong Rendering With Soft-Shadow Mapping
NVIDIA TensorRT Inference
Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
CUDA Mini-Nbody
Test: Original
PyBench
Total For Average Test Times
FLAC Audio Encoding
WAV To FLAC
Zstd Compression
Compressing ubuntu-16.04.3-server-i386.img, Compression Level 19
Rust Prime Benchmark
Prime Number Test To 200,000,000
C-Ray
Total Time - 4K, 16 Rays Per Pixel
NVIDIA TensorRT Inference
Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
OpenCV Benchmark
NVIDIA TensorRT Inference
Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 - DLA Cores: Disabled
NVIDIA TensorRT Inference
Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 - DLA Cores: Disabled
Tesseract OCR
Time To OCR 7 Images
LeelaChessZero
Backend: CUDA + cuDNN FP16
LeelaChessZero
Backend: CUDA + cuDNN
LeelaChessZero
Backend: BLAS
GLmark2
Resolution: 1920 x 1080
NVIDIA TensorRT Inference
Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 - DLA Cores: Disabled
Phoronix Test Suite v10.8.5