Jetson TX2 ARMv8 rev 3 testing with a quill motherboard and NVIDIA TEGRA graphics on Ubuntu 16.04 via the Phoronix Test Suite.

Jetson:

  Processor: ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads), Motherboard: quill,
  Memory: 8192MB, Disk: 31GB 032G34, Graphics: NVIDIA TEGRA

  OS: Ubuntu 16.04, Kernel: 4.4.38-tegra (aarch64), Display Server: X Server 1.18.4,
  Display Driver: NVIDIA 1.0.0, Compiler: GCC 5.4.0 20160609 + CUDA 9.0,
  File-System: ext4, Screen Resolution: 640x960

NVIDIA TensorRT Inference
Images Per Second > Higher Is Better (all runs with DLA Cores: Disabled)

  Network     Precision   Batch 4   Batch 8   Batch 16   Batch 32
  ----------  ---------   -------   -------   --------   --------
  VGG16       FP16          32.60     34.35      36.32      37.52
  VGG16       INT8          17.26     20.01      20.66      20.11
  VGG19       FP16          26.60     28.01      28.94      29.54
  VGG19       INT8          14.57     16.17      16.51      16.04
  AlexNet     FP16         281.60    314.27     357.10     469.83
  AlexNet     INT8         185.75    220.27     266.68     299.48
  ResNet50    FP16          94.30     99.80     106.98     109.31
  ResNet50    INT8          50.52     52.26      55.50      60.12
  GoogleNet   FP16         196.39    208.49     230.32     232.84
  GoogleNet   INT8         110.39    117.37     123.93     130.26
  ResNet152   FP16          35.79     38.01      40.15      42.06
  ResNet152   INT8          18.16     19.50      20.89      22.11
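One pattern worth noting in these results is that the FP16 runs outpace the INT8 runs on every network and batch size tested on this TX2. As a quick sanity check, this short Python sketch (with the batch-size-32 images-per-second values transcribed from the results above) computes the FP16-to-INT8 throughput ratio per network:

```python
# Images per second at batch size 32, DLA cores disabled,
# transcribed from the TensorRT results reported above.
fp16 = {"VGG16": 37.52, "VGG19": 29.54, "AlexNet": 469.83,
        "ResNet50": 109.31, "GoogleNet": 232.84, "ResNet152": 42.06}
int8 = {"VGG16": 20.11, "VGG19": 16.04, "AlexNet": 299.48,
        "ResNet50": 60.12, "GoogleNet": 130.26, "ResNet152": 22.11}

for net in fp16:
    # Ratio > 1.0 means FP16 delivered higher throughput than INT8.
    ratio = fp16[net] / int8[net]
    print(f"{net}: FP16 is {ratio:.2f}x the INT8 throughput")
```

Across these numbers the ratio ranges from roughly 1.6x (AlexNet) to about 1.9x (VGG16/VGG19), so on this particular configuration the INT8 engines never reach FP16 throughput.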