Jetson AGX Xavier vs. Jetson TX2 TensorRT
NVIDIA Jetson TensorRT inference benchmarks by Michael Larabel for a future article on Phoronix.
HTML result view exported from https://openbenchmarking.org/result/1812240-SP-XAVIER80657&sor&gru.
NVIDIA TensorRT Inference
Tested configurations (one result graph per combination in the original export; graphs not reproduced here):

Neural Network    Precisions    Batch Sizes
VGG16             FP16, INT8    4, 8, 16, 32
VGG19             FP16, INT8    4, 8, 16, 32
AlexNet           FP16, INT8    4, 8, 16, 32
ResNet50          FP16, INT8    4, 8, 16, 32
GoogleNet         FP16, INT8    4, 8, 16, 32
ResNet152         FP16, INT8    4, 8, 16, 32
GLmark2
Resolution: 1920 x 1080
Phoronix Test Suite v10.8.5