Jetson AGX Xavier vs. Jetson TX2 TensorRT

NVIDIA Jetson TensorRT inference benchmarks by Michael Larabel for a future article on Phoronix.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1812240-SP-XAVIER80657

Run Management

  Result Identifier    Date Run            Test Duration
  Jetson AGX Xavier    December 23 2018    6 Hours, 30 Minutes
  Jetson TX2           December 24 2018    6 Hours, 26 Minutes
  Average                                  6 Hours, 28 Minutes


System Details

                      Jetson AGX Xavier                   Jetson TX2
  Processor           ARMv8 rev 0 @ 2.27GHz (8 Cores)     ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads)
  Motherboard         jetson-xavier                       quill
  Memory              16384MB                             8192MB
  Disk                31GB HBG4a2                         31GB 032G34
  Graphics            NVIDIA Tegra Xavier                 NVIDIA Tegra X2
  Monitor             ASUS VP28U                          VE228
  OS                  Ubuntu 18.04                        Ubuntu 16.04
  Kernel              4.9.108-tegra (aarch64)             4.4.38-tegra (aarch64)
  Desktop             Unity 7.5.0                         Unity 7.4.0
  Display Server      X Server 1.19.6                     X Server 1.18.4
  Display Driver      NVIDIA 31.0.2                       NVIDIA 28.2.1
  OpenGL              4.6.0                               4.5.0
  Vulkan              1.1.76                              (not reported)
  Compiler            GCC 7.3.0 + CUDA 10.0               GCC 5.4.0 20160609 + CUDA 9.0
  File-System         ext4                                ext4
  Screen Resolution   1920x1080                           1920x1080

NVIDIA TensorRT Inference (Images/sec - higher is better)

  Neural Network   Precision   Batch Size   Jetson AGX Xavier   Jetson TX2
  AlexNet          FP16        4             799                 261
  AlexNet          FP16        8            1247                 300
  AlexNet          FP16        16           1435                 370
  AlexNet          FP16        32           1900                 472
  AlexNet          INT8        4             975                 179
  AlexNet          INT8        8            1237                 222
  AlexNet          INT8        16           1879                 258
  AlexNet          INT8        32           2666                 307
  GoogleNet        FP16        4             546                 202
  GoogleNet        FP16        8             863                 198
  GoogleNet        FP16        16            858                 218
  GoogleNet        FP16        32            956                 230
  GoogleNet        INT8        4             652                 114
  GoogleNet        INT8        8            1049                 117
  GoogleNet        INT8        16           1340                 125
  GoogleNet        INT8        32           1622                 130
  ResNet50         FP16        4             542.80               93.61
  ResNet50         FP16        8             582.36               99.05
  ResNet50         FP16        16            593                 106
  ResNet50         FP16        32            613                 110
  ResNet50         INT8        4             865.46               50.39
  ResNet50         INT8        8             944.46               51.07
  ResNet50         INT8        16           1106.13               57.18
  ResNet50         INT8        32           1184.50               59.45
  ResNet152        FP16        4             219.08               35.60
  ResNet152        FP16        8             234.84               36.71
  ResNet152        FP16        16            224.60               40.19
  ResNet152        FP16        32            253.34               41.87
  ResNet152        INT8        4             350.28               17.97
  ResNet152        INT8        8             407.01               19.48
  ResNet152        INT8        16            445.22               20.77
  ResNet152        INT8        32            485.22               22.05
  VGG16            FP16        4             195.45               32.30
  VGG16            FP16        8             215.68               33.68
  VGG16            FP16        16            228.75               36.44
  VGG16            FP16        32            246.76               37.24
  VGG16            INT8        4             286.64               17.98
  VGG16            INT8        8             341.20               19.89
  VGG16            INT8        16            381.33               20.50
  VGG16            INT8        32            449.96               19.87
  VGG19            FP16        4             172.15               26.49
  VGG19            FP16        8             184.43               27.02
  VGG19            FP16        16            180.03               28.97
  VGG19            FP16        32            201.53               29.57
  VGG19            INT8        4             262.17               14.62
  VGG19            INT8        8             296.94               16.02
  VGG19            INT8        16            362.08               16.38
  VGG19            INT8        32            390.57               15.99

GLmark2 (Score - higher is better)

  Resolution    Jetson AGX Xavier   Jetson TX2
  1920 x 1080   2861                (no result)
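
To put the gap between the two boards in relative terms, below is a minimal Python sketch (not part of the Phoronix Test Suite output) that computes the AGX Xavier over TX2 throughput ratio for the batch-size-32 INT8 results in the table above and summarizes them with a geometric mean. The hard-coded results list is copied by hand from the table, and the geometric mean is only an illustrative summary statistic, not an official PTS figure.

# Illustrative only: Jetson AGX Xavier over Jetson TX2 throughput ratios and their
# geometric mean, using a hand-copied subset of the batch-size-32 INT8 results above.
# This is summary arithmetic for illustration, not a Phoronix Test Suite calculation.
from math import prod

results = [
    # (test, AGX Xavier images/sec, TX2 images/sec)
    ("AlexNet - INT8 - Batch 32",   2666.0,  307.0),
    ("GoogleNet - INT8 - Batch 32", 1622.0,  130.0),
    ("ResNet50 - INT8 - Batch 32",  1184.50,  59.45),
    ("ResNet152 - INT8 - Batch 32",  485.22,  22.05),
    ("VGG16 - INT8 - Batch 32",      449.96,  19.87),
    ("VGG19 - INT8 - Batch 32",      390.57,  15.99),
]

ratios = []
for name, xavier, tx2 in results:
    ratio = xavier / tx2
    ratios.append(ratio)
    print(f"{name}: {ratio:.1f}x faster on AGX Xavier")

# Geometric mean is the conventional way to average throughput ratios across tests.
geomean = prod(ratios) ** (1 / len(ratios))
print(f"Geometric mean speedup over this subset: {geomean:.1f}x")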