Jetson AGX Xavier vs. Jetson TX2 TensorRT

NVIDIA Jetson TensorRT inference benchmarks by Michael Larabel for a future article on Phoronix.

HTML result view exported from: https://openbenchmarking.org/result/1812240-SP-XAVIER80657.

System Details

                    Jetson AGX Xavier                    Jetson TX2
Processor:          ARMv8 rev 0 @ 2.27GHz (8 Cores)      ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads)
Motherboard:        jetson-xavier                        quill
Memory:             16384MB                              8192MB
Disk:               31GB HBG4a2                          31GB 032G34
Graphics:           NVIDIA Tegra Xavier                  NVIDIA Tegra X2
Monitor:            ASUS VP28U                           VE228
OS:                 Ubuntu 18.04                         Ubuntu 16.04
Kernel:             4.9.108-tegra (aarch64)              4.4.38-tegra (aarch64)
Desktop:            Unity 7.5.0                          Unity 7.4.0
Display Server:     X Server 1.19.6                      X Server 1.18.4
Display Driver:     NVIDIA 31.0.2                        NVIDIA 28.2.1
OpenGL:             4.6.0                                4.5.0
Vulkan:             1.1.76                               (not reported)
Compiler:           GCC 7.3.0 + CUDA 10.0                GCC 5.4.0 20160609 + CUDA 9.0
File-System:        ext4                                 (not reported)
Screen Resolution:  1920x1080                            (not reported)

Processor Details: Scaling Governor: tegra_cpufreq schedutil

Results Overview

glmark2 - Resolution: 1920 x 1080 (Score): Jetson AGX Xavier: 2861

NVIDIA TensorRT Inference (Images Per Second, higher is better):

Network     Precision  Batch    Jetson AGX Xavier   Jetson TX2
VGG16       FP16       4        195.45              32.30
VGG16       FP16       8        215.68              33.68
VGG16       FP16       16       228.75              36.44
VGG16       FP16       32       246.76              37.24
VGG16       INT8       4        286.64              17.98
VGG16       INT8       8        341.20              19.89
VGG16       INT8       16       381.33              20.50
VGG16       INT8       32       449.96              19.87
VGG19       FP16       4        172.15              26.49
VGG19       FP16       8        184.43              27.02
VGG19       FP16       16       180.03              28.97
VGG19       FP16       32       201.53              29.57
VGG19       INT8       4        262.17              14.62
VGG19       INT8       8        296.94              16.02
VGG19       INT8       16       362.08              16.38
VGG19       INT8       32       390.57              15.99
AlexNet     FP16       4        799                 261
AlexNet     FP16       8        1247                300
AlexNet     FP16       16       1435                370
AlexNet     FP16       32       1900                472
AlexNet     INT8       4        975                 179
AlexNet     INT8       8        1237                222
AlexNet     INT8       16       1879                258
AlexNet     INT8       32       2666                307
ResNet50    FP16       4        542.80              93.61
ResNet50    FP16       8        582.36              99.05
ResNet50    FP16       16       593                 106
ResNet50    FP16       32       613                 110
ResNet50    INT8       4        865.46              50.39
ResNet50    INT8       8        944.46              51.07
ResNet50    INT8       16       1106.13             57.18
ResNet50    INT8       32       1184.50             59.45
GoogleNet   FP16       4        546                 202
GoogleNet   FP16       8        863                 198
GoogleNet   FP16       16      858                 218
GoogleNet   FP16       32       956                 230
GoogleNet   INT8       4        652                 114
GoogleNet   INT8       8        1049                117
GoogleNet   INT8       16       1340                125
GoogleNet   INT8       32       1622                130
ResNet152   FP16       4        219.08              35.60
ResNet152   FP16       8        234.84              36.71
ResNet152   FP16       16       224.60              40.19
ResNet152   FP16       32       253.34              41.87
ResNet152   INT8       4        350.28              17.97
ResNet152   INT8       8        407.01              19.48
ResNet152   INT8       16       445.22              20.77
ResNet152   INT8       32       485.22              22.05
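The raw figures lend themselves to a quick relative-performance calculation. The following Python sketch (values transcribed from this result file; the `speedup` helper is my own, not part of the Phoronix Test Suite) computes the Xavier-over-TX2 throughput ratio for a few representative configurations:

```python
# Throughput in images/sec, transcribed from this result file:
# (network, precision, batch size) -> (Jetson AGX Xavier, Jetson TX2)
results = {
    ("VGG16",    "FP16", 4):  (195.45, 32.30),
    ("VGG16",    "INT8", 4):  (286.64, 17.98),
    ("AlexNet",  "INT8", 32): (2666.0, 307.0),
    ("ResNet50", "INT8", 32): (1184.50, 59.45),
}

def speedup(xavier, tx2):
    """Ratio of Xavier throughput to TX2 throughput."""
    return xavier / tx2

for (net, prec, batch), (xv, tx) in results.items():
    print(f"{net} {prec} batch {batch}: {speedup(xv, tx):.1f}x")
```

The spread is noticeable: FP16 ratios sit around 6x, while INT8 ratios climb toward 16-20x, reflecting Xavier's dedicated INT8 paths that the TX2 lacks.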

GLmark2

Resolution: 1920 x 1080

Score, more is better:
    Jetson AGX Xavier: 2861

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 4

Images Per Second, more is better:
    Jetson AGX Xavier: 195.45 (SE +/- 3.17, N = 12)
    Jetson TX2: 32.30 (SE +/- 0.30, N = 3)
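In each of these results, "SE +/- x, N = y" denotes the standard error of the mean over y benchmark runs: the sample standard deviation divided by the square root of N. As a quick illustration (the per-run throughputs below are made up for the example, not the actual run data):

```python
import statistics
from math import sqrt

def standard_error(samples):
    # Standard error of the mean: sample standard deviation / sqrt(N).
    return statistics.stdev(samples) / sqrt(len(samples))

# Hypothetical per-run throughputs, for illustration only:
runs = [194.1, 196.8, 195.5]
print(f"mean = {statistics.mean(runs):.2f}, "
      f"SE +/- {standard_error(runs):.2f}, N = {len(runs)}")
```

A small SE relative to the mean (as in most runs here) indicates the reported throughput is stable across repetitions.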

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 8

Images Per Second, more is better:
    Jetson AGX Xavier: 215.68 (SE +/- 3.36, N = 5)
    Jetson TX2: 33.68 (SE +/- 0.24, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 4

Images Per Second, more is better:
    Jetson AGX Xavier: 286.64 (SE +/- 3.98, N = 3)
    Jetson TX2: 17.98 (SE +/- 0.06, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 8

Images Per Second, more is better:
    Jetson AGX Xavier: 341.20 (SE +/- 1.08, N = 3)
    Jetson TX2: 19.89 (SE +/- 0.05, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 4

Images Per Second, more is better:
    Jetson AGX Xavier: 172.15 (SE +/- 1.25, N = 3)
    Jetson TX2: 26.49 (SE +/- 0.15, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 8

Images Per Second, more is better:
    Jetson AGX Xavier: 184.43 (SE +/- 2.36, N = 3)
    Jetson TX2: 27.02 (SE +/- 0.14, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 4

Images Per Second, more is better:
    Jetson AGX Xavier: 262.17 (SE +/- 0.96, N = 3)
    Jetson TX2: 14.62 (SE +/- 0.10, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 8

Images Per Second, more is better:
    Jetson AGX Xavier: 296.94 (SE +/- 1.42, N = 3)
    Jetson TX2: 16.02 (SE +/- 0.06, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 16

Images Per Second, more is better:
    Jetson AGX Xavier: 228.75 (SE +/- 1.63, N = 3)
    Jetson TX2: 36.44 (SE +/- 0.11, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: FP16 - Batch Size: 32

Images Per Second, more is better:
    Jetson AGX Xavier: 246.76 (SE +/- 0.17, N = 3)
    Jetson TX2: 37.24 (SE +/- 0.14, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 16

Images Per Second, more is better:
    Jetson AGX Xavier: 381.33 (SE +/- 10.09, N = 12)
    Jetson TX2: 20.50 (SE +/- 0.03, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG16 - Precision: INT8 - Batch Size: 32

Images Per Second, more is better:
    Jetson AGX Xavier: 449.96 (SE +/- 4.97, N = 10)
    Jetson TX2: 19.87 (SE +/- 0.03, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 16

Images Per Second, more is better:
    Jetson AGX Xavier: 180.03 (SE +/- 11.67, N = 10)
    Jetson TX2: 28.97 (SE +/- 0.11, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: FP16 - Batch Size: 32

Images Per Second, more is better:
    Jetson AGX Xavier: 201.53 (SE +/- 1.68, N = 3)
    Jetson TX2: 29.57 (SE +/- 0.09, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 16

Images Per Second, more is better:
    Jetson AGX Xavier: 362.08 (SE +/- 0.66, N = 3)
    Jetson TX2: 16.38 (SE +/- 0.02, N = 3)

NVIDIA TensorRT Inference

Neural Network: VGG19 - Precision: INT8 - Batch Size: 32

Images Per Second, more is better:
    Jetson AGX Xavier: 390.57 (SE +/- 1.67, N = 3)
    Jetson TX2: 15.99 (SE +/- 0.03, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 4

Images Per Second, more is better:
    Jetson AGX Xavier: 799 (SE +/- 97.79, N = 9)
    Jetson TX2: 261 (SE +/- 5.89, N = 12)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 8

Images Per Second, more is better:
    Jetson AGX Xavier: 1247 (SE +/- 45.66, N = 12)
    Jetson TX2: 300 (SE +/- 7.60, N = 12)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 4

Images Per Second, more is better:
    Jetson AGX Xavier: 975 (SE +/- 55.83, N = 12)
    Jetson TX2: 179 (SE +/- 2.69, N = 4)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 8

Images Per Second, more is better:
    Jetson AGX Xavier: 1237 (SE +/- 99.61, N = 12)
    Jetson TX2: 222 (SE +/- 3.23, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 16

Images Per Second, more is better:
    Jetson AGX Xavier: 1435 (SE +/- 89.56, N = 9)
    Jetson TX2: 370 (SE +/- 6.40, N = 12)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 32

Images Per Second, more is better:
    Jetson AGX Xavier: 1900 (SE +/- 23.33, N = 3)
    Jetson TX2: 472 (SE +/- 6.74, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 16

Images Per Second, more is better:
    Jetson AGX Xavier: 1879 (SE +/- 91.41, N = 12)
    Jetson TX2: 258 (SE +/- 3.45, N = 3)

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: INT8 - Batch Size: 32

Images Per Second, more is better:
    Jetson AGX Xavier: 2666 (SE +/- 248.85, N = 9)
    Jetson TX2: 307 (SE +/- 0.88, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4

Images Per Second, more is better:
    Jetson AGX Xavier: 542.80 (SE +/- 0.39, N = 3)
    Jetson TX2: 93.61 (SE +/- 1.46, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 8

Images Per Second, more is better:
    Jetson AGX Xavier: 582.36 (SE +/- 0.24, N = 3)
    Jetson TX2: 99.05 (SE +/- 1.23, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4

Images Per Second, more is better:
    Jetson AGX Xavier: 865.46 (SE +/- 14.20, N = 3)
    Jetson TX2: 50.39 (SE +/- 0.64, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 8

Images Per Second, more is better:
    Jetson AGX Xavier: 944.46 (SE +/- 40.28, N = 12)
    Jetson TX2: 51.07 (SE +/- 0.54, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4

Images Per Second, more is better:
    Jetson AGX Xavier: 546 (SE +/- 96.56, N = 9)
    Jetson TX2: 202 (SE +/- 0.88, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 8

Images Per Second, more is better:
    Jetson AGX Xavier: 863 (SE +/- 14.25, N = 12)
    Jetson TX2: 198 (SE +/- 3.70, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4

Images Per Second, more is better:
    Jetson AGX Xavier: 652 (SE +/- 140.60, N = 12)
    Jetson TX2: 114 (SE +/- 2.00, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 8

Images Per Second, more is better:
    Jetson AGX Xavier: 1049 (SE +/- 121.56, N = 10)
    Jetson TX2: 117 (SE +/- 2.12, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4

Images Per Second, more is better:
    Jetson AGX Xavier: 219.08 (SE +/- 3.18, N = 3)
    Jetson TX2: 35.60 (SE +/- 0.44, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 8

Images Per Second, more is better:
    Jetson AGX Xavier: 234.84 (SE +/- 0.36, N = 3)
    Jetson TX2: 36.71 (SE +/- 0.67, N = 9)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4

Images Per Second, more is better:
    Jetson AGX Xavier: 350.28 (SE +/- 5.48, N = 3)
    Jetson TX2: 17.97 (SE +/- 0.19, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 8

Images Per Second, more is better:
    Jetson AGX Xavier: 407.01 (SE +/- 6.98, N = 3)
    Jetson TX2: 19.48 (SE +/- 0.27, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 16

Images Per Second, more is better:
    Jetson AGX Xavier: 593 (SE +/- 7.03, N = 3)
    Jetson TX2: 106 (SE +/- 0.59, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32

Images Per Second, more is better:
    Jetson AGX Xavier: 613 (SE +/- 9.12, N = 3)
    Jetson TX2: 110 (SE +/- 1.29, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 16

Images Per Second, more is better:
    Jetson AGX Xavier: 1106.13 (SE +/- 11.53, N = 12)
    Jetson TX2: 57.18 (SE +/- 0.10, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32

Images Per Second, more is better:
    Jetson AGX Xavier: 1184.50 (SE +/- 6.54, N = 3)
    Jetson TX2: 59.45 (SE +/- 0.19, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 16

Images Per Second, more is better:
    Jetson AGX Xavier: 858 (SE +/- 55.00, N = 9)
    Jetson TX2: 218 (SE +/- 3.60, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32

Images Per Second, more is better:
    Jetson AGX Xavier: 956 (SE +/- 14.46, N = 12)
    Jetson TX2: 230 (SE +/- 3.59, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 16

Images Per Second, more is better:
    Jetson AGX Xavier: 1340 (SE +/- 152.29, N = 9)
    Jetson TX2: 125 (SE +/- 1.16, N = 3)

NVIDIA TensorRT Inference

Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32

Images Per Second, more is better:
    Jetson AGX Xavier: 1622 (SE +/- 5.04, N = 3)
    Jetson TX2: 130 (SE +/- 0.91, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 16

Images Per Second, more is better:
    Jetson AGX Xavier: 224.60 (SE +/- 15.50, N = 9)
    Jetson TX2: 40.19 (SE +/- 0.17, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32

Images Per Second, more is better:
    Jetson AGX Xavier: 253.34 (SE +/- 2.84, N = 3)
    Jetson TX2: 41.87 (SE +/- 0.14, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 16

Images Per Second, more is better:
    Jetson AGX Xavier: 445.22 (SE +/- 4.04, N = 3)
    Jetson TX2: 20.77 (SE +/- 0.09, N = 3)

NVIDIA TensorRT Inference

Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32

Images Per Second, more is better:
    Jetson AGX Xavier: 485.22 (SE +/- 1.47, N = 3)
    Jetson TX2: 22.05 (SE +/- 0.03, N = 3)
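The Images Per Second metric throughout these TensorRT results is plain throughput: images processed divided by wall-clock time. A generic timing harness for any batched inference callable might look like the sketch below; the `fake_infer` workload is a stand-in for illustration, since the actual test drives TensorRT engines rather than Python code:

```python
import time

def measure_throughput(infer, batch_size, iterations=100):
    """Return images/sec for a batched inference callable."""
    start = time.perf_counter()
    for _ in range(iterations):
        infer(batch_size)  # stand-in for a real TensorRT execution
    elapsed = time.perf_counter() - start
    return (iterations * batch_size) / elapsed

# Dummy CPU workload standing in for a real network:
def fake_infer(batch_size):
    sum(i * i for i in range(1000 * batch_size))

print(f"{measure_throughput(fake_infer, 8):.1f} images/sec")
```

Because the numerator scales with batch size while per-batch overhead is roughly fixed, larger batches generally report higher images/sec, which matches the batch-4 through batch-32 progression seen in the charts above.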


Phoronix Test Suite v10.8.4