NVIDIA Jetson TensorRT inference benchmarks by Michael Larabel for a future article on Phoronix.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 1812240-SP-XAVIER80657
HTML result view exported from: https://openbenchmarking.org/result/1812240-SP-XAVIER80657&rdt&grt&export=pdf
Jetson AGX Xavier vs. Jetson TX2 TensorRT - System Configurations

Jetson AGX Xavier:
  Processor:          ARMv8 rev 0 @ 2.27GHz (8 Cores)
  Motherboard:        jetson-xavier
  Memory:             16384MB
  Disk:               31GB HBG4a2
  Graphics:           NVIDIA Tegra Xavier
  Monitor:            ASUS VP28U
  OS:                 Ubuntu 18.04
  Kernel:             4.9.108-tegra (aarch64)
  Desktop:            Unity 7.5.0
  Display Server:     X Server 1.19.6
  Display Driver:     NVIDIA 31.0.2
  OpenGL:             4.6.0
  Vulkan:             1.1.76
  Compiler:           GCC 7.3.0 + CUDA 10.0
  File-System:        ext4
  Screen Resolution:  1920x1080

Jetson TX2:
  Processor:          ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads)
  Motherboard:        quill
  Memory:             8192MB
  Disk:               31GB 032G34
  Graphics:           NVIDIA Tegra X2
  Monitor:            VE228
  OS:                 Ubuntu 16.04
  Kernel:             4.4.38-tegra (aarch64)
  Desktop:            Unity 7.4.0
  Display Server:     X Server 1.18.4
  Display Driver:     NVIDIA 28.2.1
  OpenGL:             4.5.0
  Compiler:           GCC 5.4.0 20160609 + CUDA 9.0
  File-System:        ext4
  Screen Resolution:  1920x1080

Processor Details - Scaling Governor: tegra_cpufreq schedutil
Jetson AGX Xavier vs. Jetson TX2 TensorRT - Result Summary
(glmark2 in Score; tensorrt-inference in Images Per Second; more is better)

Test                                          Jetson AGX Xavier   Jetson TX2
glmark2: 1920 x 1080                          2861                -
tensorrt-inference: VGG16 - FP16 - 4          195.45              32.30
tensorrt-inference: VGG16 - FP16 - 8          215.68              33.68
tensorrt-inference: VGG16 - INT8 - 4          286.64              17.98
tensorrt-inference: VGG16 - INT8 - 8          341.20              19.89
tensorrt-inference: VGG19 - FP16 - 4          172.15              26.49
tensorrt-inference: VGG19 - FP16 - 8          184.43              27.02
tensorrt-inference: VGG19 - INT8 - 4          262.17              14.62
tensorrt-inference: VGG19 - INT8 - 8          296.94              16.02
tensorrt-inference: VGG16 - FP16 - 16         228.75              36.44
tensorrt-inference: VGG16 - FP16 - 32         246.76              37.24
tensorrt-inference: VGG16 - INT8 - 16         381.33              20.50
tensorrt-inference: VGG16 - INT8 - 32         449.96              19.87
tensorrt-inference: VGG19 - FP16 - 16         180.03              28.97
tensorrt-inference: VGG19 - FP16 - 32         201.53              29.57
tensorrt-inference: VGG19 - INT8 - 16         362.08              16.38
tensorrt-inference: VGG19 - INT8 - 32         390.57              15.99
tensorrt-inference: AlexNet - FP16 - 4        799                 261
tensorrt-inference: AlexNet - FP16 - 8        1247                300
tensorrt-inference: AlexNet - INT8 - 4        975                 179
tensorrt-inference: AlexNet - INT8 - 8        1237                222
tensorrt-inference: AlexNet - FP16 - 16       1435                370
tensorrt-inference: AlexNet - FP16 - 32       1900                472
tensorrt-inference: AlexNet - INT8 - 16       1879                258
tensorrt-inference: AlexNet - INT8 - 32       2666                307
tensorrt-inference: ResNet50 - FP16 - 4       542.80              93.61
tensorrt-inference: ResNet50 - FP16 - 8       582.36              99.05
tensorrt-inference: ResNet50 - INT8 - 4       865.46              50.39
tensorrt-inference: ResNet50 - INT8 - 8       944.46              51.07
tensorrt-inference: GoogleNet - FP16 - 4      546                 202
tensorrt-inference: GoogleNet - FP16 - 8      863                 198
tensorrt-inference: GoogleNet - INT8 - 4      652                 114
tensorrt-inference: GoogleNet - INT8 - 8      1049                117
tensorrt-inference: ResNet152 - FP16 - 4      219.08              35.60
tensorrt-inference: ResNet152 - FP16 - 8      234.84              36.71
tensorrt-inference: ResNet152 - INT8 - 4      350.28              17.97
tensorrt-inference: ResNet152 - INT8 - 8      407.01              19.48
tensorrt-inference: ResNet50 - FP16 - 16      593                 106
tensorrt-inference: ResNet50 - FP16 - 32      613                 110
tensorrt-inference: ResNet50 - INT8 - 16      1106.13             57.18
tensorrt-inference: ResNet50 - INT8 - 32      1184.50             59.45
tensorrt-inference: GoogleNet - FP16 - 16     858                 218
tensorrt-inference: GoogleNet - FP16 - 32     956                 230
tensorrt-inference: GoogleNet - INT8 - 16     1340                125
tensorrt-inference: GoogleNet - INT8 - 32     1622                130
tensorrt-inference: ResNet152 - FP16 - 16     224.60              40.19
tensorrt-inference: ResNet152 - FP16 - 32     253.34              41.87
tensorrt-inference: ResNet152 - INT8 - 16     445.22              20.77
tensorrt-inference: ResNet152 - INT8 - 32     485.22              22.05
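As a quick way to read the result data above, the Xavier-over-TX2 speedup for any test is simply the ratio of the two throughput figures. A minimal Python sketch, using a few values copied from this result file (the selection of tests is arbitrary, for illustration only):

```python
# Selected TensorRT inference results (Images Per Second) from this file:
# "network - precision - batch size": (Jetson AGX Xavier, Jetson TX2)
results = {
    "VGG16 - FP16 - 4": (195.45, 32.30),
    "AlexNet - INT8 - 32": (2666, 307),
    "ResNet50 - INT8 - 16": (1106.13, 57.18),
    "GoogleNet - INT8 - 32": (1622, 130),
}

def speedup(xavier, tx2):
    """Ratio of AGX Xavier throughput to TX2 throughput (higher = faster Xavier)."""
    return xavier / tx2

for test, (xavier, tx2) in results.items():
    print(f"{test}: {speedup(xavier, tx2):.1f}x")
```

Ratios like these vary widely across the table because the Xavier's Tensor Cores and DLA-capable GPU benefit INT8 and large batch sizes far more than the TX2's Pascal GPU does.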
GLmark2 - Resolution: 1920 x 1080 (Score, more is better):
  Jetson AGX Xavier: 2861
NVIDIA TensorRT Inference - detailed results (Images Per Second, more is better;
SE = standard error, N = number of runs):

Neural Network - Precision - Batch Size   Jetson AGX Xavier          Jetson TX2
VGG16 - FP16 - 4                          195.45 (SE 3.17, N=12)     32.30 (SE 0.30, N=3)
VGG16 - FP16 - 8                          215.68 (SE 3.36, N=5)      33.68 (SE 0.24, N=3)
VGG16 - INT8 - 4                          286.64 (SE 3.98, N=3)      17.98 (SE 0.06, N=3)
VGG16 - INT8 - 8                          341.20 (SE 1.08, N=3)      19.89 (SE 0.05, N=3)
VGG19 - FP16 - 4                          172.15 (SE 1.25, N=3)      26.49 (SE 0.15, N=3)
VGG19 - FP16 - 8                          184.43 (SE 2.36, N=3)      27.02 (SE 0.14, N=3)
VGG19 - INT8 - 4                          262.17 (SE 0.96, N=3)      14.62 (SE 0.10, N=3)
VGG19 - INT8 - 8                          296.94 (SE 1.42, N=3)      16.02 (SE 0.06, N=3)
VGG16 - FP16 - 16                         228.75 (SE 1.63, N=3)      36.44 (SE 0.11, N=3)
VGG16 - FP16 - 32                         246.76 (SE 0.17, N=3)      37.24 (SE 0.14, N=3)
VGG16 - INT8 - 16                         381.33 (SE 10.09, N=12)    20.50 (SE 0.03, N=3)
VGG16 - INT8 - 32                         449.96 (SE 4.97, N=10)     19.87 (SE 0.03, N=3)
VGG19 - FP16 - 16                         180.03 (SE 11.67, N=10)    28.97 (SE 0.11, N=3)
VGG19 - FP16 - 32                         201.53 (SE 1.68, N=3)      29.57 (SE 0.09, N=3)
VGG19 - INT8 - 16                         362.08 (SE 0.66, N=3)      16.38 (SE 0.02, N=3)
VGG19 - INT8 - 32                         390.57 (SE 1.67, N=3)      15.99 (SE 0.03, N=3)
AlexNet - FP16 - 4                        799 (SE 97.79, N=9)        261 (SE 5.89, N=12)
AlexNet - FP16 - 8                        1247 (SE 45.66, N=12)      300 (SE 7.60, N=12)
AlexNet - INT8 - 4                        975 (SE 55.83, N=12)       179 (SE 2.69, N=4)
AlexNet - INT8 - 8                        1237 (SE 99.61, N=12)      222 (SE 3.23, N=3)
AlexNet - FP16 - 16                       1435 (SE 89.56, N=9)       370 (SE 6.40, N=12)
AlexNet - FP16 - 32                       1900 (SE 23.33, N=3)       472 (SE 6.74, N=3)
AlexNet - INT8 - 16                       1879 (SE 91.41, N=12)      258 (SE 3.45, N=3)
AlexNet - INT8 - 32                       2666 (SE 248.85, N=9)      307 (SE 0.88, N=3)
ResNet50 - FP16 - 4                       542.80 (SE 0.39, N=3)      93.61 (SE 1.46, N=3)
ResNet50 - FP16 - 8                       582.36 (SE 0.24, N=3)      99.05 (SE 1.23, N=3)
ResNet50 - INT8 - 4                       865.46 (SE 14.20, N=3)     50.39 (SE 0.64, N=3)
ResNet50 - INT8 - 8                       944.46 (SE 40.28, N=12)    51.07 (SE 0.54, N=3)
GoogleNet - FP16 - 4                      546 (SE 96.56, N=9)        202 (SE 0.88, N=3)
GoogleNet - FP16 - 8                      863 (SE 14.25, N=12)       198 (SE 3.70, N=3)
GoogleNet - INT8 - 4                      652 (SE 140.60, N=12)      114 (SE 2.00, N=3)
GoogleNet - INT8 - 8                      1049 (SE 121.56, N=10)     117 (SE 2.12, N=3)
ResNet152 - FP16 - 4                      219.08 (SE 3.18, N=3)      35.60 (SE 0.44, N=3)
ResNet152 - FP16 - 8                      234.84 (SE 0.36, N=3)      36.71 (SE 0.67, N=9)
ResNet152 - INT8 - 4                      350.28 (SE 5.48, N=3)      17.97 (SE 0.19, N=3)
ResNet152 - INT8 - 8                      407.01 (SE 6.98, N=3)      19.48 (SE 0.27, N=3)
ResNet50 - FP16 - 16                      593 (SE 7.03, N=3)         106 (SE 0.59, N=3)
ResNet50 - FP16 - 32                      613 (SE 9.12, N=3)         110 (SE 1.29, N=3)
ResNet50 - INT8 - 16                      1106.13 (SE 11.53, N=12)   57.18 (SE 0.10, N=3)
ResNet50 - INT8 - 32                      1184.50 (SE 6.54, N=3)     59.45 (SE 0.19, N=3)
GoogleNet - FP16 - 16                     858 (SE 55.00, N=9)        218 (SE 3.60, N=3)
GoogleNet - FP16 - 32                     956 (SE 14.46, N=12)       230 (SE 3.59, N=3)
GoogleNet - INT8 - 16                     1340 (SE 152.29, N=9)      125 (SE 1.16, N=3)
GoogleNet - INT8 - 32                     1622 (SE 5.04, N=3)        130 (SE 0.91, N=3)
ResNet152 - FP16 - 16                     224.60 (SE 15.50, N=9)     40.19 (SE 0.17, N=3)
ResNet152 - FP16 - 32                     253.34 (SE 2.84, N=3)      41.87 (SE 0.14, N=3)
ResNet152 - INT8 - 16                     445.22 (SE 4.04, N=3)      20.77 (SE 0.09, N=3)
ResNet152 - INT8 - 32                     485.22 (SE 1.47, N=3)      22.05 (SE 0.03, N=3)
Phoronix Test Suite v10.8.4