NVIDIA Jetson TensorRT inference benchmarks by Michael Larabel for a future article on Phoronix.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 1812240-SP-XAVIER80657

Jetson AGX Xavier vs. Jetson TX2 TensorRT

HTML result view exported from: https://openbenchmarking.org/result/1812240-SP-XAVIER80657&obr_sor=y&obr_rro=y&rdt&grs
System Details

                      Jetson AGX Xavier                  Jetson TX2
  Processor:          ARMv8 rev 0 @ 2.27GHz (8 Cores)    ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads)
  Motherboard:        jetson-xavier                      quill
  Memory:             16384MB                            8192MB
  Disk:               31GB HBG4a2                        31GB 032G34
  Graphics:           NVIDIA Tegra Xavier                NVIDIA Tegra X2
  Monitor:            ASUS VP28U                         VE228
  OS:                 Ubuntu 18.04                       Ubuntu 16.04
  Kernel:             4.9.108-tegra (aarch64)            4.4.38-tegra (aarch64)
  Desktop:            Unity 7.5.0                        Unity 7.4.0
  Display Server:     X Server 1.19.6                    X Server 1.18.4
  Display Driver:     NVIDIA 31.0.2                      NVIDIA 28.2.1
  OpenGL:             4.6.0                              4.5.0
  Vulkan:             1.1.76                             (not listed)
  Compiler:           GCC 7.3.0 + CUDA 10.0              GCC 5.4.0 20160609 + CUDA 9.0
  File-System:        ext4                               (not listed)
  Screen Resolution:  1920x1080                          (not listed)

Processor Details - Scaling Governor: tegra_cpufreq schedutil
Results Overview (Images Per Second unless noted; more is better)

  Test                                        Jetson AGX Xavier   Jetson TX2
  tensorrt-inference: VGG19 - FP16 - 8                   184.43        27.02
  tensorrt-inference: VGG19 - FP16 - 32                  201.53        29.57
  tensorrt-inference: VGG16 - FP16 - 32                  246.76        37.24
  tensorrt-inference: VGG19 - FP16 - 4                   172.15        26.49
  tensorrt-inference: VGG16 - FP16 - 8                   215.68        33.68
  tensorrt-inference: ResNet152 - FP16 - 8               234.84        36.71
  tensorrt-inference: VGG16 - FP16 - 16                  228.75        36.44
  tensorrt-inference: ResNet152 - FP16 - 4               219.08        35.60
  tensorrt-inference: VGG16 - FP16 - 4                   195.45        32.30
  tensorrt-inference: ResNet152 - FP16 - 32              253.34        41.87
  tensorrt-inference: ResNet50 - FP16 - 8                582.36        99.05
  tensorrt-inference: ResNet50 - FP16 - 4                542.80        93.61
  tensorrt-inference: ResNet50 - FP16 - 16               593           106
  tensorrt-inference: ResNet50 - FP16 - 32               613           110
  tensorrt-inference: GoogleNet - FP16 - 8               863           198
  tensorrt-inference: GoogleNet - FP16 - 32              956           230
  tensorrt-inference: AlexNet - FP16 - 32                1900          472
  tensorrt-inference: VGG19 - INT8 - 32                  390.57        15.99
  tensorrt-inference: VGG16 - INT8 - 32                  449.96        19.87
  tensorrt-inference: VGG19 - INT8 - 16                  362.08        16.38
  tensorrt-inference: ResNet152 - INT8 - 32              485.22        22.05
  tensorrt-inference: ResNet152 - INT8 - 16              445.22        20.77
  tensorrt-inference: ResNet152 - INT8 - 8               407.01        19.48
  tensorrt-inference: ResNet50 - INT8 - 32               1184.50       59.45
  tensorrt-inference: ResNet152 - INT8 - 4               350.28        17.97
  tensorrt-inference: ResNet50 - INT8 - 16               1106.13       57.18
  tensorrt-inference: VGG19 - INT8 - 8                   296.94        16.02
  tensorrt-inference: VGG19 - INT8 - 4                   262.17        14.62
  tensorrt-inference: ResNet50 - INT8 - 4                865.46        50.39
  tensorrt-inference: VGG16 - INT8 - 8                   341.20        19.89
  tensorrt-inference: VGG16 - INT8 - 4                   286.64        17.98
  tensorrt-inference: GoogleNet - INT8 - 32              1622          130
  glmark2: 1920 x 1080 (Score)                           2861          -
  tensorrt-inference: ResNet152 - FP16 - 16              224.60        40.19
  tensorrt-inference: GoogleNet - INT8 - 16              1340          125
  tensorrt-inference: GoogleNet - FP16 - 16              858           218
  tensorrt-inference: GoogleNet - INT8 - 8               1049          117
  tensorrt-inference: GoogleNet - INT8 - 4               652           114
  tensorrt-inference: GoogleNet - FP16 - 4               546           202
  tensorrt-inference: ResNet50 - INT8 - 8                944.46        51.07
  tensorrt-inference: AlexNet - INT8 - 32                2666          307
  tensorrt-inference: AlexNet - INT8 - 16                1879          258
  tensorrt-inference: AlexNet - FP16 - 16                1435          370
  tensorrt-inference: AlexNet - INT8 - 8                 1237          222
  tensorrt-inference: AlexNet - INT8 - 4                 975           179
  tensorrt-inference: AlexNet - FP16 - 8                 1247          300
  tensorrt-inference: AlexNet - FP16 - 4                 799           261
  tensorrt-inference: VGG19 - FP16 - 16                  180.03        28.97
  tensorrt-inference: VGG16 - INT8 - 16                  381.33        20.50
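The overview above makes the Xavier-to-TX2 speedup easy to compute for any test: divide the Xavier throughput by the TX2 throughput for the same network, precision, and batch size. A minimal Python sketch, with a few representative value pairs transcribed from the table (not part of the original result file):

```python
# Speedup of Jetson AGX Xavier over Jetson TX2, computed from the
# throughput figures (Images Per Second) in the results overview.
results = {
    # test name: (xavier_ips, tx2_ips) -- values transcribed from the table
    "VGG19 - FP16 - 8": (184.43, 27.02),
    "ResNet50 - INT8 - 32": (1184.50, 59.45),
    "AlexNet - FP16 - 32": (1900.0, 472.0),
}

def speedup(xavier_ips: float, tx2_ips: float) -> float:
    """How many times faster the Xavier is than the TX2 on one test."""
    return xavier_ips / tx2_ips

for test, (xavier_ips, tx2_ips) in results.items():
    print(f"{test}: {speedup(xavier_ips, tx2_ips):.2f}x")
```

For these three tests the Xavier lands in roughly the 4x to 20x range over the TX2, with the INT8 results benefiting from the Xavier's dedicated INT8 throughput.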
Detailed Results

NVIDIA TensorRT Inference (Images Per Second, more is better):
  VGG19 - FP16 - Batch Size 8:       Jetson TX2: 27.02 (SE +/- 0.14, N = 3)  | Jetson AGX Xavier: 184.43 (SE +/- 2.36, N = 3)
  VGG19 - FP16 - Batch Size 32:      Jetson TX2: 29.57 (SE +/- 0.09, N = 3)  | Jetson AGX Xavier: 201.53 (SE +/- 1.68, N = 3)
  VGG16 - FP16 - Batch Size 32:      Jetson TX2: 37.24 (SE +/- 0.14, N = 3)  | Jetson AGX Xavier: 246.76 (SE +/- 0.17, N = 3)
  VGG19 - FP16 - Batch Size 4:       Jetson TX2: 26.49 (SE +/- 0.15, N = 3)  | Jetson AGX Xavier: 172.15 (SE +/- 1.25, N = 3)
  VGG16 - FP16 - Batch Size 8:       Jetson TX2: 33.68 (SE +/- 0.24, N = 3)  | Jetson AGX Xavier: 215.68 (SE +/- 3.36, N = 5)
  ResNet152 - FP16 - Batch Size 8:   Jetson TX2: 36.71 (SE +/- 0.67, N = 9)  | Jetson AGX Xavier: 234.84 (SE +/- 0.36, N = 3)
  VGG16 - FP16 - Batch Size 16:      Jetson TX2: 36.44 (SE +/- 0.11, N = 3)  | Jetson AGX Xavier: 228.75 (SE +/- 1.63, N = 3)
  ResNet152 - FP16 - Batch Size 4:   Jetson TX2: 35.60 (SE +/- 0.44, N = 3)  | Jetson AGX Xavier: 219.08 (SE +/- 3.18, N = 3)
  VGG16 - FP16 - Batch Size 4:       Jetson TX2: 32.30 (SE +/- 0.30, N = 3)  | Jetson AGX Xavier: 195.45 (SE +/- 3.17, N = 12)
  ResNet152 - FP16 - Batch Size 32:  Jetson TX2: 41.87 (SE +/- 0.14, N = 3)  | Jetson AGX Xavier: 253.34 (SE +/- 2.84, N = 3)
  ResNet50 - FP16 - Batch Size 8:    Jetson TX2: 99.05 (SE +/- 1.23, N = 3)  | Jetson AGX Xavier: 582.36 (SE +/- 0.24, N = 3)
  ResNet50 - FP16 - Batch Size 4:    Jetson TX2: 93.61 (SE +/- 1.46, N = 3)  | Jetson AGX Xavier: 542.80 (SE +/- 0.39, N = 3)
  ResNet50 - FP16 - Batch Size 16:   Jetson TX2: 106 (SE +/- 0.59, N = 3)    | Jetson AGX Xavier: 593 (SE +/- 7.03, N = 3)
  ResNet50 - FP16 - Batch Size 32:   Jetson TX2: 110 (SE +/- 1.29, N = 3)    | Jetson AGX Xavier: 613 (SE +/- 9.12, N = 3)
  GoogleNet - FP16 - Batch Size 8:   Jetson TX2: 198 (SE +/- 3.70, N = 3)    | Jetson AGX Xavier: 863 (SE +/- 14.25, N = 12)
  GoogleNet - FP16 - Batch Size 32:  Jetson TX2: 230 (SE +/- 3.59, N = 3)    | Jetson AGX Xavier: 956 (SE +/- 14.46, N = 12)
  AlexNet - FP16 - Batch Size 32:    Jetson TX2: 472 (SE +/- 6.74, N = 3)    | Jetson AGX Xavier: 1900 (SE +/- 23.33, N = 3)
  VGG19 - INT8 - Batch Size 32:      Jetson TX2: 15.99 (SE +/- 0.03, N = 3)  | Jetson AGX Xavier: 390.57 (SE +/- 1.67, N = 3)
  VGG16 - INT8 - Batch Size 32:      Jetson TX2: 19.87 (SE +/- 0.03, N = 3)  | Jetson AGX Xavier: 449.96 (SE +/- 4.97, N = 10)
  VGG19 - INT8 - Batch Size 16:      Jetson TX2: 16.38 (SE +/- 0.02, N = 3)  | Jetson AGX Xavier: 362.08 (SE +/- 0.66, N = 3)
  ResNet152 - INT8 - Batch Size 32:  Jetson TX2: 22.05 (SE +/- 0.03, N = 3)  | Jetson AGX Xavier: 485.22 (SE +/- 1.47, N = 3)
  ResNet152 - INT8 - Batch Size 16:  Jetson TX2: 20.77 (SE +/- 0.09, N = 3)  | Jetson AGX Xavier: 445.22 (SE +/- 4.04, N = 3)
  ResNet152 - INT8 - Batch Size 8:   Jetson TX2: 19.48 (SE +/- 0.27, N = 3)  | Jetson AGX Xavier: 407.01 (SE +/- 6.98, N = 3)
  ResNet50 - INT8 - Batch Size 32:   Jetson TX2: 59.45 (SE +/- 0.19, N = 3)  | Jetson AGX Xavier: 1184.50 (SE +/- 6.54, N = 3)
  ResNet152 - INT8 - Batch Size 4:   Jetson TX2: 17.97 (SE +/- 0.19, N = 3)  | Jetson AGX Xavier: 350.28 (SE +/- 5.48, N = 3)
  ResNet50 - INT8 - Batch Size 16:   Jetson TX2: 57.18 (SE +/- 0.10, N = 3)  | Jetson AGX Xavier: 1106.13 (SE +/- 11.53, N = 12)
  VGG19 - INT8 - Batch Size 8:       Jetson TX2: 16.02 (SE +/- 0.06, N = 3)  | Jetson AGX Xavier: 296.94 (SE +/- 1.42, N = 3)
  VGG19 - INT8 - Batch Size 4:       Jetson TX2: 14.62 (SE +/- 0.10, N = 3)  | Jetson AGX Xavier: 262.17 (SE +/- 0.96, N = 3)
  ResNet50 - INT8 - Batch Size 4:    Jetson TX2: 50.39 (SE +/- 0.64, N = 3)  | Jetson AGX Xavier: 865.46 (SE +/- 14.20, N = 3)
  VGG16 - INT8 - Batch Size 8:       Jetson TX2: 19.89 (SE +/- 0.05, N = 3)  | Jetson AGX Xavier: 341.20 (SE +/- 1.08, N = 3)
  VGG16 - INT8 - Batch Size 4:       Jetson TX2: 17.98 (SE +/- 0.06, N = 3)  | Jetson AGX Xavier: 286.64 (SE +/- 3.98, N = 3)
  GoogleNet - INT8 - Batch Size 32:  Jetson TX2: 130 (SE +/- 0.91, N = 3)    | Jetson AGX Xavier: 1622 (SE +/- 5.04, N = 3)

GLmark2 - Resolution: 1920 x 1080 (Score, more is better):
  Jetson AGX Xavier: 2861

NVIDIA TensorRT Inference, continued (Images Per Second, more is better):
  ResNet152 - FP16 - Batch Size 16:  Jetson TX2: 40.19 (SE +/- 0.17, N = 3)  | Jetson AGX Xavier: 224.60 (SE +/- 15.50, N = 9)
  GoogleNet - INT8 - Batch Size 16:  Jetson TX2: 125 (SE +/- 1.16, N = 3)    | Jetson AGX Xavier: 1340 (SE +/- 152.29, N = 9)
  GoogleNet - FP16 - Batch Size 16:  Jetson TX2: 218 (SE +/- 3.60, N = 3)    | Jetson AGX Xavier: 858 (SE +/- 55.00, N = 9)
  GoogleNet - INT8 - Batch Size 8:   Jetson TX2: 117 (SE +/- 2.12, N = 3)    | Jetson AGX Xavier: 1049 (SE +/- 121.56, N = 10)
  GoogleNet - INT8 - Batch Size 4:   Jetson TX2: 114 (SE +/- 2.00, N = 3)    | Jetson AGX Xavier: 652 (SE +/- 140.60, N = 12)
  GoogleNet - FP16 - Batch Size 4:   Jetson TX2: 202 (SE +/- 0.88, N = 3)    | Jetson AGX Xavier: 546 (SE +/- 96.56, N = 9)
  ResNet50 - INT8 - Batch Size 8:    Jetson TX2: 51.07 (SE +/- 0.54, N = 3)  | Jetson AGX Xavier: 944.46 (SE +/- 40.28, N = 12)
  AlexNet - INT8 - Batch Size 32:    Jetson TX2: 307 (SE +/- 0.88, N = 3)    | Jetson AGX Xavier: 2666 (SE +/- 248.85, N = 9)
  AlexNet - INT8 - Batch Size 16:    Jetson TX2: 258 (SE +/- 3.45, N = 3)    | Jetson AGX Xavier: 1879 (SE +/- 91.41, N = 12)
  AlexNet - FP16 - Batch Size 16:    Jetson TX2: 370 (SE +/- 6.40, N = 12)   | Jetson AGX Xavier: 1435 (SE +/- 89.56, N = 9)
  AlexNet - INT8 - Batch Size 8:     Jetson TX2: 222 (SE +/- 3.23, N = 3)    | Jetson AGX Xavier: 1237 (SE +/- 99.61, N = 12)
  AlexNet - INT8 - Batch Size 4:     Jetson TX2: 179 (SE +/- 2.69, N = 4)    | Jetson AGX Xavier: 975 (SE +/- 55.83, N = 12)
  AlexNet - FP16 - Batch Size 8:     Jetson TX2: 300 (SE +/- 7.60, N = 12)   | Jetson AGX Xavier: 1247 (SE +/- 45.66, N = 12)
  AlexNet - FP16 - Batch Size 4:     Jetson TX2: 261 (SE +/- 5.89, N = 12)   | Jetson AGX Xavier: 799 (SE +/- 97.79, N = 9)
  VGG19 - FP16 - Batch Size 16:      Jetson TX2: 28.97 (SE +/- 0.11, N = 3)  | Jetson AGX Xavier: 180.03 (SE +/- 11.67, N = 10)
  VGG16 - INT8 - Batch Size 16:      Jetson TX2: 20.50 (SE +/- 0.03, N = 3)  | Jetson AGX Xavier: 381.33 (SE +/- 10.09, N = 12)
Phoronix Test Suite v10.8.4