Jetson AGX Xavier vs. Jetson TX2 TensorRT

NVIDIA Jetson TensorRT inference benchmarks by Michael Larabel for a future article on Phoronix.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1812240-SP-XAVIER80657
Run Details

Result Identifier    Date Run            Test Duration
Jetson AGX Xavier    December 23 2018    6 Hours, 30 Minutes
Jetson TX2           December 24 2018    6 Hours, 26 Minutes
Average                                  6 Hours, 28 Minutes


Jetson AGX Xavier vs. Jetson TX2 TensorRT - System Details

Jetson AGX Xavier:
  Processor: ARMv8 rev 0 @ 2.27GHz (8 Cores)
  Motherboard: jetson-xavier
  Memory: 16384MB
  Disk: 31GB HBG4a2
  Graphics: NVIDIA Tegra Xavier
  Monitor: ASUS VP28U
  OS: Ubuntu 18.04
  Kernel: 4.9.108-tegra (aarch64)
  Desktop: Unity 7.5.0
  Display Server: X Server 1.19.6
  Display Driver: NVIDIA 31.0.2
  OpenGL: 4.6.0
  Vulkan: 1.1.76
  Compiler: GCC 7.3.0 + CUDA 10.0
  File-System: ext4
  Screen Resolution: 1920x1080

Jetson TX2:
  Processor: ARMv8 rev 3 @ 2.04GHz (4 Cores / 6 Threads)
  Motherboard: quill
  Memory: 8192MB
  Disk: 31GB 032G34
  Graphics: NVIDIA Tegra X2
  Monitor: VE228
  OS: Ubuntu 16.04
  Kernel: 4.4.38-tegra (aarch64)
  Desktop: Unity 7.4.0
  Display Server: X Server 1.18.4
  Display Driver: NVIDIA 28.2.1
  OpenGL: 4.5.0
  Compiler: GCC 5.4.0 20160609 + CUDA 9.0

Processor Details - Scaling Governor: tegra_cpufreq schedutil

Jetson AGX Xavier vs. Jetson TX2 Comparison (Phoronix Test Suite)

Jetson AGX Xavier performance advantage over the Jetson TX2 per NVIDIA TensorRT Inference test (Neural Network - Precision - Batch Size), sorted from largest to smallest:

VGG19 - INT8 - 32: +2342.6%
VGG16 - INT8 - 32: +2164.5%
VGG19 - INT8 - 16: +2110.5%
ResNet152 - INT8 - 32: +2100.5%
ResNet152 - INT8 - 16: +2043.6%
ResNet152 - INT8 - 8: +1989.4%
ResNet50 - INT8 - 32: +1892.4%
ResNet152 - INT8 - 4: +1849.2%
ResNet50 - INT8 - 16: +1834.5%
VGG16 - INT8 - 16: +1760.1%
VGG19 - INT8 - 8: +1753.6%
ResNet50 - INT8 - 8: +1749.3%
VGG19 - INT8 - 4: +1693.2%
ResNet50 - INT8 - 4: +1617.5%
VGG16 - INT8 - 8: +1615.4%
VGG16 - INT8 - 4: +1494.2%
GoogleNet - INT8 - 32: +1147.7%
GoogleNet - INT8 - 16: +972%
GoogleNet - INT8 - 8: +796.6%
AlexNet - INT8 - 32: +768.4%
AlexNet - INT8 - 16: +628.3%
VGG19 - FP16 - 8: +582.6%
VGG19 - FP16 - 32: +581.5%
VGG16 - FP16 - 32: +562.6%
VGG19 - FP16 - 4: +549.9%
VGG16 - FP16 - 8: +540.4%
ResNet152 - FP16 - 8: +539.7%
VGG16 - FP16 - 16: +527.7%
VGG19 - FP16 - 16: +521.4%
ResNet152 - FP16 - 4: +515.4%
VGG16 - FP16 - 4: +505.1%
ResNet152 - FP16 - 32: +505.1%
ResNet50 - FP16 - 8: +487.9%
ResNet50 - FP16 - 4: +479.9%
GoogleNet - INT8 - 4: +471.9%
ResNet50 - FP16 - 16: +459.4%
ResNet152 - FP16 - 16: +458.8%
ResNet50 - FP16 - 32: +457.3%
AlexNet - INT8 - 8: +457.2%
AlexNet - INT8 - 4: +444.7%
GoogleNet - FP16 - 8: +335.9%
AlexNet - FP16 - 8: +315.7%
GoogleNet - FP16 - 32: +315.7%
AlexNet - FP16 - 32: +302.5%
GoogleNet - FP16 - 16: +293.6%
AlexNet - FP16 - 16: +287.8%
AlexNet - FP16 - 4: +206.1%
GoogleNet - FP16 - 4: +170.3%

Jetson AGX Xavier vs. Jetson TX2 TensorRT - Result Overview

Test (tensorrt-inference results in Images Per Second)    Jetson AGX Xavier    Jetson TX2
glmark2: 1920 x 1080 (Score)                              2861                 -
tensorrt-inference: VGG16 - FP16 - 4                      195.45               32.30
tensorrt-inference: VGG16 - FP16 - 8                      215.68               33.68
tensorrt-inference: VGG16 - INT8 - 4                      286.64               17.98
tensorrt-inference: VGG16 - INT8 - 8                      341.20               19.89
tensorrt-inference: VGG19 - FP16 - 4                      172.15               26.49
tensorrt-inference: VGG19 - FP16 - 8                      184.43               27.02
tensorrt-inference: VGG19 - INT8 - 4                      262.17               14.62
tensorrt-inference: VGG19 - INT8 - 8                      296.94               16.02
tensorrt-inference: VGG16 - FP16 - 16                     228.75               36.44
tensorrt-inference: VGG16 - FP16 - 32                     246.76               37.24
tensorrt-inference: VGG16 - INT8 - 16                     381.33               20.50
tensorrt-inference: VGG16 - INT8 - 32                     449.96               19.87
tensorrt-inference: VGG19 - FP16 - 16                     180.03               28.97
tensorrt-inference: VGG19 - FP16 - 32                     201.53               29.57
tensorrt-inference: VGG19 - INT8 - 16                     362.08               16.38
tensorrt-inference: VGG19 - INT8 - 32                     390.57               15.99
tensorrt-inference: AlexNet - FP16 - 4                    799                  261
tensorrt-inference: AlexNet - FP16 - 8                    1247                 300
tensorrt-inference: AlexNet - INT8 - 4                    975                  179
tensorrt-inference: AlexNet - INT8 - 8                    1237                 222
tensorrt-inference: AlexNet - FP16 - 16                   1435                 370
tensorrt-inference: AlexNet - FP16 - 32                   1900                 472
tensorrt-inference: AlexNet - INT8 - 16                   1879                 258
tensorrt-inference: AlexNet - INT8 - 32                   2666                 307
tensorrt-inference: ResNet50 - FP16 - 4                   542.80               93.61
tensorrt-inference: ResNet50 - FP16 - 8                   582.36               99.05
tensorrt-inference: ResNet50 - INT8 - 4                   865.46               50.39
tensorrt-inference: ResNet50 - INT8 - 8                   944.46               51.07
tensorrt-inference: GoogleNet - FP16 - 4                  546                  202
tensorrt-inference: GoogleNet - FP16 - 8                  863                  198
tensorrt-inference: GoogleNet - INT8 - 4                  652                  114
tensorrt-inference: GoogleNet - INT8 - 8                  1049                 117
tensorrt-inference: ResNet152 - FP16 - 4                  219.08               35.60
tensorrt-inference: ResNet152 - FP16 - 8                  234.84               36.71
tensorrt-inference: ResNet152 - INT8 - 4                  350.28               17.97
tensorrt-inference: ResNet152 - INT8 - 8                  407.01               19.48
tensorrt-inference: ResNet50 - FP16 - 16                  593                  106
tensorrt-inference: ResNet50 - FP16 - 32                  613                  110
tensorrt-inference: ResNet50 - INT8 - 16                  1106.13              57.18
tensorrt-inference: ResNet50 - INT8 - 32                  1184.50              59.45
tensorrt-inference: GoogleNet - FP16 - 16                 858                  218
tensorrt-inference: GoogleNet - FP16 - 32                 956                  230
tensorrt-inference: GoogleNet - INT8 - 16                 1340                 125
tensorrt-inference: GoogleNet - INT8 - 32                 1622                 130
tensorrt-inference: ResNet152 - FP16 - 16                 224.60               40.19
tensorrt-inference: ResNet152 - FP16 - 32                 253.34               41.87
tensorrt-inference: ResNet152 - INT8 - 16                 445.22               20.77
tensorrt-inference: ResNet152 - INT8 - 32                 485.22               22.05
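The percentage figures in the comparison overview can be reproduced directly from the images-per-second numbers in the results above. A minimal sketch (the function name is ours, not part of the Phoronix Test Suite):

```python
def percent_faster(xavier_ips: float, tx2_ips: float) -> float:
    """Xavier's advantage as a percentage over the TX2 baseline."""
    return round((xavier_ips / tx2_ips - 1) * 100, 1)

# VGG16 - FP16 - Batch Size 4: 195.45 vs. 32.30 images/sec
print(percent_faster(195.45, 32.30))   # -> 505.1, matching the comparison overview

# VGG19 - INT8 - Batch Size 32: 390.57 vs. 15.99 images/sec
print(percent_faster(390.57, 15.99))   # -> 2342.6
```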

GLmark2

This is a test of any system-installed GLMark2 OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 - Resolution: 1920 x 1080 (Score, More Is Better)
  Jetson AGX Xavier: 2861

NVIDIA TensorRT Inference

This test profile uses any existing system installation of NVIDIA TensorRT for carrying out inference benchmarks with various neural networks. Learn more via the OpenBenchmarking.org test page.
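Each result below reports throughput in images per second, along with the standard error (SE) and sample count (N) of the repeated runs. The following is an illustrative sketch of how such figures can be derived, not the actual test profile code; `run_batch` is a hypothetical stand-in for a TensorRT inference call:

```python
import time
import statistics

def measure_throughput(run_batch, batch_size, runs=3, iters=20):
    """Time several independent runs of a batched-inference loop and report
    the mean images/sec, the standard error of that mean (the 'SE +/-'
    value on the graphs), and the number of runs (the 'N =' value)."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        for _ in range(iters):
            run_batch()  # hypothetical: one TensorRT inference pass on a batch
        elapsed = time.perf_counter() - start
        samples.append(batch_size * iters / elapsed)
    mean = statistics.fmean(samples)
    se = statistics.stdev(samples) / len(samples) ** 0.5
    return mean, se, len(samples)

# Usage with a dummy workload standing in for real inference:
mean_ips, se, n = measure_throughput(lambda: sum(range(10000)), batch_size=4)
print(f"{mean_ips:.2f} images/sec, SE +/- {se:.2f}, N = {n}")
```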

NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: FP16 - Batch Size: 4 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 195.45 (SE +/- 3.17, N = 12, Min: 173.05, Max: 206.28)
  Jetson TX2: 32.30 (SE +/- 0.30, N = 3, Min: 31.91, Max: 32.88)

NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: FP16 - Batch Size: 8 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 215.68 (SE +/- 3.36, N = 5, Min: 206.9, Max: 221.59)
  Jetson TX2: 33.68 (SE +/- 0.24, N = 3, Min: 33.34, Max: 34.15)

NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 4 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 286.64 (SE +/- 3.98, N = 3, Min: 279.44, Max: 293.2)
  Jetson TX2: 17.98 (SE +/- 0.06, N = 3, Min: 17.87, Max: 18.04)

NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 8 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 341.20 (SE +/- 1.08, N = 3, Min: 339.16, Max: 342.86)
  Jetson TX2: 19.89 (SE +/- 0.05, N = 3, Min: 19.81, Max: 19.98)

NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 4 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 172.15 (SE +/- 1.25, N = 3, Min: 169.81, Max: 174.08)
  Jetson TX2: 26.49 (SE +/- 0.15, N = 3, Min: 26.3, Max: 26.78)

NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 8 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 184.43 (SE +/- 2.36, N = 3, Min: 179.89, Max: 187.82)
  Jetson TX2: 27.02 (SE +/- 0.14, N = 3, Min: 26.81, Max: 27.27)

NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 4 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 262.17 (SE +/- 0.96, N = 3, Min: 260.34, Max: 263.59)
  Jetson TX2: 14.62 (SE +/- 0.10, N = 3, Min: 14.43, Max: 14.74)

NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 8 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 296.94 (SE +/- 1.42, N = 3, Min: 294.43, Max: 299.32)
  Jetson TX2: 16.02 (SE +/- 0.06, N = 3, Min: 15.92, Max: 16.12)

NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: FP16 - Batch Size: 16 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 228.75 (SE +/- 1.63, N = 3, Min: 226.25, Max: 231.82)
  Jetson TX2: 36.44 (SE +/- 0.11, N = 3, Min: 36.28, Max: 36.64)

NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: FP16 - Batch Size: 32 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 246.76 (SE +/- 0.17, N = 3, Min: 246.55, Max: 247.1)
  Jetson TX2: 37.24 (SE +/- 0.14, N = 3, Min: 36.97, Max: 37.44)

NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 16 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 381.33 (SE +/- 10.09, N = 12, Min: 332.54, Max: 432.87)
  Jetson TX2: 20.50 (SE +/- 0.03, N = 3, Min: 20.47, Max: 20.55)

NVIDIA TensorRT Inference - Neural Network: VGG16 - Precision: INT8 - Batch Size: 32 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 449.96 (SE +/- 4.97, N = 10, Min: 434.03, Max: 475)
  Jetson TX2: 19.87 (SE +/- 0.03, N = 3, Min: 19.81, Max: 19.9)

NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 16 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 180.03 (SE +/- 11.67, N = 10, Min: 75.48, Max: 194.77)
  Jetson TX2: 28.97 (SE +/- 0.11, N = 3, Min: 28.77, Max: 29.15)

NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: FP16 - Batch Size: 32 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 201.53 (SE +/- 1.68, N = 3, Min: 198.31, Max: 204)
  Jetson TX2: 29.57 (SE +/- 0.09, N = 3, Min: 29.41, Max: 29.72)

NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 16 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 362.08 (SE +/- 0.66, N = 3, Min: 361.25, Max: 363.39)
  Jetson TX2: 16.38 (SE +/- 0.02, N = 3, Min: 16.35, Max: 16.41)

NVIDIA TensorRT Inference - Neural Network: VGG19 - Precision: INT8 - Batch Size: 32 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 390.57 (SE +/- 1.67, N = 3, Min: 387.49, Max: 393.24)
  Jetson TX2: 15.99 (SE +/- 0.03, N = 3, Min: 15.92, Max: 16.03)

NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 4 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 799 (SE +/- 97.79, N = 9, Min: 101.61, Max: 1096.96)
  Jetson TX2: 261 (SE +/- 5.89, N = 12, Min: 219.22, Max: 290.22)

NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 8 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 1247 (SE +/- 45.66, N = 12, Min: 1029.36, Max: 1506.02)
  Jetson TX2: 300 (SE +/- 7.60, N = 12, Min: 240.39, Max: 339.85)

NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: INT8 - Batch Size: 4 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 975 (SE +/- 55.83, N = 12, Min: 549.61, Max: 1119.83)
  Jetson TX2: 179 (SE +/- 2.69, N = 4, Min: 175.03, Max: 187.04)

NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: INT8 - Batch Size: 8 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 1237 (SE +/- 99.61, N = 12, Min: 369.21, Max: 1638.77)
  Jetson TX2: 222 (SE +/- 3.23, N = 3, Min: 216.23, Max: 227.35)

NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 16 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 1435 (SE +/- 89.56, N = 9, Min: 756.73, Max: 1618.24)
  Jetson TX2: 370 (SE +/- 6.40, N = 12, Min: 343.41, Max: 408.09)

NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 1900 (SE +/- 23.33, N = 3, Min: 1859.02, Max: 1939.78)
  Jetson TX2: 472 (SE +/- 6.74, N = 3, Min: 458.81, Max: 481.26)

NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: INT8 - Batch Size: 16 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 1879 (SE +/- 91.41, N = 12, Min: 1391.55, Max: 2256.73)
  Jetson TX2: 258 (SE +/- 3.45, N = 3, Min: 254.12, Max: 265.18)

NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: INT8 - Batch Size: 32 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 2666 (SE +/- 248.85, N = 9, Min: 734.1, Max: 3136.33)
  Jetson TX2: 307 (SE +/- 0.88, N = 3, Min: 305.06, Max: 308.11)

NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 4 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 542.80 (SE +/- 0.39, N = 3, Min: 542.02, Max: 543.2)
  Jetson TX2: 93.61 (SE +/- 1.46, N = 3, Min: 91.46, Max: 96.39)

NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 8 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 582.36 (SE +/- 0.24, N = 3, Min: 582.04, Max: 582.82)
  Jetson TX2: 99.05 (SE +/- 1.23, N = 3, Min: 97.75, Max: 101.5)

NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 4 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 865.46 (SE +/- 14.20, N = 3, Min: 837.1, Max: 881.04)
  Jetson TX2: 50.39 (SE +/- 0.64, N = 3, Min: 49.13, Max: 51.21)

NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 8 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 944.46 (SE +/- 40.28, N = 12, Min: 529.43, Max: 1047.41)
  Jetson TX2: 51.07 (SE +/- 0.54, N = 3, Min: 50.27, Max: 52.1)

NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 4 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 546 (SE +/- 96.56, N = 9, Min: 66.78, Max: 789.2)
  Jetson TX2: 202 (SE +/- 0.88, N = 3, Min: 200.64, Max: 203.41)

NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 8 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 863 (SE +/- 14.25, N = 12, Min: 779.18, Max: 909.77)
  Jetson TX2: 198 (SE +/- 3.70, N = 3, Min: 191.3, Max: 204.1)

NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 4 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 652 (SE +/- 140.60, N = 12, Min: 103.13, Max: 1131.48)
  Jetson TX2: 114 (SE +/- 2.00, N = 3, Min: 109.96, Max: 116.36)

NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 8 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 1049 (SE +/- 121.56, N = 10, Min: 126.9, Max: 1368.3)
  Jetson TX2: 117 (SE +/- 2.12, N = 3, Min: 114.11, Max: 121.17)

NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 4 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 219.08 (SE +/- 3.18, N = 3, Min: 213.19, Max: 224.09)
  Jetson TX2: 35.60 (SE +/- 0.44, N = 3, Min: 34.73, Max: 36.15)

NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 8 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 234.84 (SE +/- 0.36, N = 3, Min: 234.16, Max: 235.39)
  Jetson TX2: 36.71 (SE +/- 0.67, N = 9, Min: 33, Max: 39.12)

NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 4 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 350.28 (SE +/- 5.48, N = 3, Min: 340.79, Max: 359.79)
  Jetson TX2: 17.97 (SE +/- 0.19, N = 3, Min: 17.62, Max: 18.27)

NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 8 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 407.01 (SE +/- 6.98, N = 3, Min: 393.46, Max: 416.7)
  Jetson TX2: 19.48 (SE +/- 0.27, N = 3, Min: 19.1, Max: 20.01)

NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 16 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 593 (SE +/- 7.03, N = 3, Min: 579.4, Max: 601.97)
  Jetson TX2: 106 (SE +/- 0.59, N = 3, Min: 105.09, Max: 106.97)

NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: FP16 - Batch Size: 32 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 613 (SE +/- 9.12, N = 3, Min: 603.57, Max: 631.29)
  Jetson TX2: 110 (SE +/- 1.29, N = 3, Min: 107.91, Max: 112.02)

NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 16 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 1106.13 (SE +/- 11.53, N = 12, Min: 998.56, Max: 1134.57)
  Jetson TX2: 57.18 (SE +/- 0.10, N = 3, Min: 57.06, Max: 57.38)

NVIDIA TensorRT Inference - Neural Network: ResNet50 - Precision: INT8 - Batch Size: 32 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 1184.50 (SE +/- 6.54, N = 3, Min: 1174.68, Max: 1196.9)
  Jetson TX2: 59.45 (SE +/- 0.19, N = 3, Min: 59.1, Max: 59.73)

NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 16 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 858 (SE +/- 55.00, N = 9, Min: 433.13, Max: 974.38)
  Jetson TX2: 218 (SE +/- 3.60, N = 3, Min: 212.49, Max: 224.61)

NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: FP16 - Batch Size: 32 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 956 (SE +/- 14.46, N = 12, Min: 887.88, Max: 1006.48)
  Jetson TX2: 230 (SE +/- 3.59, N = 3, Min: 223.99, Max: 236.41)

NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 16 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 1340 (SE +/- 152.29, N = 9, Min: 131.59, Max: 1571.88)
  Jetson TX2: 125 (SE +/- 1.16, N = 3, Min: 123.03, Max: 127.01)

NVIDIA TensorRT Inference - Neural Network: GoogleNet - Precision: INT8 - Batch Size: 32 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 1622 (SE +/- 5.04, N = 3, Min: 1614.41, Max: 1631.75)
  Jetson TX2: 130 (SE +/- 0.91, N = 3, Min: 128.46, Max: 131.25)

NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 16 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 224.60 (SE +/- 15.50, N = 9, Min: 100.72, Max: 242.03)
  Jetson TX2: 40.19 (SE +/- 0.17, N = 3, Min: 40.02, Max: 40.53)

NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: FP16 - Batch Size: 32 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 253.34 (SE +/- 2.84, N = 3, Min: 248.76, Max: 258.53)
  Jetson TX2: 41.87 (SE +/- 0.14, N = 3, Min: 41.61, Max: 42.1)

NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 16 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 445.22 (SE +/- 4.04, N = 3, Min: 437.14, Max: 449.46)
  Jetson TX2: 20.77 (SE +/- 0.09, N = 3, Min: 20.62, Max: 20.93)

NVIDIA TensorRT Inference - Neural Network: ResNet152 - Precision: INT8 - Batch Size: 32 (Images Per Second, More Is Better)
  Jetson AGX Xavier: 485.22 (SE +/- 1.47, N = 3, Min: 483.5, Max: 488.15)
  Jetson TX2: 22.05 (SE +/- 0.03, N = 3, Min: 22.02, Max: 22.12)