NVIDIA TensorRT Inference

This test profile uses any existing system installation of NVIDIA TensorRT for carrying out inference benchmarks with various neural networks.

This test profile requires that TensorRT already be installed to /usr/src/tensorrt.

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark system/tensorrt-inference.
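The prerequisite path and the run command can be checked from the shell before launching the benchmark. The following is an illustrative sketch only (the path and command come from the text above; this helper script is not part of the test profile itself):

```shell
#!/bin/sh
# Sanity-check the prerequisites named above before launching the benchmark.
TRT_DIR=/usr/src/tensorrt
CMD="phoronix-test-suite benchmark system/tensorrt-inference"

if [ -d "$TRT_DIR" ]; then
    echo "TensorRT installation found at $TRT_DIR"
else
    echo "TensorRT not found at $TRT_DIR -- install it there first"
fi

echo "To run the test: $CMD"
```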

Use with caution: this test profile is currently marked Experimental.

Test Created

23 December 2018

Test Maintainer

Michael Larabel 

Test Type


Average Install Time

2 Seconds

Average Run Time

8 Minutes, 32 Seconds


5k+ Downloads

Supported Platforms

[Chart: NVIDIA TensorRT Inference (system/tensorrt-inference) popularity statistics on OpenBenchmarking.org, December 2018 through May 2024, tracking public result uploads*, reported installs**, reported test completions**, and test profile page views***.]
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021.
Data updated weekly as of 20 June 2024.
Neural Network option popularity: AlexNet 32.3%, GoogleNet 24.4%, ResNet50 23.0%, ResNet152 10.1%, VGG19 5.1%, VGG16 5.1%.
Precision option popularity: FP16 65.0%, INT8 35.0%.
Batch Size option popularity: 4 47.9%, 32 47.0%, 8 5.1%.
(Source: OpenBenchmarking.org)

Revision History

system/tensorrt-inference-1.0.0   [View Source]   Sun, 23 Dec 2018 18:52:47 GMT
Initial commit of NVIDIA TensorRT benchmark from the Jetson Xavier reference guide.

Performance Metrics

Analyze Test Configuration:

NVIDIA TensorRT Inference

Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled

OpenBenchmarking.org metrics for this test profile configuration based on 20 public results since 23 December 2018 with the latest data as of 18 August 2019.

Additional benchmark metrics will come after OpenBenchmarking.org has collected a sufficient data-set.

[Chart: OpenBenchmarking.org distribution of public results for Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled. The 20 results range from 149 to 17,017 images per second.]

Based on OpenBenchmarking.org data, the selected test / test configuration (NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled) has an average run-time of 4 minutes. By default this test profile is set to run at least 3 times but may increase if the standard deviation exceeds pre-defined defaults or other calculations deem additional runs necessary for greater statistical accuracy of the result.
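The stddev-driven run extension described above can be illustrated with a small shell sketch. The sample values and the 3.5% threshold below are assumptions chosen for demonstration, not the Phoronix Test Suite's actual defaults or implementation:

```shell
#!/bin/sh
# Illustrative sketch of stddev-driven run extension (the sample values and
# the 3.5% threshold are assumptions; PTS's real logic differs in detail).
results="149 151 150"   # hypothetical images/sec samples from 3 runs

# Population mean and standard deviation of the samples, via awk.
mean=$(echo "$results" | awk '{s=0; for(i=1;i<=NF;i++) s+=$i; print s/NF}')
sd=$(echo "$results" | awk -v m="$mean" \
    '{s=0; for(i=1;i<=NF;i++) s+=($i-m)^2; print sqrt(s/NF)}')
rel=$(awk -v m="$mean" -v s="$sd" 'BEGIN {print (s/m)*100}')

echo "mean=$mean stddev=$sd relative=$rel%"

# Extend the run count only if relative deviation exceeds the threshold.
if awk -v r="$rel" 'BEGIN {exit !(r > 3.5)}'; then
    echo "deviation above threshold: additional runs needed"
else
    echo "deviation within threshold: 3 runs suffice"
fi
```

With these samples the relative deviation is well under one percent, so the minimum of three runs would suffice.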

[Chart: Minutes, Time Required To Complete Benchmark (Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled). Min: 1 / Avg: 3.15 / Max: 10.]

Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.9%.

[Chart: Percent (fewer is better), Average Deviation Between Runs (Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled). Min: 0 / Avg: 0.89 / Max: 4.]

Tested CPU Architectures

This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.

CPU Architecture         Kernel Identifier   Verified On
Intel / AMD x86 64-bit   x86_64              (Many Processors)
ARMv8 64-bit             aarch64             ARMv8 rev 0 8-Core, ARMv8 rev 1 2-Core, ARMv8 rev 1 4-Core, ARMv8 rev 3 4-Core