* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly. ** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform. *** Test profile page view reporting began March 2021. Data current as of 22 September 2022.
system/tensorrt-inference-1.0.0 [View Source] Sun, 23 Dec 2018 18:52:47 GMT Initial commit of NVIDIA TensorRT benchmark from the Jetson Xavier reference guide.
OpenBenchmarking.org metrics for this test profile configuration based on 20 public results since 23 December 2018 with the latest data as of 18 August 2019.
Additional benchmark metrics will come after OpenBenchmarking.org has collected a sufficient data-set.
Based on OpenBenchmarking.org data, the selected test / test configuration (NVIDIA TensorRT Inference - Neural Network: AlexNet - Precision: FP16 - Batch Size: 32 - DLA Cores: Disabled) has an average run-time of 4 minutes. By default this test profile is set to run at least 3 times but may increase if the standard deviation exceeds pre-defined defaults or other calculations deem additional runs necessary for greater statistical accuracy of the result.
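The adaptive run-count behaviour described above can be sketched as follows. This is a simplified illustration, not the Phoronix Test Suite's actual implementation; the maximum run count and the standard-deviation threshold used here are assumed values for demonstration only.

```python
import statistics

# Illustrative sketch of an adaptive run-count policy: run at least
# MIN_RUNS times, then keep running while the relative standard deviation
# of the collected times exceeds a threshold. The threshold and cap below
# are assumptions, not the suite's actual defaults.
MIN_RUNS = 3
MAX_RUNS = 10               # assumed upper bound on dynamic runs
STDDEV_THRESHOLD_PCT = 3.5  # assumed threshold, for illustration only

def needs_more_runs(times):
    """Decide whether another benchmark run is warranted."""
    if len(times) < MIN_RUNS:
        return True
    if len(times) >= MAX_RUNS:
        return False
    mean = statistics.mean(times)
    rel_stddev_pct = statistics.stdev(times) / mean * 100
    return rel_stddev_pct > STDDEV_THRESHOLD_PCT

# Three tight results (well under the threshold) -> no further runs
print(needs_more_runs([240.1, 241.0, 239.5]))  # -> False
# Widely scattered results -> keep running
print(needs_more_runs([100.0, 200.0, 150.0]))  # -> True
```

With an average run-time of 4 minutes per run, a stable result like the one above completes after the minimum 3 runs, while noisy results trigger additional runs up to the cap.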
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.9%.
Tested CPU Architectures
This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.