ONNX Runtime
ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo.
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark onnx.
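As context for what the profile measures, below is a minimal Python sketch that times ONNX Runtime CPU inference in the way the reported metrics suggest (inferences per second and per-inference time cost). It assumes the onnxruntime and numpy packages are installed and that model.onnx is a locally downloaded ONNX Model Zoo model with a float32 input; the file name and input handling are illustrative, not taken from the test profile itself:

    import time
    import numpy as np
    import onnxruntime as ort

    # Load a model on the CPU execution provider, matching the CPU device option.
    # "model.onnx" is a placeholder for a downloaded ONNX Model Zoo file.
    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    # Substitute 1 for any symbolic/dynamic dimensions in this illustrative input.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    x = np.random.rand(*shape).astype(np.float32)

    sess.run(None, {inp.name: x})  # warm-up run
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {inp.name: x})
    elapsed = time.perf_counter() - start
    print(f"Inference Time Cost: {1000 * elapsed / runs:.2f} ms")
    print(f"Inferences Per Second: {runs / elapsed:.2f}")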
Test Created: 16 January 2021
Last Updated: 21 August 2024
Test Type: System
Average Install Time: 14 Minutes, 39 Seconds
Average Run Time: 5 Minutes, 18 Seconds
Test Dependencies: Python + Git + C/C++ Compiler Toolchain + CMake
Accolades: 60k+ Downloads
ONNX Runtime Popularity Statistics (pts/onnx), OpenBenchmarking.org: chart of public result uploads*, reported installs**, reported test completions**, test profile page views***, and OpenBenchmarking.org events by month, January 2021 through November 2024.
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly. ** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform. *** Test profile page view reporting began March 2021. Data updated weekly as of 22 November 2024.
Model Option Popularity (OpenBenchmarking.org): ResNet101_DUC_HDC-12 8.8%, yolov4 8.7%, fcn-resnet101-11 8.6%, ResNet50 v1-12-int8 8.6%, T5 Encoder 8.5%, super-resolution-10 8.5%, CaffeNet 12-int8 8.5%, ZFNet-512 8.2%, ArcFace ResNet-100 8.0%, bertsquad-12 7.9%, GPT-2 7.9%, Faster R-CNN R-50-FPN-int8 7.7%
Executor Option Popularity (OpenBenchmarking.org): Standard 51.1%, Parallel 48.9%
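The Standard and Parallel executor options correspond to ONNX Runtime's sequential and parallel execution modes (per the pts/onnx-1.5.0 changelog entry below adding the standard/sequential and parallel executor option). A minimal sketch of selecting a mode through the Python API, with model.onnx as a placeholder file name:

    import onnxruntime as ort

    so = ort.SessionOptions()
    # "Standard" corresponds to sequential graph execution; "Parallel"
    # lets independent graph nodes execute concurrently.
    so.execution_mode = ort.ExecutionMode.ORT_PARALLEL  # or ORT_SEQUENTIAL
    sess = ort.InferenceSession("model.onnx", sess_options=so,
                                providers=["CPUExecutionProvider"])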
Revision History
pts/onnx-1.19.0 [View Source] Wed, 21 Aug 2024 11:06:46 GMT Update against ONNX Runtime 1.19 upstream.
pts/onnx-1.17.0 [View Source] Fri, 02 Feb 2024 20:20:26 GMT Update against ONNX Runtime 1.17, update download links.
pts/onnx-1.6.0 [View Source] Sat, 11 Feb 2023 07:19:48 GMT Update against ONNX Runtime 1.14 upstream.
pts/onnx-1.5.0 [View Source] Wed, 30 Mar 2022 14:24:29 GMT Add standard/sequential and parallel executor option.
pts/onnx-1.4.0 [View Source] Sat, 26 Mar 2022 18:00:24 GMT Update against ONNX 1.11 upstream, switch to parallel executor, add some new models too.
pts/onnx-1.3.0 [View Source] Fri, 03 Dec 2021 19:38:42 GMT Update against upstream ONNX-Runtime 1.10.
pts/onnx-1.2.1 [View Source] Sat, 30 Oct 2021 05:02:16 GMT Add cmake as necessary external dependency.
pts/onnx-1.2.0 [View Source] Fri, 29 Oct 2021 17:00:07 GMT Update against ONNX 1.9.1 upstream. Changes based on https://github.com/phoronix-test-suite/phoronix-test-suite/pull/560 while also needing to adjust the XML options for removal of bertsquad. Closes: https://github.com/phoronix-test-suite/phoronix-test-suite/pull/560
pts/onnx-1.1.0 [View Source] Mon, 23 Aug 2021 19:03:17 GMT Update against ONNX 1.8.2 upstream due to the prior version having build problems on newer distributions.
pts/onnx-1.0.1 [View Source] Sun, 17 Jan 2021 08:51:00 GMT Increase run time limit to help lower deviation on laptops.
pts/onnx-1.0.0 [View Source] Sat, 16 Jan 2021 20:09:33 GMT Initial commit of Microsoft ONNX Runtime.
Performance Metrics
Analyze Test Configuration: public results are available for every combination of model, device, and executor across the test profile versions. For pts/onnx-1.19.x and pts/onnx-1.17.x, Inferences Per Second and Inference Time Cost (ms) are reported for each model (GPT-2, yolov4, bertsquad-12, fcn-resnet101-11, super-resolution-10, ArcFace ResNet-100, T5 Encoder, ZFNet-512, CaffeNet 12-int8, ResNet50 v1-12-int8, Faster R-CNN R-50-FPN-int8, and ResNet101_DUC_HDC-12) on the CPU device with the Standard or Parallel executor. pts/onnx-1.6.x reports the same metrics for its model set. pts/onnx-1.5.x and earlier report Inferences Per Minute; pts/onnx-1.3.x and pts/onnx-1.2.x cover super-resolution-10, shufflenet-v2-10, yolov4, and fcn-resnet101-11 on the CPU device, while pts/onnx-1.1.x and pts/onnx-1.0.x use an OpenMP CPU device and add bertsquad-10.
ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard: OpenBenchmarking.org metrics for this test profile configuration are based on 121 public results since 21 August 2024, with the latest data as of 13 November 2024.
Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary vastly, so this overview is intended to offer only general guidance on performance expectations.
(Table of component percentile ranks: Component | Percentile Rank | # Compatible Public Results | Inference Time Cost (ms) (Average))
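As an illustration of how a percentile rank can be computed against a pool of public results, here is a generic sketch; it is not necessarily OpenBenchmarking.org's exact methodology, and the sample values are made up:

    def percentile_rank(value, results, lower_is_better=True):
        """Percent of public results that `value` outperforms."""
        if lower_is_better:
            beaten = sum(1 for r in results if value < r)
        else:
            beaten = sum(1 for r in results if value > r)
        return 100.0 * beaten / len(results)

    # Example: a 450 ms inference time against three public results.
    print(percentile_rank(450.0, [111.0, 450.0, 3501.0]))  # ~33.3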
OpenBenchmarking.org Distribution Of Public Results - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard: histogram of 121 results, Inference Time Cost (ms) ranging from 111 to 3501.
Based on OpenBenchmarking.org data, the selected test / test configuration (ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard) has an average run-time of 7 minutes. By default this test profile is set to run at least 3 times, but that count may increase if the standard deviation exceeds pre-defined defaults or other calculations deem additional runs necessary for greater statistical accuracy of the result.
OpenBenchmarking.org, Minutes - Time Required To Complete Benchmark - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard: Min: 3 / Avg: 7.07 / Max: 16.
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.3%.
OpenBenchmarking.org, Percent, Fewer Is Better - Average Deviation Between Runs - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard: Min: 0 / Avg: 0.3 / Max: 2.
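The dynamic run count described above can be sketched as: keep repeating the benchmark until the relative standard deviation of the collected samples falls below a threshold. The 3.5% threshold, the run caps, and the run_benchmark_once stub below are illustrative assumptions, not the Phoronix Test Suite's actual defaults:

    import random
    import statistics

    def run_benchmark_once():
        # Stand-in for one timed benchmark run in seconds; hypothetical.
        return random.gauss(420.0, 2.0)

    def needs_more_runs(samples, threshold_pct=3.5, min_runs=3, max_runs=15):
        """Return True while another benchmark run is warranted."""
        if len(samples) < min_runs:
            return True
        if len(samples) >= max_runs:
            return False
        rsd = 100.0 * statistics.stdev(samples) / statistics.mean(samples)
        return rsd > threshold_pct

    runs = []
    while needs_more_runs(runs):
        runs.append(run_benchmark_once())
    print(f"{len(runs)} runs, avg {statistics.mean(runs):.1f} s")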
Notable Instruction Set Usage
Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.
Instruction Set | Support | Instructions Detected
SSE2 (SSE2)
Used by default on supported hardware.
MOVDQA PUNPCKLQDQ MOVDQU CVTSI2SD DIVSD MOVAPD COMISD UNPCKLPD MULSD CVTTSD2SI ADDSD SUBSD MOVUPD MOVD PUNPCKHQDQ PSUBQ PSHUFD PSRLDQ UCOMISD CVTSS2SD PADDQ ANDPD CVTSD2SS DIVPD MULPD SUBPD PMULUDQ ADDPD SQRTSD CVTPS2PD MINPD MAXPD CVTTPD2DQ CVTDQ2PD CMPLTPD PSHUFLW MAXSD MINSD XORPD MOVLPD SQRTPD CVTTPS2DQ CVTDQ2PS ANDNPD ORPD CVTPD2PS CMPLESD UNPCKHPD CMPLEPD SHUFPD MOVHPD CVTPS2DQ
Advanced Vector Extensions (AVX)
Used by default on supported hardware. Found on Intel processors since Sandy Bridge (2011). Found on AMD processors since Bulldozer (2011).
VBROADCASTSD VINSERTF128 VZEROUPPER VBROADCASTSS VMASKMOVPS VEXTRACTF128 VZEROALL VPERM2F128 VPERMILPS
Advanced Vector Extensions 2 (AVX2)
Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Excavator (2016).
VPBROADCASTD VPMASKMOVD VEXTRACTI128 VINSERTI128 VPERM2I128 VPERMD VPBROADCASTW VPERMQ VPBROADCASTQ VPMASKMOVQ VPBROADCASTB VPSLLVD VPSRAVD VPSRLVD VPSLLVQ VPSRLVQ VPERMPD VGATHERQPD VPGATHERDQ VPGATHERQQ
AVX Vector Neural Network Instructions (AVX-VNNI)
Used by default on supported hardware.
VPDPBUSDS
FMA (FMA)
Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Bulldozer (2011).
VFMADD231PD VFMADD213PD VFMADD231PS VFMADD213PS VFMADD132SS VFMADD132PS VFMADD132SD VFMADD213SS VFMADD132PD VFNMADD231PD VFMADD231SS VFNMADD213PS VFNMADD132PS VFNMADD213SS VFNMADD132SS VFNMADD231PS VFNMADD132PD VFNMADD132SD VFMSUB132SS VFMSUB132SD VFMADD231SD VFMADD213SD VFNMADD213PD VFNMADD213SD VFNMADD231SS VFNMADD231SD
The test / benchmark does honor compiler flag changes.
Last automated analysis: 10 May 2021
This test profile binary relies on the shared libraries libonnxruntime.so.1.6.0, libdl.so.2, libpthread.so.0, libc.so.6, libgomp.so.1, and libm.so.6.
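To check which of the extensions listed above a local CPU exposes, one option on Linux is to read the flags line of /proc/cpuinfo. A minimal sketch follows; the flag names are the Linux kernel's conventions, and avx_vnni is only reported on recent kernels and CPUs:

    def cpu_flags(path="/proc/cpuinfo"):
        """Collect the CPU feature flags reported by the Linux kernel."""
        with open(path) as f:
            for line in f:
                if line.lower().startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    for ext in ("sse2", "avx", "avx2", "fma", "avx_vnni"):
        print(f"{ext}: {'yes' if ext in flags else 'no'}")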
Tested CPU Architectures
This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads have occurred, helping to determine whether a given test is compatible with various alternative CPU architectures.
CPU Architecture | Kernel Identifier | Verified On
Intel / AMD x86 64-bit | x86_64 | (Many Processors)
ARMv8 64-bit | aarch64 | ARMv8 Neoverse-V2, ARMv8 Neoverse-V2 72-Core, AmpereOne 192-Core
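The Kernel Identifier column matches what Python reports as the machine type, which offers a quick way to check which row applies to a given system:

    import platform

    # Prints the kernel's machine identifier, e.g. 'x86_64' or 'aarch64',
    # matching the Kernel Identifier column above.
    print(platform.machine())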
Recent Test Results
Featured Processor Comparison
1 System - 323 Benchmark Results
AMD Ryzen Threadripper 7960X 24-Cores - Gigabyte TRX50 AERO D - AMD Device 14a4
Ubuntu 24.04 - 6.8.0-48-generic - GNOME Shell 46.0
Featured Processor Comparison
2 Systems - 541 Benchmark Results
ARMv8 Cortex-A76 - Raspberry Pi 5 Model B Rev 1.0 - Broadcom BCM2712
Ubuntu 24.04 - 6.8.0-1013-raspi - KDE Plasma 5.27.11
2 Systems - 525 Benchmark Results
ARMv8 Cortex-A76 - Raspberry Pi 5 Model B Rev 1.0 - Broadcom BCM2712
Ubuntu 24.04 - 6.8.0-1012-raspi - KDE Plasma 5.27.11
Featured Kernel Comparison
1 System - 27 Benchmark Results
AMD Ryzen 7 9800X3D 8-Core - 32GB - 1789GB
Ubuntu 20.04 - 4.4.0-22621-Microsoft - X Server
1 System - 342 Benchmark Results
Intel Core i9-12900K - ASUS PRIME Z790-V AX - Intel Raptor Lake-S PCH
Ubuntu 24.04 - 6.8.0-47-generic - GNOME Shell 46.0
2 Systems - 147 Benchmark Results
ARMv8 Neoverse-V2 - Pegatron JIMBO P4352 - 1 x 480GB LPDDR5-6400MT
Ubuntu 24.04 - 6.8.0-45-generic-64k - NVIDIA
1 System - 147 Benchmark Results
ARMv8 Neoverse-V2 - Pegatron JIMBO P4352 - 1 x 480GB LPDDR5-6400MT
Ubuntu 24.04 - 6.8.0-45-generic-64k - NVIDIA
6 Systems - 344 Benchmark Results
Intel Core i7-1280P - MSI Prestige 14Evo A12M MS-14C6 - Intel Alder Lake PCH
Ubuntu 24.10 - 6.11.0-rc6-phx - GNOME Shell 47.0
Most Popular Test Results
5 Systems - 60 Benchmark Results
AMD Ryzen Threadripper 7980X 64-Cores - System76 Thelio Major - AMD Device 14a4
Ubuntu 24.10 - 6.8.0-31-generic - GNOME Shell
Featured Processor Comparison
AMD Ryzen 9 9950X 16-Core - ASUS ROG STRIX X670E-E GAMING WIFI - AMD Device 14d8
Ubuntu 24.04 - 6.10.0-phx - GNOME Shell 46.0
5 Systems - 58 Benchmark Results
Intel Core i9-14900K - ASUS PRIME Z790-P WIFI - Intel Raptor Lake-S PCH
Ubuntu 24.04 - 6.10.0-061000rc6daily20240706-generic - GNOME Shell 46.0
5 Systems - 63 Benchmark Results
AMD Ryzen 9 9950X 16-Core - ASUS ROG STRIX X670E-E GAMING WIFI - AMD Device 14d8
Ubuntu 24.04 - 6.10.0-phx - GNOME Shell 46.0
3 Systems - 86 Benchmark Results
Intel Core i7-1185G7 - Dell XPS 13 9310 0DXP1F - Intel Tiger Lake-LP
Ubuntu 24.04 - 6.10.0-061000rc4daily20240621-generic - GNOME Shell 46.0
4 Systems - 63 Benchmark Results
AMD Ryzen AI 9 HX 370 - ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 - AMD Device 1507
Ubuntu 24.04 - 6.10.0-phx - GNOME Shell 46.0
4 Systems - 68 Benchmark Results
AMD Ryzen AI 9 365 - ASUS Zenbook S 16 UM5606WA_UM5606WA UM5606WA v1.0 - AMD Device 1507
Ubuntu 24.04 - 6.10.0-phx - GNOME Shell 46.0
2 Systems - 81 Benchmark Results
ARMv8 Neoverse-V2 - Amazon EC2 r8g.48xlarge - Amazon Device 0200
Ubuntu 24.04 - 6.8.0-41-generic-64k - GCC 13.2.0
2 Systems - 137 Benchmark Results
AMD Ryzen 5 9600X 6-Core - ASUS ROG STRIX X670E-E GAMING WIFI - AMD Device 14d8
Ubuntu 24.04 - 6.10.0-phx - GNOME Shell 46.0
3 Systems - 95 Benchmark Results
2 x AMD EPYC 9124 16-Core - AMD Titanite_4G - AMD Device 14a4
Ubuntu 24.04 - 6.8.0-22-generic - GCC 13.2.0
3 Systems - 69 Benchmark Results
2 x AMD EPYC 9124 16-Core - AMD Titanite_4G - AMD Device 14a4
Ubuntu 24.04 - 6.8.0-22-generic - GCC 13.2.0
7 Systems - 440 Benchmark Results
Intel Core Ultra 7 256V - ASUS Zenbook S 14 UX5406SA_UX5406SA UX5406SA v1.0 - Intel Device a87f
Ubuntu 24.10 - 6.12.0-rc3-phx-aipt - GNOME Shell 47.0
2 Systems - 81 Benchmark Results
ARMv8 Neoverse-V2 - Amazon EC2 r8g.48xlarge - Amazon Device 0200
Ubuntu 24.04 - 6.8.0-41-generic-64k - GCC 13.2.0
2 Systems - 81 Benchmark Results
ARMv8 Neoverse-V2 - Amazon EC2 r8g.48xlarge - Amazon Device 0200
Ubuntu 24.04 - 6.8.0-41-generic-64k - GCC 13.2.0
2 Systems - 81 Benchmark Results
ARMv8 Neoverse-V2 - Amazon EC2 r8g.48xlarge - Amazon Device 0200
Ubuntu 24.04 - 6.8.0-41-generic-64k - GCC 13.2.0