ONNX Runtime
ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo.
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark onnx.
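As a rough illustration of what is being measured (the test profile itself builds ONNX Runtime from source and drives it with its own harness), the following minimal Python sketch times CPU inference of an ONNX Model Zoo model with the onnxruntime API. The model filename and iteration count are placeholders, not values used by the profile.

import time
import numpy as np
import onnxruntime as ort

# Hypothetical model path; any ONNX Model Zoo download would work here.
MODEL = "super-resolution-10.onnx"

sess = ort.InferenceSession(MODEL, providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
# Build a random input matching the declared shape, substituting 1 for any dynamic dimension.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.random.rand(*shape).astype(np.float32)

sess.run(None, {inp.name: x})           # warm-up run

iterations = 100                        # illustrative iteration count
start = time.perf_counter()
for _ in range(iterations):
    sess.run(None, {inp.name: x})
elapsed = time.perf_counter() - start

print(f"Inferences Per Second: {iterations / elapsed:.2f}")
print(f"Inference Time Cost: {1000 * elapsed / iterations:.3f} ms")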
Test Created 16 January 2021
Last Updated 2 February 2024
Test Type System
Average Install Time 14 Minutes, 54 Seconds
Average Run Time 5 Minutes, 18 Seconds
Test Dependencies Python + Git + C/C++ Compiler Toolchain + CMake
Accolades: 40k+ Downloads
ONNX Runtime Popularity Statistics (pts/onnx, OpenBenchmarking.org): monthly public result uploads*, reported installs**, reported test completions**, test profile page views***, and OpenBenchmarking.org events, January 2021 through April 2024.
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021.
Data updated weekly as of 13 April 2024.
Model Option Popularity (OpenBenchmarking.org): T5 Encoder 10.4%, GPT-2 9.8%, ResNet50 v1-12-int8 10.0%, bertsquad-12 10.0%, yolov4 10.6%, ArcFace ResNet-100 9.8%, CaffeNet 12-int8 10.4%, Faster R-CNN R-50-FPN-int8 8.9%, fcn-resnet101-11 10.4%, super-resolution-10 9.6%.
Executor Option Popularity (OpenBenchmarking.org): Standard 53.3%, Parallel 46.7%.
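The Standard and Parallel executor options correspond to ONNX Runtime's sequential and parallel execution modes. A minimal sketch of selecting them through the onnxruntime Python API follows; the thread counts and model path are illustrative only, not the settings used by this test profile.

import onnxruntime as ort

opts = ort.SessionOptions()
# "Standard" maps to the sequential executor; "Parallel" runs independent
# graph branches concurrently via the parallel executor.
opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL   # or ort.ExecutionMode.ORT_SEQUENTIAL
opts.inter_op_num_threads = 4   # threads across independent nodes (parallel executor only)
opts.intra_op_num_threads = 8   # threads within a single operator

# "model.onnx" is a placeholder for any of the Model Zoo models listed above.
sess = ort.InferenceSession("model.onnx", sess_options=opts,
                            providers=["CPUExecutionProvider"])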
Revision History
pts/onnx-1.17.0 [View Source ] Fri, 02 Feb 2024 20:20:26 GMT Update against ONNX Runtime 1.17, update download links.
pts/onnx-1.6.0 [View Source ] Sat, 11 Feb 2023 07:19:48 GMT Update against ONNX Runtime 1.14 upstream.
pts/onnx-1.5.0 [View Source ] Wed, 30 Mar 2022 14:24:29 GMT Add standard/sequential and parallel executor option.
pts/onnx-1.4.0 [View Source ] Sat, 26 Mar 2022 18:00:24 GMT Update against ONNX 1.11 upstream, switch to parallel executor, add some new models too.
pts/onnx-1.3.0 [View Source ] Fri, 03 Dec 2021 19:38:42 GMT Update against upstream ONNX-Runtime 1.10.
pts/onnx-1.2.1 [View Source ] Sat, 30 Oct 2021 05:02:16 GMT Add cmake as necessary external dependency.
pts/onnx-1.2.0 [View Source ] Fri, 29 Oct 2021 17:00:07 GMT Update against ONNX 1.9.1 upstream. Changes based on https://github.com/phoronix-test-suite/phoronix-test-suite/pull/560 while also needing to adjust the XML options for removal of bertsquad. Closes: https://github.com/phoronix-test-suite/phoronix-test-suite/pull/560
pts/onnx-1.1.0 [View Source ] Mon, 23 Aug 2021 19:03:17 GMT Update against ONNX 1.8.2 upstream due to prior version now having build problems on newer distributions.
pts/onnx-1.0.1 [View Source ] Sun, 17 Jan 2021 08:51:00 GMT Increase run time limit to help lower deviation on laptops.
pts/onnx-1.0.0 [View Source ] Sat, 16 Jan 2021 20:09:33 GMT Initial commit of Microsoft ONNX Runtime.
Performance Metrics
Analyze Test Configuration: results are available for each model option (GPT-2, T5 Encoder, bertsquad-12 / bertsquad-10, yolov4, ArcFace ResNet-100, CaffeNet 12-int8, Faster R-CNN R-50-FPN-int8, fcn-resnet101-11, ResNet50 v1-12-int8, super-resolution-10, and shufflenet-v2-10) on the CPU device. pts/onnx-1.6.x and 1.17.x report Inferences Per Second and Inference Time Cost (ms) with Standard and Parallel executors; pts/onnx-1.5.x reports Inferences Per Minute with Standard and Parallel executors; pts/onnx-1.4.x and earlier report Inferences Per Minute, with pts/onnx-1.1.x and 1.0.x using an OpenMP CPU device.
ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU
OpenBenchmarking.org metrics for this test profile configuration based on 601 public results since 16 January 2021 with the latest data as of 31 January 2022.
Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind, particularly in the Linux/open-source space, that OS configurations can vary widely, so this overview is intended only as general guidance on performance expectations.
Component
Percentile Rank
# Compatible Public Results
Inferences Per Minute (Average)
Distribution Of Public Results - Model: yolov4 - Device: OpenMP CPU (OpenBenchmarking.org): 601 results ranging from 18 to 661 Inferences Per Minute.
Based on OpenBenchmarking.org data, the selected test / test configuration (ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU) has an average run-time of 7 minutes. By default this test profile is set to run at least 3 times, but the number of runs may increase if the standard deviation exceeds pre-defined limits or other calculations deem additional runs necessary for greater statistical accuracy of the result.
Time Required To Complete Benchmark (Minutes) - Model: yolov4 - Device: OpenMP CPU (OpenBenchmarking.org): Min: 3 / Avg: 6.71 / Max: 24.
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.7%.
Average Deviation Between Runs (Percent, Fewer Is Better) - Model: yolov4 - Device: OpenMP CPU (OpenBenchmarking.org): Min: 0 / Avg: 0.68 / Max: 7.
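Assuming the reported deviation is the sample standard deviation expressed as a percentage of the mean across a configuration's runs, a small sketch of that calculation (with hypothetical run values) looks like this:

import statistics

# Hypothetical inferences-per-minute results from three runs of one configuration.
runs = [318.2, 320.5, 316.9]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)           # sample standard deviation
deviation_pct = 100.0 * stdev / mean     # deviation expressed as a percentage of the mean

print(f"Average: {mean:.1f} IPM, deviation between runs: {deviation_pct:.2f}%")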
Does It Scale Well With Increasing Cores?
Yes: based on the automated analysis of the collected public benchmark data, this test / test configuration does generally scale well with increasing CPU core counts. The data is based on publicly available results for this test configuration, separated by vendor, with each result divided by the reference CPU clock speed, grouped by matching physical CPU core count, and normalized against the smallest core count tested from each vendor, for each CPU having a sufficient number of test samples and statistically significant data.
ONNX Runtime CPU Core Scaling - Relative Core Scaling To Base - Model: yolov4 - Device: OpenMP CPU (OpenBenchmarking.org): AMD and Intel results at 4, 6, 8, 12, 16, 32, 48, and 64 cores.
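A sketch of the normalization described above, using hypothetical result values rather than the actual OpenBenchmarking.org data or analytics code:

from collections import defaultdict
from statistics import mean

# Hypothetical public results: (vendor, physical core count, reference clock GHz, inferences per minute).
results = [
    ("AMD", 8, 3.8, 120.0), ("AMD", 16, 3.4, 215.0), ("AMD", 32, 2.9, 340.0),
    ("Intel", 8, 3.6, 110.0), ("Intel", 16, 3.0, 190.0), ("Intel", 32, 2.6, 300.0),
]

# Divide each result by its reference clock and group by vendor and core count.
grouped = defaultdict(list)
for vendor, cores, clock, ipm in results:
    grouped[(vendor, cores)].append(ipm / clock)

# Normalize each vendor's per-core-count averages against its smallest tested core count.
for vendor in sorted({v for v, _ in grouped}):
    counts = sorted(c for v, c in grouped if v == vendor)
    base = mean(grouped[(vendor, counts[0])])
    for c in counts:
        scaling = mean(grouped[(vendor, c)]) / base
        print(f"{vendor} {c} cores: {scaling:.2f}x relative to {counts[0]} cores")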
Notable Instruction Set Usage
Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.
Instruction Set
Support
Instructions Detected
SSE2 (SSE2)
Used by default on supported hardware.
MOVDQA PUNPCKLQDQ MOVDQU CVTSI2SD DIVSD MOVAPD COMISD UNPCKLPD MULSD CVTTSD2SI ADDSD SUBSD MOVUPD MOVD PUNPCKHQDQ PSUBQ PSHUFD PSRLDQ UCOMISD CVTSS2SD PADDQ ANDPD CVTSD2SS DIVPD MULPD SUBPD PMULUDQ ADDPD SQRTSD CVTPS2PD MINPD MAXPD CVTTPD2DQ CVTDQ2PD CMPLTPD PSHUFLW MAXSD MINSD XORPD MOVLPD SQRTPD CVTTPS2DQ CVTDQ2PS ANDNPD ORPD CVTPD2PS CMPLESD UNPCKHPD CMPLEPD SHUFPD MOVHPD CVTPS2DQ
Advanced Vector Extensions (AVX)
Used by default on supported hardware. Found on Intel processors since Sandy Bridge (2011). Found on AMD processors since Bulldozer (2011).
VBROADCASTSD VINSERTF128 VZEROUPPER VBROADCASTSS VMASKMOVPS VEXTRACTF128 VZEROALL VPERM2F128 VPERMILPS
Advanced Vector Extensions 2 (AVX2)
Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Excavator (2016).
VPBROADCASTD VPMASKMOVD VEXTRACTI128 VINSERTI128 VPERM2I128 VPERMD VPBROADCASTW VPERMQ VPBROADCASTQ VPMASKMOVQ VPBROADCASTB VPSLLVD VPSRAVD VPSRLVD VPSLLVQ VPSRLVQ VPERMPD VGATHERQPD VPGATHERDQ VPGATHERQQ
AVX Vector Neural Network Instructions (AVX-VNNI)
Used by default on supported hardware.
VPDPBUSDS
FMA (FMA)
Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Bulldozer (2011).
VFMADD231PD VFMADD213PD VFMADD231PS VFMADD213PS VFMADD132SS VFMADD132PS VFMADD132SD VFMADD213SS VFMADD132PD VFNMADD231PD VFMADD231SS VFNMADD213PS VFNMADD132PS VFNMADD213SS VFNMADD132SS VFNMADD231PS VFNMADD132PD VFNMADD132SD VFMSUB132SS VFMSUB132SD VFMADD231SD VFMADD213SD VFNMADD213PD VFNMADD213SD VFNMADD231SS VFNMADD231SD
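For a quick local check of which of these extensions a CPU advertises on Linux, the flags line of /proc/cpuinfo can be inspected; the avx_vnni flag name is an assumption and may be absent on older kernels even where the hardware supports it.

# Quick Linux-only check of which of the instruction set extensions named above
# the host CPU advertises, using /proc/cpuinfo flag names.
wanted = {"sse2", "avx", "avx2", "fma", "avx_vnni"}

with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            cpu_flags = set(line.split(":", 1)[1].split())
            break
    else:
        cpu_flags = set()

for ext in sorted(wanted):
    print(f"{ext:9s} {'yes' if ext in cpu_flags else 'no'}")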
The test / benchmark does honor compiler flag changes.
Last automated analysis: 10 May 2021
This test profile binary relies on the shared libraries libonnxruntime.so.1.6.0, libdl.so.2, libpthread.so.0, libc.so.6, libgomp.so.1, libm.so.6.
Tested CPU Architectures
This benchmark has been successfully tested on the architectures listed below. The CPU architectures shown are those where successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.
CPU Architecture
Kernel Identifier
Verified On
Intel / AMD x86 64-bit
x86_64
(Many Processors)
ARMv8 64-bit
aarch64
ARMv8 Cortex-A57 4-Core, ARMv8 Cortex-A72 4-Core, Ampere Altra ARMv8 Neoverse-N1 160-Core, Ampere eMAG ARMv8 32-Core