ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo.

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark onnx.

Project Site

onnxruntime.ai

Source Repository

github.com

Test Created

16 January 2021

Last Updated

2 February 2024

Test Maintainer

Michael Larabel 

Test Type

System

Average Install Time

14 Minutes, 54 Seconds

Average Run Time

5 Minutes, 18 Seconds

Test Dependencies

Python + Git + C/C++ Compiler Toolchain + CMake

Accolades

40k+ Downloads

Supported Platforms


[Chart: ONNX Runtime (pts/onnx) popularity statistics on OpenBenchmarking.org — public result uploads*, reported installs**, reported test completions**, and test profile page views*** by month, 2021.01 through 2024.04]
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021.
Data updated weekly as of 13 April 2024.
Model Option Popularity (OpenBenchmarking.org):
yolov4: 10.6%
T5 Encoder: 10.4%
CaffeNet 12-int8: 10.4%
fcn-resnet101-11: 10.4%
ResNet50 v1-12-int8: 10.0%
bertsquad-12: 10.0%
GPT-2: 9.8%
ArcFace ResNet-100: 9.8%
super-resolution-10: 9.6%
Faster R-CNN R-50-FPN-int8: 8.9%

Executor Option Popularity (OpenBenchmarking.org):
Standard: 53.3%
Parallel: 46.7%

Revision History

pts/onnx-1.17.0   [View Source]   Fri, 02 Feb 2024 20:20:26 GMT
Update against ONNX Runtime 1.17, update download links.

pts/onnx-1.6.0   [View Source]   Sat, 11 Feb 2023 07:19:48 GMT
Update against ONNX Runtime 1.14 upstream.

pts/onnx-1.5.0   [View Source]   Wed, 30 Mar 2022 14:24:29 GMT
Add standard/sequential and parallel executor option.

pts/onnx-1.4.0   [View Source]   Sat, 26 Mar 2022 18:00:24 GMT
Update against ONNX 1.11 upstream, switch to parallel executor, add some new models too.

pts/onnx-1.3.0   [View Source]   Fri, 03 Dec 2021 19:38:42 GMT
Update against upstream ONNX-Runtime 1.10.

pts/onnx-1.2.1   [View Source]   Sat, 30 Oct 2021 05:02:16 GMT
Add cmake as necessary external dependency.

pts/onnx-1.2.0   [View Source]   Fri, 29 Oct 2021 17:00:07 GMT
Update against ONNX 1.9.1 upstream. Changes based on https://github.com/phoronix-test-suite/phoronix-test-suite/pull/560 while also needing to adjust the XML options for removal of bertsquad. Closes: https://github.com/phoronix-test-suite/phoronix-test-suite/pull/560

pts/onnx-1.1.0   [View Source]   Mon, 23 Aug 2021 19:03:17 GMT
Update against ONNX 1.8.2 upstream due to prior version now having build problems on newer distributions.

pts/onnx-1.0.1   [View Source]   Sun, 17 Jan 2021 08:51:00 GMT
Increase run time limit to help lower deviation on laptops.

pts/onnx-1.0.0   [View Source]   Sat, 16 Jan 2021 20:09:33 GMT
Initial commit of Microsoft ONNX Runtime.

Suites Using This Test

Machine Learning

HPC - High Performance Computing


Performance Metrics

Analyze Test Configuration:

ONNX Runtime 1.6

Model: yolov4 - Device: OpenMP CPU

OpenBenchmarking.org metrics for this test profile configuration based on 601 public results since 16 January 2021 with the latest data as of 31 January 2022.

Below is an overview of generalized performance for components where there is sufficient statistically significant data based on user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary vastly; this overview is intended only as general guidance on performance expectations.
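The percentile ranking used in the table below can be sketched with a small helper; the result values here are hypothetical stand-ins for illustration, not actual OpenBenchmarking.org data:

```python
from bisect import bisect_right

def percentile_rank(public_results, candidate):
    """Percentage of public results scoring at or below the candidate.

    For a higher-is-better metric such as Inferences Per Minute, a
    candidate matching the best public result lands at the 100th
    percentile; one below every public result lands at the 0th.
    """
    ordered = sorted(public_results)
    at_or_below = bisect_right(ordered, candidate)
    return round(100 * at_or_below / len(ordered))

# Hypothetical inferences-per-minute values, not real uploads
results = [87, 108, 126, 262, 351, 485, 561]
print(percentile_rank(results, 262))
```

Using `bisect_right` on the sorted list counts every result less than or equal to the candidate in O(log n) after the one-time sort.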

Percentile Rank | # Compatible Public Results | Inferences Per Minute (Average)
100th | 4 | 561 +/- 1
100th | 5 | 555 +/- 1
98th | 3 | 509 +/- 3
97th | 3 | 485 +/- 2
92nd | 14 | 454 +/- 5
91st | 3 | 444 +/- 1
86th | 8 | 438 +/- 17
83rd | 3 | 401 +/- 1
79th | 5 | 379 +/- 2
78th | 5 | 364 +/- 1
77th | 3 | 359 +/- 1
76th | 8 | 354 +/- 15
76th | 6 | 353 +/- 18
Mid-Tier (75th percentile): < 352
75th | 4 | 351 +/- 1
74th | 3 | 346 +/- 7
68th | 3 | 315 +/- 1
64th | 3 | 288 +/- 4
63rd | 3 | 286 +/- 1
57th | 11 | 274 +/- 37
55th | 5 | 269 +/- 13
52nd | 5 | 265 +/- 16
51st | 4 | 262 +/- 1
Median (50th percentile): 262
50th | 18 | 261 +/- 13
50th | 3 | 261 +/- 1
47th | 5 | 254 +/- 3
37th | 8 | 233 +/- 1
34th | 3 | 227 +/- 3
34th | 8 | 226 +/- 20
33rd | 6 | 222 +/- 20
30th | 3 | 208 +/- 6
30th | 3 | 205 +/- 1
27th | 9 | 198 +/- 1
Low-Tier (25th percentile): < 196
23rd | 9 | 186 +/- 25
22nd | 8 | 183 +/- 5
20th | 4 | 180 +/- 2
17th | 3 | 174 +/- 1
17th | 3 | 166 +/- 1
16th | 3 | 165 +/- 1
15th | 6 | 162 +/- 5
15th | 6 | 159 +/- 4
12th | 3 | 139 +/- 2
12th | 4 | 133 +/- 1
11th | 3 | 128 +/- 1
10th | 3 | 126 +/- 1
10th | 5 | 114 +/- 8
9th | 3 | 108 +/- 1
7th | 3 | 87 +/- 1
[Chart: Distribution of public results — Model: yolov4 - Device: OpenMP CPU — 601 results ranging from 18 to 661 Inferences Per Minute (OpenBenchmarking.org)]

Based on OpenBenchmarking.org data, the selected test / test configuration (ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU) has an average run-time of 7 minutes. By default this test profile runs at least 3 times, but the run count may increase if the standard deviation exceeds predefined limits or other calculations deem additional runs necessary for greater statistical accuracy of the result.
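The dynamic run-count behavior can be sketched as follows; MIN_RUNS, MAX_RUNS, and THRESHOLD_PCT are illustrative values for this sketch, not the Phoronix Test Suite's actual defaults:

```python
from statistics import mean, stdev

MIN_RUNS = 3         # illustrative; actual PTS defaults may differ
MAX_RUNS = 10        # illustrative upper bound
THRESHOLD_PCT = 3.5  # illustrative relative-standard-deviation cut-off

def needs_more_runs(samples):
    """True while fewer than MIN_RUNS samples exist, or while the
    relative standard deviation still exceeds the threshold."""
    if len(samples) < MIN_RUNS:
        return True
    if len(samples) >= MAX_RUNS:
        return False
    rel_dev = 100 * stdev(samples) / mean(samples)
    return rel_dev > THRESHOLD_PCT

print(needs_more_runs([262.0, 261.5]))         # fewer than 3 runs so far
print(needs_more_runs([262.0, 261.5, 262.3]))  # 3 runs, tight spread
```

The hard upper bound prevents a noisy system (e.g. a thermally throttling laptop) from triggering runs indefinitely.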

[Chart: Time required to complete benchmark — Model: yolov4 - Device: OpenMP CPU — Min: 3 / Avg: 6.71 / Max: 24 minutes (OpenBenchmarking.org)]

Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.7%.

[Chart: Average deviation between runs, fewer is better — Model: yolov4 - Device: OpenMP CPU — Min: 0 / Avg: 0.68 / Max: 7 percent (OpenBenchmarking.org)]

Does It Scale Well With Increasing Cores?

Yes. Based on automated analysis of the collected public benchmark data, this test / test configuration generally scales well with increasing CPU core counts. The analysis uses publicly available results for this test configuration, separated by vendor, with each result divided by the reference CPU clock speed, grouped by matching physical CPU core count, and normalized against the smallest core count tested from each vendor, for each CPU with a sufficient number of test samples and statistically significant data.
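The normalization steps above can be sketched as follows; the inferences-per-minute figures and clock speeds are made up for illustration, not actual OpenBenchmarking.org data:

```python
def relative_core_scaling(results_by_cores, clock_by_cores):
    """Divide each result by its reference clock speed, then normalize
    against the smallest core count tested."""
    adjusted = {cores: ipm / clock_by_cores[cores]
                for cores, ipm in results_by_cores.items()}
    base = adjusted[min(adjusted)]  # smallest core count is the baseline
    return {cores: adjusted[cores] / base for cores in sorted(adjusted)}

# Hypothetical inferences-per-minute results and base clocks (GHz)
ipm = {4: 120.0, 8: 230.0, 16: 420.0}
ghz = {4: 3.0, 8: 3.0, 16: 2.8}
print(relative_core_scaling(ipm, ghz))
```

Dividing by the clock first removes frequency differences between parts, so the remaining ratio reflects core-count scaling rather than clock-speed advantage.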

[Chart: ONNX Runtime CPU core scaling, relative to base — Model: yolov4 - Device: OpenMP CPU — AMD and Intel results across 4 / 6 / 8 / 12 / 16 / 32 / 48 / 64 cores (OpenBenchmarking.org)]

Notable Instruction Set Usage

Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.

Instruction Set: SSE2 (SSE2)
Support: Used by default on supported hardware.
Instructions Detected: MOVDQA PUNPCKLQDQ MOVDQU CVTSI2SD DIVSD MOVAPD COMISD UNPCKLPD MULSD CVTTSD2SI ADDSD SUBSD MOVUPD MOVD PUNPCKHQDQ PSUBQ PSHUFD PSRLDQ UCOMISD CVTSS2SD PADDQ ANDPD CVTSD2SS DIVPD MULPD SUBPD PMULUDQ ADDPD SQRTSD CVTPS2PD MINPD MAXPD CVTTPD2DQ CVTDQ2PD CMPLTPD PSHUFLW MAXSD MINSD XORPD MOVLPD SQRTPD CVTTPS2DQ CVTDQ2PS ANDNPD ORPD CVTPD2PS CMPLESD UNPCKHPD CMPLEPD SHUFPD MOVHPD CVTPS2DQ

Instruction Set: Advanced Vector Extensions (AVX)
Support: Used by default on supported hardware. Found on Intel processors since Sandy Bridge (2011). Found on AMD processors since Bulldozer (2011).
Instructions Detected: VBROADCASTSD VINSERTF128 VZEROUPPER VBROADCASTSS VMASKMOVPS VEXTRACTF128 VZEROALL VPERM2F128 VPERMILPS

Instruction Set: Advanced Vector Extensions 2 (AVX2)
Support: Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Excavator (2016).
Instructions Detected: VPBROADCASTD VPMASKMOVD VEXTRACTI128 VINSERTI128 VPERM2I128 VPERMD VPBROADCASTW VPERMQ VPBROADCASTQ VPMASKMOVQ VPBROADCASTB VPSLLVD VPSRAVD VPSRLVD VPSLLVQ VPSRLVQ VPERMPD VGATHERQPD VPGATHERDQ VPGATHERQQ

Instruction Set: AVX Vector Neural Network Instructions (AVX-VNNI)
Support: Used by default on supported hardware.
Instructions Detected: VPDPBUSDS

Instruction Set: FMA (FMA)
Support: Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Bulldozer (2011).
Instructions Detected: VFMADD231PD VFMADD213PD VFMADD231PS VFMADD213PS VFMADD132SS VFMADD132PS VFMADD132SD VFMADD213SS VFMADD132PD VFNMADD231PD VFMADD231SS VFNMADD213PS VFNMADD132PS VFNMADD213SS VFNMADD132SS VFNMADD231PS VFNMADD132PD VFNMADD132SD VFMSUB132SS VFMSUB132SD VFMADD231SD VFMADD213SD VFNMADD213PD VFNMADD213SD VFNMADD231SS VFNMADD231SD
The test / benchmark does honor compiler flag changes.
Last automated analysis: 10 May 2021
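One rough approximation of such an analysis is to scan a disassembly listing for known mnemonics; the extension-to-mnemonic groupings below are a small illustrative subset, not the actual tables used by the OpenBenchmarking.org analytics engine:

```python
import re

# Illustrative subset of mnemonics per extension, not an exhaustive map
EXTENSION_MNEMONICS = {
    "SSE2": {"movdqa", "movdqu", "addsd", "mulsd"},
    "AVX": {"vbroadcastsd", "vinsertf128", "vzeroupper"},
    "AVX2": {"vpbroadcastd", "vpermq", "vpgatherdq"},
    "FMA": {"vfmadd231pd", "vfnmadd213ps"},
}

def detect_extensions(disassembly):
    """Return extension -> sorted mnemonics actually seen in the text."""
    seen = set(re.findall(r"\b[a-z][a-z0-9]+\b", disassembly.lower()))
    return {ext: sorted(m & seen)
            for ext, m in EXTENSION_MNEMONICS.items() if m & seen}

# A tiny hand-written disassembly fragment for demonstration
sample = """
  401000: vzeroupper
  401003: vfmadd231pd %ymm0,%ymm1,%ymm2
  401008: movdqa %xmm0,%xmm1
"""
print(detect_extensions(sample))
```

In practice the listing would come from running a disassembler such as objdump over the built binary; static scanning like this shows which instructions are present, not which ones actually execute at run time.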

This test profile binary relies on the shared libraries libonnxruntime.so.1.6.0, libdl.so.2, libpthread.so.0, libc.so.6, libgomp.so.1, libm.so.6.

Tested CPU Architectures

This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.

CPU Architecture | Kernel Identifier | Verified On
Intel / AMD x86 64-bit | x86_64 | (Many Processors)
ARMv8 64-bit | aarch64 | ARMv8 Cortex-A57 4-Core, ARMv8 Cortex-A72 4-Core, Ampere Altra ARMv8 Neoverse-N1 160-Core, Ampere eMAG ARMv8 32-Core
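To check which of these kernel identifiers the current machine reports, the Python standard library suffices; the set below simply mirrors the identifiers from the table above:

```python
import platform

# Kernel identifiers with verified result uploads (from the table above)
VERIFIED = {"x86_64", "aarch64"}

machine = platform.machine()
print(machine, machine in VERIFIED)
```

On Linux this reports the same value as `uname -m`, which is what the kernel identifier column refers to.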