ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo.

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark onnx.

Project Site

onnxruntime.ai

Test Created

16 January 2021

Last Updated

17 January 2021

Test Maintainer

Michael Larabel 

Test Type

System

Average Install Time

13 Minutes, 4 Seconds

Average Run Time

5 Minutes, 18 Seconds

Test Dependencies

Python + Git + C/C++ Compiler Toolchain

Supported Platforms


[Chart: ONNX Runtime Popularity Statistics (pts/onnx) on OpenBenchmarking.org - public result uploads, reported installs*, and test completions* per month, 2021.01 through 2021.04]
* Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
Data current as of Thu, 15 Apr 2021 06:45:35 GMT.
Model Option Popularity (OpenBenchmarking.org):

fcn-resnet101-11: 21.8%
super-resolution-10: 20.4%
yolov4: 19.6%
shufflenet-v2-10: 19.2%
bertsquad-10: 18.9%

Revision History

pts/onnx-1.0.1   [View Source]   Sun, 17 Jan 2021 08:51:00 GMT
Increase run time limit to help lower deviation on laptops.

pts/onnx-1.0.0   [View Source]   Sat, 16 Jan 2021 20:09:33 GMT
Initial commit of Microsoft ONNX Runtime.

Suites Using This Test

Machine Learning

HPC - High Performance Computing


Performance Metrics

Analyze Test Configuration:

ONNX Runtime 1.6

Model: fcn-resnet101-11 - Device: OpenMP CPU

OpenBenchmarking.org metrics for this test profile configuration based on 511 public results since 16 January 2021 with the latest data as of 15 April 2021.

Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. It is important to keep in mind particularly in the Linux/open-source space there can be vastly different OS configurations, with this overview intended to offer just general guidance as to the performance expectations.
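As a rough illustration of what a percentile rank in the table below means, the following sketch computes the share of public results at or below a given result. The function name and the sample pool of values are hypothetical, not OpenBenchmarking.org's actual implementation.

```python
# Hypothetical sketch: percentile rank as the share of public results
# at or below a given result. Pool values are illustrative, not real data.
from bisect import bisect_right

def percentile_rank(result, public_results):
    """Percentage of public results at or below this result."""
    ordered = sorted(public_results)
    at_or_below = bisect_right(ordered, result)
    return round(100 * at_or_below / len(ordered))

# Illustrative pool of Inferences Per Minute values.
pool = [15, 22, 24, 26, 38, 41, 43, 48, 62, 63, 84, 115, 118, 131, 140, 144]
print(percentile_rank(84, pool))
```

With this definition, the best result in the pool always ranks 100th, matching the top row of the table.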

Percentile Rank    # Matching Public Results    Inferences Per Minute (Average)
100th              5                            144 +/- 1
97th               4                            140 +/- 1
96th               3                            131 +/- 1
95th               4                            118 +/- 1
95th               4                            115 +/- 6
76th               13                           84 +/- 1
75th (Mid-Tier)    -                            < 84
50th (Median)      -                            63
50th               3                            62 +/- 1
37th               3                            48 +/- 1
33rd               5                            43 +/- 2
30th               17                           41 +/- 3
25th (Low-Tier)    -                            < 39
25th               7                            38 +/- 1
14th               4                            26 +/- 1
13th               4                            24 +/- 1
12th               3                            22 +/- 1
8th                3                            15 +/- 1
[Chart: Distribution Of Public Results - Model: fcn-resnet101-11 - Device: OpenMP CPU - 511 results ranging from 3 to 187 Inferences Per Minute]

Based on OpenBenchmarking.org data, the selected test / test configuration (ONNX Runtime 1.6 - Model: fcn-resnet101-11 - Device: OpenMP CPU) has an average run-time of 6 minutes. By default, this test profile runs at least 3 times, with additional runs triggered if the standard deviation exceeds pre-defined thresholds or other calculations deem further runs necessary for greater statistical accuracy of the result.
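The adaptive run-count behavior described above can be sketched as follows. This is not the Phoronix Test Suite's actual code; the 3.5% threshold, the run cap, and the `run_benchmark` callable are all assumptions for illustration.

```python
# Sketch of adaptive benchmarking: run at least min_runs times, then
# keep adding runs while the relative standard deviation between runs
# exceeds a threshold. Threshold and cap are assumed values.
import statistics

def run_until_stable(run_benchmark, min_runs=3, max_runs=15, threshold_pct=3.5):
    results = [run_benchmark() for _ in range(min_runs)]
    while len(results) < max_runs:
        deviation = 100 * statistics.stdev(results) / statistics.mean(results)
        if deviation <= threshold_pct:
            break
        results.append(run_benchmark())
    return results

# Illustrative stand-in for an actual benchmark invocation: three
# consistent samples, so no extra runs are needed.
samples = iter([62.1, 61.8, 62.3])
runs = run_until_stable(lambda: next(samples))
print(len(runs))
```

Because the three sample runs deviate by well under the threshold, the loop stops at the minimum of 3 runs, mirroring the default behavior described above.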

[Chart: Time Required To Complete Benchmark - Model: fcn-resnet101-11 - Device: OpenMP CPU - Run-Time in Minutes: Min 1 / Avg 5.5 / Max 17]

Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.3%.

[Chart: Average Deviation Between Runs - Model: fcn-resnet101-11 - Device: OpenMP CPU - Percent, Fewer Is Better: Min 0 / Avg 0.29 / Max 3]

Does It Scale Well With Increasing Cores?

Yes, based on automated analysis of the collected public benchmark data, this test / test settings generally scales well with increasing CPU core counts. The data is based on publicly available results for this test / test settings, separated by vendor, with each result divided by the reference CPU clock speed, grouped by matching physical CPU core count, and normalized against the smallest core count tested from each vendor, for each CPU having a sufficient number of test samples and statistically significant data.
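The normalization described above can be sketched as a small computation: divide each result by the CPU's reference clock, then normalize against the smallest core count. The function name and the sample data are hypothetical, purely to illustrate the method.

```python
# Hypothetical sketch of core-scaling normalization: per-GHz scores,
# normalized to the smallest core count. Sample data is illustrative.
def relative_core_scaling(results):
    """results: {core_count: (inferences_per_minute, base_clock_ghz)}"""
    per_ghz = {cores: ipm / ghz for cores, (ipm, ghz) in results.items()}
    base = per_ghz[min(per_ghz)]
    return {cores: round(score / base, 2) for cores, score in sorted(per_ghz.items())}

sample = {4: (40.0, 4.0), 8: (76.0, 4.0), 16: (140.0, 4.0)}
print(relative_core_scaling(sample))
```

A value near the core-count ratio (e.g. close to 4.0 at 16 cores relative to a 4-core base) would indicate near-linear scaling; the chart's sub-linear values are typical of memory-bound inference workloads.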

[Chart: ONNX Runtime CPU Core Scaling - Model: fcn-resnet101-11 - Device: OpenMP CPU - Relative Core Scaling To Base for AMD and Intel CPUs from 4 to 64 cores]

Recent Test Results

OpenBenchmarking.org Results Compare

5 Systems - 4 Benchmark Results

Intel Core i5-4250U - Intel D54250WYK - Intel Haswell-ULT DRAM

Debian 10 - 4.19.0-16-amd64 - GCC 8.3.0

4 Systems - 4 Benchmark Results

ARMv8 Cortex-A72 - BCM2835 Raspberry Pi 400 Rev 1.0 - 4096MB

Debian 10 - 5.10.17-v8+ - GCC 8.3.0

12 Systems - 453 Benchmark Results

Intel Core i5-11600K - ASUS ROG MAXIMUS XIII HERO - Intel Device 43ef

Ubuntu 21.04 - 5.12.0-051200rc3daily20210315-generic - GNOME Shell 3.38.3

3 Systems - 4 Benchmark Results

ARMv8 Cortex-A57 - NVIDIA Jetson Nano 2GB Developer Kit - 2048MB

Ubuntu 18.04 - 4.9.201-tegra - LXDE 0.9.3

2 Systems - 4 Benchmark Results

ARMv8 Cortex-A72 - BCM2835 Raspberry Pi 400 Rev 1.0 - 4096MB

Debian 10 - 5.10.17-v8+ - GCC 8.3.0

1 System - 4 Benchmark Results

ARMv8 Cortex-A72 - BCM2835 Raspberry Pi 400 Rev 1.0 - 4096MB

Debian 10 - 5.10.17-v8+ - GCC 8.3.0

9 Systems - 442 Benchmark Results

AMD Ryzen 9 5900X 12-Core - ASUS ROG CROSSHAIR VIII HERO - AMD Starship

Ubuntu 21.04 - 5.12.0-051200rc3daily20210315-generic - GNOME Shell 3.38.3

1 System - 323 Benchmark Results

Intel Core i9-11900K - ASUS ROG MAXIMUS XIII HERO - Intel Tiger Lake-H

Ubuntu 21.04 - 5.12.0-051200rc3daily20210315-generic - GNOME Shell 3.38.3

2 Systems - 96 Benchmark Results

Intel Core i9-11900K - ASUS ROG MAXIMUS XIII HERO - Intel Tiger Lake-H

Ubuntu 21.04 - 5.12.0-051200rc3daily20210315-generic - GNOME Shell 3.38.3

Most Popular Test Results

Find More Test Results