ONNX Runtime
ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo.
ONNX Runtime 1.19
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
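The metric below is average per-inference latency in milliseconds. A minimal sketch of how such a latency figure can be measured with the onnxruntime Python API on the CPU execution provider is shown here; the model filename and input shape are illustrative assumptions, not details taken from the test profile itself.

```python
import time
import numpy as np

def time_inference(run_fn, n_runs=5):
    """Call run_fn repeatedly and return (mean_ms, stddev_ms) latency."""
    samples = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        run_fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return float(np.mean(samples)), float(np.std(samples))

# Hypothetical usage with onnxruntime (model path and input shape assumed):
# import onnxruntime as ort
# sess = ort.InferenceSession("ResNet101-DUC-12.onnx",
#                             providers=["CPUExecutionProvider"])
# inp = sess.get_inputs()[0].name
# x = np.random.rand(1, 3, 800, 800).astype(np.float32)  # assumed shape
# mean_ms, std_ms = time_inference(lambda: sess.run(None, {inp: x}))
# print(f"{mean_ms:.0f} +/- {std_ms:.0f} ms")
```

The commented-out lines show the intended use; the timing helper itself is generic and works with any zero-argument callable.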
OpenBenchmarking.org metrics for this test profile configuration, based on 161 public results since 21 August 2024, with the latest data as of 20 December 2024.

Below is an overview of generalized performance for components with sufficient statistically significant user-uploaded data. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely; this overview is intended only as general guidance on performance expectations.
| Component | Percentile Rank | # Compatible Public Results | Inference Time Cost (ms) (Average) |
|---|---|---|---|
| Zen 5 [128 Cores / 256 Threads] | 99th | 4 | 112 |
| Zen 5 [192 Cores / 384 Threads] | 94th | 4 | 132 ± 12 |
| Zen 5 [192 Cores / 384 Threads] | 91st | 4 | 140 |
| Zen 5 [128 Cores / 256 Threads] | 84th | 11 | 145 ± 18 |
| Zen 5 [96 Cores / 192 Threads] | 84th | 8 | 152 ± 11 |
| Zen 5 [96 Cores / 192 Threads] | 79th | 7 | 159 ± 5 |
| Zen 4 [64 Cores / 128 Threads] | 72nd | 6 | 372 ± 2 |
| Zen 5 [16 Cores / 32 Threads] | 64th | 15 | 382 ± 6 |
| Zen 4 [16 Cores / 32 Threads] | 56th | 7 | 648 ± 10 |
| Zen 4 [16 Cores / 32 Threads] | 54th | 7 | 650 ± 5 |
| Zen 5 [6 Cores / 12 Threads] | 47th | 4 | 765 ± 1 |
| Zen 4 [12 Cores / 24 Threads] | 44th | 7 | 852 ± 2 |
| Zen 4 [12 Cores / 24 Threads] | 40th | 5 | 868 ± 4 |
| Raptor Lake [24 Cores / 32 Threads] | 36th | 5 | 890 ± 3 |
| Zen 4 [8 Cores / 16 Threads] | 33rd | 3 | 1093 ± 3 |
| Zen 5 [12 Cores / 24 Threads] | 25th | 7 | 1969 ± 120 |
| Zen 5 [10 Cores / 20 Threads] | 25th | 7 | 1969 ± 92 |
| Zen 4 [4 Cores / 8 Threads] | 17th | 5 | 2154 ± 3 |
| Meteor Lake [16 Cores / 22 Threads] | 13th | 3 | 2464 |
| Lunar Lake [8 Cores / 8 Threads] | 12th | 4 | 2892 ± 375 |
| Alder Lake [14 Cores / 20 Threads] | 8th | 3 | 3228 |
| Tiger Lake [4 Cores / 8 Threads] | 4th | 6 | 3451 ± 73 |