ONNX Runtime
ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo.
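The results below are reported as inferences per second with a +/- spread. As an illustration only, here is a minimal Python sketch of how such a throughput figure could be measured: the inference call is a hypothetical stand-in (a real run would invoke an ONNX Runtime `InferenceSession.run()` on the yolov4 model), and the exact sampling methodology used by the test profile is not described in this page.

```python
import time
import statistics

def run_inference():
    # Hypothetical stand-in workload for a single model inference;
    # a real benchmark would call an onnxruntime session here.
    return sum(i * i for i in range(10_000))

def measure_ips(inference, runs=50):
    """Time `runs` back-to-back inferences and return inferences per second."""
    start = time.perf_counter()
    for _ in range(runs):
        inference()
    elapsed = time.perf_counter() - start
    return runs / elapsed

# Repeat the measurement a few times and report mean +/- sample stdev,
# mirroring the "17.36 +/- 0.19" style of the figures below.
samples = [measure_ips(run_inference) for _ in range(3)]
mean = statistics.mean(samples)
spread = statistics.stdev(samples)
print(f"{mean:.2f} +/- {spread:.2f} inferences/sec")
```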
ONNX Runtime 1.19
Model: yolov4 - Device: CPU - Executor: Standard
OpenBenchmarking.org metrics for this test profile configuration based on 149 public results since 21 August 2024 with the latest data as of 20 December 2024.
Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary vastly; this overview is intended to offer only general guidance on performance expectations.
| Component | Details | Percentile Rank | # Compatible Public Results | Inferences Per Second (Average) |
| --- | --- | --- | --- | --- |
| Raptor Lake | 24 Cores / 32 Threads | 98th | 5 | 17.36 +/- 0.19 |
| Zen 5 | 6 Cores / 12 Threads | 93rd | 4 | 15.23 +/- 0.10 |
| Zen 5 | 16 Cores / 32 Threads | 92nd | 15 | 14.39 +/- 0.72 |
| Zen 5 | 128 Cores / 256 Threads | 83rd | 4 | 13.73 +/- 0.12 |
| Zen 5 | 96 Cores / 192 Threads | 77th | 8 | 12.85 +/- 0.35 |
| Zen 5 | 128 Cores / 256 Threads | 70th | 8 | 12.14 +/- 0.18 |
| Zen 5 | 96 Cores / 192 Threads | 68th | 7 | 11.99 +/- 0.19 |
| Zen 4 | 16 Cores / 32 Threads | 63rd | 6 | 11.33 +/- 0.02 |
| Zen 4 | 16 Cores / 32 Threads | 57th | 6 | 11.02 +/- 0.05 |
| Zen 5 | 192 Cores / 384 Threads | 55th | 4 | 10.75 +/- 0.13 |
| Zen 4 | 12 Cores / 24 Threads | 53rd | 6 | 10.72 +/- 0.01 |
| Zen 4 | 12 Cores / 24 Threads | 48th | 4 | 10.49 +/- 0.01 |
| Zen 5 | 192 Cores / 384 Threads | 45th | 4 | 9.77 +/- 0.45 |
| Zen 4 | 64 Cores / 128 Threads | 40th | 5 | 8.88 +/- 0.18 |
| Zen 4 | 32 Cores / 64 Threads | 35th | 6 | 8.17 +/- 0.30 |
| Zen 4 | 4 Cores / 8 Threads | 31st | 4 | 7.16 |
| Zen 5 | 10 Cores / 20 Threads | 24th | 7 | 6.42 +/- 0.31 |
| Meteor Lake | 16 Cores / 22 Threads | 24th | 3 | 6.35 |
| Zen 5 | 12 Cores / 24 Threads | 21st | 7 | 6.06 +/- 0.86 |
| Alder Lake | 14 Cores / 20 Threads | 16th | 3 | 5.37 |
| Tiger Lake | 4 Cores / 8 Threads | 6th | 6 | 4.71 +/- 0.15 |