oneDNN MKL-DNN

This is a test of Intel oneDNN (formerly DNNL, the Deep Neural Network Library, and before that MKL-DNN), an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The reported result is the total perf time.


oneDNN MKL-DNN 1.3

Harness: Deconvolution Batch deconv_1d - Data Type: f32
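The harness above corresponds to benchdnn's deconvolution driver run in performance mode on f32 data. A minimal sketch of such an invocation, assuming benchdnn has been built from the oneDNN sources (the binary location and batch-file path are illustrative assumptions; check the oneDNN source tree for the shapes file your version ships):

```shell
# Run benchdnn's deconvolution driver in performance mode (--mode=P)
# on f32 data. Both paths below are assumptions for illustration:
# the actual test profile's batch file may differ.
BENCHDNN=./tests/benchdnn/benchdnn
if [ -x "$BENCHDNN" ]; then
    "$BENCHDNN" --mode=P --deconv --cfg=f32 --batch=inputs/deconv/shapes_1d
else
    echo "benchdnn not found; build oneDNN with -DDNNL_BUILD_TESTS=ON first"
fi
```

Building oneDNN with its tests enabled produces the benchdnn binary; the driver prints per-problem and total perf times, the latter being what this profile reports.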

OpenBenchmarking.org metrics for this test profile configuration, based on 139 public results since 9 April 2020, with the latest data as of 17 December 2024.

Below is an overview of generalized performance for components with sufficient statistically significant user-uploaded data. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely; this overview is intended only as general guidance on performance expectations.
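The percentile ranks below can be read as the share of public results a given configuration outperforms; since this benchmark reports time in milliseconds, lower is better. A minimal sketch of that idea (an illustrative calculation with hypothetical numbers, not OpenBenchmarking.org's exact methodology):

```python
def percentile_rank(result_ms, public_results_ms):
    """Share of public results this run beats, as a percentile.

    The benchmark reports time in ms, so lower is better: a run
    ranks above every public result that took longer.
    """
    slower = sum(1 for r in public_results_ms if r > result_ms)
    return round(100 * slower / len(public_results_ms))

# Example: a 1 ms run against hypothetical public results of 1, 3, 5 and 8 ms
print(percentile_rank(1, [1, 3, 5, 8]))  # beats 3 of 4 results -> 75
```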

Component                            Percentile Rank   # Compatible Public Results   ms (Average)
Zen 2 [32 Cores / 64 Threads]        94th              9                             1
Zen 2 [64 Cores / 128 Threads]       90th              5                             1
Mid-Tier                             75th              -                             > 3
Zen 2 [16 Cores / 32 Threads]        75th              4                             3
Zen 3 [12 Cores / 24 Threads]        73rd              3                             3
Comet Lake [10 Cores / 20 Threads]   65th              5                             3
Zen 3 [8 Cores / 16 Threads]         58th              4                             4
Median                               50th              -                             5
Zen 3 [6 Cores / 12 Threads]         47th              5                             5
Comet Lake [6 Cores / 12 Threads]    44th              4                             5
Comet Lake [4 Cores / 8 Threads]     30th              3                             8
Low-Tier                             25th              -                             > 10
Ivy Bridge [20 Cores / 40 Threads]   24th              3                             10 +/- 1
Skylake [4 Cores / 8 Threads]        16th              3                             14