oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn benchmarking functionality. The result reported is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and before that MKL-DNN, prior to being rebranded as part of the Intel oneAPI toolkit.
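
For reference, benchdnn can also be invoked directly from a local oneDNN build outside of the Phoronix Test Suite. A minimal sketch, assuming a compiled oneDNN source tree with tests enabled; the convolution problem descriptor shown is an arbitrary illustration rather than the shapes this test profile runs:

    # Run benchdnn's convolution driver in performance mode (--mode=P):
    cd oneDNN/build/tests/benchdnn
    ./benchdnn --conv --mode=P ic16ih7oc16oh7kh3ph1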

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark onednn.
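
For example, standard Phoronix Test Suite subcommands can be used to install the test separately or inspect the profile before benchmarking:

    # Download and compile oneDNN via the test profile:
    phoronix-test-suite install onednn
    # Show the test profile details and available options:
    phoronix-test-suite info onednn
    # Run the benchmark:
    phoronix-test-suite benchmark onednn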

Project Site

intel.com

Source Repository

github.com

Test Created

17 June 2020

Last Updated

1 March 2024

Test Maintainer

Michael Larabel 

Test Type

Processor

Average Install Time

5 Minutes, 19 Seconds

Average Run Time

2 Minutes, 16 Seconds

Test Dependencies

C/C++ Compiler Toolchain + CMake

Accolades

70k+ Downloads

Supported Platforms


[Chart: oneDNN Popularity Statistics (pts/onednn), June 2020 through April 2024 - Public Result Uploads*, Reported Installs**, Reported Test Completions**, Test Profile Page Views***]
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021.
Data updated weekly as of 21 April 2024.

Revision History

pts/onednn-3.4.0   [View Source]   Fri, 01 Mar 2024 13:02:43 GMT
Update against oneDNN 3.4 upstream.

pts/onednn-3.3.0   [View Source]   Thu, 12 Oct 2023 11:14:07 GMT
Update against oneDNN 3.3 upstream.

pts/onednn-3.1.0   [View Source]   Fri, 31 Mar 2023 18:14:37 GMT
Update against oneDNN 3.1 upstream.

pts/onednn-3.0.0   [View Source]   Mon, 19 Dec 2022 21:07:39 GMT
Update against oneDNN 3.0 upstream.

pts/onednn-2.7.0   [View Source]   Wed, 28 Sep 2022 13:00:44 GMT
Update against oneDNN 2.7 upstream.

pts/onednn-1.8.0   [View Source]   Tue, 29 Mar 2022 19:55:25 GMT
Update against oneDNN 2.6 upstream.

pts/onednn-1.7.0   [View Source]   Sat, 13 Mar 2021 07:49:33 GMT
Update against oneDNN 2.1.2 upstream.

pts/onednn-1.6.1   [View Source]   Sun, 20 Dec 2020 09:58:16 GMT
This test profile builds and works fine on macOS, so enable it (MacOSX).

pts/onednn-1.6.0   [View Source]   Wed, 09 Dec 2020 13:47:31 GMT
Update against oneDNN 2.0 upstream.

pts/onednn-1.5.0   [View Source]   Wed, 17 Jun 2020 16:26:39 GMT
Initial commit of oneDNN test profile based on Intel oneDNN 1.5, forked from the existing mkl-dnn test profile that was named for MKL-DNN before it was renamed to DNNL and then oneDNN. A new test profile was created to match the current Intel naming convention.

Suites Using This Test

Multi-Core

Machine Learning

HPC - High Performance Computing

Intel oneAPI

CPU Massive

Creator Workloads

Server CPU Tests


Performance Metrics

Analyze Test Configuration:

oneDNN 2.0

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

OpenBenchmarking.org metrics for this test profile configuration based on 490 public results since 9 December 2020 with the latest data as of 29 June 2021.

Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. It is important to keep in mind, particularly in the Linux/open-source space, that there can be vastly different OS configurations; this overview is intended to offer just general guidance as to performance expectations.

Percentile Rank   # Compatible Public Results   ms (Average)
93rd              4                             832 +/- 7
92nd              7                             840 +/- 19
89th              4                             905 +/- 48
89th              3                             909 +/- 4
89th              5                             911 +/- 50
88th              6                             922 +/- 1
84th              6                             1082 +/- 79
84th              6                             1100 +/- 91
81st              6                             1177 +/- 31
76th              3                             1754 +/- 13
Mid-Tier (75th percentile): > 1763 ms
75th              11                            1768 +/- 103
75th              12                            1774 +/- 170
73rd              3                             1849 +/- 6
72nd              3                             1874 +/- 8
71st              4                             1909 +/- 22
70th              10                            2008 +/- 293
69th              6                             2084 +/- 12
66th              6                             2196 +/- 43
66th              6                             2210 +/- 9
63rd              11                            2249 +/- 112
62nd              3                             2262 +/- 1
62nd              8                             2277 +/- 56
58th              8                             2404 +/- 5
56th              3                             2471 +/- 3
54th              3                             2564 +/- 24
53rd              4                             2616 +/- 7
52nd              6                             2651 +/- 42
Median (50th percentile): 2711 ms
50th              6                             2721 +/- 234
49th              3                             2768 +/- 2
49th              3                             3082 +/- 24
48th              3                             3191 +/- 14
47th              3                             3277 +/- 12
46th              3                             3337 +/- 52
45th              5                             3428 +/- 77
44th              5                             3540 +/- 17
43rd              5                             3588 +/- 343
43rd              3                             3593 +/- 1
42nd              3                             3676 +/- 6
40th              4                             3792 +/- 10
38th              6                             3837 +/- 73
37th              3                             3870 +/- 8
36th              3                             3884 +/- 10
34th              6                             3950 +/- 14
34th              3                             3952 +/- 3
33rd              9                             4015 +/- 155
32nd              3                             4109 +/- 22
30th              3                             4240 +/- 16
27th              7                             4640 +/- 12
26th              3                             4722 +/- 26
26th              9                             4741 +/- 99
Low-Tier (25th percentile): > 4742 ms
25th              3                             4749 +/- 35
23rd              3                             4890 +/- 16
22nd              3                             5276 +/- 16
22nd              6                             5280 +/- 167
20th              7                             5539 +/- 42
19th              3                             5893 +/- 97
18th              3                             6612 +/- 9
18th              3                             6612 +/- 216
17th              4                             6727 +/- 235
16th              3                             6922 +/- 28
15th              3                             7256 +/- 265
15th              3                             7271 +/- 133
14th              6                             7399 +/- 15
13th              4                             7505 +/- 10
12th              7                             7636 +/- 72
11th              3                             7784 +/- 59
9th               3                             10677 +/- 25
9th               3                             11279 +/- 47
8th               3                             11759 +/- 10
8th               3                             12810 +/- 35
7th               3                             15810 +/- 172
6th               3                             16339 +/- 48
5th               6                             17293 +/- 374
4th               4                             22767 +/- 10
3rd               3                             33983 +/- 3
2nd               4                             35291 +/- 459
1st               3                             40816 +/- 261
1st               3                             47275 +/- 83
[Chart: Distribution Of Public Results - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - 490 results ranging from 545 to 47363 ms]

Based on OpenBenchmarking.org data, the selected test / test configuration (oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU) has an average run-time of 6 minutes. By default this test profile is set to run at least 3 times, but the run count may increase if the standard deviation exceeds pre-defined thresholds or other calculations deem additional runs necessary for greater statistical accuracy of the result.
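
As an illustration, this dynamic run count can be overridden with the Phoronix Test Suite's FORCE_TIMES_TO_RUN environment variable when a fixed number of runs is preferred:

    # Force exactly 5 runs instead of the default dynamic run count:
    FORCE_TIMES_TO_RUN=5 phoronix-test-suite benchmark onednn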

[Chart: Time Required To Complete Benchmark (Minutes) - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - Min: 4 / Avg: 5.8 / Max: 22]

Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.6%.

[Chart: Average Deviation Between Runs (Percent, Fewer Is Better) - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - Min: 0 / Avg: 0.59 / Max: 8]

Does It Scale Well With Increasing Cores?

Yes, based on the automated analysis of the collected public benchmark data, this test / test configuration does generally scale well with increasing CPU core counts. The data is based on publicly available results for this test / test configuration, separated by vendor, with each result divided by the reference CPU clock speed, grouped by matching physical CPU core count, and normalized against the smallest core count tested from each vendor, for each CPU having a sufficient number of test samples and statistically significant data.

[Chart: oneDNN CPU Core Scaling (Relative Core Scaling To Base) - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - AMD vs. Intel, 4 to 64 cores]

Notable Instruction Set Usage

Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.

Instruction Set: Advanced Vector Extensions (AVX)
Support: Used by default on supported hardware. Found on Intel processors since Sandy Bridge (2011). Found on AMD processors since Bulldozer (2011).
Instructions Detected: VZEROUPPER VBROADCASTSS VINSERTF128 VPERMILPS VBROADCASTSD VEXTRACTF128 VPERMILPD VPERM2F128 VMASKMOVPS

Instruction Set: Advanced Vector Extensions 2 (AVX2)
Support: Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Excavator (2016).
Instructions Detected: VPBROADCASTQ VINSERTI128 VPBROADCASTD VPBLENDD VPSLLVD VEXTRACTI128 VPSRAVD VPERM2I128 VPGATHERQQ VGATHERQPS VPERMQ VPBROADCASTW VPSRLVQ VPBROADCASTB VPGATHERDQ VPGATHERQD VPSLLVQ VPMASKMOVQ VPERMD

Instruction Set: Fused Multiply-Add (FMA)
Support: Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Bulldozer (2011).
Instructions Detected: VFMADD231SS VFMADD213SS VFMADD132SS VFMADD132SD VFMADD132PS VFMADD231PS VFMADD213PS VFNMADD132PS VFNMSUB231PS VFNMSUB132SS VFNMADD132SS VFNMSUB231SS VFNMADD231PS VFNMADD231SS VFNMADD213SS VFMADD231SD VFMSUB132SS VFMADD132PD VFMADD231PD VFMADD213PD VFMSUB231SS VFMSUB213PS VFMSUB132PS VFMSUB213SS VFMSUB231SD

Instruction Set: Advanced Vector Extensions 512 (AVX512)
Support: Requires passing a supported compiler/build flag (verified with targets: cascadelake, sapphirerapids); see the example below.
Instructions Detected: (ZMM REGISTER USE)

The test / benchmark does honor compiler flag changes.
Last automated analysis: 2 March 2024
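
Since the profile honors compiler flag changes, the AVX-512 code paths can be exercised by exporting target-specific compiler flags before (re)installing the test. A minimal sketch, assuming a GCC or Clang toolchain and the cascadelake target noted above; the exact flags accepted depend on the local compiler:

    # Reinstall oneDNN with AVX-512-capable compiler flags, then benchmark:
    CFLAGS="-O3 -march=cascadelake" CXXFLAGS="-O3 -march=cascadelake" \
        phoronix-test-suite force-install onednn
    phoronix-test-suite benchmark onednn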

This test profile binary relies on the shared libraries libdnnl.so.3, libm.so.6, libgomp.so.1, libc.so.6.
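
These shared-library dependencies can be verified against an installed copy with ldd; a sketch assuming the default Phoronix Test Suite install location and the current 3.4.0 profile version (the binary name follows oneDNN's build layout):

    # Locate the built benchdnn binary and list its shared-library dependencies:
    find ~/.phoronix-test-suite/installed-tests/pts/onednn-3.4.0 -type f -name benchdnn -exec ldd {} \;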

Tested CPU Architectures

This benchmark has been successfully tested on the architectures listed below. The CPU architectures listed are those where successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.

CPU Architecture             Kernel Identifier   Verified On
Intel / AMD x86 64-bit       x86_64              (Many Processors)
IBM POWER (PowerPC) 64-bit   ppc64le             POWER9 4-Core, POWER9 44-Core
ARMv8 64-bit                 aarch64             ARMv8 Cortex-A72 16-Core, Ampere Altra ARMv8 Neoverse-N1 160-Core, Ampere eMAG ARMv8 32-Core