oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn benchmarking functionality. The reported result is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library), and before that MKL-DNN, prior to being rebranded as part of the Intel oneAPI initiative.

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark onednn.
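A minimal command-line sketch of installing and running this test profile with the Phoronix Test Suite follows; the batch-mode variant is optional and assumes batch options have already been configured via `phoronix-test-suite batch-setup`.

```shell
# Build oneDNN and its benchdnn harness (downloads sources, compiles with
# the system C/C++ toolchain and CMake per the test dependencies above).
phoronix-test-suite install pts/onednn

# Run interactively; PTS prompts for the harness, data type, and engine
# options before executing benchdnn and recording the total perf time.
phoronix-test-suite benchmark pts/onednn

# Optional: run non-interactively using previously saved batch settings.
phoronix-test-suite batch-benchmark pts/onednn
```

The explicit `pts/onednn` prefix pins the OpenBenchmarking.org test profile; the shorter `onednn` name shown above resolves to the same profile.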

Project Site

github.com

Test Created

17 June 2020

Last Updated

20 December 2020

Test Maintainer

Michael Larabel

Test Type

Processor

Average Install Time

8 Minutes, 38 Seconds

Average Run Time

2 Minutes, 2 Seconds

Test Dependencies

C/C++ Compiler Toolchain + CMake

Accolades

5k+ Downloads

Supported Platforms


[Chart] OpenBenchmarking.org oneDNN Popularity Statistics (pts/onednn): public result uploads, reported installs*, and test completions* per month, June 2020 through January 2021.
* Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
Data current as of Tue, 26 Jan 2021 07:06:13 GMT.
Harness Option Popularity (OpenBenchmarking.org): Recurrent Neural Network Training 15.9%, Recurrent Neural Network Inference 15.6%, IP Shapes 1D 11.5%, IP Shapes 3D 11.5%, Convolution Batch Shapes Auto 11.5%, Deconvolution Batch shapes_1d 11.4%, Deconvolution Batch shapes_3d 11.4%, Matrix Multiply Batch Shapes Transformer 11.3%
Data Type Option Popularity (OpenBenchmarking.org): f32 46.3%, u8s8f32 39.8%, bf16bf16bf16 13.9%

Revision History

pts/onednn-1.6.1   [View Source]   Sun, 20 Dec 2020 09:58:16 GMT
This test profile builds and works fine on macOS, so enable it (MacOSX).

pts/onednn-1.6.0   [View Source]   Wed, 09 Dec 2020 13:47:31 GMT
Update against oneDNN 2.0 upstream.

pts/onednn-1.5.0   [View Source]   Wed, 17 Jun 2020 16:26:39 GMT
Initial commit of the oneDNN test profile based on Intel oneDNN 1.5, forked from the existing mkl-dnn test profile. That profile was named after MKL-DNN before the library was renamed to DNNL and then oneDNN, so a new test profile was created to match Intel's current naming convention.

Suites Using This Test

Multi-Core

Machine Learning

Intel oneAPI

CPU Massive

Server CPU Tests

Creator Workloads

HPC - High Performance Computing


Performance Metrics

Analyze Test Configuration:

oneDNN 1.5

Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.org metrics for this test profile configuration based on 92 public results since 20 June 2020 with the latest data as of 20 November 2020.

Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary vastly; this overview is intended only as general guidance on performance expectations.

Component    Percentile Rank    # Matching Public Results    ms (Average)
             91st               13                           7.2 +/- 0.5
Mid-Tier     75th                                            > 8.7
             69th               21                           9.1 +/- 0.2
Median       50th                                            10.5
             46th               6                            10.8 +/- 0.4
Low-Tier     25th                                            > 17.4
             24th               7                            17.5 +/- 0.2
             20th               4                            19.7 +/- 0.2
             14th               5                            26.2 +/- 0.2
             8th                4                            30.3 +/- 0.1
[Chart] OpenBenchmarking.org - Distribution Of Public Results - Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 - Engine: CPU: 92 results ranging from 3 to 94 ms.

Based on OpenBenchmarking.org data, the selected test / test configuration (oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 - Engine: CPU) has an average run-time of 2 minutes. By default this test profile runs at least 3 times, but the run count may increase if the standard deviation exceeds predefined thresholds or other calculations deem additional runs necessary for greater statistical accuracy of the result.
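The adaptive run-count behavior described above can be sketched as follows. This is an illustrative assumption of the policy, not the Phoronix Test Suite's actual implementation; the constants, the `run_once` callback, and the function names are hypothetical.

```python
import statistics

# Hypothetical sketch: execute at least MIN_RUNS trials, then keep running
# while the relative standard deviation exceeds a threshold. All constants
# here are assumptions for illustration, not PTS defaults.
MIN_RUNS = 3
MAX_RUNS = 10
STDDEV_THRESHOLD_PCT = 3.5

def relative_stddev_pct(samples):
    """Sample standard deviation as a percentage of the mean."""
    if len(samples) < 2:
        return float("inf")
    return statistics.stdev(samples) / statistics.mean(samples) * 100

def benchmark(run_once):
    """run_once() returns one timing in ms; returns (mean, all samples)."""
    samples = [run_once() for _ in range(MIN_RUNS)]
    # Extend the run only while results are too noisy and the cap allows.
    while (relative_stddev_pct(samples) > STDDEV_THRESHOLD_PCT
           and len(samples) < MAX_RUNS):
        samples.append(run_once())
    return statistics.mean(samples), samples
```

With stable timings the loop stops at the three-run minimum; noisy timings trigger additional runs up to the cap, which matches the 0.1% average deviation reported for this configuration below.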

[Chart] OpenBenchmarking.org - Time Required To Complete Benchmark (Minutes) - Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 - Engine: CPU: Min 1 / Avg 1 / Max 1.

Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.1%.

[Chart] OpenBenchmarking.org - Average Deviation Between Runs (Percent, Fewer Is Better) - Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 - Engine: CPU: Min 0 / Avg 0.05 / Max 2.

Recent Test Results

OpenBenchmarking.org Results Compare

3 Systems - 376 Benchmark Results

2 x AMD EPYC 7F72 24-Core - Supermicro H11DSi-NT v2.00 - AMD Starship

Ubuntu 20.10 - 5.11.0-rc4-max-boost-inv-patch - GNOME Shell 3.38.1

2 Systems - 220 Benchmark Results

AMD Ryzen 9 5950X 16-Core - ASUS ROG CROSSHAIR VIII HERO - AMD Starship

Ubuntu 20.10 - 5.11.0-rc4-max-boost-inv-patch - GNOME Shell 3.38.1

1 System - 466 Benchmark Results

2 x AMD EPYC 7F72 24-Core - Supermicro H11DSi-NT v2.00 - AMD Starship

Ubuntu 20.10 - 5.10.9-051009-generic - GNOME Shell 3.38.1

3 Systems - 74 Benchmark Results

2 x AMD EPYC 7601 32-Core - Dell 02MJ3T - AMD 17h

Ubuntu 19.10 - 5.9.0-050900rc6daily20200922-generic - GNOME Shell 3.34.1

2 Systems - 129 Benchmark Results

AMD Ryzen 9 5950X 16-Core - ASUS ROG CROSSHAIR VIII HERO - AMD Starship

Ubuntu 20.10 - 5.11.0-051100rc2daily20210108-generic - GNOME Shell 3.38.1

3 Systems - 191 Benchmark Results

AMD Ryzen 3 2200G - ASUS PRIME B350M-E - AMD Raven

Ubuntu 20.10 - 5.8.0-38-generic - GNOME Shell 3.38.1

4 Systems - 104 Benchmark Results

AMD Ryzen 5 2400G - MSI B350M GAMING PRO - AMD Raven

Ubuntu 19.10 - 5.3.0-64-generic - GNOME Shell 3.34.1

4 Systems - 61 Benchmark Results

Intel Core i9-7960X - MSI X299 SLI PLUS - Intel Sky Lake-E DMI3 Registers

Ubuntu 20.04 - 5.4.0-58-generic - X Server 1.20.8

3 Systems - 113 Benchmark Results

Intel Core i9-7980XE - ASUS PRIME X299-A - Intel Sky Lake-E DMI3 Registers

Ubuntu 20.10 - 5.8.0-36-generic - GNOME Shell 3.38.1
