oneDNN MKL-DNN

This is a test of Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN), an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The reported result is the total perf time.

The pts/mkl-dnn test profile has been succeeded by the pts/onednn test profile to match the naming convention used by Intel following their renaming to oneDNN. Run the pts/onednn test profile to use the latest version of the Intel oneDNN benchmark.

To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark mkl-dnn.
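Assuming the Phoronix Test Suite is already installed, a typical session looks like the following (the separate `install` step is optional, since `benchmark` installs any missing test automatically):

```shell
# Install the test profile and its dependencies
# (optional; the benchmark command below does this as needed).
phoronix-test-suite install pts/mkl-dnn

# Run the benchmark; the suite prompts for the harness and
# data-type options before executing benchdnn.
phoronix-test-suite benchmark pts/mkl-dnn
```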

Project Site

github.com

Test Created

16 April 2019

Last Updated

17 June 2020

Test Maintainer

Michael Larabel 

Test Type

Processor

Average Install Time

3 Minutes, 29 Seconds

Average Run Time

4 Minutes, 36 Seconds

Test Dependencies

C/C++ Compiler Toolchain + CMake

Accolades

50k+ Downloads

Supported Platforms


oneDNN MKL-DNN (pts/mkl-dnn) Popularity Statistics, OpenBenchmarking.org: public result uploads *, reported installs **, reported test completions **, and test profile page views *** from April 2019 through October 2024.
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021.
Data updated weekly as of 9 November 2024.
Harness Option Popularity (OpenBenchmarking.org):
Deconvolution Batch deconv_1d - 21.5%
IP Batch 1D - 19.8%
IP Batch All - 19.6%
Deconvolution Batch deconv_3d - 19.2%
Recurrent Neural Network Inference - 10.2%
Recurrent Neural Network Training - 9.7%
Data Type Option Popularity (OpenBenchmarking.org):
f32 - 62.1%
u8s8f32 - 32.8%
bf16bf16bf16 - 5.2%

Revision History

pts/mkl-dnn-1.3.1   [View Source]   Wed, 17 Jun 2020 16:31:48 GMT
NOTICE: The pts/mkl-dnn test profile has been succeeded by the pts/onednn test profile to match the naming convention used by Intel following their renaming to oneDNN. Run the pts/onednn test profile to use the latest version of the Intel oneDNN benchmark.

pts/mkl-dnn-1.3.0   [View Source]   Thu, 09 Apr 2020 18:14:05 GMT
Fix as did not mean to expose f16 option.

pts/mkl-dnn-1.2.0   [View Source]   Thu, 09 Apr 2020 18:10:17 GMT
Update against oneDNN 1.3 sources.

pts/mkl-dnn-1.1.1   [View Source]   Thu, 03 Oct 2019 19:35:40 GMT
Update data types.

pts/mkl-dnn-1.1.0   [View Source]   Thu, 03 Oct 2019 16:09:23 GMT
Update against MKL-DNN DNNL 1.1.

pts/mkl-dnn-1.0.2   [View Source]   Thu, 18 Apr 2019 20:44:30 GMT
Fixes for reported test argument reporting and setting of OMP environment variables.

pts/mkl-dnn-1.0.1   [View Source]   Wed, 17 Apr 2019 09:20:39 GMT
MKLDNN_ARCH_OPT_FLAGS="-O3 $CXXFLAGS"

pts/mkl-dnn-1.0.0   [View Source]   Tue, 16 Apr 2019 20:44:47 GMT
Initial commit of Intel MKL-DNN benchdnn benchmark.


Performance Metrics

Analyze Test Configuration:

oneDNN MKL-DNN 1.3

Harness: Deconvolution Batch deconv_1d - Data Type: f32

OpenBenchmarking.org metrics for this test profile configuration based on 138 public results since 9 April 2020 with the latest data as of 1 October 2023.

Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary vastly; this overview is intended to offer only general guidance on performance expectations.

Component | Percentile Rank | ms (Average)
Mid-Tier  | 75th            | > 3
Median    | 50th            | 5
Low-Tier  | 25th            | > 9
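The tier thresholds above are percentile cut-offs over the public result times. A minimal sketch of how such cut-offs can be derived, using made-up times rather than the actual 138-result data set:

```python
import statistics

# Hypothetical result times in ms (fewer is better); NOT the real
# 138-result data behind the table above.
times_ms = [1, 2, 3, 3, 4, 5, 5, 6, 9, 12, 40, 303]

# 25th/50th/75th percentiles of the times. Since lower times are
# better, the faster (Mid-Tier and up) hardware sits at the low end
# of the time distribution and Low-Tier at the high end.
q1, q2, q3 = statistics.quantiles(sorted(times_ms), n=4)
print(q1, q2, q3)  # 3.0 5.0 11.25
```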
Distribution Of Public Results - Harness: Deconvolution Batch deconv_1d - Data Type: f32: 138 results ranging from 1 to 303 ms (OpenBenchmarking.org).

Based on OpenBenchmarking.org data, the selected test / test configuration (oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_1d - Data Type: f32) has an average run-time of 2 minutes. By default this test profile is set to run at least 3 times, but the run count may increase if the standard deviation exceeds pre-defined limits or other calculations deem additional runs necessary for greater statistical accuracy of the result.
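In spirit, that dynamic run-count logic can be sketched as follows; the 3.5% threshold and 15-run cap here are illustrative assumptions, not the suite's actual defaults:

```python
import statistics

def needs_more_runs(times, min_runs=3, max_runs=15, threshold_pct=3.5):
    """Decide whether another benchmark run is needed.

    Always take at least `min_runs` runs, then keep running while the
    relative standard deviation of the collected times exceeds
    `threshold_pct`, up to `max_runs`. Threshold and cap are
    hypothetical values for illustration.
    """
    if len(times) < min_runs:
        return True
    if len(times) >= max_runs:
        return False
    rel_stdev = statistics.stdev(times) / statistics.mean(times) * 100
    return rel_stdev > threshold_pct

# Tightly clustered times -> no extra run needed.
print(needs_more_runs([120.1, 119.8, 120.4]))  # False
# Noisy times -> request another run.
print(needs_more_runs([120.0, 95.0, 140.0]))   # True
```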

Time Required To Complete Benchmark (Minutes) - Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Min: 1 / Avg: 1.4 / Max: 5 (OpenBenchmarking.org).

Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.8%.

Average Deviation Between Runs (Percent, Fewer Is Better) - Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Min: 0 / Avg: 0.8 / Max: 8 (OpenBenchmarking.org).

Notable Instruction Set Usage

Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.

Instruction Set
Support
Instructions Detected

Advanced Vector Extensions (AVX)
Used by default on supported hardware.
Found on Intel processors since Sandy Bridge (2011).
Found on AMD processors since Bulldozer (2011).
VZEROUPPER VBROADCASTSS VEXTRACTF128 VINSERTF128 VPERMILPS VBROADCASTSD VPERM2F128

Advanced Vector Extensions 2 (AVX2)
Used by default on supported hardware.
Found on Intel processors since Haswell (2013).
Found on AMD processors since Excavator (2016).
VPBROADCASTQ VGATHERQPS VINSERTI128 VPBROADCASTD VEXTRACTI128 VPSRAVD VPSLLVD VPBROADCASTW VPSRLVQ VPERMQ VGATHERDPS VPBLENDD VPGATHERQD VPBROADCASTB VPGATHERQQ VPSLLVQ VPERM2I128

FMA (FMA)
Used by default on supported hardware.
Found on Intel processors since Haswell (2013).
Found on AMD processors since Bulldozer (2011).
VFMADD231SS VFMADD132SS VFMADD132PS VFMADD231PS VFMADD213PS VFNMADD132PS VFNMSUB231PS VFNMSUB132SS VFNMADD132SS VFNMSUB231SS VFNMADD231PS VFNMADD231SS VFMADD213SS VFMSUB231SS VFMADD132PD VFMADD132SD VFNMADD213SS VFMADD231PD VFMADD231SD VFMADD213PD VFMSUB231SD VFMSUB231PS
Last automated analysis: 17 January 2022
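To check whether a Linux host advertises these extensions before running the test, the `flags` line of /proc/cpuinfo can be parsed; a small sketch (the sample string below is illustrative):

```python
def cpu_flags(cpuinfo_text):
    """Parse the CPU feature flags from /proc/cpuinfo-style text (Linux)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# In practice: flags = cpu_flags(open("/proc/cpuinfo").read())
sample = "processor : 0\nflags : fpu sse sse2 avx avx2 fma\n"
flags = cpu_flags(sample)
for ext in ("avx", "avx2", "fma"):
    print(ext, "supported" if ext in flags else "missing")
```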

This test profile binary relies on the shared libraries libdnnl.so.1, libm.so.6, libgomp.so.1, libc.so.6.

Tested CPU Architectures

This benchmark has been successfully tested on the architectures mentioned below. The CPU architectures listed are those where successful OpenBenchmarking.org result uploads occurred, namely to help determine whether a given test is compatible with various alternative CPU architectures.

CPU Architecture
Kernel Identifier
Verified On
Intel / AMD x86 64-bit
x86_64
(Many Processors)

Recent Test Results

OpenBenchmarking.org Results Compare

1 System - 80 Benchmark Results

AMD Ryzen 7 PRO 7840U - LENOVO ThinkPad T14s Gen 4 21F80041GE - AMD Device 14e8

Gentoo 2.15 - 6.10.3-gentoo-dist - GNOME Shell 45.5

1 System - 80 Benchmark Results

AMD Ryzen 7 PRO 7840U - LENOVO ThinkPad T14s Gen 4 21F80041GE - AMD Device 14e8

Gentoo 2.15 - 6.10.2-gentoo-dist - GNOME Shell 45.5

Find More Test Results