oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result reported is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library), and before that MKL-DNN, prior to being rebranded as part of the oneAPI initiative.
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark onednn.
Test Created 17 June 2020
Last Updated 20 December 2020
Test Maintainer Michael Larabel
Test Type Processor
Average Install Time 9 Minutes, 3 Seconds
Average Run Time 2 Minutes, 2 Seconds
Accolades: 5k+ Downloads
[Chart: oneDNN Popularity Statistics (pts/onednn) — public result uploads, reported installs*, and test completions* on OpenBenchmarking.org, June 2020 through January 2021]
* Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform. Data current as of Mon, 18 Jan 2021 15:01:31 GMT.
Harness Option Popularity (OpenBenchmarking.org): Recurrent Neural Network Training 15.9%, Recurrent Neural Network Inference 15.7%, IP Shapes 1D 11.5%, IP Shapes 3D 11.5%, Convolution Batch Shapes Auto 11.5%, Deconvolution Batch shapes_1d 11.4%, Deconvolution Batch shapes_3d 11.4%, Matrix Multiply Batch Shapes Transformer 11.3%
Data Type Option Popularity (OpenBenchmarking.org): f32 45.9%, u8s8f32 40.0%, bf16bf16bf16 14.1%
Revision History
pts/onednn-1.6.1 [View Source] Sun, 20 Dec 2020 09:58:16 GMT — This test profile builds and works fine on macOS, so it is now enabled there (MacOSX).
pts/onednn-1.6.0 [View Source] Wed, 09 Dec 2020 13:47:31 GMT — Update against oneDNN 2.0 upstream.
pts/onednn-1.5.0 [View Source] Wed, 17 Jun 2020 16:26:39 GMT — Initial commit of the oneDNN test profile based on Intel oneDNN 1.5, forked from the existing mkl-dnn test profile, which was named for MKL-DNN before the library was renamed to DNNL and then oneDNN. A new test profile was created to match Intel's current naming convention.
Performance Metrics
Analyze Test Configuration. The available configurations all use Engine: CPU:
- pts/onednn-1.6.x (oneDNN 2.0): every combination of harness (Recurrent Neural Network Training, Recurrent Neural Network Inference, Convolution Batch Shapes Auto, IP Shapes 1D, IP Shapes 3D, Deconvolution Batch shapes_1d, Deconvolution Batch shapes_3d, Matrix Multiply Batch Shapes Transformer) and data type (f32, u8s8f32, bf16bf16bf16).
- pts/onednn-1.5.x (oneDNN 1.5): f32 with all eight harnesses (Recurrent Neural Network Training, Recurrent Neural Network Inference, Convolution Batch Shapes Auto, IP Batch 1D, IP Batch All, Deconvolution Batch deconv_1d, Deconvolution Batch deconv_3d, Matrix Multiply Batch Shapes Transformer); u8s8f32 and bf16bf16bf16 with the six non-RNN harnesses.

oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU: OpenBenchmarking.org metrics for this test profile configuration are based on 551 public results since 17 June 2020, with the latest data as of 1 December 2020.
Below is an overview of the generalized performance for components where there is sufficient, statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely; this overview is intended only as general guidance on performance expectations.
[Table: Component | Percentile Rank | # Matching Public Results | ms (Average)]
[Chart: Distribution of Public Results — Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU; 551 results ranging from 0 to 558 ms]
Based on OpenBenchmarking.org data, the selected test / test configuration (oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU) has an average run-time of 2 minutes. By default this test profile runs at least 3 times, but the run count may increase if the standard deviation exceeds pre-defined thresholds or other calculations deem additional runs necessary for greater statistical accuracy of the result.
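The adaptive run-count behavior described above can be sketched as a simple relative-standard-deviation check. The threshold value and function below are illustrative assumptions, not the Phoronix Test Suite's actual internals:

```python
import statistics

# Illustrative threshold; the real Phoronix Test Suite uses its own pre-defined defaults.
DEVIATION_THRESHOLD_PCT = 3.5
MIN_RUNS = 3

def needs_more_runs(times_ms):
    """Return True if another benchmark run is warranted (hypothetical sketch)."""
    if len(times_ms) < MIN_RUNS:
        return True  # always complete the minimum number of runs
    # Relative standard deviation (coefficient of variation) as a percentage
    rsd = statistics.stdev(times_ms) / statistics.mean(times_ms) * 100
    return rsd > DEVIATION_THRESHOLD_PCT

# Stable results: well under the threshold, so three runs suffice
print(needs_more_runs([100.0, 100.5, 100.2]))  # False
# Noisy results: large spread triggers additional runs
print(needs_more_runs([100.0, 140.0, 90.0]))   # True
```

This mirrors the average-deviation statistic reported below (0.55% on average for this configuration, well under any reasonable threshold), which is why most runs of this harness stop at the minimum of three.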
[Chart: Time Required To Complete Benchmark (Minutes) — Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU — Run-Time Min: 1 / Avg: 1 / Max: 1]
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.5% between runs.
[Chart: Average Deviation Between Runs (Percent, fewer is better) — Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU — Deviation Min: 0 / Avg: 0.55 / Max: 4]
Recent Test Results
3 Systems - 191 Benchmark Results
AMD Ryzen 3 2200G - ASUS PRIME B350M-E - AMD Raven
Ubuntu 20.10 - 5.8.0-38-generic - GNOME Shell 3.38.1
4 Systems - 104 Benchmark Results
AMD Ryzen 5 2400G - MSI B350M GAMING PRO - AMD Raven
Ubuntu 19.10 - 5.3.0-64-generic - GNOME Shell 3.34.1
4 Systems - 61 Benchmark Results
Intel Core i9-7960X - MSI X299 SLI PLUS - Intel Sky Lake-E DMI3 Registers
Ubuntu 20.04 - 5.4.0-58-generic - X Server 1.20.8
3 Systems - 113 Benchmark Results
Intel Core i9-7980XE - ASUS PRIME X299-A - Intel Sky Lake-E DMI3 Registers
Ubuntu 20.10 - 5.8.0-36-generic - GNOME Shell 3.38.1
Featured Graphics Comparison
3 Systems - 253 Benchmark Results
Intel Core i9-10885H - HP 8736 - Intel Comet Lake PCH
Ubuntu 20.04 - 5.6.0-1034-oem - GNOME Shell 3.36.4
4 Systems - 210 Benchmark Results
POWER9 - PowerNV T2P9D01 REV 1.01 - 64GB
Ubuntu 20.10 - 5.9.10-050910-generic - X Server
Featured Graphics Comparison
1 System - 219 Benchmark Results
AMD EPYC 7302P 16-Core - Supermicro H11SSL-i v2.00 - AMD Starship
Ubuntu 20.04 - 5.4.0-42-generic - GNOME Shell 3.36.4
2 Systems - 183 Benchmark Results
AMD FX-8370 Eight-Core - MSI 970 GAMING - AMD RD9x0
Ubuntu 20.10 - 5.8.0-33-generic - GNOME Shell 3.38.1
3 Systems - 313 Benchmark Results
Intel Core i7-5960X - ASRock X99 Extreme3 - Intel Xeon E7 v3
Ubuntu 20.04 - 5.4.0-58-generic - GNOME Shell 3.36.4
2 Systems - 85 Benchmark Results
Intel Core i5-1135G7 - Dell 0THX8P - Intel Device a0ef
Ubuntu 20.04 - 5.6.0-1036-oem - GNOME Shell 3.36.4