oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn benchmarking functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and, before that, MKL-DNN, prior to being rebranded as part of the Intel oneAPI toolkit.
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark onednn.
Test Created 17 June 2020
Last Updated 19 December 2022
Test Type Processor
Average Install Time 7 Minutes, 42 Seconds
Average Run Time 2 Minutes, 5 Seconds
Test Dependencies C/C++ Compiler Toolchain + CMake
Accolades 20k+ Downloads
[Chart: oneDNN Popularity Statistics (pts/onednn) on OpenBenchmarking.org — public result uploads *, reported installs **, reported test completions **, and test profile page views ***, monthly from 2020.06 through 2023.01]
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform.
*** Test profile page view reporting began March 2021. Data current as of 31 January 2023.
Harness Option Popularity (OpenBenchmarking.org): Recurrent Neural Network Inference 14.8%, Recurrent Neural Network Training 14.6%, Matrix Multiply Batch Shapes Transformer 12.0%, Deconvolution Batch shapes_1d 11.9%, Deconvolution Batch shapes_3d 11.8%, IP Shapes 3D 11.8%, Convolution Batch Shapes Auto 11.7%, IP Shapes 1D 11.4%
Data Type Option Popularity (OpenBenchmarking.org): f32 36.7%, u8s8f32 35.7%, bf16bf16bf16 27.6%
Revision History
pts/onednn-3.0.0 [View Source] Mon, 19 Dec 2022 21:07:39 GMT - Update against oneDNN 3.0 upstream.
pts/onednn-2.7.0 [View Source] Wed, 28 Sep 2022 13:00:44 GMT - Update against oneDNN 2.7 upstream.
pts/onednn-1.8.0 [View Source] Tue, 29 Mar 2022 19:55:25 GMT - Update against oneDNN 2.6 upstream.
pts/onednn-1.7.0 [View Source] Sat, 13 Mar 2021 07:49:33 GMT - Update against oneDNN 2.1.2 upstream.
pts/onednn-1.6.1 [View Source] Sun, 20 Dec 2020 09:58:16 GMT - This test profile builds and works fine on macOS, so enable it (MacOSX).
pts/onednn-1.6.0 [View Source] Wed, 09 Dec 2020 13:47:31 GMT - Update against oneDNN 2.0 upstream.
pts/onednn-1.5.0 [View Source] Wed, 17 Jun 2020 16:26:39 GMT - Initial commit of the oneDNN test profile based on Intel oneDNN 1.5, forked from the existing mkl-dnn test profile, which was named for MKL-DNN before it was renamed to DNNL and then oneDNN; a new test profile was created to match Intel's current naming convention.
Performance Metrics
[Interactive result-configuration selector omitted: it lists each pts/onednn revision (1.5.x through 3.0.x) crossed with every harness (IP Shapes 1D/3D, Convolution Batch Shapes Auto, Deconvolution Batch shapes_1d/shapes_3d, Recurrent Neural Network Training/Inference, Matrix Multiply Batch Shapes Transformer; pts/onednn-1.5.x used the older IP Batch 1D/All and Deconvolution Batch deconv_1d/deconv_3d harness names), each data type (f32, u8s8f32, bf16bf16bf16), and the CPU engine.]
oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU. OpenBenchmarking.org metrics for this test profile configuration are based on 171 public results since 19 December 2022, with the latest data as of 31 January 2023.
Below is an overview of the generalized performance for components where there is sufficient, statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, there can be vastly different OS configurations; this overview is intended to offer only general guidance on performance expectations.
Component | Percentile Rank | # Compatible Public Results | ms (Average)
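A component's percentile rank in the table above can be thought of as the share of compatible public results it beats. A minimal Python sketch of that idea, assuming a lower-is-better metric in milliseconds (the function name and comparison rule are illustrative, not the actual OpenBenchmarking.org implementation):

```python
def percentile_rank(result_ms, public_results_ms):
    """Share (in percent) of public results this result beats; lower ms is better."""
    if not public_results_ms:
        raise ValueError("no public results to compare against")
    beaten = sum(1 for other in public_results_ms if result_ms < other)
    return 100.0 * beaten / len(public_results_ms)

# Example: a 500 ms run against a small set of public results (illustrative data).
public = [460, 520, 800, 1500, 9800]
print(percentile_rank(500, public))  # 80.0 — beats 4 of the 5 results
```

A real ranking would also need to bucket results per component model and filter out incompatible configurations, which is omitted here.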
OpenBenchmarking.org Distribution Of Public Results - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU. [Histogram: 167 results ranging from 460 ms to 10021 ms.]
Based on OpenBenchmarking.org data, the selected test / test configuration (oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU) has an average run-time of 10 minutes. By default this test profile is set to run at least 3 times, but the run count may increase if the standard deviation exceeds pre-defined defaults or other calculations deem additional runs necessary for greater statistical accuracy of the result.
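The run-count rule described above (at least 3 runs, with more added while run-to-run deviation stays high) can be sketched as follows. This is an illustrative approximation, not the Phoronix Test Suite's actual logic; the 3.5% threshold and 15-run cap are stand-ins for its pre-defined defaults:

```python
import statistics

def run_until_stable(run_benchmark, min_runs=3, max_runs=15, max_rel_stdev=3.5):
    """Collect timings until the relative standard deviation (percent of the
    mean) drops below the threshold, or the run cap is reached."""
    times = [run_benchmark() for _ in range(min_runs)]
    while len(times) < max_runs:
        rel_stdev = 100.0 * statistics.stdev(times) / statistics.mean(times)
        if rel_stdev <= max_rel_stdev:
            break
        times.append(run_benchmark())
    return times

# Example with a deterministic fake benchmark: stable timings stop at 3 runs.
fake = iter([600.0, 601.0, 599.0, 600.0])
times = run_until_stable(lambda: next(fake))
print(len(times))  # 3
```

With timings of 600, 601, and 599 ms the relative standard deviation is about 0.17%, well under the threshold, so no extra runs are added.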
[Chart: Time Required To Complete Benchmark (Minutes) - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU. Min: 4 / Avg: 9.9 / Max: 25]
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.6%.
[Chart: Average Deviation Between Runs (Percent, Fewer Is Better) - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU. Min: 0 / Avg: 0.56 / Max: 7]
Notable Instruction Set Usage
Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.
Instruction Set | Support | Instructions Detected
SSE 4.2 (SSE4_2): Used by default on supported hardware. Found on Intel processors since at least 2010. Found on AMD processors since Bulldozer (2011). Instructions detected: POPCNT
Advanced Vector Extensions (AVX): Used by default on supported hardware. Found on Intel processors since Sandy Bridge (2011). Found on AMD processors since Bulldozer (2011). Instructions detected: VZEROUPPER VBROADCASTSS VINSERTF128 VPERMILPS VBROADCASTSD VEXTRACTF128 VPERMILPD VPERM2F128 VMASKMOVPS
Advanced Vector Extensions 2 (AVX2): Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Excavator (2016). Instructions detected: VPBROADCASTQ VINSERTI128 VPBROADCASTD VPBLENDD VPSLLVD VEXTRACTI128 VPSRAVD VPGATHERQQ VGATHERQPS VPERMQ VPBROADCASTW VPSRLVQ VPBROADCASTB VGATHERDPS VPGATHERDQ VPGATHERQD VPERM2I128 VPERMD VPSLLVQ VPMASKMOVQ
FMA (FMA): Used by default on supported hardware. Found on Intel processors since Haswell (2013). Found on AMD processors since Bulldozer (2011). Instructions detected: VFMADD213SS VFMADD132SS VFMADD132SD VFNMADD132SD VFMADD231SS VFMADD132PS VFMADD231PS VFMADD213PS VFNMADD132PS VFNMSUB231PS VFNMSUB132SS VFNMADD132SS VFNMSUB231SS VFNMADD231PS VFNMADD231SS VFNMADD213SS VFMSUB132SS VFMADD132PD VFMADD231PD VFMADD231SD VFMADD213PD VFMSUB231SS VFMSUB231SD VFMSUB213PS VFMSUB132PS VFMSUB213SS
Advanced Vector Extensions 512 (AVX512): Requires passing a supported compiler/build flag (verified with targets: cascadelake, sapphirerapids). Instructions detected: (ZMM REGISTER USE)
The test / benchmark does honor compiler flag changes.
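Since the AVX-512 code paths require a supported target flag and the test honors compiler flag changes, one way to exercise them is to override the compiler flags before installing the test profile. This is a hypothetical sketch: the CFLAGS/CXXFLAGS environment override is a general Phoronix Test Suite convention rather than something documented on this page, and `-march=sapphirerapids` requires a sufficiently recent compiler.

```shell
# Hypothetical: override compiler flags before installing the test profile.
# -march=cascadelake or -march=sapphirerapids enables AVX-512 (ZMM) code paths
# on matching hardware; adjust the target to your CPU and compiler version.
export CFLAGS="-O3 -march=cascadelake"
export CXXFLAGS="-O3 -march=cascadelake"
phoronix-test-suite install onednn
phoronix-test-suite benchmark onednn
```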
Last automated analysis: 23 December 2022
This test profile binary relies on the shared libraries libdnnl.so.3, libm.so.6, libgomp.so.1, and libc.so.6.
Tested CPU Architectures
This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures where successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.
CPU Architecture | Kernel Identifier | Verified On
Intel / AMD x86 64-bit | x86_64 | (Many Processors)
Recent Test Results
33 Systems - 47 Benchmark Results
Intel Core i9-9900 - Gigabyte Z370 AORUS Ultra Gaming-CF - Intel 8th
ManjaroLinux 20.2 - 5.4.74-1-MANJARO - Xfce 4.14
33 Systems - 46 Benchmark Results
Intel Core i9-9900 - Gigabyte Z370 AORUS Ultra Gaming-CF - Intel 8th
ManjaroLinux 20.2 - 5.4.74-1-MANJARO - Xfce 4.14
32 Systems - 45 Benchmark Results
Intel Core i9-9900 - Gigabyte Z370 AORUS Ultra Gaming-CF - Intel 8th
ManjaroLinux 20.2 - 5.4.74-1-MANJARO - Xfce 4.14
4 Systems - 297 Benchmark Results
2 x AMD EPYC 9654 96-Core - AMD Titanite_4G - AMD Device 14a4
Clear Linux OS 38100 - 6.1.7-1247.native - X Server 1.21.1.6
3 Systems - 318 Benchmark Results
2 x Intel Xeon Platinum 8490H - Quanta Cloud S6Q-MB-MPS - Intel Device 1bce
Clear Linux OS 38100 - 6.1.7-1247.native - X Server 1.21.1.6
1 System - 1 Benchmark Result
Intel Core i7-8650U - LENOVO 20L70025US - Intel Xeon E3-1200 v6
Pop 22.04 - 6.0.12-76060006-generic - GNOME Shell 42.3.1
2 Systems - 162 Benchmark Results
Intel Core i9-12900K - ASUS PRIME Z690M-HZ - Intel Alder Lake-S PCH
Debian - 6.1.7-x64v3-xanmod1 - Xfce 4.18
1 System - 130 Benchmark Results
Intel Core i9-12900K - ASUS PRIME Z690M-HZ - Intel Alder Lake-S PCH
Debian - 6.1.7-x64v3-xanmod1 - Xfce 4.18
128 Systems - 1156 Benchmark Results
2 x Intel Xeon E5-2680 v2 - Supermicro X9DR3-F v0123456789 - Intel Xeon E7 v2
Arch Linux - 6.1.6-arch1-1 - GCC 12.2.0
1 System - 46 Benchmark Results
2 x Intel Xeon E5-2680 v2 - Supermicro X9DR3-F v0123456789 - Intel Xeon E7 v2
Arch Linux - 6.1.6-arch1-1 - GCC 12.2.0
3 Systems - 44 Benchmark Results
AMD Ryzen 5 7600 6-Core - ASUS ROG CROSSHAIR X670E HERO - AMD Device 14d8
Ubuntu 22.04 - 6.0.0-060000rc1daily20220820-generic - GNOME Shell 42.2
32 Systems - 44 Benchmark Results
Intel Core i9-9900 - Gigabyte Z370 AORUS Ultra Gaming-CF - Intel 8th
ManjaroLinux 20.2 - 5.4.74-1-MANJARO - Xfce 4.14
2 Systems - 131 Benchmark Results
2 x Intel Xeon Platinum 8280 - GIGABYTE MD61-SC2-00 v01000100 - Intel Sky Lake-E DMI3 Registers
Ubuntu 21.04 - 5.11.0-49-generic - GNOME Shell 3.38.4
3 Systems - 211 Benchmark Results
Intel Core i7-5960X - Gigabyte X99-UD4-CF - Intel Xeon E7 v3
Debian 11 - 5.10.0-10-amd64 - 1.0.2
3 Systems - 99 Benchmark Results
Most Popular Test Results
2 Systems - 133 Benchmark Results
2 x AMD EPYC 9654 96-Core - AMD Titanite_4G - AMD Device 14a4
Ubuntu 22.10 - 5.19.0-26-generic - GNOME Shell 43.0
2 Systems - 89 Benchmark Results
AMD Ryzen 7 3800XT 8-Core - MSI X370 XPOWER GAMING TITANIUM - AMD Starship
Debian 11 - 5.10.0-20-amd64 - X Server 1.20.11
3 Systems - 73 Benchmark Results
Intel Core i7-1165G7 - Dell 0GG9PT - Intel Tiger Lake-LP
Ubuntu 21.10 - 5.13.0-52-generic - GNOME Shell 40.5
3 Systems - 110 Benchmark Results
Intel Core i7-10700T - Logic Supply RXM-181 - Intel Comet Lake PCH
Ubuntu 22.04 - 5.15.0-52-generic - GNOME Shell 42.2
3 Systems - 28 Benchmark Results
AMD Ryzen 5 3400G - ASUS PRIME B450M-A - AMD Raven
Ubuntu 19.10 - 5.3.0-46-generic - GNOME Shell 3.34.1
2 Systems - 361 Benchmark Results
AMD Ryzen Threadripper 3960X 24-Core - MSI Creator TRX40 - AMD Starship
Ubuntu 22.04 - 5.19.0-051900rc7-generic - GNOME Shell 42.2
2 Systems - 70 Benchmark Results
2 x AMD EPYC 7773X 64-Core - AMD DAYTONA_X - AMD Starship
Ubuntu 20.04 - 6.1.0-rc8-phx - X Server
3 Systems - 67 Benchmark Results
AMD Ryzen 7 PRO 5850U - HP 8A78 - AMD Renoir
Pop 22.04 - 5.19.0-76051900-generic - GNOME Shell 42.3.1
3 Systems - 24 Benchmark Results
2 x Intel Xeon Gold 5220R - TYAN S7106 - Intel Sky Lake-E DMI3 Registers
Ubuntu 20.04 - 6.1.0-phx - GNOME Shell 3.36.9
3 Systems - 69 Benchmark Results
Intel Xeon Silver 4216 - TYAN S7100AG2NR - Intel Sky Lake-E DMI3 Registers
Debian 11 - 5.10.0-10-amd64 - X Server
4 Systems - 24 Benchmark Results
AMD Ryzen 9 7950X 16-Core - ASUS ROG CROSSHAIR X670E HERO - AMD Device 14d8
Ubuntu 22.04 - 5.15.0-56-generic - GNOME Shell 42.5
3 Systems - 18 Benchmark Results
Intel Core i9-13900K - ASUS PRIME Z790-P WIFI - Intel Device 7a27
Ubuntu 22.10 - 5.19.0-26-generic - GNOME Shell 43.1
3 Systems - 69 Benchmark Results
AMD Ryzen 7 4700U - LENOVO LNVNB161216 - AMD Renoir
Ubuntu 22.04 - 5.18.8-051808-generic - GNOME Shell 42.2
4 Systems - 18 Benchmark Results
Intel Core i7-1280P - MSI MS-14C6 - Intel Alder Lake PCH
Ubuntu 22.10 - 5.19.0-26-generic - GNOME Shell 43.0
3 Systems - 70 Benchmark Results
Intel Core i9-10980XE - ASRock X299 Steel Legend - Intel Sky Lake-E DMI3 Registers
Ubuntu 22.04 - 5.19.0-051900rc7-generic - GNOME Shell 42.2