oneDNN: This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn benchmarking functionality. The result reported is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and, before that, MKL-DNN, prior to being rebranded as part of the Intel oneAPI toolkit.
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark onednn.
Test Created 17 June 2020
Last Updated 31 March 2023
Test Type Processor
Average Install Time 7 Minutes, 27 Seconds
Average Run Time 2 Minutes, 5 Seconds
Test Dependencies C/C++ Compiler Toolchain + CMake
Accolades: 40k+ Downloads
Harness Option Popularity (OpenBenchmarking.org): Recurrent Neural Network Training 16.0%, Recurrent Neural Network Inference 15.6%, IP Shapes 3D 13.8%, Convolution Batch Shapes Auto 13.7%, IP Shapes 1D 13.7%, Deconvolution Batch shapes_1d 13.7%, Deconvolution Batch shapes_3d 13.7%
Data Type Option Popularity (OpenBenchmarking.org): f32 45.5%, u8s8f32 31.2%, bf16bf16bf16 23.2%
Revision History
pts/onednn-3.1.0 [View Source] Fri, 31 Mar 2023 18:14:37 GMT - Update against oneDNN 3.1 upstream.
pts/onednn-3.0.0 [View Source] Mon, 19 Dec 2022 21:07:39 GMT - Update against oneDNN 3.0 upstream.
pts/onednn-2.7.0 [View Source] Wed, 28 Sep 2022 13:00:44 GMT - Update against oneDNN 2.7 upstream.
pts/onednn-1.8.0 [View Source] Tue, 29 Mar 2022 19:55:25 GMT - Update against oneDNN 2.6 upstream.
pts/onednn-1.7.0 [View Source] Sat, 13 Mar 2021 07:49:33 GMT - Update against oneDNN 2.1.2 upstream.
pts/onednn-1.6.1 [View Source] Sun, 20 Dec 2020 09:58:16 GMT - This test profile builds and works fine on macOS, so enable it (MacOSX).
pts/onednn-1.6.0 [View Source] Wed, 09 Dec 2020 13:47:31 GMT - Update against oneDNN 2.0 upstream.
pts/onednn-1.5.0 [View Source] Wed, 17 Jun 2020 16:26:39 GMT - Initial commit of oneDNN test profile based on Intel oneDNN 1.5, forked from the existing mkl-dnn test profile (named for MKL-DNN before it was renamed to DNNL and then oneDNN); a new test profile was created to match Intel's naming convention.
Performance Metrics
oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
OpenBenchmarking.org metrics for this test profile configuration based on 112 public results since 31 March 2023, with the latest data as of 31 May 2023.
Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely; this overview is intended only as general guidance on performance expectations.
Component | Percentile Rank | # Compatible Public Results | ms (Average)
Distribution of public results (OpenBenchmarking.org) - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU: 110 results, ranging from 708 to 14823 ms.
Based on OpenBenchmarking.org data, the selected test / test configuration (oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU) has an average run-time of 12 minutes. By default this test profile runs at least 3 times, but more runs may be added if the standard deviation exceeds pre-defined thresholds or other calculations deem additional runs necessary for greater statistical accuracy of the result.
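The adaptive run-count behavior described above can be sketched as follows. This is an illustrative approximation, not the Phoronix Test Suite's actual implementation; the `run_benchmark` helper, the deviation threshold, and the run limits are all assumptions for the sake of the example:

```python
import statistics

def run_benchmark(run_once, min_runs=3, max_runs=15, stddev_limit=0.035):
    """Collect timings until the relative standard deviation falls below
    stddev_limit, in the spirit of how the test suite decides whether
    additional runs are needed (threshold and limits are illustrative)."""
    times = [run_once() for _ in range(min_runs)]
    while len(times) < max_runs:
        rel_dev = statistics.stdev(times) / statistics.mean(times)
        if rel_dev <= stddev_limit:
            break  # results are stable enough; stop adding runs
        times.append(run_once())
    return times

# Deterministic stand-in for an actual benchmark run (seconds per run).
samples = iter([10.0, 10.8, 10.1, 10.05, 10.02, 10.0, 10.0])
timings = run_benchmark(lambda: next(samples))
```

The key design point is that noisy early runs (like the 10.8 outlier here) trigger extra iterations until the run-to-run deviation settles under the threshold.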
Time required to complete benchmark (minutes) - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU: Min: 4 / Avg: 11.28 / Max: 30
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.4%.
Average deviation between runs (percent, fewer is better) - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU: Min: 0 / Avg: 0.43 / Max: 11
Notable Instruction Set Usage
Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.
Instruction Set | Support | Instructions Detected

POPCNT: Used by default on supported hardware. Found on Intel processors since at least 2010 and on AMD processors since Bulldozer (2011). Detected: POPCNT

AVX (Advanced Vector Extensions): Used by default on supported hardware. Found on Intel processors since Sandy Bridge (2011) and on AMD processors since Bulldozer (2011). Detected: VZEROUPPER VEXTRACTF128 VBROADCASTSS VINSERTF128 VPERMILPS VBROADCASTSD VPERMILPD VPERM2F128 VMASKMOVPS

AVX2 (Advanced Vector Extensions 2): Used by default on supported hardware. Found on Intel processors since Haswell (2013) and on AMD processors since Excavator (2016). Detected: VPBROADCASTQ VPGATHERQQ VGATHERQPS VPGATHERQD VPERM2I128 VINSERTI128 VPBROADCASTD VPBLENDD VPSLLVD VEXTRACTI128 VPSRAVD VPBROADCASTW VPERMQ VPSRLVQ VPBROADCASTB VGATHERDPS VPGATHERDQ VPERMD VPSLLVQ VPMASKMOVQ

FMA (Fused Multiply-Add): Used by default on supported hardware. Found on Intel processors since Haswell (2013) and on AMD processors since Bulldozer (2011). Detected: VFMADD231SS VFMADD213SS VFMADD132SS VFMADD132SD VFNMADD132SD VFMADD132PS VFMADD231PS VFMADD213PS VFNMADD132PS VFNMSUB231PS VFNMSUB132SS VFNMADD132SS VFNMSUB231SS VFNMADD231PS VFNMADD231SS VFNMADD213SS VFMSUB132SS VFMADD132PD VFMADD231PD VFMADD231SD VFMADD213PD VFMSUB231SS VFMSUB231SD VFMSUB213PS VFMSUB132PS VFMSUB213SS

AVX-512 (Advanced Vector Extensions 512): Requires passing a supported compiler/build flag (verified with targets: cascadelake, sapphirerapids). Detected: (ZMM register use)
The test / benchmark does honor compiler flag changes.
Last automated analysis: 31 March 2023
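As a quick way to check which of the above extensions a given CPU exposes, one can inspect the `flags` field of /proc/cpuinfo on Linux. A minimal sketch; the `isa_support` helper and its mapping are illustrative, not part of the test profile (the flag spellings are the standard Linux cpuinfo names):

```python
def isa_support(cpuinfo_flags):
    """Given the 'flags' field from /proc/cpuinfo, report which of the
    instruction set extensions used by this test are available."""
    flags = set(cpuinfo_flags.split())
    wanted = {
        "POPCNT": "popcnt",
        "AVX": "avx",
        "AVX2": "avx2",
        "FMA": "fma",
        "AVX-512": "avx512f",  # foundation subset; further avx512* flags vary by CPU
    }
    return {name: flag in flags for name, flag in wanted.items()}

# Example flags string (a hypothetical excerpt, not a real CPU dump).
sample = "fpu sse sse2 popcnt avx avx2 fma avx512f avx512bw"
support = isa_support(sample)
```

On a real system, the flags string can be read with `open("/proc/cpuinfo")` and scanning for the first line starting with "flags".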
This test profile binary relies on the shared libraries libdnnl.so.3, libm.so.6, libgomp.so.1, and libc.so.6.
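To verify that those libraries resolve on a given system, one could parse the output of `ldd` run against the test binary. A hedged sketch: the `missing_libraries` helper and the sample ldd output below are hypothetical, shown only to illustrate the check:

```python
def missing_libraries(ldd_output, required):
    """Return the required shared libraries not resolved in ldd output."""
    resolved = set()
    for line in ldd_output.splitlines():
        line = line.strip()
        if not line:
            continue
        # ldd lines look like: "libm.so.6 => /lib/.../libm.so.6 (0x...)"
        resolved.add(line.split(" => ")[0].split()[0])
    return [lib for lib in required if lib not in resolved]

required = ["libdnnl.so.3", "libm.so.6", "libgomp.so.1", "libc.so.6"]
# Hypothetical ldd output for the benchmark binary (addresses elided).
sample = """
    linux-vdso.so.1 (0x0000)
    libdnnl.so.3 => /usr/lib/libdnnl.so.3 (0x0000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x0000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x0000)
"""
gaps = missing_libraries(sample, required)  # libgomp.so.1 unresolved in this sample
```

In practice the `sample` text would come from `subprocess.run(["ldd", path_to_binary], ...)`; a non-empty `gaps` list indicates the benchmark binary would fail to load.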
Tested CPU Architectures
This benchmark has been successfully tested on the architectures listed below. The CPU architectures listed are those for which successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.
CPU Architecture | Kernel Identifier | Verified On
Intel / AMD x86 64-bit | x86_64 | (Many Processors)
Recent Test Results
1 System - 96 Benchmark Results
Intel Core i7-12700H - Intel NUC12SNKi72 - Intel Alder Lake PCH
Ubuntu 22.04 - 5.19.0-42-generic - GNOME Shell 42.5
1 System - 253 Benchmark Results
Intel Core i7-12700H - Intel NUC12SNKi72 - Intel Alder Lake PCH
Ubuntu 22.04 - 5.19.0-42-generic - GNOME Shell 42.5
1 System - 96 Benchmark Results
Intel Core i7-12700H - Intel NUC12SNKi72 - Intel Alder Lake PCH
Ubuntu 22.04 - 5.19.0-41-generic - GNOME Shell 42.5
1 System - 48 Benchmark Results
AMD Ryzen 9 7950X3D 16-Core - ASRockRack B665D4U-1L v3.03 - AMD Device 14d8
Debian 12 - 6.3.3-x64v3-xanmod1 - Xfce 4.18
1 System - 118 Benchmark Results
AMD Ryzen 9 7950X3D 16-Core - ASRockRack B665D4U-1L v3.03 - AMD Device 14d8
Debian 12 - 6.3.3-x64v3-xanmod1 - Xfce 4.18
1 System - 143 Benchmark Results
AMD Ryzen 9 7950X3D 16-Core - ASRockRack B665D4U-1L v3.03 - AMD Device 14d8
Debian 12 - 6.3.3-x64v3-xanmod1 - Xfce 4.18
1 System - 119 Benchmark Results
Intel Core i9-12900K - ASUS PRIME Z690M-HZ - Intel Alder Lake-S PCH
Debian 12 - 6.3.2-x64v3-xanmod1 - Xfce 4.18
1 System - 84 Benchmark Results
2 x Intel Xeon Platinum 8362 - Lenovo 7Z73CTOLWW v05 - Intel Device 0998
Ubuntu 22.04 - 5.15.0-71-generic - 1.3.224
1 System - 7 Benchmark Results
Intel Xeon w9-3495X - Supermicro X13SWA-TF v1.01 - Intel Alder Lake-S PCH
Ubuntu 22.10 - 6.0.0-060000-generic - GNOME Shell 43.1
1 System - 3 Benchmark Results
Intel Xeon w9-3495X - Supermicro X13SWA-TF v1.01 - Intel Alder Lake-S PCH
Ubuntu 22.10 - 6.0.0-060000-generic - GNOME Shell 43.1
1 System - 3 Benchmark Results
Intel Xeon w9-3495X - Supermicro X13SWA-TF v1.01 - Intel Alder Lake-S PCH
Ubuntu 22.10 - 6.0.0-060000-generic - GNOME Shell 43.1
1 System - 11 Benchmark Results
Intel Xeon w9-3495X - Supermicro X13SWA-TF v1.01 - Intel Alder Lake-S PCH
Ubuntu 22.10 - 6.0.0-060000-generic - GNOME Shell 43.1
1 System - 116 Benchmark Results
Intel Xeon Gold 6438N - Nokia Solutions and Networks AE-SER2UES-A/AF1854.01 - Intel Device 1bce
Rocky Linux 8.7 - 4.18.0-425.19.2.el8_7.x86_64 - GNOME Shell 3.32.2
1 System - 348 Benchmark Results
Intel Celeron J6413 - (5.19 BIOS) - Intel Elkhart Lake PMC
Ubuntu 23.04 - 6.2.0-20-generic - GCC 12.2.0
1 System - 343 Benchmark Results
2 x Intel Xeon Silver 4314 - Intel M20NTP2SB - Intel Device 0998
Ubuntu 22.04 - 5.15.0-71-generic - 1.3.224
Most Popular Test Results
3 Systems - 131 Benchmark Results
AMD Ryzen 7 5800X3D 8-Core - ASUS ROG CROSSHAIR VIII HERO - AMD Starship
Ubuntu 22.04 - 5.17.0-1019-oem - GNOME Shell 42.2
5 Systems - 72 Benchmark Results
AMD Ryzen 9 7950X 16-Core - ASUS ROG CROSSHAIR X670E HERO - AMD Device 14d8
Ubuntu 22.04 - 6.3.0-060300rc7daily20230417-generic - GNOME Shell 42.5
2 Systems - 40 Benchmark Results
AMD Ryzen 7 PRO 5850U - HP 8A78 - AMD Renoir
Pop 22.04 - 5.19.0-76051900-generic - GNOME Shell 42.3.1
2 Systems - 93 Benchmark Results
AMD Ryzen 7 7800X3D 8-Core - ASUS ROG CROSSHAIR X670E HERO - AMD Device 14d8
Ubuntu 23.04 - 6.2.8-060208-generic - GNOME Shell 44.0
3 Systems - 231 Benchmark Results
2 x Intel Xeon Platinum 8380 - Intel M50CYP2SB2U - Intel Ice Lake IEH
Ubuntu 22.10 - 6.2.0-rc5-phx-dodt - GNOME Shell 43.0
3 Systems - 21 Benchmark Results
2 x AMD EPYC 9654 96-Core - AMD Titanite_4G - AMD Device 14a4
Clear Linux OS 38660 - 6.2.8-1293.native - X Server
4 Systems - 215 Benchmark Results
AMD EPYC 7343 16-Core - Supermicro H12SSL-i v1.02 - 8 x 64 GB DDR4-3200MT
AlmaLinux 9.1 - 5.14.0-162.12.1.el9_1.x86_64 - GCC 11.3.1 20220421
3 Systems - 21 Benchmark Results
Intel Core i9-10980XE - ASRock X299 Steel Legend - Intel Sky Lake-E DMI3 Registers
Ubuntu 22.04 - 5.19.0-051900rc7-generic - GNOME Shell 42.2
3 Systems - 21 Benchmark Results
AMD Ryzen Threadripper 3960X 24-Core - MSI Creator TRX40 - AMD Starship
Ubuntu 22.04 - 5.19.0-051900rc7-generic - GNOME Shell 42.2
3 Systems - 21 Benchmark Results
Intel Core i7-1165G7 - Dell 0GG9PT - Intel Tiger Lake-LP
Ubuntu 22.10 - 5.19.0-38-generic - GNOME Shell 43.0
3 Systems - 7 Benchmark Results
AMD Ryzen 9 5900HX - ASUS G513QY v1.0 - AMD Renoir
Ubuntu 22.10 - 5.19.0-38-generic - GNOME Shell 43.0
3 Systems - 46 Benchmark Results
AMD Ryzen 7 4700U - LENOVO LNVNB161216 - AMD Renoir
Ubuntu 22.04 - 5.19.0-35-generic - GNOME Shell 42.2
4 Systems - 26 Benchmark Results
2 x AMD EPYC 7513 32-Core - Supermicro H12DSi-N6 v1.02 - 512GB
AlmaLinux 9.1 - 5.14.0-162.18.1.el9_1.x86_64 - GCC 11.3.1 20220421
Featured OpenGL Comparison
AMD Ryzen Threadripper 3990X 64-Core - Gigabyte TRX40 AORUS PRO WIFI - AMD Starship
Ubuntu 23.04 - 6.2.0-18-generic - GNOME Shell 44.0
Featured OpenGL Comparison
Intel Core i7-10700T - Logic Supply RXM-181 - Intel Comet Lake PCH
openSUSE 20230303 - 6.2.1-1-default - KDE Plasma 5.27.2