This is a test of Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN), Intel's optimized library for deep neural networks, making use of its built-in benchdnn benchmarking functionality. The result reported is the total perf time.
The pts/mkl-dnn test profile has been succeeded by the pts/onednn test profile, matching the naming convention used by Intel following the library's renaming to oneDNN. Run the pts/onednn test profile to use the latest version of the Intel oneDNN benchmark.
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark mkl-dnn.
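For scripted use, the invocation above can be wrapped in a small helper. This is a hypothetical sketch (the pts_benchmark_command helper is illustrative and not part of the Phoronix Test Suite itself; it assumes phoronix-test-suite is on the PATH):

```python
# Hypothetical helper: builds the argv for running a test profile with
# the Phoronix Test Suite. Only constructs the command; running it is
# left to the caller (e.g. via subprocess).
import shlex

def pts_benchmark_command(profile: str) -> list[str]:
    """Return the command-line argument list for benchmarking a profile."""
    return ["phoronix-test-suite", "benchmark", profile]

print(shlex.join(pts_benchmark_command("mkl-dnn")))
# → phoronix-test-suite benchmark mkl-dnn
```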
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly. ** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform. *** Test profile page view reporting began March 2021. Data updated weekly as of 9 November 2024.
Revision History
pts/mkl-dnn-1.3.1 [View Source] Wed, 17 Jun 2020 16:31:48 GMT NOTICE: The pts/mkl-dnn test profile has been succeeded by the pts/onednn test profile, matching the naming convention used by Intel following the library's renaming to oneDNN. Run the pts/onednn test profile to use the latest version of the Intel oneDNN benchmark.
pts/mkl-dnn-1.3.0 [View Source] Thu, 09 Apr 2020 18:14:05 GMT Fix: the f16 option was not meant to be exposed.
Harness: Deconvolution Batch deconv_1d - Data Type: f32
OpenBenchmarking.org metrics for this test profile configuration based on 138 public results since 9 April 2020 with the latest data as of 1 October 2023.
Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary vastly; this overview is intended only as general guidance on performance expectations.
Based on OpenBenchmarking.org data, the selected test / test configuration (oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_1d - Data Type: f32) has an average run-time of 2 minutes. By default this test profile is set to run at least 3 times, but the run count may increase if the standard deviation exceeds pre-defined defaults or other calculations deem additional runs necessary for greater statistical accuracy of the result.
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.8%.
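The repeat-run policy described above can be sketched as a simple decision function. This is a minimal illustration, not the actual Phoronix Test Suite logic; the 3.5% relative standard deviation threshold is an assumed, illustrative value:

```python
# Sketch of an adaptive run-count policy: run at least min_runs times,
# and keep running while the relative standard deviation of the
# observed run-times stays above a threshold.
import statistics

def needs_more_runs(times, min_runs=3, rel_std_threshold=0.035):
    """Return True if additional benchmark runs are warranted."""
    if len(times) < min_runs:
        return True
    mean = statistics.mean(times)
    rel_std = statistics.stdev(times) / mean  # relative standard deviation
    return rel_std > rel_std_threshold

# Three stable ~2-minute runs: well under the threshold, so no reruns.
print(needs_more_runs([120.1, 119.8, 121.0]))  # → False
# Noisy runs: high relative deviation triggers additional runs.
print(needs_more_runs([100.0, 150.0, 100.0]))  # → True
```

A result like the 0.8% average standard deviation cited above would sit comfortably below such a threshold, so most runs of this profile stop at the minimum of three.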
Notable Instruction Set Usage
Notable instruction set extensions supported by this test, based on an automatic analysis by the Phoronix Test Suite / OpenBenchmarking.org analytics engine.
This test profile binary relies on the shared libraries libdnnl.so.1, libm.so.6, libgomp.so.1, libc.so.6.
Tested CPU Architectures
This benchmark has been successfully tested on the architectures mentioned below. The CPU architectures listed are those where successful OpenBenchmarking.org result uploads occurred, chiefly to help determine whether a given test is compatible with various alternative CPU architectures.