MKL-DNN DNNL Ice Lake

Intel Core i7-1065G7 testing with a Dell 06CDVY (1.0.9 BIOS) and Intel Iris Plus 3GB on Ubuntu 19.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/1910274-HU-MKLDNNDNN96&grs.

Ice Lake:

  Processor:         Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads)
  Motherboard:       Dell 06CDVY (1.0.9 BIOS)
  Chipset:           Intel Device 34ef
  Memory:            16384MB
  Disk:              KBG40ZPZ512G NVMe TOSHIBA 512GB
  Graphics:          Intel Iris Plus 3GB (1100MHz)
  Audio:             Realtek ALC289
  Network:           Intel Device 34f0
  OS:                Ubuntu 19.10
  Kernel:            5.3.0-19-generic (x86_64)
  Desktop:           GNOME Shell 3.34.1
  Display Server:    X Server 1.20.5
  Display Driver:    modesetting 1.20.5
  OpenGL:            4.6 Mesa 19.3.0-devel (git-1961653 2019-10-24 eoan-oibaf-ppa)
  Vulkan:            1.1.102
  Compiler:          GCC 9.2.1 20191008
  File-System:       ext4
  Screen Resolution: 1920x1200

Notes:
  - Compiler configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: intel_pstate powersave
  - Security: l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling

MKL-DNN DNNL Ice Lake - result summary (mkl-dnn, all values in ms; fewer is better):

  Convolution Batch conv_googlenet_v3 - bf16bf16bf16:    2000.19
  Deconvolution Batch deconv_all - bf16bf16bf16:        29589.37
  Convolution Batch conv_googlenet_v3 - u8s8f32:          146.84
  Convolution Batch conv_alexnet - bf16bf16bf16:         7964.69
  Deconvolution Batch deconv_3d - bf16bf16bf16:             50.24
  Deconvolution Batch deconv_1d - bf16bf16bf16:             69.87
  Convolution Batch conv_googlenet_v3 - f32:              571.21
  Convolution Batch conv_all - bf16bf16bf16:            43628.27
  Convolution Batch conv_alexnet - u8s8f32:               329.57
  Convolution Batch conv_3d - bf16bf16bf16:               163.08
  Recurrent Neural Network Training - f32:                827.94
  Deconvolution Batch deconv_3d - u8s8f32:              32198.43
  Deconvolution Batch deconv_1d - u8s8f32:                   3.92
  Deconvolution Batch deconv_all - f32:                  7212.46
  Convolution Batch conv_all - u8s8f32:                 27762.50
  Convolution Batch conv_alexnet - f32:                  1235.74
  Deconvolution Batch deconv_3d - f32:                      13.64
  Deconvolution Batch deconv_1d - f32:                      15.68
  Convolution Batch conv_3d - u8s8f32:                  56201.73
  Convolution Batch conv_all - f32:                     10231.63
  Convolution Batch conv_3d - f32:                          50.17
  IP Batch All - bf16bf16bf16:                              38.65
  IP Batch All - u8s8f32:                                   13.49
  IP Batch 1D - u8s8f32:                                     3.88
  IP Batch All - f32:                                       22.49
  IP Batch 1D - bf16bf16bf16:                               33.86
  IP Batch 1D - f32:                                        11.76
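The harness names map to batches of layer problems (conv_alexnet, conv_googlenet_v3, conv_all, the deconv_* sets, inner product, and RNN training), and the data-type suffix gives the source/weights/destination precisions: f32 throughout, u8s8f32 for int8 inference (unsigned 8-bit activations, signed 8-bit weights, f32 output), or bf16bf16bf16 for bfloat16 throughout. As a rough sketch of what a u8s8f32 configuration means at the library level, the following minimal example builds and runs one int8 convolution with the DNNL 1.1 C++ API; the shapes, padding, and the direct-algorithm choice are illustrative placeholders, not the benchmark's actual problem definitions.

// Minimal sketch (not from this result file): one u8s8f32 convolution
// built with the DNNL 1.1 C++ API. Shapes/padding/algorithm are invented
// placeholders for illustration only.
#include <dnnl.hpp>

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    // "u8s8f32" = u8 activations, s8 weights, f32 destination.
    memory::desc src_md({1, 64, 56, 56}, memory::data_type::u8,
                        memory::format_tag::any);
    memory::desc wei_md({128, 64, 3, 3}, memory::data_type::s8,
                        memory::format_tag::any);
    memory::desc dst_md({1, 128, 56, 56}, memory::data_type::f32,
                        memory::format_tag::any);

    convolution_forward::desc conv_d(
        prop_kind::forward_inference, algorithm::convolution_direct,
        src_md, wei_md, dst_md,
        /*strides*/ {1, 1}, /*padding_l*/ {1, 1}, /*padding_r*/ {1, 1});
    convolution_forward::primitive_desc conv_pd(conv_d, eng);

    // Allocate memory in whatever layout the implementation picked and run.
    memory src(conv_pd.src_desc(), eng);
    memory wei(conv_pd.weights_desc(), eng);
    memory dst(conv_pd.dst_desc(), eng);
    convolution_forward(conv_pd).execute(strm,
        {{DNNL_ARG_SRC, src}, {DNNL_ARG_WEIGHTS, wei}, {DNNL_ARG_DST, dst}});
    strm.wait();
    return 0;
}

Using memory::format_tag::any lets the library choose whatever blocked layout its selected implementation prefers; a build of this sketch would link against libdnnl, while the benchmark's own binaries were built with the g++ options listed under the first result below.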

MKL-DNN DNNL

Harness: Convolution Batch conv_googlenet_v3 - Data Type: bf16bf16bf16

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 2000.19 (SE +/- 23.35, N = 6, MIN: 1807.93)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl (identical for every result below)
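The bf16bf16bf16 entries run with bfloat16 source, weights, and destination, and on this system they are consistently slower than the corresponding f32 runs (e.g. 2000.19 ms here versus 571.21 ms for the f32 conv_googlenet_v3 batch). bfloat16 is simply the upper half of an IEEE-754 binary32 value (1 sign, 8 exponent, 7 mantissa bits), so it keeps f32's dynamic range while dropping precision. A minimal, illustrative C++ conversion pair (NaN special-casing omitted) looks like this:

#include <cstdint>
#include <cstring>

// Round-to-nearest-even truncation of f32 to bf16 (NaN handling omitted).
uint16_t f32_to_bf16(float x) {
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    bits += 0x7FFFu + ((bits >> 16) & 1u);  // round to nearest, ties to even
    return static_cast<uint16_t>(bits >> 16);
}

// Widening bf16 back to f32 is just a 16-bit left shift.
float bf16_to_f32(uint16_t h) {
    uint32_t bits = static_cast<uint32_t>(h) << 16;
    float x;
    std::memcpy(&x, &bits, sizeof x);
    return x;
}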

MKL-DNN DNNL

Harness: Deconvolution Batch deconv_all - Data Type: bf16bf16bf16

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 29589.37 (SE +/- 6.03, N = 3, MIN: 28826.9)

MKL-DNN DNNL

Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 146.84 (SE +/- 1.07, N = 3, MIN: 116.96)

MKL-DNN DNNL

Harness: Convolution Batch conv_alexnet - Data Type: bf16bf16bf16

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 7964.69 (SE +/- 58.94, N = 3, MIN: 7130.41)

MKL-DNN DNNL

Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 50.24 (SE +/- 0.64, N = 15, MIN: 43.71)

MKL-DNN DNNL

Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 69.87 (SE +/- 1.18, N = 12, MIN: 47.28)

MKL-DNN DNNL

Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 571.21 (SE +/- 2.81, N = 3, MIN: 526.78)

MKL-DNN DNNL

Harness: Convolution Batch conv_all - Data Type: bf16bf16bf16

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 43628.27 (SE +/- 9.24, N = 3, MIN: 42847.8)

MKL-DNN DNNL

Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 329.57 (SE +/- 3.15, N = 9, MIN: 239.13)

MKL-DNN DNNL

Harness: Convolution Batch conv_3d - Data Type: bf16bf16bf16

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 163.08 (SE +/- 1.23, N = 3, MIN: 141.86)

MKL-DNN DNNL

Harness: Recurrent Neural Network Training - Data Type: f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 827.94 (SE +/- 2.97, N = 3, MIN: 792.76)

MKL-DNN DNNL

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 32198.43 (SE +/- 3.27, N = 3, MIN: 32160.3)

MKL-DNN DNNL

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 3.92 (SE +/- 0.05, N = 3, MIN: 3.01)

MKL-DNN DNNL

Harness: Deconvolution Batch deconv_all - Data Type: f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 7212.46 (SE +/- 7.90, N = 3, MIN: 7015.13)

MKL-DNN DNNL

Harness: Convolution Batch conv_all - Data Type: u8s8f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 27762.50 (SE +/- 26.70, N = 3, MIN: 27388)

MKL-DNN DNNL

Harness: Convolution Batch conv_alexnet - Data Type: f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 1235.74 (SE +/- 17.36, N = 4, MIN: 1036.31)

MKL-DNN DNNL

Harness: Deconvolution Batch deconv_3d - Data Type: f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 13.64 (SE +/- 0.20, N = 15, MIN: 10.81)

MKL-DNN DNNL

Harness: Deconvolution Batch deconv_1d - Data Type: f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 15.68 (SE +/- 0.17, N = 3, MIN: 13.77)

MKL-DNN DNNL

Harness: Convolution Batch conv_3d - Data Type: u8s8f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 56201.73 (SE +/- 22.81, N = 3, MIN: 55973.9)

MKL-DNN DNNL

Harness: Convolution Batch conv_all - Data Type: f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 10231.63 (SE +/- 6.19, N = 3, MIN: 9983.73)

MKL-DNN DNNL

Harness: Convolution Batch conv_3d - Data Type: f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 50.17 (SE +/- 0.29, N = 3, MIN: 44.85)

MKL-DNN DNNL

Harness: IP Batch All - Data Type: bf16bf16bf16

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 38.65 (SE +/- 0.57, N = 3, MIN: 22.31)
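The IP harnesses exercise DNNL's inner-product (fully connected) primitive rather than convolutions, in 1D and batched "All" variants across the same three precision configurations. For orientation only, a minimal f32 inner-product setup with the DNNL 1.1 C++ API might look like the sketch below; the batch, input, and output sizes are invented placeholders rather than the benchmark's problem sizes.

// Minimal sketch (illustrative sizes only): an f32 inner-product primitive
// with the DNNL 1.1 C++ API, the primitive family behind the "IP" harnesses.
#include <dnnl.hpp>

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    // Placeholder problem: batch 32, 1024 inputs, 256 outputs.
    memory::desc src_md({32, 1024}, memory::data_type::f32,
                        memory::format_tag::any);
    memory::desc wei_md({256, 1024}, memory::data_type::f32,
                        memory::format_tag::any);
    memory::desc dst_md({32, 256}, memory::data_type::f32,
                        memory::format_tag::any);

    inner_product_forward::desc ip_d(prop_kind::forward_inference,
                                     src_md, wei_md, dst_md);
    inner_product_forward::primitive_desc ip_pd(ip_d, eng);

    memory src(ip_pd.src_desc(), eng);
    memory wei(ip_pd.weights_desc(), eng);
    memory dst(ip_pd.dst_desc(), eng);
    inner_product_forward(ip_pd).execute(strm,
        {{DNNL_ARG_SRC, src}, {DNNL_ARG_WEIGHTS, wei}, {DNNL_ARG_DST, dst}});
    strm.wait();
    return 0;
}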

MKL-DNN DNNL

Harness: IP Batch All - Data Type: u8s8f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 13.49 (SE +/- 0.03, N = 3, MIN: 11.16)

MKL-DNN DNNL

Harness: IP Batch 1D - Data Type: u8s8f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 3.88 (SE +/- 0.05, N = 12, MIN: 2.46)

MKL-DNN DNNL

Harness: IP Batch All - Data Type: f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 22.49 (SE +/- 0.03, N = 3, MIN: 20.25)

MKL-DNN DNNL

Harness: IP Batch 1D - Data Type: bf16bf16bf16

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 33.86 (SE +/- 0.78, N = 12, MIN: 22.49)

MKL-DNN DNNL

Harness: IP Batch 1D - Data Type: f32

MKL-DNN DNNL 1.1 - ms, Fewer Is Better
Ice Lake: 11.76 (SE +/- 0.43, N = 12, MIN: 6.55)


Phoronix Test Suite v10.8.5