MKL-DNN DNNL Ice Lake
Intel Core i7-1065G7 testing with a Dell 06CDVY (1.0.9 BIOS) and Intel Iris Plus 3GB on Ubuntu 19.10 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 1910274-HU-MKLDNNDNN96

Ice Lake
Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads), Motherboard: Dell 06CDVY (1.0.9 BIOS), Chipset: Intel Device 34ef, Memory: 16384MB, Disk: KBG40ZPZ512G NVMe TOSHIBA 512GB, Graphics: Intel Iris Plus 3GB (1100MHz), Audio: Realtek ALC289, Network: Intel Device 34f0
OS: Ubuntu 19.10, Kernel: 5.3.0-19-generic (x86_64), Desktop: GNOME Shell 3.34.1, Display Server: X Server 1.20.5, Display Driver: modesetting 1.20.5, OpenGL: 4.6 Mesa 19.3.0-devel (git-1961653 2019-10-24 eoan-oibaf-ppa), Vulkan: 1.1.102, Compiler: GCC 9.2.1 20191008, File-System: ext4, Screen Resolution: 1920x1200
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave
Security Notes: l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling
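For reference, a plausible way to install the Phoronix Test Suite on an Ubuntu system and run the comparison command shown above might look like the following sketch; the apt package name is an assumption about Ubuntu packaging rather than something recorded in this result file:

  # Install the Phoronix Test Suite (assumed package name in the Ubuntu archive)
  sudo apt install phoronix-test-suite
  # Benchmark the local system against this public result file
  phoronix-test-suite benchmark 1910274-HU-MKLDNNDNN96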
MKL-DNN DNNL Ice Lake - Result Summary (all results in ms, fewer is better):
  IP Batch 1D - f32: 11.76
  IP Batch All - f32: 22.49
  IP Batch 1D - u8s8f32: 3.88
  IP Batch All - u8s8f32: 13.49
  IP Batch 1D - bf16bf16bf16: 33.86
  IP Batch All - bf16bf16bf16: 38.65
  Convolution Batch conv_3d - f32: 50.17
  Convolution Batch conv_all - f32: 10231.63
  Convolution Batch conv_3d - u8s8f32: 56201.73
  Deconvolution Batch deconv_1d - f32: 15.68
  Deconvolution Batch deconv_3d - f32: 13.64
  Convolution Batch conv_alexnet - f32: 1235.74
  Convolution Batch conv_all - u8s8f32: 27762.50
  Deconvolution Batch deconv_all - f32: 7212.46
  Deconvolution Batch deconv_1d - u8s8f32: 3.92
  Deconvolution Batch deconv_3d - u8s8f32: 32198.43
  Recurrent Neural Network Training - f32: 827.94
  Convolution Batch conv_3d - bf16bf16bf16: 163.08
  Convolution Batch conv_alexnet - u8s8f32: 329.57
  Convolution Batch conv_all - bf16bf16bf16: 43628.27
  Convolution Batch conv_googlenet_v3 - f32: 571.21
  Deconvolution Batch deconv_1d - bf16bf16bf16: 69.87
  Deconvolution Batch deconv_3d - bf16bf16bf16: 50.24
  Convolution Batch conv_alexnet - bf16bf16bf16: 7964.69
  Convolution Batch conv_googlenet_v3 - u8s8f32: 146.84
  Deconvolution Batch deconv_all - bf16bf16bf16: 29589.37
  Convolution Batch conv_googlenet_v3 - bf16bf16bf16: 2000.19
MKL-DNN DNNL
This is a test of Intel MKL-DNN (DNNL / Deep Neural Network Library), an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result reported is the total perf time. Learn more via the OpenBenchmarking.org test page.
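As an illustration of what this harness drives under the hood, a direct benchdnn run of the same sort might look roughly like the following sketch; the binary path, batch-file names, and flag spellings are assumptions about the DNNL 1.1 source tree rather than details recorded in this result file:

  # Performance mode (--mode=P) over a bundled set of f32 convolution problems
  ./tests/benchdnn/benchdnn --conv --mode=P --cfg=f32 --batch=inputs/conv_all
  # The same problem set using the int8 (u8s8f32) configuration
  ./tests/benchdnn/benchdnn --conv --mode=P --cfg=u8s8f32 --batch=inputs/conv_all

Each harness name in the results below appears to correspond to one such benchdnn driver (ip, conv, deconv, rnn) and batch file, with the data type selected via the --cfg option.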
MKL-DNN DNNL 1.1 - Detailed Results (OpenBenchmarking.org; ms, Fewer Is Better)
  Harness: IP Batch 1D - Data Type: f32 - Ice Lake: 11.76 (SE +/- 0.43, N = 12, MIN: 6.55)
  Harness: IP Batch All - Data Type: f32 - Ice Lake: 22.49 (SE +/- 0.03, N = 3, MIN: 20.25)
  Harness: IP Batch 1D - Data Type: u8s8f32 - Ice Lake: 3.88 (SE +/- 0.05, N = 12, MIN: 2.46)
  Harness: IP Batch All - Data Type: u8s8f32 - Ice Lake: 13.49 (SE +/- 0.03, N = 3, MIN: 11.16)
  Harness: IP Batch 1D - Data Type: bf16bf16bf16 - Ice Lake: 33.86 (SE +/- 0.78, N = 12, MIN: 22.49)
  Harness: IP Batch All - Data Type: bf16bf16bf16 - Ice Lake: 38.65 (SE +/- 0.57, N = 3, MIN: 22.31)
  Harness: Convolution Batch conv_3d - Data Type: f32 - Ice Lake: 50.17 (SE +/- 0.29, N = 3, MIN: 44.85)
  Harness: Convolution Batch conv_all - Data Type: f32 - Ice Lake: 10231.63 (SE +/- 6.19, N = 3, MIN: 9983.73)
  Harness: Convolution Batch conv_3d - Data Type: u8s8f32 - Ice Lake: 56201.73 (SE +/- 22.81, N = 3, MIN: 55973.9)
  Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Ice Lake: 15.68 (SE +/- 0.17, N = 3, MIN: 13.77)
  Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Ice Lake: 13.64 (SE +/- 0.20, N = 15, MIN: 10.81)
  Harness: Convolution Batch conv_alexnet - Data Type: f32 - Ice Lake: 1235.74 (SE +/- 17.36, N = 4, MIN: 1036.31)
  Harness: Convolution Batch conv_all - Data Type: u8s8f32 - Ice Lake: 27762.50 (SE +/- 26.70, N = 3, MIN: 27388)
  Harness: Deconvolution Batch deconv_all - Data Type: f32 - Ice Lake: 7212.46 (SE +/- 7.90, N = 3, MIN: 7015.13)
  Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Ice Lake: 3.92 (SE +/- 0.05, N = 3, MIN: 3.01)
  Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Ice Lake: 32198.43 (SE +/- 3.27, N = 3, MIN: 32160.3)
  Harness: Recurrent Neural Network Training - Data Type: f32 - Ice Lake: 827.94 (SE +/- 2.97, N = 3, MIN: 792.76)
  Harness: Convolution Batch conv_3d - Data Type: bf16bf16bf16 - Ice Lake: 163.08 (SE +/- 1.23, N = 3, MIN: 141.86)
  Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32 - Ice Lake: 329.57 (SE +/- 3.15, N = 9, MIN: 239.13)
  Harness: Convolution Batch conv_all - Data Type: bf16bf16bf16 - Ice Lake: 43628.27 (SE +/- 9.24, N = 3, MIN: 42847.8)
  Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32 - Ice Lake: 571.21 (SE +/- 2.81, N = 3, MIN: 526.78)
  Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 - Ice Lake: 69.87 (SE +/- 1.18, N = 12, MIN: 47.28)
  Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16 - Ice Lake: 50.24 (SE +/- 0.64, N = 15, MIN: 43.71)
  Harness: Convolution Batch conv_alexnet - Data Type: bf16bf16bf16 - Ice Lake: 7964.69 (SE +/- 58.94, N = 3, MIN: 7130.41)
  Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32 - Ice Lake: 146.84 (SE +/- 1.07, N = 3, MIN: 116.96)
  Harness: Deconvolution Batch deconv_all - Data Type: bf16bf16bf16 - Ice Lake: 29589.37 (SE +/- 6.03, N = 3, MIN: 28826.9)
  Harness: Convolution Batch conv_googlenet_v3 - Data Type: bf16bf16bf16 - Ice Lake: 2000.19 (SE +/- 23.35, N = 6, MIN: 1807.93)
  All results: 1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl
Testing initiated at 27 October 2019 07:35 by user phoronix.