9900K MKL DNN

Intel Core i9-9900K testing with an ASUS PRIME Z390-A (0802 BIOS) and AMD Radeon RX 64 8GB on Ubuntu 19.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/1904183-PTS-9900KMKL62.
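For comparison purposes, a system with the Phoronix Test Suite installed can typically be benchmarked side-by-side against this public result by passing the result ID from the URL above to the benchmark command, i.e. phoronix-test-suite benchmark 1904183-PTS-9900KMKL62 (assuming the result is still available on OpenBenchmarking.org and the pts/mkl-dnn test profile remains installable).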

9900K MKL DNN - Intel Core i9-9900K

Processor: Intel Core i9-9900K @ 5.00GHz (8 Cores / 16 Threads)
Motherboard: ASUS PRIME Z390-A (0802 BIOS)
Chipset: Intel Cannon Lake PCH
Memory: 16384MB
Disk: Samsung SSD 970 EVO 250GB + 2000GB SABRENT
Graphics: AMD Radeon RX 64 8GB (1630/945MHz)
Audio: Realtek ALC1220
Monitor: Acer B286HK
Network: Intel I219-V
OS: Ubuntu 19.04
Kernel: 5.0.0-11-generic (x86_64)
Desktop: GNOME Shell 3.32.0
Display Server: X Server 1.20.4
Display Driver: amdgpu 19.0.1
OpenGL: 4.5 Mesa 19.0.2 (LLVM 8.0.0)
Vulkan: 1.1.90
Compiler: GCC 8.3.0
File-System: ext4
Screen Resolution: 3840x2160

OpenBenchmarking.org notes:
- Compiler flags: CXXFLAGS=-O3 -march=native CFLAGS=-O3 -march=native
- GCC configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- Security mitigations: __user pointer sanitization + Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + SSB disabled via prctl and seccomp

9900K MKL DNN - Results Summary (OpenBenchmarking.org; all values in ms, fewer is better)

Harness - Data Type                                  Intel Core i9-9900K
IP Batch 1D - f32                                    7.14
IP Batch All - f32                                   93.74
IP Batch 1D - u8s8u8s32                              7.09
IP Batch 1D - u8s8f32s32                             6.95
IP Batch All - u8s8u8s32                             90.23
IP Batch All - u8s8f32s32                            92.00
Convolution Batch conv_3d - f32                      24.56
Convolution Batch conv_all - f32                     2928.65
Deconvolution Batch deconv_1d - f32                  5.98
Deconvolution Batch deconv_3d - f32                  7.54
Convolution Batch conv_alexnet - f32                 387.32
Deconvolution Batch deconv_all - f32                 3126.73
Convolution Batch conv_3d - u8s8u8s32                24.63
Convolution Batch conv_3d - u8s8f32s32               24.62
Convolution Batch conv_all - u8s8u8s32               2929.98
Convolution Batch conv_all - u8s8f32s32              2933.51
Convolution Batch conv_googlenet_v3 - f32            168.21
Deconvolution Batch deconv_1d - u8s8u8s32            6.02
Deconvolution Batch deconv_3d - u8s8u8s32            7.57
Convolution Batch conv_alexnet - u8s8u8s32           388.33
Deconvolution Batch deconv_1d - u8s8f32s32           6.02
Deconvolution Batch deconv_3d - u8s8f32s32           7.56
Deconvolution Batch deconv_all - u8s8u8s32           3131.32
Convolution Batch conv_alexnet - u8s8f32s32          381.18
Deconvolution Batch deconv_all - u8s8f32s32          3120.00
Convolution Batch conv_googlenet_v3 - u8s8u8s32      168.03
Convolution Batch conv_googlenet_v3 - u8s8f32s32     167.82
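One way to read this summary: the int8 (u8s8u8s32 / u8s8f32s32) results track the f32 results almost exactly on this system. A minimal Python sketch of that comparison follows, with timings hard-coded from the table above; the script is illustrative only and is not produced by the Phoronix Test Suite.

    # Illustrative post-processing of the summary above (all times in ms, lower is better).
    # Timings are copied by hand from the table; nothing here queries OpenBenchmarking.org.
    f32 = {
        "Convolution Batch conv_all": 2928.65,
        "Convolution Batch conv_alexnet": 387.32,
        "Convolution Batch conv_googlenet_v3": 168.21,
        "Deconvolution Batch deconv_all": 3126.73,
    }
    u8s8u8s32 = {
        "Convolution Batch conv_all": 2929.98,
        "Convolution Batch conv_alexnet": 388.33,
        "Convolution Batch conv_googlenet_v3": 168.03,
        "Deconvolution Batch deconv_all": 3131.32,
    }

    for harness, f32_ms in f32.items():
        int8_ms = u8s8u8s32[harness]
        # A ratio below 1.0 would mean the int8 path is faster than f32.
        print(f"{harness:40s} f32 {f32_ms:8.2f} ms  u8s8u8s32 {int8_ms:8.2f} ms  "
              f"ratio {int8_ms / f32_ms:.3f}")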

MKL-DNN

Harness: IP Batch 1D - Data Type: f32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 7.14 (SE +/- 0.08, N = 3, MIN: 6.31)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch All - Data Type: f32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 93.74 (SE +/- 1.18, N = 3, MIN: 86.67)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch 1D - Data Type: u8s8u8s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 7.09 (SE +/- 0.06, N = 3, MIN: 6.17)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch 1D - Data Type: u8s8f32s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 6.95 (SE +/- 0.18, N = 12, MIN: 4.42)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch All - Data Type: u8s8u8s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 90.23 (SE +/- 1.55, N = 15, MIN: 72.76)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch All - Data Type: u8s8f32s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 92.00 (SE +/- 1.44, N = 14, MIN: 73.2)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_3d - Data Type: f32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 24.56 (SE +/- 0.03, N = 3, MIN: 21.7)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_all - Data Type: f32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 2928.65 (SE +/- 1.64, N = 3, MIN: 2826.13)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: f32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 5.98 (SE +/- 0.02, N = 3, MIN: 4.83)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: f32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 7.54 (SE +/- 0.12, N = 3, MIN: 6.26)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_alexnet - Data Type: f32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 387.32 (SE +/- 1.62, N = 3, MIN: 355.37)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_all - Data Type: f32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 3126.73 (SE +/- 7.41, N = 3, MIN: 2971.67)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_3d - Data Type: u8s8u8s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 24.63 (SE +/- 0.03, N = 3, MIN: 21.78)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_3d - Data Type: u8s8f32s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 24.62 (SE +/- 0.03, N = 3, MIN: 21.69)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_all - Data Type: u8s8u8s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 2929.98 (SE +/- 1.69, N = 3, MIN: 2828.91)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_all - Data Type: u8s8f32s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 2933.51 (SE +/- 4.16, N = 3, MIN: 2827.83)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 168.21 (SE +/- 0.12, N = 3, MIN: 150.64)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8u8s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 6.02 (SE +/- 0.05, N = 3, MIN: 4.82)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8u8s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 7.57 (SE +/- 0.04, N = 3, MIN: 6.3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_alexnet - Data Type: u8s8u8s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 388.33 (SE +/- 2.23, N = 3, MIN: 355.32)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 6.02 (SE +/- 0.01, N = 3, MIN: 4.76)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 7.56 (SE +/- 0.09, N = 6, MIN: 6.04)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_all - Data Type: u8s8u8s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 3131.32 (SE +/- 9.84, N = 3, MIN: 2973.39)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 381.18 (SE +/- 2.14, N = 3, MIN: 355.97)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_all - Data Type: u8s8f32s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 3120.00 (SE +/- 5.77, N = 3, MIN: 2973.35)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8u8s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 168.03 (SE +/- 0.38, N = 3, MIN: 150.4)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32s32

OpenBenchmarking.org - MKL-DNN 2019-04-16 - ms, fewer is better
Intel Core i9-9900K: 167.82 (SE +/- 0.27, N = 3, MIN: 150.5)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl


Phoronix Test Suite v10.8.4