cascadelake

2 x Intel Xeon Platinum 8280 testing with a GIGABYTE MD61-SC2-00 v01000100 (T15 BIOS) and ASPEED Family on Ubuntu 18.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/1904187-HV-CASCADELA78&grw.

Processor: 2 x Intel Xeon Platinum 8280 @ 4.00GHz (56 Cores / 112 Threads)
Motherboard: GIGABYTE MD61-SC2-00 v01000100 (T15 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 386048MB
Disk: Samsung SSD 970 PRO 512GB
Graphics: ASPEED Family
Monitor: VE228
Network: 2 x Intel X722 for 1GbE + 2 x QLogic FastLinQ QL41000 10/25/40/50GbE
OS: Ubuntu 18.04
Kernel: 5.1.0-999-generic (x86_64) 20190416
Desktop: GNOME Shell 3.28.3
Display Server: X Server 1.20.1
Display Driver: modesetting 1.20.1
Compiler: GCC 9.0.1 20190414
File-System: ext4
Screen Resolution: 1920x1080

OpenBenchmarking.org notes:
- CXXFLAGS=-O3 -march=skylake-avx512 CFLAGS=-O3 -march=skylake-avx512
- Compiler configured with --disable-multilib --enable-checking=release
- Scaling Governor: intel_pstate powersave
- Security: __user pointer sanitization + Enhanced IBRS IBPB: conditional RSB filling + SSB disabled via prctl and seccomp
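The -march=skylake-avx512 flags in the environment notes (and the -march=native flags reported per test below) target this system's AVX-512 instruction set; the u8s8* harnesses exercise int8 kernels that on Cascade Lake can use AVX-512 VNNI. A minimal C++ sketch for confirming those CPU feature flags with GCC's cpuid.h is shown below (illustrative only; the file name is hypothetical and this check is not part of the benchmark itself):

// vnni_check.cpp - query CPUID for the AVX-512 features relevant to the
// f32 and u8s8* kernels below.
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    // CPUID leaf 7, sub-leaf 0 carries the structured extended feature flags.
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        std::puts("CPUID leaf 7 not supported");
        return 1;
    }
    const bool avx512f    = ebx & (1u << 16); // AVX-512 Foundation
    const bool avx512vnni = ecx & (1u << 11); // AVX-512 VNNI (int8 dot products)
    std::printf("AVX-512F:    %s\n", avx512f ? "yes" : "no");
    std::printf("AVX-512VNNI: %s\n", avx512vnni ? "yes" : "no");
    return 0;
}

On a Cascade Lake part such as the Xeon Platinum 8280, both flags are expected to report as available.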

Results overview - 2 x Intel Xeon Platinum 8280 (all results in ms; fewer is better):

mkl-dnn: IP Batch All - f32: 81.47
mkl-dnn: Convolution Batch conv_all - u8s8u8s32: 389
mkl-dnn: Convolution Batch conv_alexnet - u8s8f32s32: 55.53
mkl-dnn: Convolution Batch conv_googlenet_v3 - u8s8u8s32: 22.21
mkl-dnn: Convolution Batch conv_alexnet - u8s8u8s32: 57.21
mkl-dnn: Deconvolution Batch deconv_3d - u8s8f32s32: 2.78
mkl-dnn: Convolution Batch conv_all - f32: 388
mkl-dnn: Deconvolution Batch deconv_1d - u8s8u8s32: 1.16
mkl-dnn: IP Batch All - u8s8f32s32: 62.48
mkl-dnn: Convolution Batch conv_3d - f32: 4.65
mkl-dnn: IP Batch All - u8s8u8s32: 71.39
mkl-dnn: IP Batch 1D - u8s8f32s32: 11.68
mkl-dnn: IP Batch 1D - u8s8u8s32: 13.15
mkl-dnn: Convolution Batch conv_all - u8s8f32s32: 394.91
mkl-dnn: Deconvolution Batch deconv_1d - f32: 1.21
mkl-dnn: Convolution Batch conv_googlenet_v3 - f32: 22.72
mkl-dnn: Deconvolution Batch deconv_3d - f32: 2.90
mkl-dnn: Deconvolution Batch deconv_3d - u8s8u8s32: 2.48
mkl-dnn: Convolution Batch conv_alexnet - f32: 62.04
mkl-dnn: Deconvolution Batch deconv_1d - u8s8f32s32: 1.25
mkl-dnn: Deconvolution Batch deconv_all - f32: 839
mkl-dnn: Deconvolution Batch deconv_all - u8s8u8s32: 822.88
mkl-dnn: Convolution Batch conv_3d - u8s8u8s32: 4.36
mkl-dnn: Deconvolution Batch deconv_all - u8s8f32s32: 832.92
mkl-dnn: Convolution Batch conv_3d - u8s8f32s32: 4.67
mkl-dnn: Convolution Batch conv_googlenet_v3 - u8s8f32s32: 23.49
mkl-dnn: IP Batch 1D - f32: 7.63

MKL-DNN

Harness: IP Batch All - Data Type: f32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 81.47 (MIN: 77.83)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_all - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 389 (MIN: 382.27)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 55.53 (MIN: 48.28)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 22.21 (MIN: 21.33)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_alexnet - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 57.21 (MIN: 48.99)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 2.78 (MIN: 1.26)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_all - Data Type: f32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 388 (MIN: 381.9)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 1.16 (MIN: 1.03)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch All - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 62.48 (MIN: 58.53)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_3d - Data Type: f32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 4.65 (MIN: 4.17)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch All - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 71.39 (MIN: 66.36)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch 1D - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 11.68 (MIN: 8.97)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch 1D - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 13.15 (MIN: 8.63)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_all - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 394.91 (MIN: 387.53)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: f32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 1.21 (MIN: 1.1)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 22.72 (MIN: 21.33)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: f32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 2.90 (MIN: 1.15)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 2.48 (MIN: 1.15)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_alexnet - Data Type: f32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 62.04 (MIN: 55.93)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 1.25 (MIN: 1.03)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_all - Data Type: f32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 839 (MIN: 813.06)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_all - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 822.88 (MIN: 809.74)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_3d - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 4.36 (MIN: 4.05)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_all - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 832.92 (MIN: 818.89)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_3d - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 4.67 (MIN: 4.27)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 23.49 (MIN: 21.2)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch 1D - Data Type: f32

MKL-DNN 2019-04-16 (ms, fewer is better)
2 x Intel Xeon Platinum 8280: 7.63 (MIN: 6.16)
(CXX) g++ options: -O3 -std=c++11 -march=native -mtune=native -fPIC -fopenmp -pie -lmklml_intel -ldl


Phoronix Test Suite v10.8.5