Cascade Lake Linux

2 x Intel Xeon Platinum 8280 testing with a GIGABYTE MD61-SC2-00 v01000100 (T15 BIOS) and llvmpipe 377GB on Ubuntu 19.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/1906222-HV-CASCADELA82.
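The full test selection behind this result can usually be re-run locally by pointing the Phoronix Test Suite at the public result ID; a minimal sketch, assuming phoronix-test-suite is installed and the result ID above is still hosted on OpenBenchmarking.org:

  # Run the same tests locally and compare against this uploaded Cascade Lake result
  phoronix-test-suite benchmark 1906222-HV-CASCADELA82

  # Or only fetch the result file for local viewing
  phoronix-test-suite clone 1906222-HV-CASCADELA82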

System Details

Processor: 2 x Intel Xeon Platinum 8280 @ 4.00GHz (56 Cores / 112 Threads)
Motherboard: GIGABYTE MD61-SC2-00 v01000100 (T15 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 386048MB
Disk: Samsung SSD 970 PRO 512GB
Graphics: llvmpipe 377GB
Monitor: VE228
Network: 2 x Intel X722 for 1GbE + 2 x QLogic FastLinQ QL41000 10/25/40/50GbE
OS: Ubuntu 19.04
Kernel: 5.1.7-050107-generic (x86_64)
Desktop: GNOME Shell 3.32.0
Display Server: X Server 1.20.4
Display Driver: modesetting 1.20.4
OpenGL: 3.3 Mesa 19.0.2 (LLVM 8.0 256 bits)
Compiler: GCC 8.3.0
File-System: ext4
Screen Resolution: 1920x1080

Notes:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu219.04.1)
- Security: l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling

Result Summary (2 x Intel Xeon Platinum 8280)

cp2k: Fayalite-FIST Data = 1978.70
dacapobench: H2 = 8787
dacapobench: Jython = 4607
dacapobench: Tradesoap = 7259
renaissance: Scala Dotty = 6690.87
renaissance: Twitter Finagle = 47749.74
john-the-ripper: Blowfish = 85030
mkl-dnn: IP Batch 1D - f32 = 11.51
mkl-dnn: IP Batch All - f32 = 96.72
mkl-dnn: IP Batch 1D - u8s8u8s32 = 3.49
mkl-dnn: IP Batch 1D - u8s8f32s32 = 3.52
mkl-dnn: IP Batch All - u8s8u8s32 = 20.33
mkl-dnn: IP Batch All - u8s8f32s32 = 19.96
mkl-dnn: Convolution Batch conv_3d - f32 = 3.33
mkl-dnn: Convolution Batch conv_all - f32 = 381.53
mkl-dnn: Deconvolution Batch deconv_1d - f32 = 0.95
mkl-dnn: Deconvolution Batch deconv_3d - f32 = 1.10
mkl-dnn: Convolution Batch conv_alexnet - f32 = 48.02
mkl-dnn: Deconvolution Batch deconv_all - f32 = 1043.46
mkl-dnn: Convolution Batch conv_3d - u8s8u8s32 = 3599.91
mkl-dnn: Convolution Batch conv_3d - u8s8f32s32 = 3545.58
mkl-dnn: Convolution Batch conv_all - u8s8u8s32 = 1720.01
mkl-dnn: Convolution Batch conv_all - u8s8f32s32 = 1752.85
mkl-dnn: Convolution Batch conv_googlenet_v3 - f32 = 21.31
mkl-dnn: Deconvolution Batch deconv_1d - u8s8u8s32 = 0.24
mkl-dnn: Deconvolution Batch deconv_3d - u8s8u8s32 = 2044.08
mkl-dnn: Convolution Batch conv_alexnet - u8s8u8s32 = 14.17
mkl-dnn: Deconvolution Batch deconv_1d - u8s8f32s32 = 0.23
mkl-dnn: Deconvolution Batch deconv_3d - u8s8f32s32 = 1899.14
mkl-dnn: Deconvolution Batch deconv_all - u8s8u8s32 = 3665.18
mkl-dnn: Convolution Batch conv_alexnet - u8s8f32s32 = 18.79
mkl-dnn: Convolution Batch conv_googlenet_v3 - u8s8u8s32 = 5.76
mkl-dnn: Convolution Batch conv_googlenet_v3 - u8s8f32s32 = 6.19
gromacs: Water Benchmark = 7.25
spec-jbb2015: SPECjbb2015-Composite max-jOPS = 93995
spec-jbb2015: SPECjbb2015-Composite critical-jOPS = 51313

CP2K Molecular Dynamics

Fayalite-FIST Data

CP2K Molecular Dynamics 6.1 - Fayalite-FIST Data (Seconds, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1978.70

DaCapo Benchmark

Java Test: H2

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 8787 [SE +/- 92.67, N = 4]

DaCapo Benchmark

Java Test: Jython

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 4607 [SE +/- 38.26, N = 4]

DaCapo Benchmark

Java Test: Tradesoap

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 7259 [SE +/- 86.61, N = 20]

Renaissance

Test: Scala Dotty

Renaissance 0.9.0 - Test: Scala Dotty (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 6690.87 [SE +/- 49.50, N = 16]

Renaissance

Test: Twitter Finagle

Renaissance 0.9.0 - Test: Twitter Finagle (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 47749.74 [SE +/- 411.66, N = 14]

John The Ripper

Test: Blowfish

John The Ripper 1.9.0-jumbo-1 - Test: Blowfish (Real C/S, More Is Better)
2 x Intel Xeon Platinum 8280: 85030 [SE +/- 290.49, N = 3]
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -pthread -lm -lz -ldl -lcrypt -lbz2

MKL-DNN

Harness: IP Batch 1D - Data Type: f32

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 11.51 [SE +/- 0.17, N = 3] MIN: 6.2
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch All - Data Type: f32

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 96.72 [SE +/- 0.92, N = 3] MIN: 52.28
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch 1D - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: u8s8u8s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 3.49 [SE +/- 0.05, N = 3] MIN: 2.14
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch 1D - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: u8s8f32s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 3.52 [SE +/- 0.03, N = 15] MIN: 2.15
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch All - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: u8s8u8s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 20.33 [SE +/- 0.16, N = 3] MIN: 12.5
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: IP Batch All - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: u8s8f32s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 19.96 [SE +/- 0.14, N = 3] MIN: 12.36
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_3d - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 3.33 [SE +/- 0.01, N = 3] MIN: 3.17
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_all - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 381.53 [SE +/- 0.33, N = 3] MIN: 375.73
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 0.95 [SE +/- 0.01, N = 3] MIN: 0.91
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1.10 [SE +/- 0.01, N = 3] MIN: 1.07
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_alexnet - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 48.02 [SE +/- 0.05, N = 3] MIN: 47.17
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_all - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_all - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1043.46 [SE +/- 9.91, N = 3] MIN: 959.18
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_3d - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: u8s8u8s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 3599.91 [SE +/- 50.05, N = 4] MIN: 3530.95
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_3d - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: u8s8f32s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 3545.58 [SE +/- 2.04, N = 3] MIN: 3531.88
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_all - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: u8s8u8s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1720.01 [SE +/- 1.21, N = 3] MIN: 1714.9
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_all - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: u8s8f32s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1752.85 [SE +/- 0.40, N = 3] MIN: 1745.15
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 21.31 [SE +/- 0.11, N = 3] MIN: 20.69
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8u8s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 0.24 [SE +/- 0.00, N = 3] MIN: 0.22
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8u8s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 2044.08 [SE +/- 0.54, N = 3] MIN: 2041.31
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_alexnet - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: u8s8u8s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 14.17 [SE +/- 0.01, N = 3] MIN: 13.92
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 0.23 [SE +/- 0.00, N = 3] MIN: 0.21
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 1899.14 [SE +/- 4.28, N = 3] MIN: 1891.24
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Deconvolution Batch deconv_all - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_all - Data Type: u8s8u8s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 3665.18 [SE +/- 2.43, N = 3] MIN: 3608.69
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 18.79 [SE +/- 0.02, N = 3] MIN: 18.18
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8u8s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8u8s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 5.76 [SE +/- 0.02, N = 3] MIN: 5.54
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

MKL-DNN

Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32s32

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32s32 (ms, Fewer Is Better)
2 x Intel Xeon Platinum 8280: 6.19 [SE +/- 0.02, N = 3] MIN: 5.91
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

GROMACS

Water Benchmark

GROMACS 2018.3 - Water Benchmark (Ns Per Day, More Is Better)
2 x Intel Xeon Platinum 8280: 7.25 [SE +/- 0.01, N = 3]
1. (CXX) g++ options: -mavx512f -mfma -std=c++11 -O3 -funroll-all-loops -fopenmp -lrt -lpthread -lm

SPECjbb 2015

SPECjbb2015-Composite max-jOPS

SPECjbb 2015 - SPECjbb2015-Composite max-jOPS (jOPS, More Is Better)
2 x Intel Xeon Platinum 8280: 93995

SPECjbb 2015

SPECjbb2015-Composite critical-jOPS

SPECjbb 2015 - SPECjbb2015-Composite critical-jOPS (jOPS, More Is Better)
2 x Intel Xeon Platinum 8280: 51313


Phoronix Test Suite v10.8.4