2020-12-25-1510

2 x Intel Xeon E5-2680 v2 testing with a Supermicro X9DRW v0123456789 (3.2 BIOS) and llvmpipe on Peppermint 10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2012251-FI-20201225189&grt.

System Details - 2 x Intel Xeon E5-2680 v2

  Processor:         2 x Intel Xeon E5-2680 v2 @ 3.60GHz (20 Cores / 40 Threads)
  Motherboard:       Supermicro X9DRW v0123456789 (3.2 BIOS)
  Chipset:           Intel Xeon E7 v2/Xeon
  Memory:            4 x 16384 MB DDR3-1866MT/s Samsung M393B2G70DB0-CMA
  Disk:              16GB USB Flash Drive
  Graphics:          llvmpipe
  Network:           2 x Intel I350
  OS:                Peppermint 10
  Kernel:            5.0.0-37-generic (x86_64)
  Desktop:           LXDE
  Display Server:    X Server 1.20.4
  Display Driver:    modesetting 1.20.4
  OpenGL:            3.3 Mesa 19.0.8 (LLVM 8.0 256 bits)
  Compiler:          GCC 7.5.0
  File-System:       overlayfs
  Screen Resolution: 1600x900

Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v

Processor Notes: Scaling Governor: intel_pstate performance - CPU Microcode: 0x42e

Security Notes: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + tsx_async_abort: Not affected

Result Summary - oneDNN MKL-DNN (ms, fewer is better) - 2 x Intel Xeon E5-2680 v2

  IP Batch 1D - f32:                           5.38469
  IP Batch All - f32:                         65.8695
  IP Batch 1D - u8s8f32:                      38.9143
  IP Batch All - u8s8f32:                    486.902
  Deconvolution Batch deconv_1d - f32:         9.65556
  Deconvolution Batch deconv_3d - f32:        13.5651
  Deconvolution Batch deconv_1d - u8s8f32:    99.7023
  Deconvolution Batch deconv_3d - u8s8f32:    37.6325
  Recurrent Neural Network Training - f32:   587.794
  Recurrent Neural Network Inference - f32:   72.1307

oneDNN MKL-DNN

Harness: IP Batch 1D - Data Type: f32

oneDNN MKL-DNN 1.3 - ms, Fewer Is Better
2 x Intel Xeon E5-2680 v2: 5.38469 (SE +/- 0.04255, N = 3, MIN: 4.96)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl
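The IP harnesses exercise oneDNN's inner product (fully connected) primitive. As a point of reference for what is being timed, below is a minimal sketch of an f32 inner product through the oneDNN 1.x C++ API; the layer sizes and buffer handling are illustrative assumptions, not the shapes or code the benchmark harness itself uses.

    // Minimal f32 inner product (fully connected layer) with the oneDNN 1.x C++ API.
    // Dimensions are illustrative placeholders, not the harness's actual shapes.
    #include <vector>
    #include <dnnl.hpp>

    int main() {
        using namespace dnnl;

        engine eng(engine::kind::cpu, 0);
        stream s(eng);

        const memory::dim N = 1, IC = 1024, OC = 256;   // batch, input channels, output channels

        auto src_md     = memory::desc({N, IC},  memory::data_type::f32, memory::format_tag::nc);
        auto weights_md = memory::desc({OC, IC}, memory::data_type::f32, memory::format_tag::oi);
        auto bias_md    = memory::desc({OC},     memory::data_type::f32, memory::format_tag::x);
        auto dst_md     = memory::desc({N, OC},  memory::data_type::f32, memory::format_tag::nc);

        // Describe and create the forward-inference inner product primitive.
        auto ip_desc = inner_product_forward::desc(prop_kind::forward_inference,
                                                   src_md, weights_md, bias_md, dst_md);
        auto ip_pd   = inner_product_forward::primitive_desc(ip_desc, eng);
        auto ip      = inner_product_forward(ip_pd);

        // Back the descriptors with (zero-initialized) buffers.
        std::vector<float> src(N * IC), weights(OC * IC), bias(OC), dst(N * OC);
        memory src_mem(src_md, eng, src.data());
        memory weights_mem(weights_md, eng, weights.data());
        memory bias_mem(bias_md, eng, bias.data());
        memory dst_mem(dst_md, eng, dst.data());

        // Execute once and wait; the benchmark times many such executions.
        ip.execute(s, {{DNNL_ARG_SRC, src_mem},
                       {DNNL_ARG_WEIGHTS, weights_mem},
                       {DNNL_ARG_BIAS, bias_mem},
                       {DNNL_ARG_DST, dst_mem}});
        s.wait();
        return 0;
    }

Such a file would typically be built against the library with something like g++ -std=c++11 ip_f32.cpp -ldnnl, in the spirit of the (CXX) options recorded above.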

oneDNN MKL-DNN

Harness: IP Batch All - Data Type: f32

oneDNN MKL-DNN 1.3 - ms, Fewer Is Better
2 x Intel Xeon E5-2680 v2: 65.87 (SE +/- 0.24, N = 3, MIN: 64.85)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN

Harness: IP Batch 1D - Data Type: u8s8f32

oneDNN MKL-DNN 1.3 - ms, Fewer Is Better
2 x Intel Xeon E5-2680 v2: 38.91 (SE +/- 0.12, N = 3, MIN: 37.81)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl
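u8s8f32 denotes the quantized variant of the same primitive: unsigned 8-bit activations, signed 8-bit weights, and 32-bit floating point output. A minimal sketch of that configuration follows; as in the f32 sketch above, the shapes are illustrative placeholders, and essentially only the memory data types differ.

    // u8s8f32 inner product: u8 activations, s8 weights, f32 output (oneDNN 1.x C++ API).
    // Shapes are illustrative placeholders, not the harness's actual dimensions.
    #include <cstdint>
    #include <vector>
    #include <dnnl.hpp>

    int main() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);
        stream s(eng);

        const memory::dim N = 1, IC = 1024, OC = 256;

        auto src_md     = memory::desc({N, IC},  memory::data_type::u8,  memory::format_tag::nc);
        auto weights_md = memory::desc({OC, IC}, memory::data_type::s8,  memory::format_tag::oi);
        auto bias_md    = memory::desc({OC},     memory::data_type::f32, memory::format_tag::x);
        auto dst_md     = memory::desc({N, OC},  memory::data_type::f32, memory::format_tag::nc);

        auto ip_pd = inner_product_forward::primitive_desc(
            inner_product_forward::desc(prop_kind::forward_inference,
                                        src_md, weights_md, bias_md, dst_md), eng);
        auto ip = inner_product_forward(ip_pd);

        std::vector<uint8_t> src(N * IC);
        std::vector<int8_t>  weights(OC * IC);
        std::vector<float>   bias(OC), dst(N * OC);

        ip.execute(s, {{DNNL_ARG_SRC,     memory(src_md, eng, src.data())},
                       {DNNL_ARG_WEIGHTS, memory(weights_md, eng, weights.data())},
                       {DNNL_ARG_BIAS,    memory(bias_md, eng, bias.data())},
                       {DNNL_ARG_DST,     memory(dst_md, eng, dst.data())}});
        s.wait();
        return 0;
    }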

oneDNN MKL-DNN

Harness: IP Batch All - Data Type: u8s8f32

oneDNN MKL-DNN 1.3 - ms, Fewer Is Better
2 x Intel Xeon E5-2680 v2: 486.90 (SE +/- 0.70, N = 3, MIN: 482.8)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: f32

oneDNN MKL-DNN 1.3 - ms, Fewer Is Better
2 x Intel Xeon E5-2680 v2: 9.65556 (SE +/- 0.00921, N = 3, MIN: 9.54)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl
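The deconvolution harnesses time oneDNN's deconvolution (transposed convolution) primitive, with the deconv_1d and deconv_3d batches covering one- and three-dimensional problem shapes respectively. A minimal f32 1-D sketch with the oneDNN 1.x C++ API is shown below; the channel counts, kernel size, and stride are illustrative assumptions, not the harness's actual configuration.

    // Minimal f32 1-D deconvolution (transposed convolution) with the oneDNN 1.x C++ API.
    // Channel counts, kernel size, and stride are illustrative placeholders.
    #include <vector>
    #include <dnnl.hpp>

    int main() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);
        stream s(eng);

        const memory::dim N = 1, IC = 16, OC = 32, IW = 32, KW = 3;
        const memory::dim OW = (IW - 1) * 1 + KW;   // stride 1, no padding

        auto src_md     = memory::desc({N, IC, IW},  memory::data_type::f32, memory::format_tag::ncw);
        auto weights_md = memory::desc({OC, IC, KW}, memory::data_type::f32, memory::format_tag::oiw);
        auto bias_md    = memory::desc({OC},         memory::data_type::f32, memory::format_tag::x);
        auto dst_md     = memory::desc({N, OC, OW},  memory::data_type::f32, memory::format_tag::ncw);

        auto deconv_pd = deconvolution_forward::primitive_desc(
            deconvolution_forward::desc(prop_kind::forward_inference,
                                        algorithm::deconvolution_direct,
                                        src_md, weights_md, bias_md, dst_md,
                                        /*strides=*/{1}, /*padding_l=*/{0}, /*padding_r=*/{0}),
            eng);
        auto deconv = deconvolution_forward(deconv_pd);

        std::vector<float> src(N * IC * IW), weights(OC * IC * KW), bias(OC), dst(N * OC * OW);
        deconv.execute(s, {{DNNL_ARG_SRC,     memory(src_md, eng, src.data())},
                           {DNNL_ARG_WEIGHTS, memory(weights_md, eng, weights.data())},
                           {DNNL_ARG_BIAS,    memory(bias_md, eng, bias.data())},
                           {DNNL_ARG_DST,     memory(dst_md, eng, dst.data())}});
        s.wait();
        return 0;
    }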

oneDNN MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: f32

oneDNN MKL-DNN 1.3 - ms, Fewer Is Better
2 x Intel Xeon E5-2680 v2: 13.57 (SE +/- 0.06, N = 3, MIN: 13.29)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32

oneDNN MKL-DNN 1.3 - ms, Fewer Is Better
2 x Intel Xeon E5-2680 v2: 99.70 (SE +/- 1.04, N = 15, MIN: 89.1)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32

oneDNN MKL-DNN 1.3 - ms, Fewer Is Better
2 x Intel Xeon E5-2680 v2: 37.63 (SE +/- 0.10, N = 3, MIN: 37.1)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN

Harness: Recurrent Neural Network Training - Data Type: f32

oneDNN MKL-DNN 1.3 - ms, Fewer Is Better
2 x Intel Xeon E5-2680 v2: 587.79 (SE +/- 1.11, N = 3, MIN: 570.17)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN

Harness: Recurrent Neural Network Inference - Data Type: f32

oneDNN MKL-DNN 1.3 - ms, Fewer Is Better
2 x Intel Xeon E5-2680 v2: 72.13 (SE +/- 0.46, N = 3, MIN: 68.23)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl
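Each result above is the mean of N timed runs together with its standard error; the deconv_1d u8s8f32 case was run more times (N = 15), likely because of higher run-to-run variance. The sketch below shows how a comparable "mean ms (SE +/-, N)" figure can be produced around any of the primitives sketched earlier; run_once() is a hypothetical placeholder workload, and this is not the Phoronix Test Suite's actual harness code.

    // Rough sketch of timing a workload and reporting mean +/- standard error,
    // mirroring the "SE +/-, N = ..." presentation above.
    #include <chrono>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Hypothetical stand-in for one execution; substitute a primitive execute() + stream wait().
    static void run_once() {
        volatile double acc = 0.0;
        for (int i = 0; i < 1000000; ++i) acc += i * 0.5;
    }

    int main() {
        const int runs = 3;                           // N = 3 for most results above
        std::vector<double> ms(runs);
        for (int i = 0; i < runs; ++i) {
            auto t0 = std::chrono::steady_clock::now();
            run_once();
            auto t1 = std::chrono::steady_clock::now();
            ms[i] = std::chrono::duration<double, std::milli>(t1 - t0).count();
        }

        double mean = 0.0;
        for (double v : ms) mean += v;
        mean /= runs;

        double var = 0.0;                             // sample variance (N - 1 divisor)
        for (double v : ms) var += (v - mean) * (v - mean);
        var /= (runs - 1);

        const double se = std::sqrt(var / runs);      // standard error of the mean
        std::printf("%.5f ms (SE +/- %.5f, N = %d)\n", mean, se, runs);
        return 0;
    }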


Phoronix Test Suite v10.8.4