2021-05-31-1212

2 x Intel Xeon E5-2630 0 testing with a Supermicro X9DR3-F (1.1 BIOS) and llvmpipe on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2105311-IB-20210531141
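As a minimal sketch of the comparison workflow, assuming the Phoronix Test Suite is installed locally (the result identifier comes from this page; the command is printed rather than executed, since a full benchmark run takes time and network access):

```shell
# Public result identifier from this OpenBenchmarking.org page:
RESULT_ID="2105311-IB-20210531141"

# Running this command would download the result file, execute the same
# tests on the local machine, and append its numbers for comparison:
echo "phoronix-test-suite benchmark ${RESULT_ID}"
```

The same identifier can also be passed to other phoronix-test-suite subcommands that accept a result file.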
Run Management

Result Identifier: 2 x Intel Xeon E5-2630 0
Date: May 31 2021
Test Duration: 19 Minutes


2021-05-31-1212 Benchmarks: System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: 2 x Intel Xeon E5-2630 0 @ 2.80GHz (12 Cores / 24 Threads)
Motherboard: Supermicro X9DR3-F (1.1 BIOS)
Chipset: Intel Xeon E5/Core
Memory: 8 x 16384 MB DDR3-1333MT/s AL48P72E4BLK0
Disk: 16GB USB Flash Drive
Graphics: llvmpipe
Network: 2 x Intel I350
OS: Ubuntu 20.04
Kernel: 5.8.0-41-generic (x86_64)
Display Server: X Server 1.20.9
OpenGL: 4.5 Mesa 20.2.6 (LLVM 11.0.0 256 bits)
Compiler: GCC 9.3.0
File-System: overlayfs
Screen Resolution: 1152x864

System Logs
- Transparent Huge Pages: madvise
- Compiler configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_cpufreq performance
- CPU Microcode: 0x71a
- Security mitigations: itlb_multihit: KVM: Mitigation of VMX disabled; l1tf: Mitigation of PTE Inversion, VMX: conditional cache flushes, SMT vulnerable; mds: Mitigation of Clear buffers, SMT vulnerable; meltdown: Mitigation of PTI; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling; srbds: Not affected; tsx_async_abort: Not affected

2021-05-31-1212 Results Summary (mkl-dnn; all values in ms on 2 x Intel Xeon E5-2630 0; via OpenBenchmarking.org)

- Deconvolution Batch deconv_1d - u8s8f32: 210.968
- IP Batch All - u8s8f32: 705.559
- IP Batch All - f32: 107.186
- Recurrent Neural Network Training - f32: 1042.39
- Recurrent Neural Network Inference - f32: 96.0381
- Deconvolution Batch deconv_1d - f32: 18.4924
- IP Batch 1D - f32: 9.82653
- IP Batch 1D - u8s8f32: 53.3335
- Deconvolution Batch deconv_3d - f32: 27.1199
- Deconvolution Batch deconv_3d - u8s8f32: 41.9796

oneDNN MKL-DNN

This is a test of Intel oneDNN (formerly DNNL, the Deep Neural Network Library, and before that MKL-DNN), an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The reported result is the total perf time. Learn more via the OpenBenchmarking.org test page.
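As a rough sketch of how a harness name such as "Deconvolution Batch deconv_1d - Data Type: u8s8f32" maps onto a benchdnn invocation: benchdnn selects a driver (--deconv, --ip, --rnn), a performance mode, and a data-type configuration. The exact flags and batch-file path below are assumptions and may differ in the benchdnn build shipped with oneDNN 1.3; the command is printed rather than executed:

```shell
# Assumed data-type configuration, matching the u8s8f32 results above:
CFG="u8s8f32"
# Hypothetical batch-file path; the actual path depends on the oneDNN tree:
BATCH="inputs/deconv/deconv_1d"

# Assemble an illustrative benchdnn command (performance mode):
CMD="benchdnn --deconv --mode=P --cfg=${CFG} --batch=${BATCH}"
echo "${CMD}"
```

The Phoronix Test Suite wraps such invocations and sums the per-problem perf times into the single figure reported per harness.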

oneDNN MKL-DNN 1.3 results (ms, fewer is better; via OpenBenchmarking.org)

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32
2 x Intel Xeon E5-2630 0: 210.97 (SE +/- 2.53, N = 15; MIN: 195.86) [1]

Harness: IP Batch All - Data Type: u8s8f32
2 x Intel Xeon E5-2630 0: 705.56 (SE +/- 1.23, N = 3; MIN: 695.46) [1]

Harness: IP Batch All - Data Type: f32
2 x Intel Xeon E5-2630 0: 107.19 (SE +/- 0.78, N = 3; MIN: 104.84) [1]

Harness: Recurrent Neural Network Training - Data Type: f32
2 x Intel Xeon E5-2630 0: 1042.39 (SE +/- 2.68, N = 3; MIN: 1028.34) [1]

Harness: Recurrent Neural Network Inference - Data Type: f32
2 x Intel Xeon E5-2630 0: 96.04 (SE +/- 0.14, N = 3; MIN: 92.54) [1]

Harness: Deconvolution Batch deconv_1d - Data Type: f32
2 x Intel Xeon E5-2630 0: 18.49 (SE +/- 0.08, N = 3; MIN: 18.19) [1]

Harness: IP Batch 1D - Data Type: f32
2 x Intel Xeon E5-2630 0: 9.82653 (SE +/- 0.00194, N = 3; MIN: 9.63) [1]

Harness: IP Batch 1D - Data Type: u8s8f32
2 x Intel Xeon E5-2630 0: 53.33 (SE +/- 0.30, N = 3; MIN: 52.23) [1]

Harness: Deconvolution Batch deconv_3d - Data Type: f32
2 x Intel Xeon E5-2630 0: 27.12 (SE +/- 0.17, N = 13; MIN: 26.39) [1]

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32
2 x Intel Xeon E5-2630 0: 41.98 (SE +/- 0.08, N = 3; MIN: 41.29) [1]

[1] (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl