mkl haswell

Intel Xeon E5-1680 v3 testing with an ASUS X99-A (3902 BIOS) and eVGA NVIDIA NVE7 1GB on Ubuntu 18.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1906066-HV-MKLHASWEL82

Result Identifier: Intel Xeon E5-1680 v3
Date: June 06 2019
Test Duration: 25 Minutes


mkl haswell Benchmarks - System Details

Processor: Intel Xeon E5-1680 v3 @ 3.80GHz (8 Cores / 16 Threads)
Motherboard: ASUS X99-A (3902 BIOS)
Chipset: Intel Xeon E7 v3/Xeon
Memory: 16384MB
Disk: 60GB Patriot Torch
Graphics: eVGA NVIDIA NVE7 1GB
Audio: Realtek ALC1150
Monitor: VE228
Network: Intel I218-V
OS: Ubuntu 18.04
Kernel: 4.18.0-20-generic (x86_64)
Desktop: GNOME Shell 3.28.3
Display Server: X Server 1.20.1
Display Driver: modesetting 1.20.1
OpenGL: 4.3 Mesa 18.2.8
Compiler: GCC 7.4.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- Security Mitigations: l1tf: Mitigation of PTE Inversion + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling

mkl haswell - Results Summary, Intel Xeon E5-1680 v3 (ms, fewer is better):

mkl-dnn: Convolution Batch conv_3d - u8s8f32s32: 14471.60
mkl-dnn: Convolution Batch conv_3d - u8s8u8s32: 14458.40
mkl-dnn: Deconvolution Batch deconv_all - f32: 2865.70
mkl-dnn: Convolution Batch conv_alexnet - f32: 447.17
mkl-dnn: Deconvolution Batch deconv_3d - f32: 9.35
mkl-dnn: Deconvolution Batch deconv_1d - f32: 7.15
mkl-dnn: Convolution Batch conv_all - f32: 3388.48
mkl-dnn: Convolution Batch conv_3d - f32: 23.36
mkl-dnn: IP Batch All - u8s8f32s32: 103.53
mkl-dnn: IP Batch All - u8s8u8s32: 106.63
mkl-dnn: IP Batch 1D - u8s8f32s32: 7.34
mkl-dnn: IP Batch 1D - u8s8u8s32: 8.32
mkl-dnn: IP Batch All - f32: 156.85
mkl-dnn: IP Batch 1D - f32: 11.89

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported, so lower values are better. Learn more via the OpenBenchmarking.org test page.
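
For context, the kind of primitive these harnesses exercise can be reproduced directly against the library. Below is a minimal sketch, not part of this result file, that builds and times a single f32 inner-product ("IP") forward primitive; it assumes the modern oneDNN v3.x C++ API (the successor to MKL-DNN), and the tensor sizes, iteration count, and build line are illustrative assumptions. The 2019-era MKL-DNN v0.x headers and class names used by this test differ.

// Illustrative sketch only (assumption: oneDNN v3.x, successor to MKL-DNN).
// Build line is also an assumption: g++ -std=c++11 ip_sketch.cpp -ldnnl
#include <dnnl.hpp>
#include <chrono>
#include <cstdio>
#include <unordered_map>
#include <vector>

int main() {
    using namespace dnnl;
    const memory::dim N = 256, IC = 1024, OC = 1000;  // batch, input, output features (assumed sizes)

    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    // Plain row-major layouts; benchdnn typically lets the library choose blocked layouts instead.
    memory::desc src_md({N, IC}, memory::data_type::f32, memory::format_tag::nc);
    memory::desc wei_md({OC, IC}, memory::data_type::f32, memory::format_tag::oi);
    memory::desc bia_md({OC}, memory::data_type::f32, memory::format_tag::x);
    memory::desc dst_md({N, OC}, memory::data_type::f32, memory::format_tag::nc);

    auto pd = inner_product_forward::primitive_desc(
        eng, prop_kind::forward_inference, src_md, wei_md, bia_md, dst_md);
    auto ip = inner_product_forward(pd);

    std::vector<float> src(N * IC, 1.0f), wei(OC * IC, 0.5f), bia(OC, 0.1f), dst(N * OC);
    memory src_m(src_md, eng, src.data()), wei_m(wei_md, eng, wei.data());
    memory bia_m(bia_md, eng, bia.data()), dst_m(dst_md, eng, dst.data());

    std::unordered_map<int, memory> args = {
        {DNNL_ARG_SRC, src_m}, {DNNL_ARG_WEIGHTS, wei_m},
        {DNNL_ARG_BIAS, bia_m}, {DNNL_ARG_DST, dst_m}};

    ip.execute(strm, args);                                // warm-up run
    strm.wait();

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < 10; ++i) ip.execute(strm, args);   // timed runs
    strm.wait();
    double ms = std::chrono::duration<double, std::milli>(
        std::chrono::steady_clock::now() - t0).count();
    std::printf("total: %.2f ms (fewer is better)\n", ms);
    return 0;
}

The benchdnn harness behind this test drives many problem descriptors per batch file and reports the accumulated time, which is what the individual results below record for each harness/data-type combination.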

MKL-DNN 2019-04-16 - Individual Results (OpenBenchmarking.org; ms, fewer is better)

Harness: Convolution Batch conv_3d - Data Type: u8s8f32s32
Intel Xeon E5-1680 v3: 14471.60 (MIN: 14463.6)

Harness: Convolution Batch conv_3d - Data Type: u8s8u8s32
Intel Xeon E5-1680 v3: 14458.40 (MIN: 14435.6)

Harness: Deconvolution Batch deconv_all - Data Type: f32
Intel Xeon E5-1680 v3: 2865.70 (MIN: 2841.52)

Harness: Convolution Batch conv_alexnet - Data Type: f32
Intel Xeon E5-1680 v3: 447.17 (MIN: 439.36)

Harness: Deconvolution Batch deconv_3d - Data Type: f32
Intel Xeon E5-1680 v3: 9.35 (MIN: 9.19)

Harness: Deconvolution Batch deconv_1d - Data Type: f32
Intel Xeon E5-1680 v3: 7.15 (MIN: 7.08)

Harness: Convolution Batch conv_all - Data Type: f32
Intel Xeon E5-1680 v3: 3388.48 (MIN: 3370.94)

Harness: Convolution Batch conv_3d - Data Type: f32
Intel Xeon E5-1680 v3: 23.36 (MIN: 23.21)

Harness: IP Batch All - Data Type: u8s8f32s32
Intel Xeon E5-1680 v3: 103.53 (MIN: 71.88)

Harness: IP Batch All - Data Type: u8s8u8s32
Intel Xeon E5-1680 v3: 106.63 (MIN: 74.59)

Harness: IP Batch 1D - Data Type: u8s8f32s32
Intel Xeon E5-1680 v3: 7.34 (MIN: 4.41)

Harness: IP Batch 1D - Data Type: u8s8u8s32
Intel Xeon E5-1680 v3: 8.32 (MIN: 4.62)

Harness: IP Batch All - Data Type: f32
Intel Xeon E5-1680 v3: 156.85 (MIN: 122.37)

Harness: IP Batch 1D - Data Type: f32
Intel Xeon E5-1680 v3: 11.89 (MIN: 6.82)

All results compiled with: (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl