tests

Intel Xeon E5-2687W v3 testing with an MSI X99S SLI PLUS (MS-7885) v1.0 (1.E0 BIOS) and NVIDIA GeForce GTX 770 2GB on Ubuntu 19.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1910047-AS-TESTS964002

Result Identifier: Intel Xeon E5-2687W v3
Test Date: October 04 2019
Test Run Duration: 3 Hours, 42 Minutes


Processor: Intel Xeon E5-2687W v3 @ 3.50GHz (10 Cores / 20 Threads)
Motherboard: MSI X99S SLI PLUS (MS-7885) v1.0 (1.E0 BIOS)
Chipset: Intel Xeon E7 v3/Xeon
Memory: 32768MB
Disk: 80GB INTEL SSDSCKGW08
Graphics: NVIDIA GeForce GTX 770 2GB
Audio: Realtek ALC892
Network: Intel I218-V
OS: Ubuntu 19.04
Kernel: 5.3.0-999-generic (x86_64) 20190806
Desktop: GNOME Shell 3.32.2
Display Server: X Server 1.20.4
Display Driver: modesetting 1.20.4
Compiler: GCC 8.3.0
File-System: ext4

System Logs:
- Compiler details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- Security mitigations: l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling

Results Overview - Intel Xeon E5-2687W v3

MKL-DNN DNNL (ms, fewer is better):
  IP Batch 1D - f32: 9.62
  IP Batch All - f32: 20.60
  IP Batch 1D - u8s8f32: 92.35
  IP Batch All - u8s8f32: 439.72
  Convolution Batch conv_3d - f32: 22.65
  Convolution Batch conv_all - f32: 3016.70
  Convolution Batch conv_3d - u8s8f32: 16715.50
  Deconvolution Batch deconv_1d - f32: 6.38
  Deconvolution Batch deconv_3d - f32: 8.24
  Convolution Batch conv_alexnet - f32: 407.42
  Convolution Batch conv_all - u8s8f32: 50364.80
  Deconvolution Batch deconv_all - f32: 3628.46
  Deconvolution Batch deconv_1d - u8s8f32: 6004.14
  Deconvolution Batch deconv_3d - u8s8f32: 9931.40
  Recurrent Neural Network Training - f32: 282.51
  Convolution Batch conv_alexnet - u8s8f32: 4245.35
  Convolution Batch conv_googlenet_v3 - f32: 167.82
  Convolution Batch conv_googlenet_v3 - u8s8f32: 2400.92

OSPray (FPS, more is better):
  San Miguel - SciVis: 13.45
  XFrog Forest - SciVis: 2.40
  San Miguel - Path Tracer: 1.20
  NASA Streamlines - SciVis: 17.24
  XFrog Forest - Path Tracer: 1.31
  Magnetic Reconnection - SciVis: 11.76
  NASA Streamlines - Path Tracer: 3.69
  Magnetic Reconnection - Path Tracer: 166.67

Embree (frames per second, more is better):
  Pathtracer - Crown: 9.47
  Pathtracer ISPC - Crown: 10.61
  Pathtracer - Asian Dragon: 11.15
  Pathtracer - Asian Dragon Obj: 10.03
  Pathtracer ISPC - Asian Dragon: 13.29
  Pathtracer ISPC - Asian Dragon Obj: 11.52

Intel Open Image Denoise (images/sec, more is better):
  Memorial: 7.07

LuxCoreRender (M samples/sec, more is better):
  DLSC: 1.59
  Rainbow Colors and Prism: 1.57

Tungsten Renderer (seconds, fewer is better):
  Hair: 32.73
  Water Caustic: 31.43
  Non-Exponential: 9.28
  Volumetric Caustic: 11.50

MKL-DNN DNNL

This is a test of Intel MKL-DNN (DNNL, the Deep Neural Network Library), an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.
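
Each result below is reported as an average over N runs together with a standard error (SE) and the minimum (and, where available, maximum) observed value. As a minimal sketch of the arithmetic behind those figures (not necessarily the Phoronix Test Suite's exact implementation), assuming three hypothetical run times:

    // Illustrative only: derive the average, standard error, and min/max that
    // accompany each benchmark result. The run times below are hypothetical.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<double> runs = {9.60, 9.62, 9.64}; // hypothetical run times in ms

        double sum = 0.0;
        for (double r : runs) sum += r;
        const double mean = sum / runs.size();

        double sq = 0.0;
        for (double r : runs) sq += (r - mean) * (r - mean);
        const double stddev = std::sqrt(sq / (runs.size() - 1));    // sample standard deviation
        const double se = stddev / std::sqrt((double)runs.size());  // standard error of the mean

        std::printf("Average: %.2f  SE +/- %.2f, N = %zu  MIN: %.2f  MAX: %.2f\n",
                    mean, se, runs.size(),
                    *std::min_element(runs.begin(), runs.end()),
                    *std::max_element(runs.begin(), runs.end()));
        return 0;
    }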

OpenBenchmarking.org - MKL-DNN DNNL 1.1 (ms, fewer is better)
Intel Xeon E5-2687W v3; all results: 1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

Harness / Data Type                              Result (ms)    SE +/-    N       MIN
IP Batch 1D - f32                                       9.62      0.02    3      9.53
IP Batch All - f32                                     20.60      0.03    3     20.42
IP Batch 1D - u8s8f32                                  92.35      0.27    3     90.76
IP Batch All - u8s8f32                                439.72      0.92    3    433.93
Convolution Batch conv_3d - f32                        22.65      0.02    3     22.36
Convolution Batch conv_all - f32                     3016.70      2.79    3   3000.74
Convolution Batch conv_3d - u8s8f32                 16715.50     29.35    3   16670.7
Deconvolution Batch deconv_1d - f32                     6.38      0.04    3      6.26
Deconvolution Batch deconv_3d - f32                     8.24      0.01    3      8.14
Convolution Batch conv_alexnet - f32                  407.42      0.74    3    405.59
Convolution Batch conv_all - u8s8f32                50364.80     51.64    3   50181.9
Deconvolution Batch deconv_all - f32                 3628.46      1.38    3   3616.88
Deconvolution Batch deconv_1d - u8s8f32              6004.14     10.43    3   5984.56
Deconvolution Batch deconv_3d - u8s8f32              9931.40      3.68    3   9919.73
Recurrent Neural Network Training - f32               282.51      0.08    3    281.98
Convolution Batch conv_alexnet - u8s8f32             4245.35      6.15    3   4231.59
Convolution Batch conv_googlenet_v3 - f32              167.82      0.18    3    166.06
Convolution Batch conv_googlenet_v3 - u8s8f32         2400.92      2.18    3   2389.52

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OSPray 1.8.5 (FPS, more is better)
Intel Xeon E5-2687W v3

Demo - Renderer                            Result (FPS)    SE +/-     N      MIN      MAX
San Miguel - SciVis                               13.45      0.06     3    12.66    13.51
XFrog Forest - SciVis                              2.40      0.00     6     2.35     2.42
San Miguel - Path Tracer                           1.20      0.00     3     1.18     1.21
NASA Streamlines - SciVis                         17.24      0.00    12    16.39    17.54
XFrog Forest - Path Tracer                         1.31      0.00     9     1.29     1.32
Magnetic Reconnection - SciVis                    11.76      0.00    12    11.24     11.9
NASA Streamlines - Path Tracer                     3.69      0.00     6      3.6     3.76
Magnetic Reconnection - Path Tracer              166.67      0.00    12      125        -

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.
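
For readers unfamiliar with the library, the following is a minimal, illustrative C++ sketch of the Embree 3 API (building a one-triangle scene and tracing a single ray); it is an assumption of typical usage, not the code path exercised by these Pathtracer binaries.

    // Illustrative Embree 3 usage: one-triangle scene, one ray query.
    // Build against the Embree 3 headers and link with -lembree3.
    #include <embree3/rtcore.h>
    #include <cstdio>

    int main() {
        RTCDevice device = rtcNewDevice(nullptr);
        RTCScene scene = rtcNewScene(device);

        // One triangle at z = 2.
        RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
        float* verts = (float*)rtcSetNewGeometryBuffer(
            geom, RTC_BUFFER_TYPE_VERTEX, 0, RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
        unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(
            geom, RTC_BUFFER_TYPE_INDEX, 0, RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
        verts[0] = 0; verts[1] = 0; verts[2] = 2;
        verts[3] = 1; verts[4] = 0; verts[5] = 2;
        verts[6] = 0; verts[7] = 1; verts[8] = 2;
        idx[0] = 0; idx[1] = 1; idx[2] = 2;
        rtcCommitGeometry(geom);
        rtcAttachGeometry(scene, geom);
        rtcReleaseGeometry(geom);
        rtcCommitScene(scene);

        // Trace one ray from near the origin along +Z toward the triangle.
        RTCRayHit rh = {};
        rh.ray.org_x = 0.3f; rh.ray.org_y = 0.3f; rh.ray.org_z = 0.0f;
        rh.ray.dir_x = 0.0f; rh.ray.dir_y = 0.0f; rh.ray.dir_z = 1.0f;
        rh.ray.tnear = 0.0f; rh.ray.tfar = 1e30f;
        rh.ray.mask = (unsigned)-1;
        rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;

        RTCIntersectContext ctx;
        rtcInitIntersectContext(&ctx);
        rtcIntersect1(scene, &ctx, &rh);

        if (rh.hit.geomID != RTC_INVALID_GEOMETRY_ID)
            std::printf("hit at t = %f\n", rh.ray.tfar);
        else
            std::printf("miss\n");

        rtcReleaseScene(scene);
        rtcReleaseDevice(device);
        return 0;
    }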

OpenBenchmarking.org - Embree 3.6.1 (frames per second, more is better)
Intel Xeon E5-2687W v3

Binary - Model                             Result (FPS)    SE +/-    N      MIN      MAX
Pathtracer - Crown                                 9.47      0.01    3      9.4      9.6
Pathtracer ISPC - Crown                           10.61      0.01    3    10.54    10.79
Pathtracer - Asian Dragon                         11.15      0.03    3    11.05    11.31
Pathtracer - Asian Dragon Obj                     10.03      0.01    3     9.97    10.14
Pathtracer ISPC - Asian Dragon                    13.29      0.01    3    13.22    13.46
Pathtracer ISPC - Asian Dragon Obj                11.52      0.02    3    11.43    11.68

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
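
As an illustration of how the library is typically driven (not the benchmark code itself, and with hypothetical image dimensions and buffers), a minimal C++ sketch of the Open Image Denoise 1.0 API:

    // Illustrative Open Image Denoise usage: denoise an HDR color buffer.
    // Link with -lOpenImageDenoise; buffer contents here are placeholders.
    #include <OpenImageDenoise/oidn.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        const int width = 64, height = 64;                    // hypothetical image size
        std::vector<float> color(width * height * 3, 0.5f);   // noisy RGB input
        std::vector<float> output(width * height * 3);        // denoised RGB output

        oidn::DeviceRef device = oidn::newDevice();
        device.commit();

        oidn::FilterRef filter = device.newFilter("RT");      // generic ray-tracing denoise filter
        filter.setImage("color", color.data(), oidn::Format::Float3, width, height);
        filter.setImage("output", output.data(), oidn::Format::Float3, width, height);
        filter.set("hdr", true);                               // treat input as high dynamic range
        filter.commit();
        filter.execute();

        const char* errorMessage = nullptr;
        if (device.getError(errorMessage) != oidn::Error::None)
            std::cerr << "OIDN error: " << errorMessage << std::endl;
        return 0;
    }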

OpenBenchmarking.org - Intel Open Image Denoise 1.0.0 (images/sec, more is better)
Scene: Memorial - Intel Xeon E5-2687W v3: 7.07 (SE +/- 0.00, N = 3)

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - LuxCoreRender 2.2 (M samples/sec, more is better)
Intel Xeon E5-2687W v3

Scene                            Result    SE +/-    N     MIN     MAX
DLSC                               1.59      0.01    3    1.52    1.64
Rainbow Colors and Prism           1.57      0.01    3    1.52    1.66

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Tungsten Renderer 0.2.2 (seconds, fewer is better)
Intel Xeon E5-2687W v3; all results: 1. (CXX) g++ options: -std=c++0x -march=haswell -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -mfma -mbmi2 -mno-sse4a -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lpthread -ldl

Scene                        Result (s)    SE +/-    N
Hair                              32.73      0.05    3
Water Caustic                     31.43      0.06    3
Non-Exponential                    9.28      0.04    4
Volumetric Caustic                11.50      0.03    3