Xeon Has

Intel Xeon E5-1680 v3 testing with an ASUS X99-A (3902 BIOS) and eVGA NVIDIA NVE7 1GB on Ubuntu 19.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2004090-NI-XEONHAS3742

Result Identifier: Intel Xeon E5-1680 v3
Date: April 09 2020
Test Duration: 1 Hour, 39 Minutes


System Details

Processor: Intel Xeon E5-1680 v3 @ 3.80GHz (8 Cores / 16 Threads)
Motherboard: ASUS X99-A (3902 BIOS)
Chipset: Intel Xeon E7 v3/Xeon
Memory: 16GB
Disk: PNY CS900 240GB
Graphics: eVGA NVIDIA NVE7 1GB
Audio: Realtek ALC1150
Monitor: G237HL
Network: Intel I218-V
OS: Ubuntu 19.04
Kernel: 5.0.0-38-generic (x86_64)
Desktop: GNOME Shell 3.32.1
Display Server: X Server 1.20.4
Display Driver: modesetting 1.20.4
OpenGL: 4.3 Mesa 19.0.2
Compiler: GCC 8.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs

Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x43
Java Notes: OpenJDK Runtime Environment (build 11.0.5+10-post-Ubuntu-0ubuntu1.119.04)
Python Notes: Python 2.7.16 + Python 3.7.3
Security Notes: itlb_multihit: KVM: Vulnerable + l1tf: Mitigation of PTE Inversion + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + tsx_async_abort: Not affected

Results Overview (Intel Xeon E5-1680 v3)

NEAT: 24.01 Seconds
OpenVKL 0.9 - vklBenchmark: 89.22 Items / Sec
OpenVKL 0.9 - vklBenchmarkVdbVolume: 17694994.70 Items / Sec
OpenVKL 0.9 - vklBenchmarkStructuredVolume: 40223912.10 Items / Sec
LuxCoreRender 2.3 - DLSC: 1.20 M samples/sec
LuxCoreRender 2.3 - Rainbow Colors and Prism: 1.32 M samples/sec
YafaRay 3.4.1 - Total Time For Sample Scene: 220.88 Seconds
Apache Cassandra 3.11.4 - Reads: 34085 Op/s
Apache Cassandra 3.11.4 - Writes: 45611 Op/s
Embree 3.9.0 - Pathtracer - Crown: 8.6819 Frames Per Second
Embree 3.9.0 - Pathtracer ISPC - Crown: 9.7905 Frames Per Second
Embree 3.9.0 - Pathtracer - Asian Dragon: 9.9585 Frames Per Second
Embree 3.9.0 - Pathtracer - Asian Dragon Obj: 9.3806 Frames Per Second
Embree 3.9.0 - Pathtracer ISPC - Asian Dragon: 12.08 Frames Per Second
Embree 3.9.0 - Pathtracer ISPC - Asian Dragon Obj: 10.78 Frames Per Second
Intel Open Image Denoise 1.2.0 - Memorial: 6.75 Images / Sec
oneDNN MKL-DNN 1.3 - IP Batch 1D - f32: 4.49468 ms
oneDNN MKL-DNN 1.3 - IP Batch All - f32: 61.68 ms
oneDNN MKL-DNN 1.3 - IP Batch 1D - u8s8f32: 3.11147 ms
oneDNN MKL-DNN 1.3 - IP Batch All - u8s8f32: 41.33 ms
oneDNN MKL-DNN 1.3 - Deconvolution Batch deconv_1d - f32: 5.64270 ms
oneDNN MKL-DNN 1.3 - Deconvolution Batch deconv_3d - f32: 9.00727 ms
oneDNN MKL-DNN 1.3 - Deconvolution Batch deconv_1d - u8s8f32: 191.48 ms
oneDNN MKL-DNN 1.3 - Deconvolution Batch deconv_3d - u8s8f32: 6.14181 ms
oneDNN MKL-DNN 1.3 - Recurrent Neural Network Training - f32: 277.77 ms
oneDNN MKL-DNN 1.3 - Recurrent Neural Network Inference - f32: 44.79 ms

Nebular Empirical Analysis Tool

NEAT is the Nebular Empirical Analysis Tool for empirical analysis of ionised nebulae, with uncertainty propagation. Learn more via the OpenBenchmarking.org test page.

Nebular Empirical Analysis Tool 2020-02-29 (Seconds, Fewer Is Better)
Intel Xeon E5-1680 v3: 24.01 (SE +/- 0.03, N = 3)
(F9X) gfortran options: -cpp -ffree-line-length-0 -Jsource/ -fopenmp -O3 -fno-backtrace

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmark (Items / Sec, More Is Better)
Intel Xeon E5-1680 v3: 89.22 (SE +/- 0.07, N = 3; MIN: 1 / MAX: 356)

OpenVKL 0.9 - Benchmark: vklBenchmarkVdbVolume (Items / Sec, More Is Better)
Intel Xeon E5-1680 v3: 17694994.70 (SE +/- 65580.00, N = 3; MIN: 699167 / MAX: 79878528)

OpenVKL 0.9 - Benchmark: vklBenchmarkStructuredVolume (Items / Sec, More Is Better)
Intel Xeon E5-1680 v3: 40223912.10 (SE +/- 146902.32, N = 3; MIN: 889800 / MAX: 233817912)

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3 - Scene: DLSC (M samples/sec, More Is Better)
Intel Xeon E5-1680 v3: 1.20 (SE +/- 0.00, N = 3; MIN: 1.16 / MAX: 1.22)

LuxCoreRender 2.3 - Scene: Rainbow Colors and Prism (M samples/sec, More Is Better)
Intel Xeon E5-1680 v3: 1.32 (SE +/- 0.00, N = 3; MIN: 1.29 / MAX: 1.37)

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1 - Total Time For Sample Scene (Seconds, Fewer Is Better)
Intel Xeon E5-1680 v3: 220.88 (SE +/- 0.29, N = 3)
(CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lboost_system -lboost_filesystem -lboost_locale

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 3.11.4 - Test: Reads (Op/s, More Is Better)
Intel Xeon E5-1680 v3: 34085 (SE +/- 1141.91, N = 12)

Apache Cassandra 3.11.4 - Test: Writes (Op/s, More Is Better)
Intel Xeon E5-1680 v3: 45611 (SE +/- 649.07, N = 3)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.
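
For context, the Embree API that such pathtracers build on is quite compact. The following is a minimal, self-contained sketch of the Embree 3 C API, not the benchmark's own pathtracer code: the single-triangle geometry and the ray values are hypothetical, chosen only so the example traces one ray against one primitive with rtcIntersect1.

    #include <embree3/rtcore.h>
    #include <cstdio>

    int main() {
      // Create a default device and an empty scene.
      RTCDevice device = rtcNewDevice(NULL);
      RTCScene scene = rtcNewScene(device);

      // Hypothetical geometry: a single triangle in the XY plane.
      RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
      float* v = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX, 0,
                                                 RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
      v[0] = 0.f; v[1] = 0.f; v[2] = 0.f;
      v[3] = 1.f; v[4] = 0.f; v[5] = 0.f;
      v[6] = 0.f; v[7] = 1.f; v[8] = 0.f;
      unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_INDEX, 0,
                                                         RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
      idx[0] = 0; idx[1] = 1; idx[2] = 2;
      rtcCommitGeometry(geom);
      rtcAttachGeometry(scene, geom);
      rtcReleaseGeometry(geom);
      rtcCommitScene(scene);

      // Trace one ray that passes through the triangle.
      RTCRayHit rayhit;
      rayhit.ray.org_x = 0.25f; rayhit.ray.org_y = 0.25f; rayhit.ray.org_z = -1.f;
      rayhit.ray.dir_x = 0.f;   rayhit.ray.dir_y = 0.f;   rayhit.ray.dir_z = 1.f;
      rayhit.ray.tnear = 0.f;   rayhit.ray.tfar = 1e30f;  rayhit.ray.time = 0.f;
      rayhit.ray.mask = 0xFFFFFFFF; rayhit.ray.flags = 0;
      rayhit.hit.geomID = RTC_INVALID_GEOMETRY_ID;
      rayhit.hit.instID[0] = RTC_INVALID_GEOMETRY_ID;

      RTCIntersectContext context;
      rtcInitIntersectContext(&context);
      rtcIntersect1(scene, &context, &rayhit);

      std::printf("geomID = %u, tfar = %f\n", rayhit.hit.geomID, rayhit.ray.tfar);

      rtcReleaseScene(scene);
      rtcReleaseDevice(device);
      return 0;
    }

The Pathtracer and Pathtracer ISPC binaries benchmarked below exercise these same intersection kernels at much larger scale on the Crown and Asian Dragon models.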

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
Intel Xeon E5-1680 v3: 8.6819 (SE +/- 0.0156, N = 3; MIN: 8.62 / MAX: 8.82)

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
Intel Xeon E5-1680 v3: 9.7905 (SE +/- 0.0289, N = 3; MIN: 9.7 / MAX: 9.98)

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
Intel Xeon E5-1680 v3: 9.9585 (SE +/- 0.0172, N = 3; MIN: 9.9 / MAX: 10.07)

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Intel Xeon E5-1680 v3: 9.3806 (SE +/- 0.0249, N = 3; MIN: 9.3 / MAX: 9.49)

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
Intel Xeon E5-1680 v3: 12.08 (SE +/- 0.09, N = 3; MIN: 11.88 / MAX: 12.39)

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Intel Xeon E5-1680 v3: 10.78 (SE +/- 0.03, N = 3; MIN: 10.7 / MAX: 10.94)

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray tracing and is part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
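
As a point of reference, the library is driven through a small filter-based API. Below is a minimal sketch of the Open Image Denoise 1.x C++ API; the image size and buffer contents are placeholders rather than the Memorial scene used by the benchmark. It creates a CPU device, attaches a noisy color buffer to the generic "RT" filter, and runs the denoiser once.

    #include <OpenImageDenoise/oidn.hpp>
    #include <cstdio>
    #include <vector>

    int main() {
      // Placeholder image size and contents (the benchmark uses the Memorial scene).
      const int width = 640, height = 480;
      std::vector<float> color(width * height * 3, 0.5f);  // noisy input, RGB floats
      std::vector<float> output(width * height * 3);

      // Create and commit a CPU device.
      oidn::DeviceRef device = oidn::newDevice();
      device.commit();

      // "RT" is the generic ray-tracing denoising filter.
      oidn::FilterRef filter = device.newFilter("RT");
      filter.setImage("color",  color.data(),  oidn::Format::Float3, width, height);
      filter.setImage("output", output.data(), oidn::Format::Float3, width, height);
      filter.set("hdr", false);  // treat the input as LDR in this sketch
      filter.commit();

      // One denoising pass; the benchmark measures how many of these run per second.
      filter.execute();

      // Basic error check.
      const char* errorMessage;
      if (device.getError(errorMessage) != oidn::Error::None)
        std::fprintf(stderr, "OIDN error: %s\n", errorMessage);
      return 0;
    }

The Images / Sec figure below reflects how quickly filter executions of this kind complete on the Memorial input.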

Intel Open Image Denoise 1.2.0 - Scene: Memorial (Images / Sec, More Is Better)
Intel Xeon E5-1680 v3: 6.75 (SE +/- 0.00, N = 3)

oneDNN MKL-DNN

This is a test of Intel oneDNN (formerly DNNL / Deep Neural Network Library / MKL-DNN), an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Learn more via the OpenBenchmarking.org test page.
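
For illustration, the harnesses below (e.g. "IP Batch 1D") correspond to individual oneDNN primitive types. The sketch that follows uses the oneDNN 1.x C++ API to build and execute a single f32 inner-product (IP) primitive with made-up tensor sizes; it is not the benchdnn harness itself, which generates and times many such problem descriptors.

    #include <dnnl.hpp>

    int main() {
      using namespace dnnl;

      // CPU engine and stream (oneDNN 1.x API, matching the 1.3 release tested here).
      engine eng(engine::kind::cpu, 0);
      stream strm(eng);

      // Made-up problem sizes: batch of 16, 256 inputs, 128 outputs.
      const memory::dim N = 16, IC = 256, OC = 128;
      memory::desc src_md({N, IC}, memory::data_type::f32, memory::format_tag::nc);
      memory::desc wei_md({OC, IC}, memory::data_type::f32, memory::format_tag::oi);
      memory::desc bia_md({OC}, memory::data_type::f32, memory::format_tag::x);
      memory::desc dst_md({N, OC}, memory::data_type::f32, memory::format_tag::nc);

      // Describe and instantiate the inner-product (fully connected) primitive.
      auto ip_desc = inner_product_forward::desc(
          prop_kind::forward_inference, src_md, wei_md, bia_md, dst_md);
      auto ip_pd = inner_product_forward::primitive_desc(ip_desc, eng);
      auto ip = inner_product_forward(ip_pd);

      // Memory objects; contents are left uninitialized in this sketch.
      memory src_mem(src_md, eng), wei_mem(wei_md, eng);
      memory bia_mem(bia_md, eng), dst_mem(dst_md, eng);

      // Execute once and wait for completion.
      ip.execute(strm, {{DNNL_ARG_SRC, src_mem},
                        {DNNL_ARG_WEIGHTS, wei_mem},
                        {DNNL_ARG_BIAS, bia_mem},
                        {DNNL_ARG_DST, dst_mem}});
      strm.wait();
      return 0;
    }

benchdnn drives the same primitive kinds (inner product, deconvolution, RNN) across batches of problems and reports the total time, which is what the ms figures below represent.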

oneDNN MKL-DNN 1.3 - Harness: IP Batch 1D - Data Type: f32 (ms, Fewer Is Better)
Intel Xeon E5-1680 v3: 4.49468 (SE +/- 0.01584, N = 3; MIN: 4.39)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN 1.3 - Harness: IP Batch All - Data Type: f32 (ms, Fewer Is Better)
Intel Xeon E5-1680 v3: 61.68 (SE +/- 0.08, N = 3; MIN: 60.91)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN 1.3 - Harness: IP Batch 1D - Data Type: u8s8f32 (ms, Fewer Is Better)
Intel Xeon E5-1680 v3: 3.11147 (SE +/- 0.00876, N = 3; MIN: 3.06)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN 1.3 - Harness: IP Batch All - Data Type: u8s8f32 (ms, Fewer Is Better)
Intel Xeon E5-1680 v3: 41.33 (SE +/- 0.03, N = 3; MIN: 41.06)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 (ms, Fewer Is Better)
Intel Xeon E5-1680 v3: 5.64270 (SE +/- 0.01249, N = 3; MIN: 5.56)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 (ms, Fewer Is Better)
Intel Xeon E5-1680 v3: 9.00727 (SE +/- 0.00734, N = 3; MIN: 8.94)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 (ms, Fewer Is Better)
Intel Xeon E5-1680 v3: 191.48 (SE +/- 0.68, N = 3; MIN: 186.03)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN 1.3 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 (ms, Fewer Is Better)
Intel Xeon E5-1680 v3: 6.14181 (SE +/- 0.00926, N = 3; MIN: 6.04)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN 1.3 - Harness: Recurrent Neural Network Training - Data Type: f32 (ms, Fewer Is Better)
Intel Xeon E5-1680 v3: 277.77 (SE +/- 0.68, N = 3; MIN: 275.33)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

oneDNN MKL-DNN 1.3 - Harness: Recurrent Neural Network Inference - Data Type: f32 (ms, Fewer Is Better)
Intel Xeon E5-1680 v3: 44.79 (SE +/- 0.08, N = 3; MIN: 43.92)
(CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl