ivybridge

Intel Core i7-3770K testing with an ECS Z77H2-A2X v1.0 (4.6.5 BIOS) motherboard and ECS Intel Xeon E3-1200 v2/3rd Gen Core graphics on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009159-FI-IVYBRIDGE93
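A minimal sketch of that workflow on a Debian/Ubuntu system is shown below; the package-install step is an assumption about your distribution, while the benchmark command itself is the one given above.

    # Install the Phoronix Test Suite (package name assumed for Debian/Ubuntu)
    sudo apt-get install -y phoronix-test-suite
    # Re-run this result file's tests locally and compare against 2009159-FI-IVYBRIDGE93
    phoronix-test-suite benchmark 2009159-FI-IVYBRIDGE93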
Run Management

Result Identifier: ECS Intel Xeon E3-1200 v2
Date: September 15 2020
Test Run Duration: 4 Hours, 57 Minutes


ivybridge - OpenBenchmarking.org - Phoronix Test Suite

Processor: Intel Core i7-3770K @ 3.90GHz (4 Cores / 8 Threads)
Motherboard: ECS Z77H2-A2X v1.0 (4.6.5 BIOS)
Chipset: Intel Xeon E3-1200 v2/3rd
Memory: 8GB
Disk: 160GB INTEL SSDSA2M160
Graphics: ECS Intel Xeon E3-1200 v2/3rd Gen Core (1150MHz)
Audio: Realtek ALC892
Monitor: G237HL
Network: 2 x Realtek RTL8111/8168/8411
OS: Ubuntu 20.04
Kernel: 5.4.0-47-generic (x86_64)
Desktop: GNOME Shell 3.36.3
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.2 Mesa 20.0.8
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Ivybridge Benchmarks - System Logs
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave - CPU Microcode: 0x21
- Python 3.8.2
- Security: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Vulnerable: No microcode + tsx_async_abort: Not affected

ivybridge - Results Overview (ECS Intel Xeon E3-1200 v2) - OpenBenchmarking.org

ParaView 5.4.1:
- Wavelet Volume - 800 x 600: 17.92 Frames / Sec; 286.72 MiVoxels / Sec
- Wavelet Contour - 800 x 600: 15.36 Frames / Sec; 160.08 MiPolys / Sec
- Wavelet Volume - 1024 x 768: 12.45 Frames / Sec; 199.31 MiVoxels / Sec
- Wavelet Contour - 1024 x 768: 13.66 Frames / Sec; 142.37 MiPolys / Sec
- Wavelet Volume - 1280 x 1024: 8.21 Frames / Sec; 131.42 MiVoxels / Sec
- Wavelet Volume - 1920 x 1080: 7.98 Frames / Sec; 127.77 MiVoxels / Sec
- Wavelet Contour - 1280 x 1024: 9.63 Frames / Sec; 100.33 MiPolys / Sec
- Wavelet Contour - 1920 x 1080: 9.48 Frames / Sec; 98.78 MiPolys / Sec

NAMD 2.14:
- ATPase Simulation - 327,506 Atoms: 4.75379 days/ns

oneDNN 1.5 (ms, CPU engine):
- IP Batch 1D - f32: 20.31
- IP Batch All - f32: 253.00
- IP Batch 1D - u8s8f32: 12.54
- IP Batch All - u8s8f32: 169.19
- Convolution Batch Shapes Auto - f32: 54.51
- Deconvolution Batch deconv_1d - f32: 37.13
- Deconvolution Batch deconv_3d - f32: 56.15
- Convolution Batch Shapes Auto - u8s8f32: 94.78
- Deconvolution Batch deconv_1d - u8s8f32: 25.27
- Deconvolution Batch deconv_3d - u8s8f32: 27.74
- Recurrent Neural Network Training - f32: 2143.23
- Recurrent Neural Network Inference - f32: 704.16
- Matrix Multiply Batch Shapes Transformer - f32: 14.03
- Matrix Multiply Batch Shapes Transformer - u8s8f32: 12.38

SVT-AV1 0.8 (Frames Per Second, 1080p input):
- Enc Mode 0: 0.008
- Enc Mode 4: 0.115
- Enc Mode 8: 0.611

Timed Linux Kernel Compilation 5.4:
- Time To Compile: 270.36 Seconds

GROMACS 2020.1:
- Water Benchmark: 0.166 Ns Per Day

ParaView

This test runs benchmarks of ParaView, an open-source data analytics and visualization application. Learn more via the OpenBenchmarking.org test page.
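If you only want to reproduce this portion of the run, the test profile can be invoked on its own; the profile name below (paraview) is taken from the result identifiers in this file, and the suite prompts for the test options (render mode and resolution) when it runs.

    # Run only the ParaView benchmark profile used in this comparison
    phoronix-test-suite benchmark paraview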

OpenBenchmarking.org - Frames / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Volume - Resolution: 800 x 600
ECS Intel Xeon E3-1200 v2: 17.92 (SE +/- 0.20, N = 3)

OpenBenchmarking.org - MiVoxels / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Volume - Resolution: 800 x 600
ECS Intel Xeon E3-1200 v2: 286.72 (SE +/- 3.18, N = 3)

OpenBenchmarking.org - Frames / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Contour - Resolution: 800 x 600
ECS Intel Xeon E3-1200 v2: 15.36 (SE +/- 0.00, N = 3)

OpenBenchmarking.org - MiPolys / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Contour - Resolution: 800 x 600
ECS Intel Xeon E3-1200 v2: 160.08 (SE +/- 0.01, N = 3)

OpenBenchmarking.org - Frames / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Volume - Resolution: 1024 x 768
ECS Intel Xeon E3-1200 v2: 12.45 (SE +/- 0.02, N = 3)

OpenBenchmarking.org - MiVoxels / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Volume - Resolution: 1024 x 768
ECS Intel Xeon E3-1200 v2: 199.31 (SE +/- 0.38, N = 3)

OpenBenchmarking.org - Frames / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Contour - Resolution: 1024 x 768
ECS Intel Xeon E3-1200 v2: 13.66 (SE +/- 0.01, N = 3)

OpenBenchmarking.org - MiPolys / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Contour - Resolution: 1024 x 768
ECS Intel Xeon E3-1200 v2: 142.37 (SE +/- 0.09, N = 3)

OpenBenchmarking.org - Frames / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Volume - Resolution: 1280 x 1024
ECS Intel Xeon E3-1200 v2: 8.21 (SE +/- 0.00, N = 3)

OpenBenchmarking.org - MiVoxels / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Volume - Resolution: 1280 x 1024
ECS Intel Xeon E3-1200 v2: 131.42 (SE +/- 0.02, N = 3)

OpenBenchmarking.org - Frames / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Volume - Resolution: 1920 x 1080
ECS Intel Xeon E3-1200 v2: 7.98 (SE +/- 0.02, N = 3)

OpenBenchmarking.org - MiVoxels / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Volume - Resolution: 1920 x 1080
ECS Intel Xeon E3-1200 v2: 127.77 (SE +/- 0.31, N = 3)

OpenBenchmarking.org - Frames / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Contour - Resolution: 1280 x 1024
ECS Intel Xeon E3-1200 v2: 9.63 (SE +/- 0.00, N = 3)

OpenBenchmarking.org - MiPolys / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Contour - Resolution: 1280 x 1024
ECS Intel Xeon E3-1200 v2: 100.33 (SE +/- 0.03, N = 3)

OpenBenchmarking.org - Frames / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Contour - Resolution: 1920 x 1080
ECS Intel Xeon E3-1200 v2: 9.48 (SE +/- 0.00, N = 3)

OpenBenchmarking.org - MiPolys / Sec, More Is Better - ParaView 5.4.1 - Test: Wavelet Contour - Resolution: 1920 x 1080
ECS Intel Xeon E3-1200 v2: 98.78 (SE +/- 0.02, N = 3)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
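Outside the test suite, a NAMD benchmark of this kind is typically launched with the namd2 binary and a simulation configuration file; the sketch below is illustrative only, with f1atpase.namd standing in as a hypothetical name for the ATPase benchmark input.

    # +p selects the worker thread count (8 matches the 4-core / 8-thread i7-3770K)
    namd2 +p8 f1atpase.namd > atpase_benchmark.log
    # NAMD reports per-step timing and days/ns in its "Benchmark time" log lines
    grep "Benchmark time" atpase_benchmark.log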

OpenBenchmarking.org - days/ns, Fewer Is Better - NAMD 2.14 - ATPase Simulation - 327,506 Atoms
ECS Intel Xeon E3-1200 v2: 4.75379 (SE +/- 0.01372, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total performance time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
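For reference, a standalone benchdnn invocation roughly corresponding to the inner-product (IP) harnesses below might look like the following sketch; the batch-file path is an assumption, since the bundled problem lists vary between oneDNN releases.

    # Performance mode (--mode=P), f32 inner-product problems from a bundled batch file
    ./benchdnn --ip --mode=P --cfg=f32 --batch=inputs/ip/ip_all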

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: IP Batch 1D - Data Type: f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 20.31 (SE +/- 0.04, N = 3; MIN: 19.99)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: IP Batch All - Data Type: f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 253.00 (SE +/- 1.01, N = 3; MIN: 246.71)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 12.54 (SE +/- 0.00, N = 3; MIN: 12.36)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 169.19 (SE +/- 0.17, N = 3; MIN: 168.32)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 54.51 (SE +/- 0.06, N = 3; MIN: 54.02)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 37.13 (SE +/- 0.09, N = 3; MIN: 36.6)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 56.15 (SE +/- 0.06, N = 3; MIN: 55.69)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 94.78 (SE +/- 0.12, N = 3; MIN: 93.38)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 25.27 (SE +/- 0.95, N = 12; MIN: 23.33)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 27.74 (SE +/- 0.03, N = 3; MIN: 27.44)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 2143.23 (SE +/- 0.23, N = 3; MIN: 2139.25)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 704.16 (SE +/- 5.72, N = 3; MIN: 693.47)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 14.03 (SE +/- 0.00, N = 3; MIN: 13.79)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
ECS Intel Xeon E3-1200 v2: 12.38 (SE +/- 0.16, N = 5; MIN: 11.43)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 encoder, a CPU-based, multi-threaded video encoder for the AV1 video format, using a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
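A direct encoder invocation roughly equivalent to the Enc Mode 8 run would look like the sketch below; the input file name is a placeholder and option spellings have changed between SVT-AV1 releases, so treat this as an assumption rather than the exact command the test profile uses.

    # Encode a raw 1080p YUV clip at encoder mode 8 (higher modes trade quality for speed)
    SvtAv1EncApp -i sample_1080p.yuv -w 1920 -h 1080 -enc-mode 8 -b output.ivf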

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p
ECS Intel Xeon E3-1200 v2: 0.008 (SE +/- 0.000, N = 3)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p
ECS Intel Xeon E3-1200 v2: 0.115 (SE +/- 0.000, N = 3)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p
ECS Intel Xeon E3-1200 v2: 0.611 (SE +/- 0.000, N = 3)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
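The equivalent manual measurement is simply timing a parallel kernel build; the sketch below assumes a kernel source tree in the current directory and uses a stock defconfig, which may differ from the configuration the test profile ships.

    # Configure with defaults, then time a clean parallel build across all hardware threads
    make defconfig
    time make -j"$(nproc)"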

OpenBenchmarking.org - Seconds, Fewer Is Better - Timed Linux Kernel Compilation 5.4 - Time To Compile
ECS Intel Xeon E3-1200 v2: 270.36 (SE +/- 1.08, N = 3)

GROMACS

This test profile runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.
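Run manually, the water benchmark amounts to preprocessing the water_GMX50 input and timing gmx mdrun; the file names below (pme.mdp, conf.gro, topol.top) reflect how that data set is commonly packaged and should be treated as assumptions.

    # Build the run input from the water_GMX50 data set, then run a short timed simulation
    gmx grompp -f pme.mdp -c conf.gro -p topol.top -o water.tpr
    gmx mdrun -s water.tpr -nsteps 1000 -ntomp 8
    # GROMACS prints the ns/day performance figure at the end of the mdrun output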

OpenBenchmarking.org - Ns Per Day, More Is Better - GROMACS 2020.1 - Water Benchmark
ECS Intel Xeon E3-1200 v2: 0.166 (SE +/- 0.000, N = 3)
1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm