mpi_benchmark

2 x Intel Xeon E5645 testing with an ASUS Z8P(N)E-D12(X) (1103 BIOS) and ASPEED 4GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2103018-HA-MPIBENCHM67
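
To rerun only part of this comparison rather than the whole suite, individual test profiles can be installed and run on their own. A minimal sketch, assuming the OpenBenchmarking.org profile names match the short test identifiers shown in the results below (e.g. hpcg, lammps, intel-mpi):

  # Install and run a single test profile (the profile name is an assumption)
  phoronix-test-suite install pts/hpcg
  phoronix-test-suite benchmark pts/hpcg

  # Or reproduce the full comparison against this result file
  phoronix-test-suite benchmark 2103018-HA-MPIBENCHM67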

Result Identifier: test_srinath_pc
Date: February 28 2021
Test Run Duration: 6 Hours, 11 Minutes


System Details - test_srinath_pc

Processor: 2 x Intel Xeon E5645 @ 2.40GHz (12 Cores / 24 Threads)
Motherboard: ASUS Z8P(N)E-D12(X) (1103 BIOS)
Chipset: Intel 5520 I/O + ICH10R
Memory: 12 x 4096 MB DDR3-1333MT/s
Disk: 8002GB Seagate ST8000DM004-2CX1 + 4001GB Western Digital WD4000FYYZ-0 + 1000GB Samsung SSD 870
Graphics: ASPEED 4GB
Audio: Realtek ALC888
Monitor: S27D390 + DELL U2711
Network: 2 x Intel 82574L
OS: Ubuntu 20.04
Kernel: 5.8.0-44-generic (x86_64)
Desktop: KDE Frameworks 5.68.0
Display Server: X Server 1.20.9
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Transparent Huge Pages: madvise
- GCC configured with: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled)
- CPU Microcode: 0x1f
- Python 2.7.18 + Python 3.8.5
- Security: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Vulnerable: Clear buffers attempted no microcode; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Results Overview - test_srinath_pc

hpcg: 3.82146 GFLOP/s
minife: Small: 4085.33 CG Mflops
pennant: sedovbig: 313.2361 Seconds
pennant: leblancbig: 106.9990 Seconds
mrbayes: Primate Phylogeny Analysis: 337.588 Seconds
qmcpack: simple-H2O: 94.016 Seconds
incompact3d: Cylinder: 800.404541 Seconds
mocassin: Dust 2D tau100.0: 380 Seconds
lammps: 20k Atoms: 3.348 ns/day
lammps: Rhodopsin Protein: 3.529 ns/day
askap: tConvolve MPI - Degridding: 1125.49 Mpix/sec
askap: tConvolve MPI - Gridding: 1358.77 Mpix/sec
intel-mpi: IMB-P2P PingPong: 6303608 Average Msg/sec
intel-mpi: IMB-MPI1 Exchange: 3010.61 Average Mbytes/sec
intel-mpi: IMB-MPI1 Exchange: 429.08 Average usec
intel-mpi: IMB-MPI1 PingPong: 2456.33 Average Mbytes/sec
intel-mpi: IMB-MPI1 Sendrecv: 2403.47 Average Mbytes/sec
intel-mpi: IMB-MPI1 Sendrecv: 269.06 Average usec
gromacs: water_GMX50_bare: 0.409 Ns Per Day
gpaw: Carbon Nanotube: 695.531 Seconds

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
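
Outside of the Phoronix Test Suite, the reference HPCG build is typically launched directly over MPI. A minimal sketch, assuming the stock xhpcg binary and hpcg.dat input file of the reference distribution:

  # Launch one rank per hardware thread on this 12-core / 24-thread system
  mpirun -np 24 ./xhpcg
  # Local problem dimensions and target run time are read from hpcg.dat in the working directory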

High Performance Conjugate Gradient 3.1 - GFLOP/s, More Is Better
test_srinath_pc: 3.82146 (SE +/- 0.02040, N = 3)
(CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi

miniFE

MiniFE Finite Element is a proxy application for unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.
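
For reference, a standalone miniFE run is just an MPI launch of the miniFE.x executable; with no arguments it falls back to its built-in default problem size. A minimal sketch (the binary name and the availability of defaults are assumptions based on the reference Mantevo build):

  mpirun -np 24 ./miniFE.x
  # The CG Mflops figure is reported in miniFE's YAML output file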

miniFE 2.2 - Problem Size: Small - CG Mflops, More Is Better
test_srinath_pc: 4085.33 (SE +/- 41.89, N = 15)
(CXX) g++ options: -O3 -fopenmp -pthread -lmpi_cxx -lmpi

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.
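
A standalone Pennant run takes a single input deck on the command line. A minimal sketch, assuming the sedovbig input shipped in the Pennant test directory (the path is an assumption):

  mpirun -np 24 ./pennant test/sedovbig/sedovbig.pnt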

Pennant 1.0.1 - Test: sedovbig - Hydro Cycle Time - Seconds, Fewer Is Better
test_srinath_pc: 313.24 (SE +/- 0.79, N = 3)
(CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi

Pennant 1.0.1 - Test: leblancbig - Hydro Cycle Time - Seconds, Fewer Is Better
test_srinath_pc: 107.00 (SE +/- 0.44, N = 3)
(CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
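
When run outside the test harness, MrBayes is usually pointed at a NEXUS file whose mrbayes block drives the analysis in batch mode. A minimal sketch, assuming the primates example data set bundled with MrBayes (the file name is an assumption):

  mb primates.nex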

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis - Seconds, Fewer Is Better
test_srinath_pc: 337.59 (SE +/- 0.16, N = 3)
(CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -maes -O3 -std=c99 -pedantic -lm

QMCPACK

QMCPACK is an open-source, production-level many-body ab initio Quantum Monte Carlo (QMC) code for computing the electronic structure of atoms, molecules, and solids. This benchmark makes use of MPI with the simple-H2O example. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
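
A standalone QMCPACK run takes an XML input file describing the system and driver settings. A minimal sketch for an MPI launch, assuming an input file named simple-H2O.xml (the file name is an assumption):

  mpirun -np 24 qmcpack simple-H2O.xml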

QMCPACK 3.10 - Input: simple-H2O - Total Execution Time - Seconds, Fewer Is Better
test_srinath_pc: 94.02 (SE +/- 1.09, N = 4)
(CXX) g++ options: -fopenmp -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -march=native -O3 -fomit-frame-pointer -ffast-math -pthread -lm

Incompact3D

Incompact3d is a Fortran MPI-based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17 - Input: Cylinder - Seconds, Fewer Is Better
test_srinath_pc: 800.40 (SE +/- 5.78, N = 3)
(F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code that employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0 - Seconds, Fewer Is Better
test_srinath_pc: 380 (SE +/- 1.20, N = 3)
(F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
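
A standalone LAMMPS run over MPI reads an input script via -in. A minimal sketch, assuming the lmp binary name and the stock in.rhodo input from the LAMMPS bench directory (both are assumptions; the exact inputs used by the test profile may differ):

  mpirun -np 24 lmp -in in.rhodo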

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms - ns/day, More Is Better
test_srinath_pc: 3.348 (SE +/- 0.003, N = 3)
(CXX) g++ options: -O3 -pthread -lm

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein - ns/day, More Is Better
test_srinath_pc: 3.529 (SE +/- 0.046, N = 15)
(CXX) g++ options: -O3 -pthread -lm

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MPI - Degridding - Mpix/sec, More Is Better
test_srinath_pc: 1125.49 (SE +/- 8.46, N = 11)
(CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP 1.0 - Test: tConvolve MPI - Gridding - Mpix/sec, More Is Better
test_srinath_pc: 1358.77 (SE +/- 17.81, N = 11)
(CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Intel MPI Benchmarks

Intel MPI Benchmarks is a suite for stressing MPI implementations. At this point the test profile aggregates results for some common MPI functionality. Learn more via the OpenBenchmarking.org test page.
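
Outside the test profile, the IMB binaries are launched over MPI and take the names of the benchmarks to run as arguments. A minimal sketch covering the cases measured here:

  # Point-to-point and exchange-pattern IMB-MPI1 benchmarks
  mpirun -np 24 ./IMB-MPI1 PingPong Sendrecv Exchange
  # Aggregate point-to-point message-rate benchmark
  mpirun -np 24 ./IMB-P2P PingPong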

Intel MPI Benchmarks 2019.3 - Test: IMB-P2P PingPong - Average Msg/sec, More Is Better
test_srinath_pc: 6303608 (SE +/- 58876.83, N = 3; MIN: 1275 / MAX: 16890997)
(CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 Exchange - Average Mbytes/sec, More Is Better
test_srinath_pc: 3010.61 (SE +/- 38.86, N = 3; MAX: 12443.9)
(CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 Exchange - Average usec, Fewer Is Better
test_srinath_pc: 429.08 (SE +/- 4.31, N = 3; MIN: 0.96 / MAX: 10435.71)
(CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 PingPong - Average Mbytes/sec, More Is Better
test_srinath_pc: 2456.33 (SE +/- 16.41, N = 3; MIN: 3.64 / MAX: 6561.77)
(CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 Sendrecv - Average Mbytes/sec, More Is Better
test_srinath_pc: 2403.47 (SE +/- 24.51, N = 5; MAX: 13345.63)
(CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 Sendrecv - Average usec, Fewer Is Better
test_srinath_pc: 269.06 (SE +/- 4.68, N = 5; MIN: 0.6 / MAX: 7532.64)
(CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

GROMACS

This test profile runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.
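
For reference, the water_GMX50 benchmark is normally preprocessed with grompp and then run with mdrun. A minimal sketch, assuming the file names used in the common GROMACS water benchmark archive (pme.mdp, conf.gro, topol.top):

  gmx grompp -f pme.mdp -c conf.gro -p topol.top -o topol.tpr
  gmx mdrun -s topol.tpr -ntmpi 12 -ntomp 2 -nsteps 1000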

GROMACS 2021 - Input: water_GMX50_bare - Ns Per Day, More Is Better
test_srinath_pc: 0.409 (SE +/- 0.002, N = 3)
(CXX) g++ options: -O3 -pthread

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
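
GPAW calculations are Python scripts, so a parallel run simply launches the script under MPI via the gpaw python wrapper. A minimal sketch (the script name carbon_nanotube.py is an assumption):

  mpiexec -n 24 gpaw python carbon_nanotube.py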

GPAW 20.1 - Input: Carbon Nanotube - Seconds, Fewer Is Better
test_srinath_pc: 695.53 (SE +/- 3.15, N = 3)
(CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi