hpc-run-1

KVM testing on Ubuntu 18.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2007224-NE-HPCRUN12663
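
For a concrete local workflow, here is a minimal sketch assuming the Phoronix Test Suite is already installed; the result ID is the one quoted above, and the system-info step is only an optional sanity check:

    # Run the same tests locally and merge your system into this comparison
    phoronix-test-suite benchmark 2007224-NE-HPCRUN12663

    # Optional: review the hardware/software the suite detects before running
    phoronix-test-suite system-info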
The tests in this result file span the following OpenBenchmarking.org suites: CPU Massive (4 tests), Fortran Tests (3 tests), HPC - High Performance Computing (5 tests), MPI Benchmarks (4 tests), Multi-Core (4 tests), OpenMPI Tests (5 tests), and Server CPU Tests (2 tests).

Run Management

Result Identifier     Date           Test Duration
optimized.v1.xlarge   July 21 2020   22 Minutes
optimized.vm.xlarge   July 21 2020   5 Hours, 21 Minutes



HTML result view exported from: https://openbenchmarking.org/result/2007224-NE-HPCRUN12663&grs&rdt.

System details (the same configuration was reported for both optimized.v1.xlarge and optimized.vm.xlarge):

Processor: 2 x Intel Core (Broadwell) (30 Cores)
Motherboard: RDO OpenStack Compute (1.11.0-2.el7 BIOS)
Chipset: Intel 82G33/G31/P35/P31 + ICH9
Memory: 100GB
Disk: 21GB QEMU HDD + 365GB QEMU HDD
Graphics: Red Hat Virtio GPU
Network: Red Hat Virtio device
OS: Ubuntu 18.04
Kernel: 4.15.0-111-generic (x86_64)
Compiler: GCC 7.5.0
File-System: ext4
System Layer: KVM

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v

Processor Details: CPU Microcode: 0x1

Python Details: Python 3.6.9

Security Details: itlb_multihit: KVM: Vulnerable + l1tf: Mitigation of PTE Inversion + mds: Vulnerable: Clear buffers attempted, no microcode; SMT Host state unknown + meltdown: Mitigation of PTI + spec_store_bypass: Vulnerable + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline, STIBP: disabled, RSB filling + srbds: Unknown: Dependent on hypervisor status + tsx_async_abort: Vulnerable: Clear buffers attempted, no microcode; SMT Host state unknown

Results overview (values reproduced from the detailed sections below; higher is better except where marked, and a dash means the test was only run on optimized.vm.xlarge):

Test                                                           optimized.v1.xlarge   optimized.vm.xlarge
NPB 3.4: FT.C (Total Mop/s)                                    18375.88              18151.56
NPB 3.4: MG.C (Total Mop/s)                                    37114.36              36875.38
NPB 3.4: SP.B (Total Mop/s)                                    30909.75              30770.19
NPB 3.4: EP.D (Total Mop/s)                                    1182.31               1177.90
NPB 3.4: LU.C (Total Mop/s)                                    69523.79              69684.82
NPB 3.4: BT.C (Total Mop/s)                                    52854.17              52891.94
HPCG 3.1 (GFLOP/s)                                             16.32                 16.31
Intel MPI 2019.3: IMB-MPI1 Sendrecv (Average usec, fewer is better)   -               221.30
Intel MPI 2019.3: IMB-MPI1 Sendrecv (Average Mbytes/sec)       -                     2633.68
Intel MPI 2019.3: IMB-MPI1 PingPong (Average Mbytes/sec)       -                     3962.46
Intel MPI 2019.3: IMB-MPI1 Exchange (Average usec, fewer is better)   -               387.07
Intel MPI 2019.3: IMB-MPI1 Exchange (Average Mbytes/sec)       -                     3655.83
Intel MPI 2019.3: IMB-P2P PingPong (Average Msg/sec)           -                     10016813.84
Rodinia 3.1: OpenMP Streamcluster (Seconds, fewer is better)   -                     17.99
Rodinia 3.1: OpenMP CFD Solver (Seconds, fewer is better)      -                     12.80
Rodinia 3.1: OpenMP Leukocyte (Seconds, fewer is better)       -                     107.43
Rodinia 3.1: OpenMP LavaMD (Seconds, fewer is better)          -                     431.92
HPCC 1.5.0: Max Ping Pong Bandwidth (MB/s)                     -                     12056.52
HPCC 1.5.0: Random Ring Bandwidth (GB/s)                       -                     1.40170
HPCC 1.5.0: Random Ring Latency (usecs, fewer is better)       -                     0.61320
HPCC 1.5.0: EP-STREAM Triad (GB/s)                             -                     3.21699
HPCC 1.5.0: G-Ptrans (GB/s)                                    -                     6.34966
HPCC 1.5.0: EP-DGEMM (GFLOPS)                                  -                     22.77
HPCC 1.5.0: G-Ffte (GFLOPS)                                    -                     6.32367
HPCC 1.5.0: G-HPL (GFLOPS)                                     -                     123.58
HPCC 1.5.0: G-Random Access (GUP/s)                            -                     0.08061
NPB 3.4: EP.C (Total Mop/s)                                    1180.31               1137.29
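
As a quick worked example of reading the overview (this arithmetic is ours, not part of the original export): the widest gap on a test both runs completed is NPB EP.C, where

    \frac{1180.31 - 1137.29}{1180.31} \approx 0.036

so optimized.vm.xlarge trails optimized.v1.xlarge by roughly 3.6% there, while every other shared test agrees to within about 1.3%.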

NAS Parallel Benchmarks

Test / Class: FT.C

NAS Parallel Benchmarks 3.4; Total Mop/s, More Is Better
optimized.v1.xlarge: 18375.88 (SE +/- 262.25, N = 4)
optimized.vm.xlarge: 18151.56 (SE +/- 310.47, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1
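
Throughout this file, "SE +/-" is the standard error over the N recorded trials. Assuming the conventional definition (the export does not spell this out), for trial values x_1, ..., x_N with mean \bar{x} and sample standard deviation s:

    \mathrm{SE} = \frac{s}{\sqrt{N}}, \qquad s = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2}

Read this way, the FT.C entry above says the four optimized.v1.xlarge trials averaged 18375.88 Mop/s with a standard error of 262.25 Mop/s.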

NAS Parallel Benchmarks

Test / Class: MG.C

NAS Parallel Benchmarks 3.4; Total Mop/s, More Is Better
optimized.v1.xlarge: 37114.36 (SE +/- 274.84, N = 3)
optimized.vm.xlarge: 36875.38 (SE +/- 134.98, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: SP.B

NAS Parallel Benchmarks 3.4; Total Mop/s, More Is Better
optimized.v1.xlarge: 30909.75 (SE +/- 241.19, N = 3)
optimized.vm.xlarge: 30770.19 (SE +/- 72.57, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: EP.D

NAS Parallel Benchmarks 3.4; Total Mop/s, More Is Better
optimized.v1.xlarge: 1182.31 (SE +/- 0.73, N = 3)
optimized.vm.xlarge: 1177.90 (SE +/- 0.82, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: LU.C

NAS Parallel Benchmarks 3.4; Total Mop/s, More Is Better
optimized.v1.xlarge: 69523.79 (SE +/- 392.59, N = 3)
optimized.vm.xlarge: 69684.82 (SE +/- 56.09, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: BT.C

NAS Parallel Benchmarks 3.4; Total Mop/s, More Is Better
optimized.v1.xlarge: 52854.17 (SE +/- 38.65, N = 3)
optimized.vm.xlarge: 52891.94 (SE +/- 57.16, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

High Performance Conjugate Gradient

High Performance Conjugate Gradient 3.1; GFLOP/s, More Is Better
optimized.v1.xlarge: 16.32 (SE +/- 0.03, N = 3)
optimized.vm.xlarge: 16.31 (SE +/- 0.04, N = 3)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Sendrecv

Intel MPI Benchmarks 2019.3; Average usec, Fewer Is Better
optimized.vm.xlarge: 221.30 (SE +/- 4.06, N = 3; MIN: 0.72 / MAX: 3343.92)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Sendrecv

Intel MPI Benchmarks 2019.3; Average Mbytes/sec, More Is Better
optimized.vm.xlarge: 2633.68 (SE +/- 33.86, N = 3; MIN: 2.78 / MAX: 9181.98)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 PingPong

Intel MPI Benchmarks 2019.3; Average Mbytes/sec, More Is Better
optimized.vm.xlarge: 3962.46 (SE +/- 45.67, N = 3; MIN: 30.9 / MAX: 9652.94)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Exchange

Intel MPI Benchmarks 2019.3; Average usec, Fewer Is Better
optimized.vm.xlarge: 387.07 (SE +/- 3.78, N = 3; MIN: 1.11 / MAX: 5678.55)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Exchange

Intel MPI Benchmarks 2019.3; Average Mbytes/sec, More Is Better
optimized.vm.xlarge: 3655.83 (SE +/- 46.29, N = 3; MIN: 3.44 / MAX: 15331.79)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-P2P PingPong

Intel MPI Benchmarks 2019.3; Average Msg/sec, More Is Better
optimized.vm.xlarge: 10016813.84 (SE +/- 29013.76, N = 3; MIN: 2965 / MAX: 31040839)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Rodinia

Test: OpenMP Streamcluster

Rodinia 3.1; Seconds, Fewer Is Better
optimized.vm.xlarge: 17.99 (SE +/- 0.18, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP CFD Solver

Rodinia 3.1; Seconds, Fewer Is Better
optimized.vm.xlarge: 12.80 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP Leukocyte

Rodinia 3.1; Seconds, Fewer Is Better
optimized.vm.xlarge: 107.43 (SE +/- 0.37, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP LavaMD

Rodinia 3.1; Seconds, Fewer Is Better
optimized.vm.xlarge: 431.92 (SE +/- 0.29, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

HPC Challenge

Test / Class: Max Ping Pong Bandwidth

HPC Challenge 1.5.0; MB/s, More Is Better
optimized.vm.xlarge: 12056.52 (SE +/- 16.04, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: Random Ring Bandwidth

HPC Challenge 1.5.0; GB/s, More Is Better
optimized.vm.xlarge: 1.40170 (SE +/- 0.01062, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: Random Ring Latency

HPC Challenge 1.5.0; usecs, Fewer Is Better
optimized.vm.xlarge: 0.61320 (SE +/- 0.00124, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: EP-STREAM Triad

HPC Challenge 1.5.0; GB/s, More Is Better
optimized.vm.xlarge: 3.21699 (SE +/- 0.01061, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: G-Ptrans

HPC Challenge 1.5.0; GB/s, More Is Better
optimized.vm.xlarge: 6.34966 (SE +/- 0.02493, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: EP-DGEMM

HPC Challenge 1.5.0; GFLOPS, More Is Better
optimized.vm.xlarge: 22.77 (SE +/- 0.29, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: G-Ffte

HPC Challenge 1.5.0; GFLOPS, More Is Better
optimized.vm.xlarge: 6.32367 (SE +/- 0.00551, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: G-HPL

HPC Challenge 1.5.0; GFLOPS, More Is Better
optimized.vm.xlarge: 123.58 (SE +/- 0.11, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: G-Random Access

HPC Challenge 1.5.0; GUP/s, More Is Better
optimized.vm.xlarge: 0.08061 (SE +/- 0.00369, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: EP.C

NAS Parallel Benchmarks 3.4; Total Mop/s, More Is Better
optimized.v1.xlarge: 1180.31 (SE +/- 2.04, N = 3)
optimized.vm.xlarge: 1137.29 (SE +/- 18.46, N = 15)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1


Phoronix Test Suite v10.8.4