hpc-run-1

KVM testing on Ubuntu 18.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2007224-NE-HPCRUN12663.

Systems under test: optimized.v1.xlarge, optimized.vm.xlarge

Processor: 2 x Intel Core (Broadwell) (30 Cores)
Motherboard: RDO OpenStack Compute (1.11.0-2.el7 BIOS)
Chipset: Intel 82G33/G31/P35/P31 + ICH9
Memory: 100GB
Disk: 21GB QEMU HDD + 365GB QEMU HDD
Graphics: Red Hat Virtio GPU
Network: Red Hat Virtio device
OS: Ubuntu 18.04
Kernel: 4.15.0-111-generic (x86_64)
Compiler: GCC 7.5.0
File-System: ext4
System Layer: KVM

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
Processor Details: CPU Microcode: 0x1
Python Details: Python 3.6.9
Security Details: itlb_multihit: KVM: Vulnerable + l1tf: Mitigation of PTE Inversion + mds: Vulnerable: Clear buffers attempted no microcode; SMT Host state unknown + meltdown: Mitigation of PTI + spec_store_bypass: Vulnerable + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline STIBP: disabled RSB filling + srbds: Unknown: Dependent on hypervisor status + tsx_async_abort: Vulnerable: Clear buffers attempted no microcode; SMT Host state unknown

Result overview:

Test (unit)                                              optimized.v1.xlarge   optimized.vm.xlarge
HPCG 3.1 (GFLOP/s)                                       16.32                 16.31
NPB 3.4 BT.C (Total Mop/s)                               52854.17              52891.94
NPB 3.4 EP.C (Total Mop/s)                               1180.31               1137.29
NPB 3.4 EP.D (Total Mop/s)                               1182.31               1177.90
NPB 3.4 FT.C (Total Mop/s)                               18375.88              18151.56
NPB 3.4 LU.C (Total Mop/s)                               69523.79              69684.82
NPB 3.4 MG.C (Total Mop/s)                               37114.36              36875.38
NPB 3.4 SP.B (Total Mop/s)                               30909.75              30770.19
HPC Challenge 1.5.0 G-HPL (GFLOPS)                       -                     123.58
HPC Challenge 1.5.0 G-Ffte (GFLOPS)                      -                     6.32367
HPC Challenge 1.5.0 EP-DGEMM (GFLOPS)                    -                     22.77
HPC Challenge 1.5.0 G-Ptrans (GB/s)                      -                     6.34966
HPC Challenge 1.5.0 EP-STREAM Triad (GB/s)               -                     3.21699
HPC Challenge 1.5.0 G-Random Access (GUP/s)              -                     0.08061
HPC Challenge 1.5.0 Random Ring Latency (usecs)          -                     0.61320
HPC Challenge 1.5.0 Random Ring Bandwidth (GB/s)         -                     1.40170
HPC Challenge 1.5.0 Max Ping Pong Bandwidth (MB/s)       -                     12056.52
Rodinia 3.1 OpenMP LavaMD (Seconds)                      -                     431.92
Rodinia 3.1 OpenMP Leukocyte (Seconds)                   -                     107.43
Rodinia 3.1 OpenMP CFD Solver (Seconds)                  -                     12.80
Rodinia 3.1 OpenMP Streamcluster (Seconds)               -                     17.99
Intel MPI 2019.3 IMB-P2P PingPong (Average Msg/sec)      -                     10016813.84
Intel MPI 2019.3 IMB-MPI1 Exchange (Average Mbytes/sec)  -                     3655.83
Intel MPI 2019.3 IMB-MPI1 Exchange (Average usec)        -                     387.07
Intel MPI 2019.3 IMB-MPI1 PingPong (Average Mbytes/sec)  -                     3962.46
Intel MPI 2019.3 IMB-MPI1 Sendrecv (Average Mbytes/sec)  -                     2633.68
Intel MPI 2019.3 IMB-MPI1 Sendrecv (Average usec)        -                     221.30

High Performance Conjugate Gradient

GFLOP/s, More Is Better (High Performance Conjugate Gradient 3.1)
optimized.v1.xlarge: 16.32 (SE +/- 0.03, N = 3)
optimized.vm.xlarge: 16.31 (SE +/- 0.04, N = 3)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi
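HPCG solves a sparse symmetric positive-definite system with a preconditioned conjugate gradient method, so its runtime is dominated by memory-bound sparse matrix-vector products and vector updates rather than dense floating-point work. As a rough illustration only (not the HPCG reference code), the C program below sketches an unpreconditioned conjugate gradient iteration on a small dense SPD matrix; the matrix size, dense storage, and test matrix are arbitrary choices for the sketch.

    #include <stdio.h>
    #include <math.h>

    #define N 64  /* arbitrary problem size for the sketch */

    /* y = A*x for a dense matrix (HPCG operates on a sparse one). */
    static void matvec(double A[N][N], const double x[N], double y[N]) {
        for (int i = 0; i < N; i++) {
            double s = 0.0;
            for (int j = 0; j < N; j++) s += A[i][j] * x[j];
            y[i] = s;
        }
    }

    static double dot(const double a[N], const double b[N]) {
        double s = 0.0;
        for (int i = 0; i < N; i++) s += a[i] * b[i];
        return s;
    }

    int main(void) {
        static double A[N][N], b[N], x[N], r[N], p[N], Ap[N];

        /* Diagonally dominant (hence SPD) test matrix, b = ones, x0 = 0. */
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) A[i][j] = (i == j) ? 2.0 * N : 1.0;
            b[i] = 1.0;
            x[i] = 0.0;
            r[i] = b[i];   /* r0 = b - A*x0 = b */
            p[i] = r[i];
        }

        double rr = dot(r, r);
        for (int k = 0; k < 100 && sqrt(rr) > 1e-10; k++) {
            matvec(A, p, Ap);
            double alpha = rr / dot(p, Ap);
            for (int i = 0; i < N; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
            double rr_new = dot(r, r);
            double beta = rr_new / rr;
            for (int i = 0; i < N; i++) p[i] = r[i] + beta * p[i];
            rr = rr_new;
        }
        printf("final residual norm: %g\n", sqrt(rr));
        return 0;
    }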

NAS Parallel Benchmarks

Test / Class: BT.C

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
optimized.v1.xlarge: 52854.17 (SE +/- 38.65, N = 3)
optimized.vm.xlarge: 52891.94 (SE +/- 57.16, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: EP.C

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
optimized.v1.xlarge: 1180.31 (SE +/- 2.04, N = 3)
optimized.vm.xlarge: 1137.29 (SE +/- 18.46, N = 15)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1
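The NPB EP ("embarrassingly parallel") kernel generates large batches of pseudorandom numbers and tallies Gaussian deviates with essentially no inter-process communication, so it mostly reflects per-core floating-point and random-number throughput. The OpenMP fragment below is only a sketch of that embarrassingly parallel pattern (a Monte Carlo acceptance count with a per-thread generator), not the NPB EP algorithm or its verification scheme; the sample count and generator constants are arbitrary choices for the sketch.

    #include <stdio.h>
    #include <stdint.h>
    #include <omp.h>

    /* 64-bit LCG (Knuth's MMIX constants) so each thread owns its RNG state. */
    static inline double lcg_uniform(uint64_t *state) {
        *state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
        return (double)(*state >> 11) / (double)(1ULL << 53);
    }

    int main(void) {
        const long long samples = 100000000LL;  /* arbitrary workload size */
        long long inside = 0;

        /* Each thread draws its own points; only the final count is combined. */
        #pragma omp parallel reduction(+:inside)
        {
            uint64_t state = 0x9E3779B97F4A7C15ULL ^ (uint64_t)(omp_get_thread_num() + 1);
            #pragma omp for
            for (long long i = 0; i < samples; i++) {
                double x = 2.0 * lcg_uniform(&state) - 1.0;
                double y = 2.0 * lcg_uniform(&state) - 1.0;
                if (x * x + y * y <= 1.0) inside++;
            }
        }
        printf("pi estimate: %.6f\n", 4.0 * (double)inside / (double)samples);
        return 0;
    }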

NAS Parallel Benchmarks

Test / Class: EP.D

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
optimized.v1.xlarge: 1182.31 (SE +/- 0.73, N = 3)
optimized.vm.xlarge: 1177.90 (SE +/- 0.82, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: FT.C

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
optimized.v1.xlarge: 18375.88 (SE +/- 262.25, N = 4)
optimized.vm.xlarge: 18151.56 (SE +/- 310.47, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: LU.C

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
optimized.v1.xlarge: 69523.79 (SE +/- 392.59, N = 3)
optimized.vm.xlarge: 69684.82 (SE +/- 56.09, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: MG.C

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
optimized.v1.xlarge: 37114.36 (SE +/- 274.84, N = 3)
optimized.vm.xlarge: 36875.38 (SE +/- 134.98, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: SP.B

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
optimized.v1.xlarge: 30909.75 (SE +/- 241.19, N = 3)
optimized.vm.xlarge: 30770.19 (SE +/- 72.57, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

HPC Challenge

Test / Class: G-HPL

GFLOPS, More Is Better (HPC Challenge 1.5.0)
optimized.vm.xlarge: 123.58 (SE +/- 0.11, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: G-Ffte

GFLOPS, More Is Better (HPC Challenge 1.5.0)
optimized.vm.xlarge: 6.32367 (SE +/- 0.00551, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: EP-DGEMM

GFLOPS, More Is Better (HPC Challenge 1.5.0)
optimized.vm.xlarge: 22.77 (SE +/- 0.29, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1
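EP-DGEMM runs an independent dense matrix-matrix multiply (the BLAS DGEMM operation, C = alpha*A*B + beta*C) on every process, so it is dominated by cache-friendly floating-point throughput; HPCC links it against a tuned BLAS, here ATLAS. The triple loop below is only a reference-style sketch of the operation itself, with an arbitrary matrix size, not the tuned kernel actually being benchmarked.

    #include <stdio.h>
    #include <stdlib.h>

    /* Reference (untuned) DGEMM: C = alpha*A*B + beta*C, row-major n x n matrices. */
    static void dgemm_ref(int n, double alpha, const double *A, const double *B,
                          double beta, double *C) {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double s = 0.0;
                for (int k = 0; k < n; k++)
                    s += A[i * n + k] * B[k * n + j];
                C[i * n + j] = alpha * s + beta * C[i * n + j];
            }
        }
    }

    int main(void) {
        const int n = 512;  /* arbitrary size for the sketch */
        double *A = malloc(sizeof(double) * n * n);
        double *B = malloc(sizeof(double) * n * n);
        double *C = calloc((size_t)n * n, sizeof(double));
        if (!A || !B || !C) return 1;

        for (int i = 0; i < n * n; i++) { A[i] = 1.0; B[i] = 0.5; }

        dgemm_ref(n, 1.0, A, B, 0.0, C);      /* C = A*B */
        printf("C[0][0] = %.1f (expect %.1f)\n", C[0], 0.5 * n);

        free(A); free(B); free(C);
        return 0;
    }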

HPC Challenge

Test / Class: G-Ptrans

GB/s, More Is Better (HPC Challenge 1.5.0)
optimized.vm.xlarge: 6.34966 (SE +/- 0.02493, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: EP-STREAM Triad

GB/s, More Is Better (HPC Challenge 1.5.0)
optimized.vm.xlarge: 3.21699 (SE +/- 0.01061, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1
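EP-STREAM Triad reports per-process sustained memory bandwidth for the STREAM "triad" kernel, a[i] = b[i] + scalar*c[i], so the result is bounded by memory throughput rather than arithmetic. The OpenMP loop below sketches that kernel with arbitrary array length, scalar, and bookkeeping; it is not the HPCC harness, which also handles repetition, validation, and averaging across MPI ranks.

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        const size_t n = 20000000;     /* arbitrary array length for the sketch */
        const double scalar = 3.0;
        double *a = malloc(n * sizeof(double));
        double *b = malloc(n * sizeof(double));
        double *c = malloc(n * sizeof(double));
        if (!a || !b || !c) return 1;

        #pragma omp parallel for
        for (size_t i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; a[i] = 0.0; }

        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (size_t i = 0; i < n; i++)
            a[i] = b[i] + scalar * c[i];   /* the STREAM triad kernel */
        double t1 = omp_get_wtime();

        /* Triad touches three 8-byte arrays per element: two reads and one write. */
        double gbytes = 3.0 * (double)n * sizeof(double) / 1e9;
        printf("triad bandwidth: %.2f GB/s (a[0] = %.1f)\n", gbytes / (t1 - t0), a[0]);

        free(a); free(b); free(c);
        return 0;
    }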

HPC Challenge

Test / Class: G-Random Access

GUP/s, More Is Better (HPC Challenge 1.5.0)
optimized.vm.xlarge: 0.08061 (SE +/- 0.00369, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1
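G-Random Access (GUPS) measures how quickly the machine can apply read-modify-write updates at pseudorandom locations in a large table, which stresses memory latency and the TLB rather than bandwidth. The fragment below is only a serial sketch of that update pattern with a small table and a simple LCG address stream; it is not the HPCC RandomAccess algorithm, which distributes the table across ranks and tolerates a bounded fraction of dropped updates.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    int main(void) {
        const uint64_t table_size = 1ULL << 24;   /* 16 Mi entries, arbitrary for the sketch */
        const uint64_t updates = 4 * table_size;  /* several updates per entry, GUPS-style */
        uint64_t *table = malloc(table_size * sizeof(uint64_t));
        if (!table) return 1;

        for (uint64_t i = 0; i < table_size; i++) table[i] = i;

        /* Pseudorandom address stream from a 64-bit LCG; each step XORs the
           stream value into a "random" table slot (read-modify-write). */
        uint64_t x = 1;
        for (uint64_t i = 0; i < updates; i++) {
            x = x * 6364136223846793005ULL + 1442695040888963407ULL;
            table[x & (table_size - 1)] ^= x;
        }

        printf("table[0] after %llu updates: %llu\n",
               (unsigned long long)updates, (unsigned long long)table[0]);
        free(table);
        return 0;
    }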

HPC Challenge

Test / Class: Random Ring Latency

usecs, Fewer Is Better (HPC Challenge 1.5.0)
optimized.vm.xlarge: 0.61320 (SE +/- 0.00124, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: Random Ring Bandwidth

GB/s, More Is Better (HPC Challenge 1.5.0)
optimized.vm.xlarge: 1.40170 (SE +/- 0.01062, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: Max Ping Pong Bandwidth

MB/s, More Is Better (HPC Challenge 1.5.0)
optimized.vm.xlarge: 12056.52 (SE +/- 16.04, N = 3)
1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1
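Max Ping Pong Bandwidth (like the Intel MPI PingPong tests further down) times repeated message exchanges between pairs of ranks and reports the best sustained transfer rate. As a sketch of that pattern under the same Open MPI stack, the program below bounces a fixed-size buffer between rank 0 and rank 1; the message size and repetition count are arbitrary, and real harnesses sweep many message sizes and apply warm-up rules.

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) {
            if (rank == 0) fprintf(stderr, "need at least 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        const int bytes = 1 << 20;        /* 1 MiB message, arbitrary for the sketch */
        const int reps = 100;
        char *buf = calloc(bytes, 1);
        if (!buf) MPI_Abort(MPI_COMM_WORLD, 1);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0) {
            /* Each repetition moves the buffer in both directions. */
            double mb = 2.0 * reps * (double)bytes / 1e6;
            printf("ping-pong bandwidth: %.1f MB/s\n", mb / (t1 - t0));
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }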

Rodinia

Test: OpenMP LavaMD

Seconds, Fewer Is Better (Rodinia 3.1)
optimized.vm.xlarge: 431.92 (SE +/- 0.29, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP Leukocyte

Seconds, Fewer Is Better (Rodinia 3.1)
optimized.vm.xlarge: 107.43 (SE +/- 0.37, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP CFD Solver

Seconds, Fewer Is Better (Rodinia 3.1)
optimized.vm.xlarge: 12.80 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP Streamcluster

Seconds, Fewer Is Better (Rodinia 3.1)
optimized.vm.xlarge: 17.99 (SE +/- 0.18, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Intel MPI Benchmarks

Test: IMB-P2P PingPong

Average Msg/sec, More Is Better (Intel MPI Benchmarks 2019.3)
optimized.vm.xlarge: 10016813.84 (SE +/- 29013.76, N = 3, MIN: 2965 / MAX: 31040839)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Exchange

Average Mbytes/sec, More Is Better (Intel MPI Benchmarks 2019.3)
optimized.vm.xlarge: 3655.83 (SE +/- 46.29, N = 3, MIN: 3.44 / MAX: 15331.79)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Exchange

Average usec, Fewer Is Better (Intel MPI Benchmarks 2019.3)
optimized.vm.xlarge: 387.07 (SE +/- 3.78, N = 3, MIN: 1.11 / MAX: 5678.55)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 PingPong

Average Mbytes/sec, More Is Better (Intel MPI Benchmarks 2019.3)
optimized.vm.xlarge: 3962.46 (SE +/- 45.67, N = 3, MIN: 30.9 / MAX: 9652.94)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Sendrecv

Average Mbytes/sec, More Is Better (Intel MPI Benchmarks 2019.3)
optimized.vm.xlarge: 2633.68 (SE +/- 33.86, N = 3, MIN: 2.78 / MAX: 9181.98)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Sendrecv

Average usec, Fewer Is Better (Intel MPI Benchmarks 2019.3)
optimized.vm.xlarge: 221.30 (SE +/- 4.06, N = 3, MIN: 0.72 / MAX: 3343.92)
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi
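IMB-MPI1 Sendrecv forms a periodic chain in which every rank simultaneously sends to its right neighbour and receives from its left neighbour, reporting both average throughput and average latency per message size. The fragment below sketches one such exchange with MPI_Sendrecv under an arbitrary message size; it omits the message-size sweep, repetition, and timing bookkeeping that the Intel MPI Benchmarks perform.

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int bytes = 1 << 16;  /* 64 KiB message, arbitrary for the sketch */
        char *sendbuf = calloc(bytes, 1);
        char *recvbuf = calloc(bytes, 1);
        if (!sendbuf || !recvbuf) MPI_Abort(MPI_COMM_WORLD, 1);

        /* Periodic chain: send to the right neighbour, receive from the left one.
           MPI_Sendrecv pairs the two operations so the ring cannot deadlock. */
        int right = (rank + 1) % size;
        int left  = (rank - 1 + size) % size;
        MPI_Sendrecv(sendbuf, bytes, MPI_BYTE, right, 0,
                     recvbuf, bytes, MPI_BYTE, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        if (rank == 0)
            printf("rank 0 exchanged %d bytes with neighbours %d and %d\n",
                   bytes, right, left);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }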


Phoronix Test Suite v10.8.4