hpc-run-1

KVM testing on Ubuntu 18.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2007182-NE-2007171NE14.

System Details

optimised vm run 1:

  Processor: Intel Core (11 Cores)
  Motherboard: RDO OpenStack Compute (1.11.0-2.el7 BIOS)
  Chipset: Intel 82G33/G31/P35/P31 + ICH9
  Memory: 90GB
  Disk: 21GB QEMU HDD + 172GB QEMU HDD
  Graphics: Red Hat Virtio GPU
  Network: 2 x Red Hat Virtio device
  OS: Ubuntu 18.04
  Kernel: 4.15.0-111-generic (x86_64)
  Compiler: GCC 7.5.0
  File-System: ext4
  System Layer: KVM

unoptimised vm 1 (where it differs):

  Processor: 11 x Intel Core (Broadwell) (11 Cores)
  Chipset: Intel 440FX 82441FX PMC
  Memory: 20GB
  Graphics: Cirrus Logic GD 5446

Compiler Details

  --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v

Processor Details

  CPU Microcode: 0x1

Python Details

  optimised vm run 1: Python 3.6.9

Security Details

  itlb_multihit: KVM: Vulnerable
  l1tf: Mitigation of PTE Inversion
  mds: Vulnerable: Clear buffers attempted, no microcode; SMT Host state unknown
  meltdown: Mitigation of PTI
  spec_store_bypass: Vulnerable
  spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  spectre_v2: Mitigation of Full generic retpoline, STIBP: disabled, RSB filling
  srbds: Unknown: Dependent on hypervisor status
  tsx_async_abort: Vulnerable: Clear buffers attempted, no microcode; SMT Host state unknown

Results Overview

Test / Class (unit, direction)                            optimised vm run 1   unoptimised vm 1
NPB BT.C (Total Mop/s, more is better)                              20876.02           16463.64
NPB EP.C (Total Mop/s, more is better)                                407.20             382.84
NPB EP.D (Total Mop/s, more is better)                                409.39             396.19
NPB FT.C (Total Mop/s, more is better)                               9575.53            7745.28
NPB MG.C (Total Mop/s, more is better)                              18666.08           18290.04
NPB SP.B (Total Mop/s, more is better)                              10600.78            7917.83
HPCC G-HPL (GFLOPS, more is better)                                   212.03             226.29
HPCC G-Ffte (GFLOPS, more is better)                                 3.33202            2.64012
HPCC EP-DGEMM (GFLOPS, more is better)                                 25.08              23.30
HPCC G-Ptrans (GB/s, more is better)                                 2.94095            1.84903
HPCC EP-STREAM Triad (GB/s, more is better)                          4.13247            3.77474
HPCC G-Random Access (GUP/s, more is better)                         0.04281            0.03302
HPCC Random Ring Latency (usecs, fewer is better)                    0.34193            0.34488
HPCC Random Ring Bandwidth (GB/s, more is better)                    2.21744            2.03898
HPCC Max Ping Pong Bandwidth (MB/s, more is better)                 11983.66           11913.62
Rodinia OpenMP LavaMD (Seconds, fewer is better)                     1215.65            1284.49
Rodinia OpenMP Leukocyte (Seconds, fewer is better)                   215.15             214.07
Rodinia OpenMP CFD Solver (Seconds, fewer is better)                   30.22              62.27
Rodinia OpenMP Streamcluster (Seconds, fewer is better)                35.54              42.84
IMB-P2P PingPong (Average Msg/sec, more is better)                   1000000            1000000
IMB-MPI1 Exchange (Average Mbytes/sec, more is better)               4072.93            3743.87
IMB-MPI1 Exchange (Average usec, fewer is better)                     201.27             413.75
IMB-MPI1 PingPong (Average Mbytes/sec, more is better)               3196.17            3771.45
IMB-MPI1 Sendrecv (Average Mbytes/sec, more is better)               2968.55            3147.82
IMB-MPI1 Sendrecv (Average usec, fewer is better)                     155.73             288.26
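As a quick way to compare the two configurations, the relative difference for a few of the "more is better" results above can be computed directly. This is an illustrative sketch only: the values are copied from this report, and the dictionary labels are informal names rather than Phoronix Test Suite identifiers.

```python
# Relative advantage of "optimised vm run 1" over "unoptimised vm 1",
# using a few "more is better" results copied from this report.
results = {
    "NPB BT.C (Total Mop/s)": (20876.02, 16463.64),
    "NPB SP.B (Total Mop/s)": (10600.78, 7917.83),
    "HPCC G-Ptrans (GB/s)": (2.94095, 1.84903),
}

def percent_gain(optimised: float, unoptimised: float) -> float:
    """Percentage by which the optimised run exceeds the unoptimised run."""
    return (optimised - unoptimised) / unoptimised * 100.0

for name, (opt, unopt) in results.items():
    print(f"{name}: {percent_gain(opt, unopt):+.1f}%")
```

By this measure the optimised VM leads by roughly 27% on BT.C, 34% on SP.B, and 59% on G-Ptrans, though note that a handful of results in this report (G-HPL, IMB-MPI1 PingPong and Sendrecv bandwidth) go the other way.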

NAS Parallel Benchmarks

Test / Class: BT.C

NAS Parallel Benchmarks 3.4 (Total Mop/s, more is better)

optimised vm run 1: 20876.02 (SE +/- 16.20, N = 3)
unoptimised vm 1: 16463.64 (SE +/- 242.20, N = 3)

1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: EP.C

NAS Parallel Benchmarks 3.4 (Total Mop/s, more is better)

optimised vm run 1: 407.20 (SE +/- 1.40, N = 3)
unoptimised vm 1: 382.84 (SE +/- 3.55, N = 10)

1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: EP.D

NAS Parallel Benchmarks 3.4 (Total Mop/s, more is better)

optimised vm run 1: 409.39 (SE +/- 0.25, N = 3)
unoptimised vm 1: 396.19 (SE +/- 1.18, N = 3)

1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: FT.C

NAS Parallel Benchmarks 3.4 (Total Mop/s, more is better)

optimised vm run 1: 9575.53 (SE +/- 127.65, N = 12)
unoptimised vm 1: 7745.28 (SE +/- 85.32, N = 3)

1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: MG.C

NAS Parallel Benchmarks 3.4 (Total Mop/s, more is better)

optimised vm run 1: 18666.08 (SE +/- 119.53, N = 3)
unoptimised vm 1: 18290.04 (SE +/- 84.81, N = 3)

1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: SP.B

NAS Parallel Benchmarks 3.4 (Total Mop/s, more is better)

optimised vm run 1: 10600.78 (SE +/- 15.83, N = 3)
unoptimised vm 1: 7917.83 (SE +/- 74.36, N = 15)

1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

HPC Challenge

Test / Class: G-HPL

HPC Challenge 1.5.0 (GFLOPS, more is better)

optimised vm run 1: 212.03 (SE +/- 0.38, N = 3)
unoptimised vm 1: 226.29 (SE +/- 1.17, N = 3)

1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: G-Ffte

HPC Challenge 1.5.0 (GFLOPS, more is better)

optimised vm run 1: 3.33202 (SE +/- 0.00911, N = 3)
unoptimised vm 1: 2.64012 (SE +/- 0.03356, N = 3)

1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: EP-DGEMM

HPC Challenge 1.5.0 (GFLOPS, more is better)

optimised vm run 1: 25.08 (SE +/- 0.08, N = 3)
unoptimised vm 1: 23.30 (SE +/- 0.17, N = 3)

1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: G-Ptrans

HPC Challenge 1.5.0 (GB/s, more is better)

optimised vm run 1: 2.94095 (SE +/- 0.01490, N = 3)
unoptimised vm 1: 1.84903 (SE +/- 0.06718, N = 3)

1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: EP-STREAM Triad

HPC Challenge 1.5.0 (GB/s, more is better)

optimised vm run 1: 4.13247 (SE +/- 0.00058, N = 3)
unoptimised vm 1: 3.77474 (SE +/- 0.04622, N = 3)

1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: G-Random Access

HPC Challenge 1.5.0 (GUP/s, more is better)

optimised vm run 1: 0.04281 (SE +/- 0.00021, N = 3)
unoptimised vm 1: 0.03302 (SE +/- 0.00263, N = 3)

1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: Random Ring Latency

HPC Challenge 1.5.0 (usecs, fewer is better)

optimised vm run 1: 0.34193 (SE +/- 0.00144, N = 3)
unoptimised vm 1: 0.34488 (SE +/- 0.00250, N = 3)

1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: Random Ring Bandwidth

HPC Challenge 1.5.0 (GB/s, more is better)

optimised vm run 1: 2.21744 (SE +/- 0.01993, N = 3)
unoptimised vm 1: 2.03898 (SE +/- 0.02936, N = 3)

1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

HPC Challenge

Test / Class: Max Ping Pong Bandwidth

HPC Challenge 1.5.0 (MB/s, more is better)

optimised vm run 1: 11983.66 (SE +/- 17.86, N = 3)
unoptimised vm 1: 11913.62 (SE +/- 49.55, N = 3)

1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 2.1.1

Rodinia

Test: OpenMP LavaMD

Rodinia 3.1 (Seconds, fewer is better)

optimised vm run 1: 1215.65 (SE +/- 1.98, N = 3)
unoptimised vm 1: 1284.49 (SE +/- 1.66, N = 3)

1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP Leukocyte

Rodinia 3.1 (Seconds, fewer is better)

optimised vm run 1: 215.15 (SE +/- 0.60, N = 3)
unoptimised vm 1: 214.07 (SE +/- 0.17, N = 3)

1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP CFD Solver

Rodinia 3.1 (Seconds, fewer is better)

optimised vm run 1: 30.22 (SE +/- 0.41, N = 3)
unoptimised vm 1: 62.27 (SE +/- 0.77, N = 15)

1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP Streamcluster

Rodinia 3.1 (Seconds, fewer is better)

optimised vm run 1: 35.54 (SE +/- 0.43, N = 15)
unoptimised vm 1: 42.84 (SE +/- 0.47, N = 15)

1. (CXX) g++ options: -O2 -lOpenCL

Intel MPI Benchmarks

Test: IMB-P2P PingPong

Intel MPI Benchmarks 2019.3 (Average Msg/sec, more is better)

optimised vm run 1: 1000000
unoptimised vm 1: 1000000

1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Exchange

Intel MPI Benchmarks 2019.3 (Average Mbytes/sec, more is better)

optimised vm run 1: 4072.93 (SE +/- 9.47, N = 3; MAX: 13289.54)
unoptimised vm 1: 3743.87 (SE +/- 15.40, N = 3; MAX: 18639.49)

1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Exchange

Intel MPI Benchmarks 2019.3 (Average usec, fewer is better)

optimised vm run 1: 201.27 (SE +/- 1.97, N = 3; MIN: 0.61 / MAX: 4035.45)
unoptimised vm 1: 413.75 (SE +/- 18.34, N = 3; MIN: 0.62 / MAX: 9593.97)

1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 PingPong

Intel MPI Benchmarks 2019.3 (Average Mbytes/sec, more is better)

optimised vm run 1: 3196.17 (SE +/- 47.70, N = 3; MIN: 206.78 / MAX: 6638.16)
unoptimised vm 1: 3771.45 (SE +/- 103.18, N = 15; MIN: 116.8 / MAX: 9803.8)

1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Sendrecv

Intel MPI Benchmarks 2019.3 (Average Mbytes/sec, more is better)

optimised vm run 1: 2968.55 (SE +/- 23.29, N = 3; MAX: 12422.68)
unoptimised vm 1: 3147.82 (SE +/- 54.51, N = 15; MAX: 19261.93)

1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Sendrecv

Intel MPI Benchmarks 2019.3 (Average usec, fewer is better)

optimised vm run 1: 155.73 (SE +/- 0.22, N = 3; MIN: 0.34 / MAX: 2590.7)
unoptimised vm 1: 288.26 (SE +/- 16.82, N = 15; MIN: 0.35 / MAX: 9737.91)

1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi


Phoronix Test Suite v10.8.4