hpc-run-1
KVM testing on Ubuntu 18.04 via the Phoronix Test Suite.

optimized.v1.xlarge:

  Processor: 2 x Intel Core (Broadwell) (30 Cores), Motherboard: RDO OpenStack Compute (1.11.0-2.el7 BIOS), Chipset: Intel 82G33/G31/P35/P31 + ICH9, Memory: 100GB, Disk: 21GB QEMU HDD + 365GB QEMU HDD, Graphics: Red Hat Virtio GPU, Network: Red Hat Virtio device

  OS: Ubuntu 18.04, Kernel: 4.15.0-111-generic (x86_64), Compiler: GCC 7.5.0, File-System: ext4, System Layer: KVM

optimized.vm.xlarge:

  Processor: 2 x Intel Core (Broadwell) (30 Cores), Motherboard: RDO OpenStack Compute (1.11.0-2.el7 BIOS), Chipset: Intel 82G33/G31/P35/P31 + ICH9, Memory: 100GB, Disk: 21GB QEMU HDD + 365GB QEMU HDD, Graphics: Red Hat Virtio GPU, Network: Red Hat Virtio device

  OS: Ubuntu 18.04, Kernel: 4.15.0-111-generic (x86_64), Compiler: GCC 7.5.0, File-System: ext4, System Layer: KVM

High Performance Conjugate Gradient 3.1
GFLOP/s > Higher Is Better
optimized.v1.xlarge . 16.32 |==================================================
optimized.vm.xlarge . 16.31 |==================================================

NAS Parallel Benchmarks 3.4
Test / Class: BT.C
Total Mop/s > Higher Is Better
optimized.v1.xlarge . 52854.17 |===============================================
optimized.vm.xlarge . 52891.94 |===============================================

NAS Parallel Benchmarks 3.4
Test / Class: EP.C
Total Mop/s > Higher Is Better
optimized.v1.xlarge . 1180.31 |================================================
optimized.vm.xlarge . 1137.29 |==============================================

NAS Parallel Benchmarks 3.4
Test / Class: EP.D
Total Mop/s > Higher Is Better
optimized.v1.xlarge . 1182.31 |================================================
optimized.vm.xlarge . 1177.90 |================================================

NAS Parallel Benchmarks 3.4
Test / Class: FT.C
Total Mop/s > Higher Is Better
optimized.v1.xlarge . 18375.88 |===============================================
optimized.vm.xlarge . 18151.56 |==============================================

NAS Parallel Benchmarks 3.4
Test / Class: LU.C
Total Mop/s > Higher Is Better
optimized.v1.xlarge . 69523.79 |===============================================
optimized.vm.xlarge . 69684.82 |===============================================

NAS Parallel Benchmarks 3.4
Test / Class: MG.C
Total Mop/s > Higher Is Better
optimized.v1.xlarge . 37114.36 |===============================================
optimized.vm.xlarge . 36875.38 |===============================================

NAS Parallel Benchmarks 3.4
Test / Class: SP.B
Total Mop/s > Higher Is Better
optimized.v1.xlarge . 30909.75 |===============================================
optimized.vm.xlarge . 30770.19 |===============================================

HPC Challenge 1.5.0
Test / Class: G-HPL
GFLOPS > Higher Is Better
optimized.vm.xlarge . 123.58 |=================================================

HPC Challenge 1.5.0
Test / Class: G-Ffte
GFLOPS > Higher Is Better
optimized.vm.xlarge . 6.32367 |================================================

HPC Challenge 1.5.0
Test / Class: EP-DGEMM
GFLOPS > Higher Is Better
optimized.vm.xlarge . 22.77 |==================================================

HPC Challenge 1.5.0
Test / Class: G-Ptrans
GB/s > Higher Is Better
optimized.vm.xlarge . 6.34966 |================================================

HPC Challenge 1.5.0
Test / Class: EP-STREAM Triad
GB/s > Higher Is Better
optimized.vm.xlarge . 3.21699 |================================================

HPC Challenge 1.5.0
Test / Class: G-Random Access
GUP/s > Higher Is Better
optimized.vm.xlarge . 0.08061 |================================================

HPC Challenge 1.5.0
Test / Class: Random Ring Latency
usecs < Lower Is Better
optimized.vm.xlarge . 0.61320 |================================================

HPC Challenge 1.5.0
Test / Class: Random Ring Bandwidth
GB/s > Higher Is Better
optimized.vm.xlarge . 1.40170 |================================================

HPC Challenge 1.5.0
Test / Class: Max Ping Pong Bandwidth
MB/s > Higher Is Better
optimized.vm.xlarge . 12056.52 |===============================================

Rodinia 3.1
Test: OpenMP LavaMD
Seconds < Lower Is Better
optimized.vm.xlarge . 431.92 |=================================================

Rodinia 3.1
Test: OpenMP Leukocyte
Seconds < Lower Is Better
optimized.vm.xlarge . 107.43 |=================================================

Rodinia 3.1
Test: OpenMP CFD Solver
Seconds < Lower Is Better
optimized.vm.xlarge . 12.80 |==================================================

Rodinia 3.1
Test: OpenMP Streamcluster
Seconds < Lower Is Better
optimized.vm.xlarge . 17.99 |==================================================

Intel MPI Benchmarks 2019.3
Test: IMB-P2P PingPong
Average Msg/sec > Higher Is Better
optimized.vm.xlarge . 10016813.84 |============================================

Intel MPI Benchmarks 2019.3
Test: IMB-MPI1 Exchange
Average Mbytes/sec > Higher Is Better
optimized.vm.xlarge . 3655.83 |================================================

Intel MPI Benchmarks 2019.3
Test: IMB-MPI1 Exchange
Average usec < Lower Is Better
optimized.vm.xlarge . 387.07 |=================================================

Intel MPI Benchmarks 2019.3
Test: IMB-MPI1 PingPong
Average Mbytes/sec > Higher Is Better
optimized.vm.xlarge . 3962.46 |================================================

Intel MPI Benchmarks 2019.3
Test: IMB-MPI1 Sendrecv
Average Mbytes/sec > Higher Is Better
optimized.vm.xlarge . 2633.68 |================================================

Intel MPI Benchmarks 2019.3
Test: IMB-MPI1 Sendrecv
Average usec < Lower Is Better
optimized.vm.xlarge . 221.30 |=================================================
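For reference, runs like the ones above can be driven non-interactively through the Phoronix Test Suite's batch mode. The sketch below is a minimal, hypothetical reproduction script, not the procedure used for this report: the pts/ test-profile names and the TEST_RESULTS_* environment variables are assumed to correspond to the suites and result name shown here, and should be verified against openbenchmarking.org and the PTS documentation before use.

  import os
  import subprocess

  # Assumed test-profile names for the suites reported above; verify on
  # openbenchmarking.org before running.
  TESTS = ["pts/hpcg", "pts/npb", "pts/hpcc", "pts/rodinia", "pts/intel-mpi"]

  env = dict(os.environ)
  env["TEST_RESULTS_NAME"] = "hpc-run-1"                 # result name from this report
  env["TEST_RESULTS_IDENTIFIER"] = "optimized.vm.xlarge" # run identifier per instance

  for test in TESTS:
      # batch-benchmark runs without prompts, using the defaults chosen
      # earlier via `phoronix-test-suite batch-setup`.
      subprocess.run(["phoronix-test-suite", "batch-benchmark", test],
                     env=env, check=True)

The same script run on each instance with a different TEST_RESULTS_IDENTIFIER would produce per-system results that can then be merged for side-by-side comparison.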