hpc-run-1

KVM testing on Ubuntu 18.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running:

    phoronix-test-suite benchmark 2007224-NE-HPCRUN12663
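If the Phoronix Test Suite is not yet installed, Ubuntu packages it in the universe repository. A minimal sketch of the prerequisite setup on a stock Ubuntu 18.04 system (the package name is assumed from the Ubuntu archive):

    # Install the Phoronix Test Suite from the Ubuntu repositories (package name assumed)
    sudo apt-get update && sudo apt-get install -y phoronix-test-suite

    # Confirm the installation before running the comparison command above
    phoronix-test-suite version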
The result file spans the following OpenBenchmarking.org test suites/categories (each can be inspected locally, as sketched below):

    CPU Massive: 4 tests
    Fortran Tests: 3 tests
    HPC - High Performance Computing: 5 tests
    MPI Benchmarks: 4 tests
    Multi-Core: 4 tests
    OpenMPI Tests: 5 tests
    Server CPU Tests: 2 tests
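The individual tests behind each suite can be listed with the Phoronix Test Suite itself; a minimal sketch (the pts/hpcc profile name is an assumption, any installed test or suite name works):

    # List all benchmark suites known to the local installation
    phoronix-test-suite list-available-suites

    # Show the metadata and contents of a specific test profile
    phoronix-test-suite info pts/hpcc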

Result Identifier     Date           Run Test Duration
optimized.v1.xlarge   July 21 2020   22 Minutes
optimized.vm.xlarge   July 21 2020   5 Hours, 21 Minutes
Average                              2 Hours, 51 Minutes


System Details (identical for both optimized.v1.xlarge and optimized.vm.xlarge):

    Processor: 2 x Intel Core (Broadwell) (30 Cores)
    Motherboard: RDO OpenStack Compute (1.11.0-2.el7 BIOS)
    Chipset: Intel 82G33/G31/P35/P31 + ICH9
    Memory: 100GB
    Disk: 21GB QEMU HDD + 365GB QEMU HDD
    Graphics: Red Hat Virtio GPU
    Network: Red Hat Virtio device
    OS: Ubuntu 18.04
    Kernel: 4.15.0-111-generic (x86_64)
    Compiler: GCC 7.5.0
    File-System: ext4
    System Layer: KVM

Results:

HPC Challenge 1.5.0 - G-HPL (GFLOPS; higher is better)
    optimized.vm.xlarge: 123.58

Rodinia 3.1 - OpenMP LavaMD (Seconds; lower is better)
    optimized.vm.xlarge: 431.92

High Performance Conjugate Gradient 3.1 (GFLOP/s; higher is better)
    optimized.v1.xlarge: 16.32
    optimized.vm.xlarge: 16.31

NAS Parallel Benchmarks 3.4 - EP.D (Total Mop/s; higher is better)
    optimized.v1.xlarge: 1182.31
    optimized.vm.xlarge: 1177.90

Rodinia 3.1 - OpenMP Leukocyte (Seconds; lower is better)
    optimized.vm.xlarge: 107.43

Intel MPI Benchmarks 2019.3 - IMB-MPI1 Exchange (Average usec; lower is better)
    optimized.vm.xlarge: 387.07

Intel MPI Benchmarks 2019.3 - IMB-MPI1 Exchange (Average Mbytes/sec; higher is better)
    optimized.vm.xlarge: 3655.83

NAS Parallel Benchmarks 3.4 - BT.C (Total Mop/s; higher is better)
    optimized.v1.xlarge: 52854.17
    optimized.vm.xlarge: 52891.94

Intel MPI Benchmarks 2019.3 - IMB-MPI1 Sendrecv (Average usec; lower is better)
    optimized.vm.xlarge: 221.30

Intel MPI Benchmarks 2019.3 - IMB-MPI1 Sendrecv (Average Mbytes/sec; higher is better)
    optimized.vm.xlarge: 2633.68

Intel MPI Benchmarks 2019.3 - IMB-P2P PingPong (Average Msg/sec; higher is better)
    optimized.vm.xlarge: 10016813.84

NAS Parallel Benchmarks 3.4 - LU.C (Total Mop/s; higher is better)
    optimized.v1.xlarge: 69523.79
    optimized.vm.xlarge: 69684.82

NAS Parallel Benchmarks 3.4 - FT.C (Total Mop/s; higher is better)
    optimized.v1.xlarge: 18375.88
    optimized.vm.xlarge: 18151.56

NAS Parallel Benchmarks 3.4 - EP.C (Total Mop/s; higher is better)
    optimized.v1.xlarge: 1180.31
    optimized.vm.xlarge: 1137.29

Rodinia 3.1 - OpenMP Streamcluster (Seconds; lower is better)
    optimized.vm.xlarge: 17.99

Rodinia 3.1 - OpenMP CFD Solver (Seconds; lower is better)
    optimized.vm.xlarge: 12.80

Intel MPI Benchmarks 2019.3 - IMB-MPI1 PingPong (Average Mbytes/sec; higher is better)
    optimized.vm.xlarge: 3962.46

NAS Parallel Benchmarks 3.4 - SP.B (Total Mop/s; higher is better)
    optimized.v1.xlarge: 30909.75
    optimized.vm.xlarge: 30770.19

NAS Parallel Benchmarks 3.4 - MG.C (Total Mop/s; higher is better)
    optimized.v1.xlarge: 37114.36
    optimized.vm.xlarge: 36875.38

HPC Challenge 1.5.0 - Max Ping Pong Bandwidth (MB/s; higher is better)
    optimized.vm.xlarge: 12056.52

HPC Challenge 1.5.0 - Random Ring Bandwidth (GB/s; higher is better)
    optimized.vm.xlarge: 1.40170

HPC Challenge 1.5.0 - Random Ring Latency (usecs; lower is better)
    optimized.vm.xlarge: 0.61320

HPC Challenge 1.5.0 - G-Random Access (GUP/s; higher is better)
    optimized.vm.xlarge: 0.08061

HPC Challenge 1.5.0 - EP-STREAM Triad (GB/s; higher is better)
    optimized.vm.xlarge: 3.21699

HPC Challenge 1.5.0 - G-Ptrans (GB/s; higher is better)
    optimized.vm.xlarge: 6.34966

HPC Challenge 1.5.0 - EP-DGEMM (GFLOPS; higher is better)
    optimized.vm.xlarge: 22.77

HPC Challenge 1.5.0 - G-Ffte (GFLOPS; higher is better)
    optimized.vm.xlarge: 6.32367
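For offline analysis, a locally saved copy of this result file can be dumped to plain text or CSV with the Phoronix Test Suite; a minimal sketch (the local result name hpc-run-1 is an assumption, use whatever name the result was saved under):

    # Render a locally saved result file as plain text
    phoronix-test-suite result-file-to-text hpc-run-1

    # Export the same result to CSV for spreadsheet or script-based analysis
    phoronix-test-suite result-file-to-csv hpc-run-1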