hpc-run-1

KVM testing on Ubuntu 18.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2007224-NE-HPCRUN12663
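
If you want to drive that comparison from a script (for example, across several VMs), the documented command can be invoked programmatically. Below is a minimal Python sketch; it assumes only that the phoronix-test-suite client is installed and on the PATH, and uses no flags beyond the subcommand shown above.

    import subprocess

    # Re-run this result file's tests locally; the Phoronix Test Suite
    # fetches the referenced result (2007224-NE-HPCRUN12663) and merges
    # the local numbers into the comparison.
    subprocess.run(
        ["phoronix-test-suite", "benchmark", "2007224-NE-HPCRUN12663"],
        check=True,  # fail loudly if the client exits non-zero
    )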
Tests in this result file span the following suites/categories:

  CPU Massive: 4 tests
  Fortran Tests: 3 tests
  HPC - High Performance Computing: 5 tests
  MPI Benchmarks: 4 tests
  Multi-Core: 4 tests
  OpenMPI Tests: 5 tests
  Server CPU Tests: 2 tests


Run Management

  Result Identifier     Date Run        Test Duration
  optimized.v1.xlarge   July 21 2020    22 Minutes
  optimized.vm.xlarge   July 21 2020    5 Hours, 21 Minutes


hpc-run-1: Raw Results (individual trial values per result identifier)

High Performance Conjugate Gradient 3.1 (Higher Results Are Better)
  optimized.v1.xlarge: 16.3791, 16.298, 16.2929
  optimized.vm.xlarge: 16.3828, 16.252, 16.302

HPC Challenge 1.5.0 - Test / Class: G-HPL (Higher Results Are Better)
  optimized.vm.xlarge: 123.73, 123.362, 123.648

HPC Challenge 1.5.0 - Test / Class: G-Ffte (Higher Results Are Better)
  optimized.vm.xlarge: 6.31493, 6.33384, 6.32223

HPC Challenge 1.5.0 - Test / Class: EP-DGEMM (Higher Results Are Better)
  optimized.vm.xlarge: 23.1254, 22.2032, 22.9741

HPC Challenge 1.5.0 - Test / Class: G-Ptrans (Higher Results Are Better)
  optimized.vm.xlarge: 6.36757, 6.38101, 6.3004

HPC Challenge 1.5.0 - Test / Class: EP-STREAM Triad (Higher Results Are Better)
  optimized.vm.xlarge: 3.20862, 3.20428, 3.23806

HPC Challenge 1.5.0 - Test / Class: G-Random Access (Higher Results Are Better)
  optimized.vm.xlarge: 0.0827517, 0.0734334, 0.0856484

HPC Challenge 1.5.0 - Test / Class: Random Ring Latency (Lower Results Are Better)
  optimized.vm.xlarge: 0.615024, 0.610823, 0.613744

HPC Challenge 1.5.0 - Test / Class: Random Ring Bandwidth (Higher Results Are Better)
  optimized.vm.xlarge: 1.38181, 1.4052, 1.41809

HPC Challenge 1.5.0 - Test / Class: Max Ping Pong Bandwidth (Higher Results Are Better)
  optimized.vm.xlarge: 12028.881, 12084.446, 12056.218

Intel MPI Benchmarks 2019.3 - Test: IMB-P2P PingPong (Higher Results Are Better)
  optimized.vm.xlarge: 10029677.380952, 9961379.0952381, 10059385.047619

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 Exchange (Higher Results Are Better)
  optimized.vm.xlarge: 3582.81625, 3741.62, 3643.0395833333

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 Exchange (Lower Results Are Better)
  optimized.vm.xlarge: 380.40666666667, 387.29291666667, 393.49791666667

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 PingPong (Higher Results Are Better)
  optimized.vm.xlarge: 3989.53, 4024.471, 3873.3855

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 Sendrecv (Higher Results Are Better)
  optimized.vm.xlarge: 2683.27875, 2568.9416666667, 2648.8054166667

Intel MPI Benchmarks 2019.3 - Test: IMB-MPI1 Sendrecv (Lower Results Are Better)
  optimized.vm.xlarge: 215.5975, 219.15541666667, 229.14875

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Higher Results Are Better)
  optimized.v1.xlarge: 52913.43, 52867.51, 52781.56
  optimized.vm.xlarge: 52823.15, 53005.4, 52847.27

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Higher Results Are Better)
  optimized.v1.xlarge: 1178.51, 1184.37, 1178.04
  optimized.vm.xlarge: 1181.01, 1179.15, 1048.44, 1179.64, 1181.1, 1159.26, 1158.25, 1166.13, 1181.79, 1146.67, 1002.52, 1182.89, 963.14, 1179.29, 1150.08
  (EP.C on optimized.vm.xlarge shows 15 trials: the Phoronix Test Suite automatically extends the run count when results are noisy.)

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Higher Results Are Better)
  optimized.v1.xlarge: 1181.46, 1183.76, 1181.7
  optimized.vm.xlarge: 1176.71, 1177.51, 1179.47

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Higher Results Are Better)
  optimized.v1.xlarge: 18534.26, 17597.48, 18718.29, 18653.5
  optimized.vm.xlarge: 18159.22, 18685.44, 17610.03

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Higher Results Are Better)
  optimized.v1.xlarge: 69650.69, 68789.3, 70131.38
  optimized.vm.xlarge: 69783.98, 69680.66, 69589.81

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Higher Results Are Better)
  optimized.v1.xlarge: 36771.35, 36913.89, 37657.85
  optimized.vm.xlarge: 36654.31, 37120.09, 36851.75

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Higher Results Are Better)
  optimized.v1.xlarge: 30727.52, 31387.66, 30614.07
  optimized.vm.xlarge: 30722.2, 30675.55, 30912.81

Rodinia 3.1 - Test: OpenMP LavaMD (Lower Results Are Better)
  optimized.vm.xlarge: 432.147, 432.262, 431.347

Rodinia 3.1 - Test: OpenMP Leukocyte (Lower Results Are Better)
  optimized.vm.xlarge: 108.166, 107.048, 107.068

Rodinia 3.1 - Test: OpenMP CFD Solver (Lower Results Are Better)
  optimized.vm.xlarge: 12.783, 12.723, 12.885

Rodinia 3.1 - Test: OpenMP Streamcluster (Lower Results Are Better)
  optimized.vm.xlarge: 18.314, 17.971, 17.694