Microsoft Azure EPYC Milan-X HBv3 Benchmarks

Microsoft Azure HBv3 (Milan) versus HBv3 (Milan-X) benchmarking by Michael Larabel for a future article on Phoronix.com, looking at the performance of AMD EPYC Milan-X in the Microsoft Azure cloud across a variety of workloads.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2203201-PTS-AZUREHBV49

Result Identifier          Date              Test Duration
HBv3: 64 Cores             November 06 2021  22 Hours, 56 Minutes
HBv3 Milan-X: 64 Cores     November 06 2021  16 Hours, 44 Minutes
HBv3: 120 Cores            November 06 2021  1 Day, 45 Minutes
HBv3 Milan-X: 120 Cores    November 06 2021  19 Hours, 24 Minutes

Average test duration: 20 Hours, 57 Minutes


Microsoft Azure EPYC Milan-X HBv3 Benchmarks - OpenBenchmarking.org - Phoronix Test Suite

Processors: 2 x AMD EPYC 7V13 64-Core (64 Cores) / 2 x AMD EPYC 7V73X 64-Core (64 Cores) / 2 x AMD EPYC 7V13 64-Core (120 Cores) / 2 x AMD EPYC 7V73X 64-Core (120 Cores)
Motherboard: Microsoft Virtual Machine (Hyper-V UEFI v4.1 BIOS)
Memory: 442GB
Disk: 2 x 960GB Microsoft NVMe Direct Disk + 32GB Virtual Disk + 515GB Virtual Disk
Graphics: hyperv_fb
Network: Mellanox MT27710
OS: CentOS Linux 8
Kernel: 4.18.0-147.8.1.el8_1.x86_64 (x86_64)
Compiler: GCC 8.3.1 20190507
File-System: ext4
Screen Resolution: 1152x864
System Layer: microsoft

System Notes:
- Transparent Huge Pages: always
- Compiler configuration: --build=x86_64-redhat-linux --disable-libmpx --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-gcc-major-version-only --with-isl --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
- CPU Microcode: 0xffffffff
- Python 3.6.8
- SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline STIBP: disabled RSB filling + tsx_async_abort: Not affected

Benchmarked workloads: HPC Challenge, OSPray, GraphicsMagick, Embree, BRL-CAD, OpenFOAM, Zstd Compression, ASKAP, Facebook RocksDB, OpenVKL, NAMD, RELION, Xcompact3d Incompact3d, LULESH, Timed Node.js Compilation, Timed Linux Kernel Compilation, WRF, GROMACS, LAMMPS, NWChem, ONNX Runtime, NAS Parallel Benchmarks, HPCG, Kripke, John The Ripper, and Parboil. Detailed results for each of the four configurations (HBv3 and HBv3 Milan-X, each at 64 and 120 cores) follow.

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0 - Test / Class: Random Ring Bandwidth (GB/s, more is better)
  HBv3 Milan-X: 120 Cores = 3.41538, 64 Cores = 6.20242
  HBv3: 120 Cores = 0.76414, 64 Cores = 1.82992
  1. (CC) gcc options: -lblas -lm -fexceptions -pthread -lmpi 2. ATLAS + Open MPI 4.0.5

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: San Miguel, Renderer: Path Tracer (FPS, more is better)
  HBv3 Milan-X: 120 Cores = 8.50 (SE +/- 0.02, N = 3), 64 Cores = 4.93 (SE +/- 0.02, N = 3)
  HBv3: 120 Cores = 7.39 (SE +/- 0.02, N = 3), 64 Cores = 4.32 (SE +/- 0.01, N = 3)

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0 - Test / Class: G-HPL (GFLOPS, more is better)
  HBv3 Milan-X: 120 Cores = 139.04, 64 Cores = 175.03
  HBv3: 120 Cores = 89.36, 64 Cores = 99.57
  1. (CC) gcc options: -lblas -lm -fexceptions -pthread -lmpi 2. ATLAS + Open MPI 4.0.5

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, more is better)
  HBv3 Milan-X: 120 Cores = 1123 (SE +/- 11.24, N = 15), 64 Cores = 874 (SE +/- 4.48, N = 3)
  HBv3: 120 Cores = 721 (SE +/- 4.18, N = 3), 64 Cores = 585 (SE +/- 6.24, N = 4)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ltiff -ljpeg -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer, Model: Asian Dragon (Frames Per Second, more is better)
  HBv3 Milan-X: 120 Cores = 78.77 (SE +/- 0.21, N = 3), 64 Cores = 45.68 (SE +/- 0.29, N = 3)
  HBv3: 120 Cores = 64.37 (SE +/- 0.28, N = 3), 64 Cores = 41.86 (SE +/- 0.23, N = 3)

Embree 3.13 - Binary: Pathtracer, Model: Crown (Frames Per Second, more is better)
  HBv3 Milan-X: 120 Cores = 75.94 (SE +/- 0.07, N = 3), 64 Cores = 46.09 (SE +/- 0.15, N = 3)
  HBv3: 120 Cores = 66.50 (SE +/- 0.17, N = 3), 64 Cores = 40.78 (SE +/- 0.05, N = 3)

Embree 3.13 - Binary: Pathtracer ISPC, Model: Crown (Frames Per Second, more is better)
  HBv3 Milan-X: 120 Cores = 71.98 (SE +/- 0.18, N = 3), 64 Cores = 44.25 (SE +/- 0.07, N = 3)
  HBv3: 120 Cores = 63.29 (SE +/- 0.16, N = 3), 64 Cores = 38.91 (SE +/- 0.08, N = 3)

Embree 3.13 - Binary: Pathtracer ISPC, Model: Asian Dragon (Frames Per Second, more is better)
  HBv3 Milan-X: 120 Cores = 76.52 (SE +/- 0.12, N = 3), 64 Cores = 45.66 (SE +/- 0.14, N = 3)
  HBv3: 120 Cores = 63.44 (SE +/- 0.14, N = 3), 64 Cores = 42.00 (SE +/- 0.21, N = 3)
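Across the four Embree scenes, the Milan-X advantage at 120 cores can be condensed into a single figure with a geometric mean of the per-scene speedup ratios, the same style of aggregate the OpenBenchmarking.org result viewer offers. A minimal Python sketch using the 120-core Embree numbers above:

```python
from math import prod

# Embree 3.13 FPS at 120 cores, taken from the results above:
# (scene, HBv3 Milan-X, HBv3)
results = [
    ("Pathtracer - Asian Dragon",      78.77, 64.37),
    ("Pathtracer - Crown",             75.94, 66.50),
    ("Pathtracer ISPC - Crown",        71.98, 63.29),
    ("Pathtracer ISPC - Asian Dragon", 76.52, 63.44),
]

# Per-scene speedup of Milan-X over the plain Milan part.
ratios = [milan_x / base for _, milan_x, base in results]

# Geometric mean is the appropriate average for ratios.
geo_mean = prod(ratios) ** (1 / len(ratios))
print(f"Milan-X geometric-mean speedup across Embree: {geo_mean:.3f}x")
```

This works out to roughly an 18% geometric-mean uplift for Milan-X on the Embree ray-tracing scenes at 120 cores.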

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.2 - VGR Performance Metric (more is better)
  HBv3 Milan-X: 120 Cores = 1109486, 64 Cores = 655183
  HBv3: 120 Cores = 1044368, 64 Cores = 618492
  1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -pthread -ldl -lm

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: NASA Streamlines, Renderer: Path Tracer (FPS, more is better)
  HBv3 Milan-X: 120 Cores = 27.78 (SE +/- 0.00, N = 3), 64 Cores = 17.24 (SE +/- 0.00, N = 3)
  HBv3: 120 Cores = 24.59 (SE +/- 0.20, N = 3), 64 Cores = 15.87 (SE +/- 0.00, N = 3)

OSPray 1.8.5 - Demo: NASA Streamlines, Renderer: SciVis (FPS, more is better)
  HBv3 Milan-X: 120 Cores = 125.00, 64 Cores = 83.33
  HBv3: 120 Cores = 111.11, 64 Cores = 71.43

OSPray 1.8.5 - Demo: XFrog Forest, Renderer: Path Tracer (FPS, more is better)
  HBv3 Milan-X: 120 Cores = 9.90 (SE +/- 0.00, N = 3), 64 Cores = 6.20 (SE +/- 0.01, N = 3)
  HBv3: 120 Cores = 9.09 (SE +/- 0.00, N = 3), 64 Cores = 5.70 (SE +/- 0.01, N = 3)

OSPray 1.8.5 - Demo: Magnetic Reconnection, Renderer: SciVis (FPS, more is better)
  HBv3 Milan-X: 120 Cores = 66.67, 64 Cores = 40.00
  HBv3: 120 Cores = 62.50, 64 Cores = 38.46

OSPray 1.8.5 - Demo: XFrog Forest, Renderer: SciVis (FPS, more is better)
  HBv3 Milan-X: 120 Cores = 18.41 (SE +/- 0.11, N = 3), 64 Cores = 11.36 (SE +/- 0.00, N = 3)
  HBv3: 120 Cores = 16.95 (SE +/- 0.00, N = 3), 64 Cores = 10.75 (SE +/- 0.00, N = 3)

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 60M (Seconds, fewer is better)
  HBv3 Milan-X: 120 Cores = 54.03 (SE +/- 0.22, N = 3), 64 Cores = 65.50 (SE +/- 0.15, N = 3)
  HBv3: 120 Cores = 80.60 (SE +/- 0.05, N = 3), 64 Cores = 89.65 (SE +/- 0.13, N = 3)
  1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
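For a lower-is-better result such as this one, the Milan-X advantage is simply the ratio of wall-clock times at a matching core count. A quick Python sketch using the Motorbike 60M timings above:

```python
# OpenFOAM Motorbike 60M wall-clock times (seconds, lower is better),
# taken from the results above.
times = {
    ("HBv3", 64): 89.65, ("HBv3", 120): 80.60,
    ("HBv3 Milan-X", 64): 65.50, ("HBv3 Milan-X", 120): 54.03,
}

for cores in (64, 120):
    # Speedup = baseline time / Milan-X time.
    speedup = times[("HBv3", cores)] / times[("HBv3 Milan-X", cores)]
    print(f"{cores} cores: Milan-X is {speedup:.2f}x faster")
```

The larger 768 MB L3 cache of Milan-X pays off here at both VM sizes, with the gap widening slightly at 120 cores.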

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better)
  HBv3 Milan-X: 120 Cores = 52.8 (SE +/- 0.50, N = 15), 64 Cores = 59.8 (SE +/- 0.64, N = 3)
  HBv3: 120 Cores = 36.6 (SE +/- 0.29, N = 3), 64 Cores = 39.8 (SE +/- 0.34, N = 15)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: San Miguel, Renderer: SciVis (FPS, more is better)
  HBv3 Milan-X: 120 Cores = 85.86 (SE +/- 0.95, N = 15), 64 Cores = 55.56 (SE +/- 0.00, N = 3)
  HBv3: 120 Cores = 83.33 (SE +/- 0.00, N = 3), 64 Cores = 52.63 (SE +/- 0.00, N = 3)

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some earlier ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MPI - Degridding (Mpix/sec, more is better)
  HBv3 Milan-X: 120 Cores = 57881.4 (SE +/- 0.00, N = 3), 64 Cores = 40896.3 (SE +/- 263.83, N = 3)
  HBv3: 120 Cores = 41724.7 (SE +/- 146.90, N = 3), 64 Cores = 35988.0 (SE +/- 204.47, N = 3)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1 - Test: Random Read (Op/s, more is better)
  HBv3 Milan-X: 120 Cores = 522387301 (SE +/- 1522212.52, N = 3), 64 Cores = 330410911 (SE +/- 648314.38, N = 3)
  HBv3: 120 Cores = 502728808 (SE +/- 4680557.08, N = 7), 64 Cores = 324822447 (SE +/- 95304.17, N = 3)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -O2 -fno-rtti -lgflags

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.0 - Benchmark: vklBenchmark Scalar (Items / Sec, more is better)
  HBv3 Milan-X: 120 Cores = 111 (SE +/- 1.11, N = 6), 64 Cores = 74 (SE +/- 0.67, N = 3)
  HBv3: 120 Cores = 106 (SE +/- 1.21, N = 9), 64 Cores = 72 (SE +/- 0.88, N = 3)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better)
  HBv3 Milan-X: 120 Cores = 0.26900 (SE +/- 0.00007, N = 3), 64 Cores = 0.40802 (SE +/- 0.00005, N = 3)
  HBv3: 120 Cores = 0.27619 (SE +/- 0.00012, N = 3), 64 Cores = 0.41157 (SE +/- 0.00048, N = 3)
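NAMD reports days/ns, i.e. how many wall-clock days would be needed to simulate one nanosecond, so smaller is better; inverting the value gives the more familiar ns/day throughput. A small Python sketch using the figures above:

```python
# NAMD ATPase results (days/ns, from the data above); 1/(days/ns)
# yields simulated nanoseconds per wall-clock day.
days_per_ns = {
    "HBv3 Milan-X, 120 Cores": 0.26900,
    "HBv3, 120 Cores": 0.27619,
    "HBv3 Milan-X, 64 Cores": 0.40802,
    "HBv3, 64 Cores": 0.41157,
}

for system, d in days_per_ns.items():
    print(f"{system}: {1 / d:.2f} ns/day")
```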

RELION

RELION - REgularised LIkelihood OptimisatioN - is a stand-alone computer program for Maximum A Posteriori refinement of (multiple) 3D reconstructions or 2D class averages in cryo-electron microscopy (cryo-EM). It is developed in the research group of Sjors Scheres at the MRC Laboratory of Molecular Biology. Learn more via the OpenBenchmarking.org test page.

RELION 3.1.1 - Test: Basic, Device: CPU (Seconds, fewer is better)
  HBv3 Milan-X: 120 Cores = 274.35 (SE +/- 1.22, N = 3), 64 Cores = 414.54 (SE +/- 0.68, N = 3)
  HBv3: 120 Cores = 312.80 (SE +/- 1.58, N = 3), 64 Cores = 418.48 (SE +/- 1.03, N = 3)
  1. (CXX) g++ options: -fopenmp -std=c++0x -O2 -rdynamic -ldl -ltiff -lfftw3f -lfftw3 -lpng -fexceptions -pthread -lmpi_cxx -lmpi

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some earlier ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MPI - Gridding (Mpix/sec, more is better)
  HBv3 Milan-X: 120 Cores = 57042.5, 64 Cores = 41160.1
  HBv3: 120 Cores = 41287.0, 64 Cores = 38175.0
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.0 - Benchmark: vklBenchmark ISPC (Items / Sec, more is better)
  HBv3 Milan-X: 120 Cores = 177, 64 Cores = 126
  HBv3: 120 Cores = 166, 64 Cores = 120

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d (Seconds, fewer is better)
  HBv3 Milan-X: 120 Cores = 255.84 (SE +/- 0.52, N = 3), 64 Cores = 322.88 (SE +/- 0.69, N = 3)
  HBv3: 120 Cores = 287.76 (SE +/- 0.24, N = 3), 64 Cores = 348.11 (SE +/- 0.83, N = 3)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, more is better)
  HBv3 Milan-X: 120 Cores = 47341.13 (SE +/- 209.57, N = 3), 64 Cores = 54759.69 (SE +/- 258.47, N = 3)
  HBv3: 120 Cores = 40262.21 (SE +/- 286.97, N = 3), 64 Cores = 44262.23 (SE +/- 36.72, N = 3)
  1. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi
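Notably, both VM sizes post a lower LULESH figure of merit at 120 cores than at 64 cores, suggesting this workload is limited by shared resources such as memory bandwidth rather than core count. A short Python sketch quantifying the 64-to-120-core scaling from the values above:

```python
# LULESH figure of merit (z/s, higher is better), from the results above.
zs = {
    "HBv3": {64: 44262.23, 120: 40262.21},
    "HBv3 Milan-X": {64: 54759.69, 120: 47341.13},
}

for system, runs in zs.items():
    # Ratio below 1.0 means the 120-core run was actually slower.
    scaling = runs[120] / runs[64]
    print(f"{system}: 120-core throughput is {scaling:.2f}x the 64-core result")
```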

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 15.11 - Time To Compile (Seconds, fewer is better)
  HBv3 Milan-X: 120 Cores = 72.45 (SE +/- 0.32, N = 3), 64 Cores = 93.80 (SE +/- 0.31, N = 3)
  HBv3: 120 Cores = 75.71 (SE +/- 0.28, N = 3), 64 Cores = 96.35 (SE +/- 0.27, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.14 - Time To Compile (Seconds, fewer is better)
  HBv3 Milan-X: 120 Cores = 18.56 (SE +/- 0.12, N = 13), 64 Cores = 23.91 (SE +/- 0.21, N = 8)
  HBv3: 120 Cores = 19.07 (SE +/- 0.16, N = 8), 64 Cores = 24.16 (SE +/- 0.22, N = 7)

WRF

WRF, the Weather Research and Forecasting Model, is a "next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. It features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility." Learn more via the OpenBenchmarking.org test page.

WRF 4.2.2 - Input: conus 2.5km (Seconds, fewer is better)
  HBv3 Milan-X: 120 Cores = 7804.46, 64 Cores = 9294.70
  HBv3: 120 Cores = 8766.54, 64 Cores = 10150.07
  1. (F9X) gfortran options: -O2 -ftree-vectorize -funroll-loops -ffree-form -fconvert=big-endian -frecord-marker=4 -lesmf_time -lwrfio_nf -lnetcdff -lnetcdf -fexceptions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2021.2 - Implementation: MPI CPU, Input: water_GMX50_bare (Ns Per Day, more is better)
  HBv3 Milan-X: 120 Cores = 9.705 (SE +/- 0.061, N = 3), 64 Cores = 7.977 (SE +/- 0.020, N = 3)
  HBv3: 120 Cores = 9.054 (SE +/- 0.051, N = 3), 64 Cores = 7.476 (SE +/- 0.061, N = 3)
  1. (CXX) g++ options: -O2 -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, more is better)
  HBv3 Milan-X: 120 Cores = 93.8 (SE +/- 1.33, N = 3), 64 Cores = 106.2 (SE +/- 1.05, N = 15)
  HBv3: 120 Cores = 82.0 (SE +/- 0.83, N = 6), 64 Cores = 85.1 (SE +/- 0.93, N = 15)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms (ns/day, more is better)
  HBv3 Milan-X: 120 Cores = 39.54 (SE +/- 0.01, N = 3), 64 Cores = 32.37 (SE +/- 0.03, N = 3)
  HBv3: 120 Cores = 36.88 (SE +/- 0.22, N = 3), 64 Cores = 31.61 (SE +/- 0.12, N = 3)
  1. (CXX) g++ options: -O2 -pthread -lm

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1 - Test: Read Random Write Random (Op/s, more is better)
  HBv3 Milan-X: 120 Cores = 1684654 (SE +/- 6175.84, N = 3), 64 Cores = 1381157 (SE +/- 6520.37, N = 3)
  HBv3: 120 Cores = 1587743 (SE +/- 5368.09, N = 3), 64 Cores = 1357349 (SE +/- 10799.70, N = 3)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -O2 -fno-rtti -lgflags

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, more is better)
  HBv3 Milan-X: 120 Cores = 38.47 (SE +/- 0.29, N = 3), 64 Cores = 33.96 (SE +/- 0.11, N = 3)
  HBv3: 120 Cores = 35.41 (SE +/- 0.11, N = 3), 64 Cores = 32.96 (SE +/- 0.24, N = 3)
  1. (CXX) g++ options: -O2 -pthread -lm

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0 - Test / Class: Max Ping Pong Bandwidth (MB/s, more is better)
  HBv3 Milan-X: 120 Cores = 16082.15, 64 Cores = 18347.87
  HBv3: 120 Cores = 15815.90, 64 Cores = 17174.75
  1. (CC) gcc options: -lblas -lm -fexceptions -pthread -lmpi 2. ATLAS + Open MPI 4.0.5

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2 - Input: C240 Buckyball (Seconds, fewer is better)
  HBv3 Milan-X: 120 Cores = 2467.9, 64 Cores = 2219.8
  HBv3: 120 Cores = 2557.1, 64 Cores = 2256.6
  1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lcomex -lm -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.9.1 - Model: super-resolution-10, Device: CPU (Inferences Per Minute, more is better)
  HBv3 Milan-X: 120 Cores = 6485 (SE +/- 117.46, N = 9), 64 Cores = 6354 (SE +/- 100.53, N = 9)
  HBv3: 120 Cores = 5852 (SE +/- 56.15, N = 3), 64 Cores = 6107 (SE +/- 62.83, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O2 -flto -fno-fat-lto-objects -ldl -lrt -pthread -lpthread

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, more is better)
  HBv3 Milan-X: 120 Cores = 21914.51 (SE +/- 70.02, N = 3), 64 Cores = 22323.23 (SE +/- 34.77, N = 3)
  HBv3: 120 Cores = 20926.52 (SE +/- 26.14, N = 3), 64 Cores = 20940.70 (SE +/- 51.72, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -fexceptions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better)
  HBv3 Milan-X: 120 Cores = 39.44 (SE +/- 0.03, N = 3), 64 Cores = 41.13 (SE +/- 0.09, N = 3)
  HBv3: 120 Cores = 38.72 (SE +/- 0.04, N = 3), 64 Cores = 40.02 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures effect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, more is better)
  HBv3 Milan-X: 120 Cores = 93936541 (SE +/- 2167036.06, N = 15), 64 Cores = 97373301 (SE +/- 2522711.91, N = 15)
  HBv3: 120 Cores = 88201142 (SE +/- 2974209.65, N = 15), 64 Cores = 73635521 (SE +/- 1812363.94, N = 15)
  1. (CXX) g++ options: -O2 -fopenmp

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: MD5 (Real C/S, more is better)
  HBv3 Milan-X: 120 Cores = 8141400 (SE +/- 271831.96, N = 15), 64 Cores = 5913000 (SE +/- 10969.66, N = 3)
  HBv3: 120 Cores = 7143267 (SE +/- 283586.65, N = 15), 64 Cores = 5697467 (SE +/- 54210.51, N = 15)
  1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

Parboil

The Parboil Benchmarks from the IMPACT Research Group at University of Illinois are a set of throughput computing applications for looking at computing architecture and compilers. Parboil test-cases support OpenMP, OpenCL, and CUDA multi-processing environments. However, at this time the test profile is just making use of the OpenMP and OpenCL test workloads. Learn more via the OpenBenchmarking.org test page.

Parboil 2.5 - Test: OpenMP CUTCP (Seconds, fewer is better)
  HBv3 Milan-X: 120 Cores = 0.847720 (SE +/- 0.022647, N = 12), 64 Cores = 1.127166 (SE +/- 0.011448, N = 6)
  HBv3: 120 Cores = 0.976470 (SE +/- 0.006046, N = 3), 64 Cores = 1.515548 (SE +/- 0.014450, N = 3)
  1. (CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds, fewer is better)
  HBv3 Milan-X: 120 Cores = 9.93829823 (SE +/- 0.02331367, N = 3), 64 Cores = 10.87805050 (SE +/- 0.04593850, N = 3)
  HBv3: 120 Cores = 12.28599450 (SE +/- 0.02189533, N = 3), 64 Cores = 13.67810920 (SE +/- 0.43535891, N = 15)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi