Microsoft Azure EPYC Milan-X HBv3 Benchmarks

Microsoft Azure HBv3 (Milan) versus HBv3 (Milan-X) benchmarking by Michael Larabel for a future article on Phoronix.com, looking at the performance of AMD EPYC Milan-X in the Microsoft Azure cloud across a variety of workloads.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2203201-PTS-AZUREHBV49
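
As a quick sketch of how that comparison could be run on your own machine (the install step is an assumption and varies by distribution; only the benchmark command itself comes from this result file):

    # Install the Phoronix Test Suite (package name/availability is distribution-dependent;
    # it can also be downloaded from phoronix-test-suite.com)
    sudo dnf install -y phoronix-test-suite

    # Fetch result file 2203201-PTS-AZUREHBV49 from OpenBenchmarking.org, run the same
    # tests locally, and append this system's results to the comparison
    phoronix-test-suite benchmark 2203201-PTS-AZUREHBV49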

Run Management

  Identifier                Date              Test Duration
  HBv3: 64 Cores            November 06 2021  22 Hours, 56 Minutes
  HBv3 Milan-X: 64 Cores    November 06 2021  16 Hours, 44 Minutes
  HBv3: 120 Cores           November 06 2021  1 Day, 45 Minutes
  HBv3 Milan-X: 120 Cores   November 06 2021  19 Hours, 24 Minutes

Microsoft Azure EPYC Milan-X HBv3 Benchmarks - System Details

  Processors:        2 x AMD EPYC 7V13 64-Core (64 Cores), 2 x AMD EPYC 7V73X 64-Core (64 Cores), 2 x AMD EPYC 7V13 64-Core (120 Cores), 2 x AMD EPYC 7V73X 64-Core (120 Cores)
  Motherboard:       Microsoft Virtual Machine (Hyper-V UEFI v4.1 BIOS)
  Memory:            442GB
  Disk:              2 x 960GB Microsoft NVMe Direct Disk + 32GB Virtual Disk + 515GB Virtual Disk
  Graphics:          hyperv_fb
  Network:           Mellanox MT27710
  OS:                CentOS Linux 8
  Kernel:            4.18.0-147.8.1.el8_1.x86_64 (x86_64)
  Compiler:          GCC 8.3.1 20190507
  File-System:       ext4
  Screen Resolution: 1152x864
  System Layer:      microsoft

System Logs
  - Transparent Huge Pages: always
  - Compiler Notes: --build=x86_64-redhat-linux --disable-libmpx --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-gcc-major-version-only --with-isl --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
  - CPU Microcode: 0xffffffff
  - Python 3.6.8
  - Security: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline STIBP: disabled RSB filling + tsx_async_abort: Not affected

Results Overview: the condensed side-by-side results table is broken out per test in the sections below, covering the HBv3 and HBv3 Milan-X instances at both 64 and 120 cores.

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0 - Test / Class: G-HPL (GFLOPS, More Is Better)
  HBv3: 64 Cores            99.57
  HBv3: 120 Cores           89.36
  HBv3 Milan-X: 64 Cores    175.03
  HBv3 Milan-X: 120 Cores   139.04
  1. (CC) gcc options: -lblas -lm -fexceptions -pthread -lmpi
  2. ATLAS + Open MPI 4.0.5

WRF

WRF, the Weather Research and Forecasting Model, is a "next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. It features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility." Learn more via the OpenBenchmarking.org test page.

WRF 4.2.2 - Input: conus 2.5km (Seconds, Fewer Is Better)
  HBv3: 64 Cores            10150.07
  HBv3: 120 Cores           8766.54
  HBv3 Milan-X: 64 Cores    9294.70
  HBv3 Milan-X: 120 Cores   7804.46
  1. (F9X) gfortran options: -O2 -ftree-vectorize -funroll-loops -ffree-form -fconvert=big-endian -frecord-marker=4 -lesmf_time -lwrfio_nf -lnetcdff -lnetcdf -fexceptions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.0 - Benchmark: vklBenchmark Scalar (Items / Sec, More Is Better)
  HBv3: 64 Cores            72    (SE +/- 0.88, N = 3)
  HBv3: 120 Cores           106   (SE +/- 1.21, N = 9)
  HBv3 Milan-X: 64 Cores    74    (SE +/- 0.67, N = 3)
  HBv3 Milan-X: 120 Cores   111   (SE +/- 1.11, N = 6)

OpenVKL 1.0 - Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better)
  HBv3: 64 Cores            120
  HBv3: 120 Cores           166
  HBv3 Milan-X: 64 Cores    126
  HBv3 Milan-X: 120 Cores   177

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2 - Input: C240 Buckyball (Seconds, Fewer Is Better)
  HBv3: 64 Cores            2256.6
  HBv3: 120 Cores           2557.1
  HBv3 Milan-X: 64 Cores    2219.8
  HBv3 Milan-X: 120 Cores   2467.9
  1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lcomex -lm -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

RELION

RELION - REgularised LIkelihood OptimisatioN - is a stand-alone computer program for Maximum A Posteriori refinement of (multiple) 3D reconstructions or 2D class averages in cryo-electron microscopy (cryo-EM). It is developed in the research group of Sjors Scheres at the MRC Laboratory of Molecular Biology. Learn more via the OpenBenchmarking.org test page.

RELION 3.1.1 - Test: Basic - Device: CPU (Seconds, Fewer Is Better)
  HBv3: 64 Cores            418.48   (SE +/- 1.03, N = 3)
  HBv3: 120 Cores           312.80   (SE +/- 1.58, N = 3)
  HBv3 Milan-X: 64 Cores    414.54   (SE +/- 0.68, N = 3)
  HBv3 Milan-X: 120 Cores   274.35   (SE +/- 1.22, N = 3)
  1. (CXX) g++ options: -fopenmp -std=c++0x -O2 -rdynamic -ldl -ltiff -lfftw3f -lfftw3 -lpng -fexceptions -pthread -lmpi_cxx -lmpi

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with any number of scalar transport equations. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d (Seconds, Fewer Is Better)
  HBv3: 64 Cores            348.11   (SE +/- 0.83, N = 3)
  HBv3: 120 Cores           287.76   (SE +/- 0.24, N = 3)
  HBv3 Milan-X: 64 Cores    322.88   (SE +/- 0.69, N = 3)
  HBv3 Milan-X: 120 Cores   255.84   (SE +/- 0.52, N = 3)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.32.2 - VGR Performance Metric (More Is Better)
  HBv3: 64 Cores            618492
  HBv3: 120 Cores           1044368
  HBv3 Milan-X: 64 Cores    655183
  HBv3 Milan-X: 120 Cores   1109486
  1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -pthread -ldl -lm

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms (ns/day, More Is Better)
  HBv3: 64 Cores            31.61   (SE +/- 0.12, N = 3)
  HBv3: 120 Cores           36.88   (SE +/- 0.22, N = 3)
  HBv3 Milan-X: 64 Cores    32.37   (SE +/- 0.03, N = 3)
  HBv3 Milan-X: 120 Cores   39.54   (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O2 -pthread -lm

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: MD5 (Real C/S, More Is Better)
  HBv3: 64 Cores            5697467   (SE +/- 54210.51, N = 15)
  HBv3: 120 Cores           7143267   (SE +/- 283586.65, N = 15)
  HBv3 Milan-X: 64 Cores    5913000   (SE +/- 10969.66, N = 3)
  HBv3 Milan-X: 120 Cores   8141400   (SE +/- 271831.96, N = 15)
  1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.9.1 - Model: super-resolution-10 - Device: CPU (Inferences Per Minute, More Is Better)
  HBv3: 64 Cores            6107   (SE +/- 62.83, N = 3)
  HBv3: 120 Cores           5852   (SE +/- 56.15, N = 3)
  HBv3 Milan-X: 64 Cores    6354   (SE +/- 100.53, N = 9)
  HBv3 Milan-X: 120 Cores   6485   (SE +/- 117.46, N = 9)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O2 -flto -fno-fat-lto-objects -ldl -lrt -pthread -lpthread

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, More Is Better)
  HBv3: 64 Cores            73635521   (SE +/- 1812363.94, N = 15)
  HBv3: 120 Cores           88201142   (SE +/- 2974209.65, N = 15)
  HBv3 Milan-X: 64 Cores    97373301   (SE +/- 2522711.91, N = 15)
  HBv3 Milan-X: 120 Cores   93936541   (SE +/- 2167036.06, N = 15)
  1. (CXX) g++ options: -O2 -fopenmp

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
  HBv3: 64 Cores            40.02   (SE +/- 0.01, N = 3)
  HBv3: 120 Cores           38.72   (SE +/- 0.04, N = 3)
  HBv3 Milan-X: 64 Cores    41.13   (SE +/- 0.09, N = 3)
  HBv3 Milan-X: 120 Cores   39.44   (SE +/- 0.03, N = 3)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
  HBv3: 64 Cores            85.1    (SE +/- 0.93, N = 15)
  HBv3: 120 Cores           82.0    (SE +/- 0.83, N = 6)
  HBv3 Milan-X: 64 Cores    106.2   (SE +/- 1.05, N = 15)
  HBv3 Milan-X: 120 Cores   93.8    (SE +/- 1.33, N = 3)
  1. (CC) gcc options: -O3 -pthread -lz -llzma
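
As a rough manual approximation of what this test exercises (not the exact Phoronix Test Suite invocation; the output paths are illustrative), level-19 multithreaded compression with the zstd command-line tool looks like:

    # Level 19 compression across all available threads (-T0); the wall time of the run
    # approximates the compression-speed figure reported above
    time zstd -19 -T0 -k FreeBSD-12.2-RELEASE-amd64-memstick.img -o /tmp/memstick.img.zst

    # The "19, Long Mode" result further down additionally enables long-distance matching
    time zstd --long -19 -T0 -k FreeBSD-12.2-RELEASE-amd64-memstick.img -o /tmp/memstick-long.img.zst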

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 60M (Seconds, Fewer Is Better)
  HBv3: 64 Cores            89.65   (SE +/- 0.13, N = 3)
  HBv3: 120 Cores           80.60   (SE +/- 0.05, N = 3)
  HBv3 Milan-X: 64 Cores    65.50   (SE +/- 0.15, N = 3)
  HBv3 Milan-X: 120 Cores   54.03   (SE +/- 0.22, N = 3)
  1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

Zstd Compression

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better)
  HBv3: 64 Cores            39.8   (SE +/- 0.34, N = 15)
  HBv3: 120 Cores           36.6   (SE +/- 0.29, N = 3)
  HBv3 Milan-X: 64 Cores    59.8   (SE +/- 0.64, N = 3)
  HBv3 Milan-X: 120 Cores   52.8   (SE +/- 0.50, N = 15)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better)
  HBv3: 64 Cores            585    (SE +/- 6.24, N = 4)
  HBv3: 120 Cores           721    (SE +/- 4.18, N = 3)
  HBv3 Milan-X: 64 Cores    874    (SE +/- 4.48, N = 3)
  HBv3 Milan-X: 120 Cores   1123   (SE +/- 11.24, N = 15)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ltiff -ljpeg -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 15.11 - Time To Compile (Seconds, Fewer Is Better)
  HBv3: 64 Cores            96.35   (SE +/- 0.27, N = 3)
  HBv3: 120 Cores           75.71   (SE +/- 0.28, N = 3)
  HBv3 Milan-X: 64 Cores    93.80   (SE +/- 0.31, N = 3)
  HBv3 Milan-X: 120 Cores   72.45   (SE +/- 0.32, N = 3)
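
Roughly speaking, this test corresponds to timing a parallel build of Node.js from a source checkout; a minimal sketch (not the exact test-profile invocation, and the job count is illustrative):

    # Configure the Node.js source tree, then time the parallel compile
    ./configure
    time make -j"$(nproc)"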

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.22.1 - Test: Random Read (Op/s, More Is Better)
  HBv3: 64 Cores            324822447   (SE +/- 95304.17, N = 3)
  HBv3: 120 Cores           502728808   (SE +/- 4680557.08, N = 7)
  HBv3 Milan-X: 64 Cores    330410911   (SE +/- 648314.38, N = 3)
  HBv3 Milan-X: 120 Cores   522387301   (SE +/- 1522212.52, N = 3)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -O2 -fno-rtti -lgflags

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.14 - Time To Compile (Seconds, Fewer Is Better)
  HBv3: 64 Cores            24.16   (SE +/- 0.22, N = 7)
  HBv3: 120 Cores           19.07   (SE +/- 0.16, N = 8)
  HBv3 Milan-X: 64 Cores    23.91   (SE +/- 0.21, N = 8)
  HBv3 Milan-X: 120 Cores   18.56   (SE +/- 0.12, N = 13)
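
This corresponds roughly to timing a defconfig kernel build; a minimal sketch (not the exact test-profile invocation):

    # Generate the default configuration for the host architecture, then time a parallel build
    make defconfig
    time make -j"$(nproc)"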

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: San Miguel - Renderer: Path Tracer (FPS, More Is Better)
  HBv3: 64 Cores            4.32   (SE +/- 0.01, N = 3)
  HBv3: 120 Cores           7.39   (SE +/- 0.02, N = 3)
  HBv3 Milan-X: 64 Cores    4.93   (SE +/- 0.02, N = 3)
  HBv3 Milan-X: 120 Cores   8.50   (SE +/- 0.02, N = 3)

Facebook RocksDB

Facebook RocksDB 6.22.1 - Test: Read Random Write Random (Op/s, More Is Better)
  HBv3: 64 Cores            1357349   (SE +/- 10799.70, N = 3)
  HBv3: 120 Cores           1587743   (SE +/- 5368.09, N = 3)
  HBv3 Milan-X: 64 Cores    1381157   (SE +/- 6520.37, N = 3)
  HBv3 Milan-X: 120 Cores   1684654   (SE +/- 6175.84, N = 3)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -O2 -fno-rtti -lgflags

OSPray

OSPray 1.8.5 - Demo: San Miguel - Renderer: SciVis (FPS, More Is Better)
  HBv3: 64 Cores            52.63   (SE +/- 0.00, N = 3)
  HBv3: 120 Cores           83.33   (SE +/- 0.00, N = 3)
  HBv3 Milan-X: 64 Cores    55.56   (SE +/- 0.00, N = 3)
  HBv3 Milan-X: 120 Cores   85.86   (SE +/- 0.95, N = 15)

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MPI - Gridding (Mpix/sec, More Is Better)
  HBv3: 64 Cores            38175.0
  HBv3: 120 Cores           41287.0
  HBv3 Milan-X: 64 Cores    41160.1
  HBv3 Milan-X: 120 Cores   57042.5
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP 1.0 - Test: tConvolve MPI - Degridding (Mpix/sec, More Is Better)
  HBv3: 64 Cores            35988.0   (SE +/- 204.47, N = 3)
  HBv3: 120 Cores           41724.7   (SE +/- 146.90, N = 3)
  HBv3 Milan-X: 64 Cores    40896.3   (SE +/- 263.83, N = 3)
  HBv3 Milan-X: 120 Cores   57881.4   (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

OSPray

OSPray 1.8.5 - Demo: XFrog Forest - Renderer: Path Tracer (FPS, More Is Better)
  HBv3: 64 Cores            5.70   (SE +/- 0.01, N = 3)
  HBv3: 120 Cores           9.09   (SE +/- 0.00, N = 3)
  HBv3 Milan-X: 64 Cores    6.20   (SE +/- 0.01, N = 3)
  HBv3 Milan-X: 120 Cores   9.90   (SE +/- 0.00, N = 3)

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package using the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2021.2 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  HBv3: 64 Cores            7.476   (SE +/- 0.061, N = 3)
  HBv3: 120 Cores           9.054   (SE +/- 0.051, N = 3)
  HBv3 Milan-X: 64 Cores    7.977   (SE +/- 0.020, N = 3)
  HBv3 Milan-X: 120 Cores   9.705   (SE +/- 0.061, N = 3)
  1. (CXX) g++ options: -O2 -pthread

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
  HBv3: 64 Cores            0.41157   (SE +/- 0.00048, N = 3)
  HBv3: 120 Cores           0.27619   (SE +/- 0.00012, N = 3)
  HBv3 Milan-X: 64 Cores    0.40802   (SE +/- 0.00005, N = 3)
  HBv3 Milan-X: 120 Cores   0.26900   (SE +/- 0.00007, N = 3)

Xcompact3d Incompact3d

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds, Fewer Is Better)
  HBv3: 64 Cores            13.67810920   (SE +/- 0.43535891, N = 15)
  HBv3: 120 Cores           12.28599450   (SE +/- 0.02189533, N = 3)
  HBv3 Milan-X: 64 Cores    10.87805050   (SE +/- 0.04593850, N = 3)
  HBv3 Milan-X: 120 Cores   9.93829823    (SE +/- 0.02331367, N = 3)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

OSPray

OSPray 1.8.5 - Demo: XFrog Forest - Renderer: SciVis (FPS, More Is Better)
  HBv3: 64 Cores            10.75   (SE +/- 0.00, N = 3)
  HBv3: 120 Cores           16.95   (SE +/- 0.00, N = 3)
  HBv3 Milan-X: 64 Cores    11.36   (SE +/- 0.00, N = 3)
  HBv3 Milan-X: 120 Cores   18.41   (SE +/- 0.11, N = 3)

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, More Is Better)
  HBv3: 64 Cores            20940.70   (SE +/- 51.72, N = 3)
  HBv3: 120 Cores           20926.52   (SE +/- 26.14, N = 3)
  HBv3 Milan-X: 64 Cores    22323.23   (SE +/- 34.77, N = 3)
  HBv3 Milan-X: 120 Cores   21914.51   (SE +/- 70.02, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -fexceptions -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
  HBv3: 64 Cores            38.91   (SE +/- 0.08, N = 3)
  HBv3: 120 Cores           63.29   (SE +/- 0.16, N = 3)
  HBv3 Milan-X: 64 Cores    44.25   (SE +/- 0.07, N = 3)
  HBv3 Milan-X: 120 Cores   71.98   (SE +/- 0.18, N = 3)

OSPray

OSPray 1.8.5 - Demo: NASA Streamlines - Renderer: Path Tracer (FPS, More Is Better)
  HBv3: 64 Cores            15.87   (SE +/- 0.00, N = 3)
  HBv3: 120 Cores           24.59   (SE +/- 0.20, N = 3)
  HBv3 Milan-X: 64 Cores    17.24   (SE +/- 0.00, N = 3)
  HBv3 Milan-X: 120 Cores   27.78   (SE +/- 0.00, N = 3)

Embree

Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
  HBv3: 64 Cores            42.00   (SE +/- 0.21, N = 3)
  HBv3: 120 Cores           63.44   (SE +/- 0.14, N = 3)
  HBv3 Milan-X: 64 Cores    45.66   (SE +/- 0.14, N = 3)
  HBv3 Milan-X: 120 Cores   76.52   (SE +/- 0.12, N = 3)

Embree 3.13 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
  HBv3: 64 Cores            40.78   (SE +/- 0.05, N = 3)
  HBv3: 120 Cores           66.50   (SE +/- 0.17, N = 3)
  HBv3 Milan-X: 64 Cores    46.09   (SE +/- 0.15, N = 3)
  HBv3 Milan-X: 120 Cores   75.94   (SE +/- 0.07, N = 3)

Embree 3.13 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
  HBv3: 64 Cores            41.86   (SE +/- 0.23, N = 3)
  HBv3: 120 Cores           64.37   (SE +/- 0.28, N = 3)
  HBv3 Milan-X: 64 Cores    45.68   (SE +/- 0.29, N = 3)
  HBv3 Milan-X: 120 Cores   78.77   (SE +/- 0.21, N = 3)

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, More Is Better)
  HBv3: 64 Cores            44262.23   (SE +/- 36.72, N = 3)
  HBv3: 120 Cores           40262.21   (SE +/- 286.97, N = 3)
  HBv3 Milan-X: 64 Cores    54759.69   (SE +/- 258.47, N = 3)
  HBv3 Milan-X: 120 Cores   47341.13   (SE +/- 209.57, N = 3)
  1. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi

OSPray

OSPray 1.8.5 - Demo: Magnetic Reconnection - Renderer: SciVis (FPS, More Is Better)
  HBv3: 64 Cores            38.46
  HBv3: 120 Cores           62.50
  HBv3 Milan-X: 64 Cores    40.00
  HBv3 Milan-X: 120 Cores   66.67

OSPray 1.8.5 - Demo: NASA Streamlines - Renderer: SciVis (FPS, More Is Better)
  HBv3: 64 Cores            71.43
  HBv3: 120 Cores           111.11
  HBv3 Milan-X: 64 Cores    83.33
  HBv3 Milan-X: 120 Cores   125.00

LAMMPS Molecular Dynamics Simulator

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, More Is Better)
  HBv3: 64 Cores            32.96   (SE +/- 0.24, N = 3)
  HBv3: 120 Cores           35.41   (SE +/- 0.11, N = 3)
  HBv3 Milan-X: 64 Cores    33.96   (SE +/- 0.11, N = 3)
  HBv3 Milan-X: 120 Cores   38.47   (SE +/- 0.29, N = 3)
  1. (CXX) g++ options: -O2 -pthread -lm

Parboil

The Parboil Benchmarks from the IMPACT Research Group at the University of Illinois are a set of throughput computing applications for looking at computing architecture and compilers. Parboil test cases support OpenMP, OpenCL, and CUDA multi-processing environments. However, at this time the test profile is just making use of the OpenMP and OpenCL test workloads. Learn more via the OpenBenchmarking.org test page.

Parboil 2.5 - Test: OpenMP CUTCP (Seconds, Fewer Is Better)
  HBv3: 64 Cores            1.515548   (SE +/- 0.014450, N = 3)
  HBv3: 120 Cores           0.976470   (SE +/- 0.006046, N = 3)
  HBv3 Milan-X: 64 Cores    1.127166   (SE +/- 0.011448, N = 6)
  HBv3 Milan-X: 120 Cores   0.847720   (SE +/- 0.022647, N = 12)
  1. (CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp

HPC Challenge

HPC Challenge 1.5.0 - Test / Class: Max Ping Pong Bandwidth (MB/s, More Is Better)
  HBv3: 64 Cores            17174.75
  HBv3: 120 Cores           15815.90
  HBv3 Milan-X: 64 Cores    18347.87
  HBv3 Milan-X: 120 Cores   16082.15
  1. (CC) gcc options: -lblas -lm -fexceptions -pthread -lmpi
  2. ATLAS + Open MPI 4.0.5

HPC Challenge 1.5.0 - Test / Class: Random Ring Bandwidth (GB/s, More Is Better)
  HBv3: 64 Cores            1.82992
  HBv3: 120 Cores           0.76414
  HBv3 Milan-X: 64 Cores    6.20242
  HBv3 Milan-X: 120 Cores   3.41538
  1. (CC) gcc options: -lblas -lm -fexceptions -pthread -lmpi
  2. ATLAS + Open MPI 4.0.5