Xeon Gold 5220R 2P 2021

2 x Intel Xeon Gold 5220R testing with a TYAN S7106 (V2.01.B40 BIOS) and llvmpipe on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101191-HA-XEONGOLD534
This result file includes tests from the following categories:

AV1: 2 tests
C/C++ Compiler Tests: 2 tests
CPU Massive: 3 tests
Creator Workloads: 4 tests
Encoding: 2 tests
Fortran Tests: 3 tests
HPC - High Performance Computing: 11 tests
Machine Learning: 3 tests
Molecular Dynamics: 4 tests
Multi-Core: 4 tests
OpenMPI Tests: 7 tests
Programmer / Developer System Benchmarks: 2 tests
Python Tests: 2 tests
Scientific Computing: 7 tests
Video Encoding: 2 tests


Test Runs

Run 1: January 18 2021, test duration 7 Hours, 26 Minutes
Run 2: January 18 2021, test duration 7 Hours, 34 Minutes
Run 3: January 19 2021, test duration 6 Hours, 14 Minutes
Average test duration: 7 Hours, 5 Minutes



Xeon Gold 5220R 2P 2021 System Details (identical across runs 1, 2, 3)

Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads)
Motherboard: TYAN S7106 (V2.01.B40 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 94GB
Disk: 500GB Samsung SSD 860
Graphics: llvmpipe
Monitor: VE228
Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc6-generic (x86_64) 20200920
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 3.3 Mesa 20.0.4 (LLVM 9.0.1 256 bits)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave; CPU Microcode: 0x5003003
Python Details: Python 2.7.18rc1 + Python 3.8.5
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

Result Overview (Phoronix Test Suite; runs 1, 2, 3 normalized, scale 100% to 108%): LULESH, Quantum ESPRESSO, Algebraic Multi-Grid Benchmark, RELION, TNN, Kripke, ONNX Runtime, OpenFOAM, Timed Godot Game Engine Compilation, dav1d, CloverLeaf, Mobile Neural Network, rav1e, Google SynthMark, LAMMPS Molecular Dynamics Simulator
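The overview above expresses each run's result as a percentage relative to the worst-performing run, and overall summaries of this kind are typically geometric means across tests. A minimal sketch of that normalization (not PTS's actual implementation; the function names and the sample input are illustrative):

```python
from math import prod

def normalize(results, higher_is_better=True):
    """Express each run's result as a percentage of the worst-performing run (100%)."""
    base = min(results) if higher_is_better else max(results)
    if higher_is_better:
        return [100.0 * r / base for r in results]
    # For fewer-is-better metrics, a lower result earns a higher percentage.
    return [100.0 * base / r for r in results]

def geo_mean(values):
    """Geometric mean, the usual choice for combining ratios across tests."""
    return prod(values) ** (1.0 / len(values))

# Illustrative input: the three LULESH results from this file (z/s, higher is better)
pcts = normalize([13911.54, 15312.50, 15372.56])
print([f"{p:.1f}%" for p in pcts])
```

Each fewer-is-better test (e.g. build times in Seconds) is inverted before normalizing, so a higher percentage always means a faster run.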


LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, more is better):
Run 1: 13911.54 (SE +/- 150.18, N = 3; min 13612.46 / max 14085.02)
Run 2: 15312.50 (SE +/- 138.21, N = 15; min 13853.02 / max 15679.87)
Run 3: 15372.56 (SE +/- 131.93, N = 15; min 13733.44 / max 15712.7)
1. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi
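Every result in this file is reported as an average with a standard error over N samples plus the observed min/max. A minimal sketch of how those summary figures are derived (the sample values below are hypothetical; the raw per-sample data is not included in this export):

```python
import math

def summarize(samples):
    """Return (mean, standard error, min, max) for a list of per-run samples."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample standard deviation with Bessel's correction, then SE = s / sqrt(n)
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    se = math.sqrt(variance) / math.sqrt(n)
    return mean, se, min(samples), max(samples)

# Hypothetical z/s samples for a single three-sample run
mean, se, lo, hi = summarize([13612.46, 14036.13, 14085.02])
print(f"Avg: {mean:.2f}  SE +/- {se:.2f}  Min: {lo:.2f}  Max: {hi:.2f}")
```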

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: MobileNetV2_224 (ms, fewer is better):
Run 1: 5.244 (SE +/- 0.081, N = 3; sample min 5.08 / max 5.33; MIN: 4.52 / MAX: 14.09)
Run 2: 4.764 (SE +/- 0.044, N = 3; sample min 4.7 / max 4.85; MIN: 4.01 / MAX: 13.65)
Run 3: 4.952 (SE +/- 0.064, N = 15; sample min 4.44 / max 5.28; MIN: 3.83 / MAX: 25.34)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, more is better):
Run 1: 1108502667 (SE +/- 10611072.21, N = 3; min 1089769000 / max 1126505000)
Run 2: 1165849333 (SE +/- 4505964.10, N = 3; min 1159818000 / max 1174664000)
Run 3: 1174487333 (SE +/- 7168295.30, N = 3; min 1162359000 / max 1187172000)
1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: resnet-v2-50 (ms, fewer is better):
Run 1: 29.88 (SE +/- 0.16, N = 3; sample min 29.67 / max 30.18; MIN: 29.23 / MAX: 117.71)
Run 2: 31.06 (SE +/- 0.41, N = 3; sample min 30.26 / max 31.6; MIN: 29.7 / MAX: 81.8)
Run 3: 30.88 (SE +/- 0.26, N = 15; sample min 29.59 / max 33.7; MIN: 29.09 / MAX: 131.06)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU, Model: MobileNet v2 (ms, fewer is better):
Run 1: 397.49 (SE +/- 5.67, N = 3; sample min 388.47 / max 407.97; MIN: 383.2 / MAX: 556.49)
Run 2: 386.48 (SE +/- 0.65, N = 3; sample min 385.7 / max 387.76; MIN: 383.42 / MAX: 469.27)
Run 3: 388.70 (SE +/- 2.72, N = 3; sample min 385.86 / max 394.14; MIN: 383.05 / MAX: 540.22)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

RELION

RELION - REgularised LIkelihood OptimisatioN - is a stand-alone computer program for Maximum A Posteriori refinement of (multiple) 3D reconstructions or 2D class averages in cryo-electron microscopy (cryo-EM). It is developed in the research group of Sjors Scheres at the MRC Laboratory of Molecular Biology. Learn more via the OpenBenchmarking.org test page.

RELION 3.1.1, Test: Basic, Device: CPU (Seconds, fewer is better):
Run 1: 652.60 (SE +/- 7.75, N = 9; min 637.69 / max 713.43)
Run 2: 642.29 (SE +/- 3.96, N = 3; min 634.83 / max 648.29)
Run 3: 654.52 (SE +/- 3.01, N = 3; min 651.26 / max 660.53)
1. (CXX) g++ options: -fopenmp -std=c++0x -O3 -rdynamic -ldl -ltiff -lfftw3f -lfftw3 -lpng -pthread -lmpi_cxx -lmpi

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: fcn-resnet101-11, Device: OpenMP CPU (Inferences Per Minute, more is better):
Run 1: 111 (SE +/- 0.29, N = 3; min 110 / max 111)
Run 2: 111 (SE +/- 0.58, N = 3; min 109.5 / max 111.5)
Run 3: 109 (SE +/- 1.09, N = 3; min 106.5 / max 110)
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8, Input: Motorbike 60M (Seconds, fewer is better):
Run 1: 238.33 (SE +/- 0.25, N = 3; min 237.99 / max 238.82)
Run 2: 234.22 (SE +/- 1.31, N = 3; min 231.84 / max 236.35)
Run 3: 237.59 (SE +/- 0.16, N = 3; min 237.37 / max 237.9)
1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: inception-v3 (ms, fewer is better):
Run 1: 36.88 (SE +/- 0.12, N = 3; sample min 36.72 / max 37.11; MIN: 35.86 / MAX: 106.43)
Run 2: 37.41 (SE +/- 0.24, N = 3; sample min 37.08 / max 37.88; MIN: 36.55 / MAX: 86.82)
Run 3: 37.01 (SE +/- 0.14, N = 15; sample min 36.17 / max 38.17; MIN: 35.64 / MAX: 118.76)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4, Speed: 6 (Frames Per Second, more is better):
Run 1: 1.223 (SE +/- 0.010, N = 3; min 1.21 / max 1.24)
Run 2: 1.236 (SE +/- 0.004, N = 3; min 1.23 / max 1.24)
Run 3: 1.239 (SE +/- 0.004, N = 3; min 1.23 / max 1.24)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1, Video Input: Summer Nature 1080p (FPS, more is better):
Run 1: 339.62 (SE +/- 2.79, N = 3; sample min 334.06 / max 342.89; MIN: 166.21 / MAX: 377.75)
Run 2: 344.01 (SE +/- 1.08, N = 3; sample min 342.01 / max 345.71; MIN: 204.26 / MAX: 378.62)
Run 3: 343.24 (SE +/- 0.92, N = 3; sample min 342.25 / max 345.08; MIN: 204.44 / MAX: 379.88)
1. (CC) gcc options: -pthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: SqueezeNetV1.0 (ms, fewer is better):
Run 1: 8.567 (SE +/- 0.111, N = 3; sample min 8.37 / max 8.75; MIN: 7.54 / MAX: 9.36)
Run 2: 8.493 (SE +/- 0.144, N = 3; sample min 8.3 / max 8.78; MIN: 7.73 / MAX: 9.13)
Run 3: 8.464 (SE +/- 0.097, N = 15; sample min 7.65 / max 8.86; MIN: 7.25 / MAX: 15.53)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: yolov4, Device: OpenMP CPU (Inferences Per Minute, more is better):
Run 1: 346 (SE +/- 1.59, N = 3; min 344.5 / max 349.5)
Run 2: 342 (SE +/- 5.07, N = 3; min 332 / max 348.5)
Run 3: 346 (SE +/- 1.15, N = 3; min 343.5 / max 347.5)
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4, Speed: 10 (Frames Per Second, more is better):
Run 1: 2.573 (SE +/- 0.002, N = 3; min 2.57 / max 2.58)
Run 2: 2.573 (SE +/- 0.007, N = 3; min 2.57 / max 2.59)
Run 3: 2.548 (SE +/- 0.020, N = 3; min 2.51 / max 2.57)

rav1e 0.4, Speed: 5 (Frames Per Second, more is better):
Run 1: 0.964 (SE +/- 0.002, N = 3; min 0.96 / max 0.97)
Run 2: 0.961 (SE +/- 0.002, N = 3; min 0.96 / max 0.97)
Run 3: 0.955 (SE +/- 0.006, N = 3; min 0.95 / max 0.97)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1, Video Input: Summer Nature 4K (FPS, more is better):
Run 1: 186.73 (SE +/- 1.36, N = 3; sample min 185.15 / max 189.43; MIN: 112.08 / MAX: 202.84)
Run 2: 186.14 (SE +/- 1.69, N = 3; sample min 182.82 / max 188.32; MIN: 107.72 / MAX: 201.56)
Run 3: 187.89 (SE +/- 0.46, N = 3; sample min 187.17 / max 188.74; MIN: 124.73 / MAX: 202.3)
1. (CC) gcc options: -pthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4, Speed: 1 (Frames Per Second, more is better):
Run 1: 0.361 (SE +/- 0.003, N = 3; min 0.36 / max 0.36)
Run 2: 0.364 (SE +/- 0.001, N = 3; min 0.36 / max 0.37)
Run 3: 0.363 (SE +/- 0.000, N = 3; min 0.36 / max 0.36)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1, Video Input: Chimera 1080p 10-bit (FPS, more is better):
Run 1: 70.83 (SE +/- 0.16, N = 3; sample min 70.51 / max 71.01; MIN: 54.39 / MAX: 106.93)
Run 2: 70.84 (SE +/- 0.13, N = 3; sample min 70.63 / max 71.09; MIN: 54.3 / MAX: 106.84)
Run 3: 71.34 (SE +/- 0.13, N = 3; sample min 71.17 / max 71.59; MIN: 54.55 / MAX: 109.86)
1. (CC) gcc options: -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: super-resolution-10, Device: OpenMP CPU (Inferences Per Minute, more is better):
Run 1: 5319 (SE +/- 12.57, N = 3; min 5297.5 / max 5341)
Run 2: 5327 (SE +/- 9.44, N = 3; min 5309.5 / max 5342)
Run 3: 5289 (SE +/- 27.57, N = 3; min 5240.5 / max 5336)
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

CloverLeaf, Lagrangian-Eulerian Hydrodynamics (Seconds, fewer is better):
Run 1: 25.12 (SE +/- 0.34, N = 3; min 24.51 / max 25.68)
Run 2: 24.96 (SE +/- 0.06, N = 3; min 24.88 / max 25.09)
Run 3: 24.97 (SE +/- 0.35, N = 3; min 24.61 / max 25.66)
1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3, Time To Compile (Seconds, fewer is better):
Run 1: 82.40 (SE +/- 0.55, N = 3; min 81.67 / max 83.49)
Run 2: 82.92 (SE +/- 0.95, N = 3; min 81.67 / max 84.79)
Run 3: 82.66 (SE +/- 0.32, N = 3; min 82.12 / max 83.24)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: Rhodopsin Protein (ns/day, more is better):
Run 1: 16.73 (SE +/- 0.08, N = 3; min 16.64 / max 16.9)
Run 2: 16.78 (SE +/- 0.20, N = 15; min 15.6 / max 18.04)
Run 3: 16.83 (SE +/- 0.22, N = 15; min 14.63 / max 17.95)
1. (CXX) g++ options: -O3 -pthread -lm

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8, Input: Motorbike 30M (Seconds, fewer is better):
Run 1: 30.50 (SE +/- 0.10, N = 3; min 30.4 / max 30.69)
Run 2: 30.62 (SE +/- 0.01, N = 3; min 30.59 / max 30.64)
Run 3: 30.57 (SE +/- 0.05, N = 3; min 30.47 / max 30.65)
1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109, Test: VoiceMark_100 (Voices, more is better):
Run 1: 535.22 (SE +/- 0.90, N = 3; min 534.18 / max 537.01)
Run 2: 536.68 (SE +/- 0.60, N = 3; min 535.78 / max 537.83)
Run 3: 534.63 (SE +/- 0.43, N = 3; min 533.79 / max 535.2)
1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: 20k Atoms (ns/day, more is better):
Run 1: 18.03 (SE +/- 0.07, N = 3; min 17.93 / max 18.17)
Run 2: 18.09 (SE +/- 0.02, N = 3; min 18.07 / max 18.13)
Run 3: 18.06 (SE +/- 0.02, N = 3; min 18.02 / max 18.1)
1. (CXX) g++ options: -O3 -pthread -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1, Video Input: Chimera 1080p (FPS, more is better):
Run 1: 338.35 (SE +/- 1.04, N = 3; sample min 336.39 / max 339.92; MIN: 226.18 / MAX: 437.6)
Run 2: 338.91 (SE +/- 1.22, N = 3; sample min 336.7 / max 340.9; MIN: 237.5 / MAX: 439.58)
Run 3: 338.75 (SE +/- 0.29, N = 3; sample min 338.35 / max 339.32; MIN: 238.25 / MAX: 436.84)
1. (CC) gcc options: -pthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU, Model: SqueezeNet v1.1 (ms, fewer is better):
Run 1: 349.67 (SE +/- 0.07, N = 3; sample min 349.56 / max 349.79; MIN: 349.01 / MAX: 351.27)
Run 2: 349.82 (SE +/- 0.06, N = 3; sample min 349.7 / max 349.89; MIN: 349.05 / MAX: 360.06)
Run 3: 349.71 (SE +/- 0.07, N = 3; sample min 349.59 / max 349.84; MIN: 348.99 / MAX: 358.54)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, more is better):
Run 1: 76581811 (SE +/- 1820096.95, N = 12; min 65733700 / max 87637050)
Run 2: 76420567 (SE +/- 1685258.80, N = 15; min 65139310 / max 86742660)
Run 3: 75540700 (SE +/- 1808221.02, N = 12; min 63042740 / max 85727480)
1. (CXX) g++ options: -O3 -fopenmp

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6, Model: shufflenet-v2-10, Device: OpenMP CPU (Inferences Per Minute, more is better):
Run 1: 6334 (SE +/- 158.17, N = 12; min 5131 / max 6651.5)
Run 2: 6582 (SE +/- 9.53, N = 3; min 6566 / max 6599)
Run 3: 6596 (SE +/- 16.10, N = 3; min 6564 / max 6613.5)
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

ONNX Runtime 1.6, Model: bertsquad-10, Device: OpenMP CPU (Inferences Per Minute, more is better):
Run 1: 396 (SE +/- 6.36, N = 3; min 383 / max 403)
Run 2: 405 (SE +/- 8.06, N = 12; min 378 / max 480.5)
Run 3: 398 (SE +/- 5.16, N = 12; min 380.5 / max 434.5)
1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: mobilenet-v1-1.0 (ms, fewer is better):
Run 1: 2.839 (SE +/- 0.005, N = 3; sample min 2.83 / max 2.85; MIN: 2.59 / MAX: 3.94)
Run 2: 2.901 (SE +/- 0.082, N = 3; sample min 2.81 / max 3.07; MIN: 2.64 / MAX: 3.23)
Run 3: 2.908 (SE +/- 0.053, N = 15; sample min 2.72 / max 3.29; MIN: 2.55 / MAX: 8.41)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.7, Input: AUSURF112 (Seconds, fewer is better):
Run 1: 1645.38 (SE +/- 47.91, N = 7; min 1458.64 / max 1807.04)
Run 2: 1740.65 (SE +/- 33.06, N = 9; min 1550.4 / max 1875.3)
Run 3: 1764.97 (SE +/- 29.14, N = 3; min 1707.54 / max 1802.3)
1. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi