Xeon Scalable Ice Lake P-State Governor

Intel Xeon Platinum 8380 "Ice Lake" P-State / CPU frequency scaling governor benchmarks under Linux, run by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2105267-IB-XEONSCALA38
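
For readers who have not used the Phoronix Test Suite before, a minimal sketch of that comparison workflow on an Ubuntu system is shown below; the package name is the standard Ubuntu package and the result identifier is the one quoted above, while everything else about the local setup is assumed.

    # Install the Phoronix Test Suite from the Ubuntu archive (other install
    # methods exist), then run the published result file: the suite downloads
    # the matching test profiles, runs them locally, and offers to merge your
    # numbers against the reference results for a side-by-side comparison.
    sudo apt-get install phoronix-test-suite
    phoronix-test-suite benchmark 2105267-IB-XEONSCALA38
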
The tests in this result file span the following categories:

AV1: 3 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 4 Tests
C++ Boost Tests: 2 Tests
Timed Code Compilation: 5 Tests
C/C++ Compiler Tests: 10 Tests
CPU Massive: 14 Tests
Creator Workloads: 12 Tests
Encoding: 6 Tests
Fortran Tests: 7 Tests
Game Development: 4 Tests
HPC - High Performance Computing: 16 Tests
LAPACK (Linear Algebra Pack) Tests: 4 Tests
Machine Learning: 2 Tests
Molecular Dynamics: 6 Tests
MPI Benchmarks: 5 Tests
Multi-Core: 26 Tests
NVIDIA GPU Compute: 4 Tests
Intel oneAPI: 4 Tests
OpenCL: 2 Tests
OpenMPI Tests: 11 Tests
Programmer / Developer System Benchmarks: 7 Tests
Python Tests: 5 Tests
Renderers: 2 Tests
Scientific Computing: 8 Tests
Server CPU Tests: 11 Tests
Video Encoding: 6 Tests
Common Workstation Benchmarks: 2 Tests

Run Management

Result Identifier            Date Run       Test Duration
P-State powersave            May 23 2021    15 Hours, 42 Minutes
P-State performance          May 24 2021    11 Hours, 14 Minutes
intel_cpufreq schedutil      May 24 2021    15 Hours, 54 Minutes
intel_cpufreq performance    May 25 2021    13 Hours, 43 Minutes
Average                                     14 Hours, 8 Minutes



Xeon Scalable Ice Lake P-State Governor Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite 10.8.3)

Processor:          2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard:        Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset:            Intel Device 0998
Memory:             16 x 32 GB DDR4-3200MT/s Hynix HMA84GR7CJR4N-XN
Disk:               800GB INTEL SSDPF21Q800GB
Graphics:           ASPEED
Network:            2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS:                 Ubuntu 21.04
Kernel:             5.11.0-17-generic (x86_64)
Desktop:            GNOME Shell 3.38.4
Display Server:     X Server
Vulkan:             1.0.2
Compiler:           GCC 10.3.0
File-System:        ext4
Screen Resolution:  1024x768

System Logs / Notes:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- P-State powersave: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xd000270
- P-State performance: Scaling Governor: intel_pstate performance - CPU Microcode: 0xd000270
- intel_cpufreq schedutil: Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0xd000270
- intel_cpufreq performance: Scaling Governor: intel_cpufreq performance - CPU Microcode: 0xd000270
- Python 3.9.4
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
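
For context on how the four result identifiers map to kernel settings, a rough sketch of configuring the scaling governors on the stock Ubuntu 21.04 kernel is shown below. This is an assumed reproduction path rather than the exact procedure used for the article; the file paths are the standard cpufreq sysfs locations and intel_pstate=passive is the standard kernel parameter for exposing the intel_cpufreq driver.

    # intel_pstate in its default (active) mode provides the "powersave" and
    # "performance" governors used for the two P-State runs:
    echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

    # For the intel_cpufreq runs, boot with intel_pstate in passive mode so the
    # generic cpufreq governors (schedutil, performance, ...) become available:
    # add intel_pstate=passive to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
    # run 'sudo update-grub', reboot, then select a governor as above, e.g.:
    echo schedutil | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor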

Result Overview (Phoronix Test Suite 10.8.3): relative overall performance of the four configurations (P-State powersave, P-State performance, intel_cpufreq schedutil, intel_cpufreq performance), normalized to 100%, across AOM AV1, Kvazaar, PostgreSQL pgbench, SVT-VP9, SVT-HEVC, SVT-AV1, PJSIP, Timed Godot Game Engine Compilation, Darmstadt Automotive Parallel Heterogeneous Suite, Zstd Compression, ArrayFire, OpenVKL, libavif avifenc, Timed Linux Kernel Compilation, Timed Node.js Compilation, Rodinia, TensorFlow Lite, NAS Parallel Benchmarks, Quantum ESPRESSO, Embree, OSPray, Blender, Timed LLVM Compilation, Intel Open Image Denoise, Timed Mesa Compilation, OpenFOAM, ONNX Runtime, High Performance Conjugate Gradient, Stockfish, GROMACS, LAMMPS Molecular Dynamics Simulator, Liquid-DSP, Xcompact3d Incompact3d, RELION, NAMD, WRF, and NWChem.

Performance-Per-Watt Result Overview (Phoronix Test Suite 10.8.3): performance-per-watt geometric means of the four configurations, normalized to 100%, across AOM AV1, NAS Parallel Benchmarks, SVT-HEVC, Intel Open Image Denoise, OSPray, Zstd Compression, Kvazaar, OpenVKL, SVT-AV1, Stockfish, Embree, ArrayFire, Darmstadt Automotive Parallel Heterogeneous Suite, PJSIP, SVT-VP9, ONNX Runtime, Liquid-DSP, High Performance Conjugate Gradient, LAMMPS Molecular Dynamics Simulator, and GROMACS.

A condensed table with the raw results of every test for the four configurations (P-State powersave, P-State performance, intel_cpufreq schedutil, intel_cpufreq performance) accompanies the OpenBenchmarking.org result file; selected per-test results follow below.

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for evaluating high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
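
As a point of reference for what the test profile is automating, a minimal sketch of running one NPB problem by hand is shown below; it assumes the MPI build of NPB 3.4 has already been compiled (for example with 'make ep CLASS=C' in the NPB3.4-MPI tree) and that Open MPI is installed, and the rank count is simply illustrative for this 80-core system.

    # Run the class C embarrassingly-parallel (EP) kernel across 80 MPI ranks;
    # other kernels (BT, LU, SP, ...) are built and launched the same way,
    # though some of them constrain the allowed rank counts.
    mpirun -np 80 ./bin/ep.C.x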

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s; more is better)
  P-State powersave:          191166.41 (SE +/- 416.58, N = 4; min 190433.54 / max 192039.54)
  P-State performance:        197815.31 (SE +/- 313.16, N = 4; min 196947.42 / max 198332.24)
  intel_cpufreq schedutil:    191705.02 (SE +/- 322.38, N = 4; min 191071.55 / max 192503.07)
  intel_cpufreq performance:  197821.88 (SE +/- 230.36, N = 4; min 197423.37 / max 198486.89)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.1.0

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s; more is better)
  P-State powersave:          6051.90 (SE +/- 109.82, N = 12; min 5120.33 / max 6487.12)
  P-State performance:        7610.18 (SE +/- 165.19, N = 15; min 6085.07 / max 8332.98)
  intel_cpufreq schedutil:    6157.58 (SE +/- 121.94, N = 15; min 5002.25 / max 6669.01)
  intel_cpufreq performance:  7705.44 (SE +/- 128.21, N = 15; min 6545.11 / max 8291.96)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.1.0

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s; more is better)
  P-State powersave:          8620.97 (SE +/- 65.72, N = 3; min 8490.11 / max 8696.99)
  P-State performance:        8920.10 (SE +/- 60.08, N = 13; min 8360.09 / max 9191.16)
  intel_cpufreq schedutil:    8634.30 (SE +/- 22.19, N = 3; min 8595.29 / max 8672.13)
  intel_cpufreq performance:  8938.09 (SE +/- 38.79, N = 4; min 8880.61 / max 9049.68)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.1.0

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s; more is better)
  P-State powersave:          90989.73 (SE +/- 842.11, N = 7; min 88152.28 / max 95161.66)
  P-State performance:        100543.72 (SE +/- 83.85, N = 7; min 100220.81 / max 100841.36)
  intel_cpufreq schedutil:    91970.99 (SE +/- 807.72, N = 7; min 89431.45 / max 94436.99)
  intel_cpufreq performance:  100763.11 (SE +/- 63.65, N = 7; min 100515.52 / max 100957.17)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.1.0

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s; more is better)
  P-State powersave:          174911.02 (SE +/- 1423.68, N = 4; min 171769.36 / max 178187.71)
  P-State performance:        187300.47 (SE +/- 364.13, N = 4; min 186706.87 / max 188292.05)
  intel_cpufreq schedutil:    177878.68 (SE +/- 196.76, N = 4; min 177474.51 / max 178398.74)
  intel_cpufreq performance:  186973.06 (SE +/- 495.98, N = 4; min 185984.7 / max 188227.21)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.1.0

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s; more is better)
  P-State powersave:          114946.46 (SE +/- 480.22, N = 8; min 112909.96 / max 117270.82)
  P-State performance:        123544.09 (SE +/- 134.33, N = 9; min 123048.82 / max 124187.87)
  intel_cpufreq schedutil:    115501.41 (SE +/- 203.66, N = 8; min 114547.2 / max 116322.05)
  intel_cpufreq performance:  123176.72 (SE +/- 133.35, N = 9; min 122615.53 / max 123909.89)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.1.0

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C (Total Mop/s; more is better)
  P-State powersave:          91313.60 (SE +/- 139.83, N = 3; min 91068.82 / max 91553.12)
  P-State performance:        91817.76 (SE +/- 159.72, N = 4; min 91403.58 / max 92172.43)
  intel_cpufreq schedutil:    91642.33 (SE +/- 142.93, N = 3; min 91416.18 / max 91906.82)
  intel_cpufreq performance:  91830.56 (SE +/- 55.17, N = 4; min 91677.56 / max 91920.79)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.1.0

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D (Total Mop/s; more is better)
  P-State powersave:          2999.72 (SE +/- 29.00, N = 4; min 2943.44 / max 3073.84)
  P-State performance:        3014.77 (SE +/- 15.07, N = 4; min 2969.56 / max 3030.39)
  intel_cpufreq schedutil:    2926.26 (SE +/- 32.51, N = 4; min 2854.37 / max 3004.56)
  intel_cpufreq performance:  2978.52 (SE +/- 5.86, N = 4; min 2967.29 / max 2988.91)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.1.0

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s; more is better)
  P-State powersave:          116754.16 (SE +/- 333.81, N = 10; min 114463.06 / max 117996.91)
  P-State performance:        120084.27 (SE +/- 89.79, N = 10; min 119574.58 / max 120655.3)
  intel_cpufreq schedutil:    117869.49 (SE +/- 233.06, N = 10; min 116191.23 / max 118738.5)
  intel_cpufreq performance:  120124.05 (SE +/- 209.39, N = 10; min 119055.9 / max 121232.86)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.1.0

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s; more is better)
  P-State powersave:          39485.72 (SE +/- 83.38, N = 7; min 39158.68 / max 39729.32)
  P-State performance:        40261.26 (SE +/- 113.89, N = 8; min 39894.3 / max 40800.55)
  intel_cpufreq schedutil:    39430.66 (SE +/- 42.28, N = 8; min 39293.73 / max 39639.4)
  intel_cpufreq performance:  40226.02 (SE +/- 69.22, N = 8; min 39928.96 / max 40474.55)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz 2. Open MPI 4.1.0

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 30M (Seconds; fewer is better)
  P-State powersave:          14.92 (SE +/- 0.06, N = 3; min 14.84 / max 15.03)
  P-State performance:        14.79 (SE +/- 0.15, N = 15; min 14.26 / max 16.23)
  intel_cpufreq schedutil:    15.07 (SE +/- 0.14, N = 7; min 14.81 / max 15.86)
  intel_cpufreq performance:  14.49 (SE +/- 0.01, N = 3; min 14.46 / max 14.5)
  1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm

OpenFOAM 8 - Input: Motorbike 60M (Seconds; fewer is better)
  P-State powersave:          105.03 (SE +/- 0.19, N = 3; min 104.78 / max 105.41)
  P-State performance:        104.76 (SE +/- 0.09, N = 3; min 104.64 / max 104.94)
  intel_cpufreq schedutil:    105.10 (SE +/- 0.38, N = 3; min 104.47 / max 105.79)
  intel_cpufreq performance:  104.27 (SE +/- 0.16, N = 3; min 104.04 / max 104.58)
  1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm

WRF

WRF, the Weather Research and Forecasting Model, is a "next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. It features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility." Learn more via the OpenBenchmarking.org test page.

WRF 4.2.2 - Input: conus 2.5km (Seconds; fewer is better)
  P-State powersave:          9892.42
  P-State performance:        9885.56
  intel_cpufreq schedutil:    9889.91
  intel_cpufreq performance:  9875.26
  1. (F9X) gfortran options: -O2 -ftree-vectorize -funroll-loops -ffree-form -fconvert=big-endian -frecord-marker=4 -fallow-invalid-boz -lesmf_time -lwrfio_nf -lnetcdff -lnetcdf -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds; fewer is better)
  P-State powersave:          11.19 (SE +/- 0.03, N = 4; min 11.1 / max 11.25)
  P-State performance:        11.06 (SE +/- 0.05, N = 4; min 10.94 / max 11.18)
  intel_cpufreq schedutil:    11.21 (SE +/- 0.01, N = 4; min 11.19 / max 11.23)
  intel_cpufreq performance:  11.06 (SE +/- 0.02, N = 4; min 11 / max 11.09)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz

Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d (Seconds; fewer is better)
  P-State powersave:          290.41 (SE +/- 0.12, N = 3; min 290.25 / max 290.63)
  P-State performance:        290.14 (SE +/- 0.01, N = 3; min 290.13 / max 290.16)
  intel_cpufreq schedutil:    290.42 (SE +/- 0.11, N = 3; min 290.29 / max 290.64)
  intel_cpufreq performance:  289.98 (SE +/- 0.09, N = 3; min 289.85 / max 290.15)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz

ArrayFire

ArrayFire is a GPU and CPU numeric processing library; this test uses the built-in CPU and OpenCL ArrayFire benchmarks. Learn more via the OpenBenchmarking.org test page.

ArrayFire 3.7 - Test: BLAS CPU (GFLOPS; more is better)
  P-State powersave:          4538.69 (SE +/- 10.80, N = 4; min 4506.93 / max 4554.9)
  P-State performance:        4539.99 (SE +/- 24.52, N = 4; min 4478.24 / max 4597.81)
  intel_cpufreq schedutil:    4510.29 (SE +/- 36.10, N = 4; min 4426.25 / max 4580.19)
  intel_cpufreq performance:  4574.83 (SE +/- 11.74, N = 4; min 4545.38 / max 4602.84)
  1. (CXX) g++ options: -rdynamic

ArrayFire 3.7 - Test: Conjugate Gradient CPU (ms; fewer is better)
  P-State powersave:          4.292 (SE +/- 0.067, N = 15; min 3.88 / max 4.71)
  P-State performance:        6.131 (SE +/- 0.068, N = 15; min 5.33 / max 6.39)
  intel_cpufreq schedutil:    6.454 (SE +/- 0.088, N = 15; min 5.43 / max 6.89)
  intel_cpufreq performance:  4.005 (SE +/- 0.128, N = 15; min 3.52 / max 5.54)
  1. (CXX) g++ options: -rdynamic

High Performance Conjugate Gradient

HPCG, the High Performance Conjugate Gradient, is a scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s; more is better)
  P-State powersave:          40.14 (SE +/- 0.17, N = 3; min 39.95 / max 40.47)
  P-State performance:        40.65 (SE +/- 0.20, N = 3; min 40.35 / max 41.03)
  intel_cpufreq schedutil:    40.07 (SE +/- 0.12, N = 3; min 39.92 / max 40.3)
  intel_cpufreq performance:  40.83 (SE +/- 0.21, N = 3; min 40.42 / max 41.04)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms (ns/day; more is better)
  P-State powersave:          35.50 (SE +/- 0.06, N = 3; min 35.37 / max 35.58)
  P-State performance:        35.83 (SE +/- 0.05, N = 3; min 35.74 / max 35.89)
  intel_cpufreq schedutil:    35.79 (SE +/- 0.04, N = 3; min 35.71 / max 35.84)
  intel_cpufreq performance:  35.85 (SE +/- 0.05, N = 3; min 35.8 / max 35.95)
  1. (CXX) g++ options: -O3 -pthread -lm

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns; fewer is better)
  P-State powersave:          0.27093 (SE +/- 0.00040, N = 3)
  P-State performance:        0.27168 (SE +/- 0.00055, N = 3)
  intel_cpufreq schedutil:    0.27227 (SE +/- 0.00043, N = 3)
  intel_cpufreq performance:  0.27051 (SE +/- 0.00021, N = 3)
  (Per-run min/avg/max values all round to 0.27 in the source data.)

GROMACS

This test profile benchmarks the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package with the water_GMX50 data and allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
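
For orientation, a minimal sketch of a comparable CPU-only GROMACS run is shown below; it is not the exact invocation used by the test profile, and the topol.tpr input is assumed to have been prepared beforehand with gmx grompp from the water_GMX50_bare files (rank and thread counts are illustrative for this 80-core / 160-thread system).

    # Force the non-bonded kernels onto the CPU and split the machine into
    # 4 thread-MPI ranks with 40 OpenMP threads each; -nsteps caps the run length.
    gmx mdrun -s topol.tpr -nb cpu -ntmpi 4 -ntomp 40 -nsteps 10000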

GROMACS 2021.2 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day; more is better)
  P-State powersave:          8.999 (SE +/- 0.038, N = 3; min 8.95 / max 9.08)
  P-State performance:        9.052 (SE +/- 0.048, N = 3; min 8.97 / max 9.14)
  intel_cpufreq schedutil:    9.042 (SE +/- 0.035, N = 3; min 8.98 / max 9.1)
  intel_cpufreq performance:  9.102 (SE +/- 0.011, N = 3; min 9.09 / max 9.12)
  1. (CXX) g++ options: -O3 -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute; more is better)
  P-State powersave:          480 (SE +/- 2.02, N = 3; min 476 / max 482.5)
  P-State performance:        483 (SE +/- 5.43, N = 3; min 477 / max 494)
  intel_cpufreq schedutil:    483 (SE +/- 5.29, N = 3; min 474.5 / max 492.5)
  intel_cpufreq performance:  476 (SE +/- 0.73, N = 3; min 475 / max 477.5)
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

ONNX Runtime 1.6 - Model: fcn-resnet101-11 - Device: OpenMP CPU (Inferences Per Minute; more is better)
  P-State powersave:          196 (SE +/- 2.36, N = 3; min 192 / max 200)
  P-State performance:        194 (SE +/- 0.44, N = 3; min 193.5 / max 195)
  intel_cpufreq schedutil:    194 (SE +/- 1.89, N = 3; min 191 / max 197.5)
  intel_cpufreq performance:  197 (SE +/- 0.29, N = 3; min 196 / max 197)
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

ONNX Runtime 1.6 - Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute; more is better)
  P-State powersave:          8212 (SE +/- 58.56, N = 12; min 7618 / max 8342)
  P-State performance:        8392 (SE +/- 25.12, N = 3; min 8348.5 / max 8435.5)
  intel_cpufreq schedutil:    8334 (SE +/- 65.66, N = 3; min 8208.5 / max 8430)
  intel_cpufreq performance:  8380 (SE +/- 24.06, N = 3; min 8348.5 / max 8427.5)
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

ONNX Runtime 1.6 - Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute; more is better)
  P-State powersave:          6898 (SE +/- 205.17, N = 12; min 5184 / max 7233)
  P-State performance:        6626 (SE +/- 200.91, N = 12; min 5152 / max 7219.5)
  intel_cpufreq schedutil:    6756 (SE +/- 210.11, N = 12; min 5207 / max 7271)
  intel_cpufreq performance:  6955 (SE +/- 156.45, N = 12; min 5786 / max 7263)
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

ONNX Runtime 1.6 - Model: bertsquad-10 - Device: OpenMP CPU (Inferences Per Minute; more is better)
  P-State powersave:          509 (SE +/- 10.98, N = 9; min 461 / max 546)
  P-State performance:        477 (SE +/- 2.17, N = 3; min 473.5 / max 481)
  intel_cpufreq schedutil:    504 (SE +/- 8.53, N = 12; min 472.5 / max 545)
  intel_cpufreq performance:  503 (SE +/- 6.15, N = 12; min 471 / max 534.5)
  1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.7 - Input: AUSURF112 (Seconds; fewer is better)
  P-State powersave:          1106.69 (SE +/- 19.34, N = 9; min 1061.03 / max 1201.44)
  P-State performance:        1178.52 (SE +/- 1.74, N = 3; min 1175.75 / max 1181.74)
  intel_cpufreq schedutil:    1156.49 (SE +/- 22.92, N = 9; min 1061 / max 1240.58)
  intel_cpufreq performance:  1175.69 (SE +/- 17.96, N = 9; min 1087.89 / max 1232.3)
  1. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, with OpenCL / CUDA / OpenMP test cases of automotive workloads for evaluating programming models in the context of autonomous driving. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute; more is better)
  P-State powersave:          831.33 (SE +/- 7.84, N = 15; min 797.28 / max 884.98)
  P-State performance:        1020.77 (SE +/- 2.39, N = 3; min 1017.19 / max 1025.31)
  intel_cpufreq schedutil:    884.75 (SE +/- 9.84, N = 15; min 828.25 / max 952.64)
  intel_cpufreq performance:  1021.62 (SE +/- 1.25, N = 3; min 1019.65 / max 1023.94)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute; more is better)
  P-State powersave:          742.30 (SE +/- 18.61, N = 15)
  P-State performance:        924.84 (SE +/- 4.79, N = 3)
  intel_cpufreq schedutil:    743.01 (SE +/- 19.99, N = 14)
  intel_cpufreq performance:  914.70 (SE +/- 4.97, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp