Xeon Scalable Ice Lake P-State Governor

Intel Xeon Platinum 8380 Ice Lake P-State CPU frequency scaling Linux benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2105267-IB-XEONSCALA38
This result file spans tests within the following categories:

AV1: 3 tests
BLAS (Basic Linear Algebra Sub-Routine): 4 tests
C++ Boost: 2 tests
Timed Code Compilation: 5 tests
C/C++ Compiler: 10 tests
CPU Massive: 14 tests
Creator Workloads: 12 tests
Encoding: 6 tests
Fortran: 7 tests
Game Development: 4 tests
HPC - High Performance Computing: 16 tests
LAPACK (Linear Algebra Pack): 4 tests
Machine Learning: 2 tests
Molecular Dynamics: 6 tests
MPI Benchmarks: 5 tests
Multi-Core: 26 tests
NVIDIA GPU Compute: 4 tests
Intel oneAPI: 4 tests
OpenMPI: 11 tests
Programmer / Developer System Benchmarks: 7 tests
Python: 5 tests
Renderers: 2 tests
Scientific Computing: 8 tests
Server CPU: 11 tests
Video Encoding: 6 tests
Common Workstation Benchmarks: 2 tests


Result Runs

  Result Identifier           Date Run       Test Duration
  P-State powersave           May 23 2021    15 Hours, 42 Minutes
  P-State performance         May 24 2021    11 Hours, 14 Minutes
  intel_cpufreq schedutil     May 24 2021    15 Hours, 54 Minutes
  intel_cpufreq performance   May 25 2021    13 Hours, 43 Minutes
  Average                                    14 Hours, 8 Minutes


System Details

Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Device 0998
Memory: 16 x 32 GB DDR4-3200MT/s Hynix HMA84GR7CJR4N-XN
Disk: 800GB INTEL SSDPF21Q800GB
Graphics: ASPEED
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 21.04
Kernel: 5.11.0-17-generic (x86_64)
Desktop: GNOME Shell 3.38.4
Display Server: X Server
Vulkan: 1.0.2
Compiler: GCC 10.3.0
File-System: ext4
Screen Resolution: 1024x768

System Logs

- Transparent Huge Pages: madvise
- GCC configured with: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-gDeRY6/gcc-10-10.3.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling governors per run: P-State powersave = intel_pstate powersave; P-State performance = intel_pstate performance; intel_cpufreq schedutil = intel_cpufreq schedutil; intel_cpufreq performance = intel_cpufreq performance. CPU Microcode for all runs: 0xd000270
- Python 3.9.4
- Security: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS, IBPB: conditional, RSB filling; srbds: Not affected; tsx_async_abort: Not affected

[Result overview graph: relative performance of the four governors (normalized, 100% to ~290%) across AOM AV1, Kvazaar, PostgreSQL pgbench, SVT-VP9, SVT-HEVC, SVT-AV1, PJSIP, Timed Godot Game Engine Compilation, Darmstadt Automotive Parallel Heterogeneous Suite, Zstd Compression, ArrayFire, OpenVKL, libavif avifenc, Timed Linux Kernel Compilation, Timed Node.js Compilation, Rodinia, TensorFlow Lite, NAS Parallel Benchmarks, Quantum ESPRESSO, Embree, OSPray, Blender, Timed LLVM Compilation, Intel Open Image Denoise, Timed Mesa Compilation, OpenFOAM, ONNX Runtime, High Performance Conjugate Gradient, Stockfish, GROMACS, LAMMPS Molecular Dynamics Simulator, Liquid-DSP, Xcompact3d Incompact3d, RELION, NAMD, WRF, and NWChem.]

[Per-watt result overview graph: performance-per-watt geometric means (P.W.G.M) for the four governors (normalized, 100% to ~126%) across AOM AV1, NAS Parallel Benchmarks, SVT-HEVC, Intel Open Image Denoise, OSPray, Zstd Compression, Kvazaar, OpenVKL, SVT-AV1, Stockfish, Embree, ArrayFire, Darmstadt Automotive Parallel Heterogeneous Suite, PJSIP, SVT-VP9, ONNX Runtime, Liquid-DSP, High Performance Conjugate Gradient, LAMMPS Molecular Dynamics Simulator, and GROMACS.]
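The overview graphs summarize each run with a geometric mean across the individual tests (the per-watt overview reports performance-per-watt geometric means). A minimal sketch of that computation:

```python
import math

def geometric_mean(values):
    """Geometric mean of positive results; used for overall scores
    so that no single test dominates the summary."""
    return math.exp(sum(math.log(v) for v in values) / len(values))
```

Unlike an arithmetic mean, doubling one test's score multiplies the overall result by the same factor regardless of that test's absolute magnitude.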

[Combined results table: side-by-side values for every test under each of the four governors, spanning DAPHNE, RELION, WRF, ONNX Runtime, TensorFlow Lite, GROMACS, LAMMPS, HPCG, NAS Parallel Benchmarks, Rodinia, NAMD, ArrayFire, NWChem, OpenFOAM, Incompact3d, Quantum ESPRESSO, Stockfish, LLVM/Linux kernel/Node.js/Mesa/Godot compilation, Zstd, Kvazaar, AOM AV1, SVT-VP9/AV1/HEVC, Blender, avifenc, Embree, Open Image Denoise, OpenVKL, OSPray, Liquid-DSP, pgbench, and PJSIP. See the individual benchmark results below.]

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, with OpenCL, CUDA, and OpenMP test cases for automotive workloads, intended for evaluating programming models in the context of autonomous vehicle driving. Learn more via the OpenBenchmarking.org test page.

Backend: OpenMP - Kernel: Euclidean Cluster
Test Cases Per Minute, more is better. (CXX) g++ options: -O3 -std=c++11 -fopenmp

  intel_cpufreq performance   1021.62   (SE +/- 1.25, N = 3; min 1019.65 / max 1023.94)
  intel_cpufreq schedutil      884.75   (SE +/- 9.84, N = 15; min 828.25 / max 952.64)
  P-State performance         1020.77   (SE +/- 2.39, N = 3; min 1017.19 / max 1025.31)
  P-State powersave            831.33   (SE +/- 7.84, N = 15; min 797.28 / max 884.98)

Backend: OpenMP - Kernel: NDT Mapping
Test Cases Per Minute, more is better.

  intel_cpufreq performance   914.70   (SE +/- 4.97, N = 3; min 907.75 / max 924.32)
  intel_cpufreq schedutil     743.01   (SE +/- 19.99, N = 14; min 599.99 / max 867.36)
  P-State performance         924.84   (SE +/- 4.79, N = 3; min 915.28 / max 930.07)
  P-State powersave           742.30   (SE +/- 18.61, N = 15; min 595.43 / max 826.32)

Backend: OpenMP - Kernel: Points2Image
Test Cases Per Minute, more is better.

  intel_cpufreq performance   13249.68   (SE +/- 100.42, N = 15; min 12570.34 / max 13852.01)
  intel_cpufreq schedutil      7077.76   (SE +/- 115.67, N = 12; min 6389.05 / max 7664.41)
  P-State performance         13527.24   (SE +/- 114.27, N = 15; min 12950.10 / max 14298.38)
  P-State powersave            6634.58   (SE +/- 86.30, N = 3; min 6484.21 / max 6783.15)
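To put the spread in perspective, the relative gain can be computed directly from the reported averages (the helper name is my own):

```python
def percent_faster(a, b):
    """Percent by which result a exceeds result b
    (both in a more-is-better unit such as test cases per minute)."""
    return (a / b - 1.0) * 100.0

# Points2Image averages from above:
# intel_cpufreq performance vs. P-State powersave, roughly a 2x gap.
gain = percent_faster(13249.68, 6634.58)
```

For Points2Image this works out to roughly double the throughput for the performance governors over powersave/schedutil, the largest governor-induced gap in this comparison.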

RELION

RELION - REgularised LIkelihood OptimisatioN - is a stand-alone computer program for Maximum A Posteriori refinement of (multiple) 3D reconstructions or 2D class averages in cryo-electron microscopy (cryo-EM). It is developed in the research group of Sjors Scheres at the MRC Laboratory of Molecular Biology. Learn more via the OpenBenchmarking.org test page.

RELION 3.1.1 - Test: Basic - Device: CPU
Seconds, fewer is better. (CXX) g++ options: -fopenmp -std=c++0x -O3 -rdynamic -ldl -ltiff -lfftw3f -lfftw3 -lpng -pthread -lmpi_cxx -lmpi

  intel_cpufreq performance   349.03   (SE +/- 1.39, N = 3; min 347.50 / max 351.79)
  intel_cpufreq schedutil     349.44   (SE +/- 1.38, N = 3; min 347.93 / max 352.20)
  P-State performance         349.99   (SE +/- 1.34, N = 3; min 348.54 / max 352.66)
  P-State powersave           350.27   (SE +/- 1.16, N = 3; min 348.64 / max 352.52)

WRF

WRF, the Weather Research and Forecasting Model, is a "next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. It features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility." Learn more via the OpenBenchmarking.org test page.

WRF 4.2.2 - Input: conus 2.5km
Seconds, fewer is better. (F9X) gfortran options: -O2 -ftree-vectorize -funroll-loops -ffree-form -fconvert=big-endian -frecord-marker=4 -fallow-invalid-boz -lesmf_time -lwrfio_nf -lnetcdff -lnetcdf -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz

  intel_cpufreq performance   9875.26
  intel_cpufreq schedutil     9889.91
  P-State performance         9885.56
  P-State powersave           9892.42

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU
Inferences Per Minute, more is better. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

  intel_cpufreq performance   476.17   (SE +/- 0.73, N = 3; min 475.00 / max 477.50)
  intel_cpufreq schedutil     482.50   (SE +/- 5.29, N = 3; min 474.50 / max 492.50)
  P-State performance         483.17   (SE +/- 5.43, N = 3; min 477.00 / max 494.00)
  P-State powersave           480.00   (SE +/- 2.02, N = 3; min 476.00 / max 482.50)

ONNX Runtime 1.6 - Model: fcn-resnet101-11 - Device: OpenMP CPU
Inferences Per Minute, more is better.

  intel_cpufreq performance   196.50   (SE +/- 0.29, N = 3; min 196.00 / max 197.00)
  intel_cpufreq schedutil     194.00   (SE +/- 1.89, N = 3; min 191.00 / max 197.50)
  P-State performance         194.17   (SE +/- 0.44, N = 3; min 193.50 / max 195.00)
  P-State powersave           195.50   (SE +/- 2.36, N = 3; min 192.00 / max 200.00)

ONNX Runtime 1.6 - Model: shufflenet-v2-10 - Device: OpenMP CPU
Inferences Per Minute, more is better.

  intel_cpufreq performance   8380.33   (SE +/- 24.06, N = 3; min 8348.50 / max 8427.50)
  intel_cpufreq schedutil     8334.17   (SE +/- 65.66, N = 3; min 8208.50 / max 8430.00)
  P-State performance         8392.33   (SE +/- 25.12, N = 3; min 8348.50 / max 8435.50)
  P-State powersave           8212.08   (SE +/- 58.56, N = 12; min 7618.00 / max 8342.00)

ONNX Runtime 1.6 - Model: super-resolution-10 - Device: OpenMP CPU
Inferences Per Minute, more is better.

  intel_cpufreq performance   6955.00   (SE +/- 156.45, N = 12; min 5786.00 / max 7263.00)
  intel_cpufreq schedutil     6755.50   (SE +/- 210.11, N = 12; min 5207.00 / max 7271.00)
  P-State performance         6625.63   (SE +/- 200.91, N = 12; min 5152.00 / max 7219.50)
  P-State powersave           6897.71   (SE +/- 205.17, N = 12; min 5184.00 / max 7233.00)

ONNX Runtime 1.6 - Model: bertsquad-10 - Device: OpenMP CPU
Inferences Per Minute, more is better.

  intel_cpufreq performance   503.00   (SE +/- 6.15, N = 12; min 471.00 / max 534.50)
  intel_cpufreq schedutil     503.96   (SE +/- 8.53, N = 12; min 472.50 / max 545.00)
  P-State performance         477.17   (SE +/- 2.17, N = 3; min 473.50 / max 481.00)
  P-State powersave           508.50   (SE +/- 10.98, N = 9; min 461.00 / max 546.00)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant
Microseconds, fewer is better.

  intel_cpufreq performance   33581.27   (SE +/- 61.65, N = 3; min 33486.20 / max 33696.80)
  intel_cpufreq schedutil     35239.03   (SE +/- 352.77, N = 3; min 34751.30 / max 35924.40)
  P-State performance         33656.27   (SE +/- 117.93, N = 3; min 33474.30 / max 33877.20)
  P-State powersave           41126.78   (SE +/- 711.79, N = 12; min 36357.80 / max 44326.80)

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2
Microseconds, fewer is better.

  intel_cpufreq performance   572679   (SE +/- 491.89, N = 3; min 571700 / max 573250)
  intel_cpufreq schedutil     571632   (SE +/- 1875.50, N = 3; min 569343 / max 575350)
  P-State performance         572149   (SE +/- 2262.00, N = 3; min 568264 / max 576099)
  P-State powersave           572065   (SE +/- 1289.77, N = 3; min 570605 / max 574637)

TensorFlow Lite 2020-08-23 - Model: Inception V4
Microseconds, fewer is better.

  intel_cpufreq performance   677421   (SE +/- 6399.11, N = 3; min 668260 / max 689741)
  intel_cpufreq schedutil     672254   (SE +/- 2013.17, N = 3; min 668241 / max 674541)
  P-State performance         663707   (SE +/- 2267.77, N = 3; min 660981 / max 668209)
  P-State powersave           682153   (SE +/- 5220.61, N = 3; min 674804 / max 692251)

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2021.2 - Implementation: MPI CPU - Input: water_GMX50_bare
Ns Per Day, more is better. (CXX) g++ options: -O3 -pthread

  intel_cpufreq performance   9.102   (SE +/- 0.011, N = 3; min 9.09 / max 9.12)
  intel_cpufreq schedutil     9.042   (SE +/- 0.035, N = 3; min 8.98 / max 9.10)
  P-State performance         9.052   (SE +/- 0.048, N = 3; min 8.97 / max 9.14)
  P-State powersave           8.999   (SE +/- 0.038, N = 3; min 8.95 / max 9.08)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms
ns/day, more is better. (CXX) g++ options: -O3 -pthread -lm

  intel_cpufreq performance   35.85   (SE +/- 0.05, N = 3; min 35.80 / max 35.95)
  intel_cpufreq schedutil     35.79   (SE +/- 0.04, N = 3; min 35.71 / max 35.84)
  P-State performance         35.83   (SE +/- 0.05, N = 3; min 35.74 / max 35.89)
  P-State powersave           35.50   (SE +/- 0.06, N = 3; min 35.37 / max 35.58)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern, real-world workloads, in contrast to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1
GFLOP/s, more is better. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi

  intel_cpufreq performance   40.83   (SE +/- 0.21, N = 3; min 40.42 / max 41.04)
  intel_cpufreq schedutil     40.07   (SE +/- 0.12, N = 3; min 39.92 / max 40.30)
  P-State performance         40.65   (SE +/- 0.20, N = 3; min 40.35 / max 41.03)
  P-State powersave           40.14   (SE +/- 0.17, N = 3; min 39.95 / max 40.47)
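Each result above reports a standard error (SE +/- x, N = n) over its N runs, i.e. the sample standard deviation divided by the square root of N. A minimal sketch:

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N).
    This is the 'SE +/-' figure reported alongside each average."""
    return statistics.stdev(samples) / math.sqrt(len(samples))
```

A small SE relative to the gap between two governors (as with HPCG here) suggests the difference is real rather than run-to-run noise.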

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C
Total Mop/s, more is better. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz; Open MPI 4.1.0

  intel_cpufreq performance   197821.88   (SE +/- 230.36, N = 4; min 197423.37 / max 198486.89)
  intel_cpufreq schedutil     191705.02   (SE +/- 322.38, N = 4; min 191071.55 / max 192503.07)
  P-State performance         197815.31   (SE +/- 313.16, N = 4; min 196947.42 / max 198332.24)
  P-State powersave           191166.41   (SE +/- 416.58, N = 4; min 190433.54 / max 192039.54)

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C
Total Mop/s, more is better.

  intel_cpufreq performance   7705.44   (SE +/- 128.21, N = 15; min 6545.11 / max 8291.96)
  intel_cpufreq schedutil     6157.58   (SE +/- 121.94, N = 15; min 5002.25 / max 6669.01)
  P-State performance         7610.18   (SE +/- 165.19, N = 15; min 6085.07 / max 8332.98)
  P-State powersave           6051.90   (SE +/- 109.82, N = 12; min 5120.33 / max 6487.12)

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D
Total Mop/s, more is better.

  intel_cpufreq performance   8938.09   (SE +/- 38.79, N = 4; min 8880.61 / max 9049.68)
  intel_cpufreq schedutil     8634.30   (SE +/- 22.19, N = 3; min 8595.29 / max 8672.13)
  P-State performance         8920.10   (SE +/- 60.08, N = 13; min 8360.09 / max 9191.16)
  P-State powersave           8620.97   (SE +/- 65.72, N = 3; min 8490.11 / max 8696.99)

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C
Total Mop/s, more is better.

  intel_cpufreq performance   100763.11   (SE +/- 63.65, N = 7; min 100515.52 / max 100957.17)
  intel_cpufreq schedutil      91970.99   (SE +/- 807.72, N = 7; min 89431.45 / max 94436.99)
  P-State performance         100543.72   (SE +/- 83.85, N = 7; min 100220.81 / max 100841.36)
  P-State powersave            90989.73   (SE +/- 842.11, N = 7; min 88152.28 / max 95161.66)

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C
Total Mop/s, more is better.

  intel_cpufreq performance   186973.06   (SE +/- 495.98, N = 4; min 185984.70 / max 188227.21)
  intel_cpufreq schedutil     177878.68   (SE +/- 196.76, N = 4; min 177474.51 / max 178398.74)
  P-State performance         187300.47   (SE +/- 364.13, N = 4; min 186706.87 / max 188292.05)
  P-State powersave           174911.02   (SE +/- 1423.68, N = 4; min 171769.36 / max 178187.71)

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B
Total Mop/s, more is better.

  intel_cpufreq performance   123176.72   (SE +/- 133.35, N = 9; min 122615.53 / max 123909.89)
  intel_cpufreq schedutil     115501.41   (SE +/- 203.66, N = 8; min 114547.20 / max 116322.05)
  P-State performance         123544.09   (SE +/- 134.33, N = 9; min 123048.82 / max 124187.87)
  P-State powersave           114946.46   (SE +/- 480.22, N = 8; min 112909.96 / max 117270.82)

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C - Total Mop/s, More Is Better
    intel_cpufreq performance : Avg: 91830.56 (SE +/- 55.17, N = 4; Min: 91677.56 / Max: 91920.79)
    intel_cpufreq schedutil   : Avg: 91642.33 (SE +/- 142.93, N = 3; Min: 91416.18 / Max: 91906.82)
    P-State performance       : Avg: 91817.76 (SE +/- 159.72, N = 4; Min: 91403.58 / Max: 92172.43)
    P-State powersave         : Avg: 91313.60 (SE +/- 139.83, N = 3; Min: 91068.82 / Max: 91553.12)

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D - Total Mop/s, More Is Better
    intel_cpufreq performance : Avg: 2978.52 (SE +/- 5.86, N = 4; Min: 2967.29 / Max: 2988.91)
    intel_cpufreq schedutil   : Avg: 2926.26 (SE +/- 32.51, N = 4; Min: 2854.37 / Max: 3004.56)
    P-State performance       : Avg: 3014.77 (SE +/- 15.07, N = 4; Min: 2969.56 / Max: 3030.39)
    P-State powersave         : Avg: 2999.72 (SE +/- 29.00, N = 4; Min: 2943.44 / Max: 3073.84)

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C - Total Mop/s, More Is Better
    intel_cpufreq performance : Avg: 120124.05 (SE +/- 209.39, N = 10; Min: 119055.9 / Max: 121232.86)
    intel_cpufreq schedutil   : Avg: 117869.49 (SE +/- 233.06, N = 10; Min: 116191.23 / Max: 118738.5)
    P-State performance       : Avg: 120084.27 (SE +/- 89.79, N = 10; Min: 119574.58 / Max: 120655.3)
    P-State powersave         : Avg: 116754.16 (SE +/- 333.81, N = 10; Min: 114463.06 / Max: 117996.91)

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C - Total Mop/s, More Is Better
    intel_cpufreq performance : Avg: 40226.02 (SE +/- 69.22, N = 8; Min: 39928.96 / Max: 40474.55)
    intel_cpufreq schedutil   : Avg: 39430.66 (SE +/- 42.28, N = 8; Min: 39293.73 / Max: 39639.4)
    P-State performance       : Avg: 40261.26 (SE +/- 113.89, N = 8; Min: 39894.3 / Max: 40800.55)
    P-State powersave         : Avg: 39485.72 (SE +/- 83.38, N = 7; Min: 39158.68 / Max: 39729.32)

Rodinia

Rodinia is a benchmark suite focused on accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile currently utilizes select OpenCL, NVIDIA CUDA, and OpenMP test binaries. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP CFD Solver - Seconds, Fewer Is Better
    intel_cpufreq performance : Avg: 4.729 (SE +/- 0.019, N = 8; Min: 4.62 / Max: 4.82)
    intel_cpufreq schedutil   : Avg: 4.710 (SE +/- 0.014, N = 8; Min: 4.64 / Max: 4.76)
    P-State performance       : Avg: 4.746 (SE +/- 0.036, N = 8; Min: 4.61 / Max: 4.94)
    P-State powersave         : Avg: 4.827 (SE +/- 0.041, N = 15; Min: 4.67 / Max: 5.19)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP LavaMD - Seconds, Fewer Is Better
    intel_cpufreq performance : Avg: 39.75 (SE +/- 0.47, N = 3; Min: 39.01 / Max: 40.62)
    intel_cpufreq schedutil   : Avg: 39.20 (SE +/- 0.18, N = 3; Min: 38.91 / Max: 39.52)
    P-State performance       : Avg: 39.42 (SE +/- 0.13, N = 3; Min: 39.18 / Max: 39.61)
    P-State powersave         : Avg: 39.82 (SE +/- 0.23, N = 3; Min: 39.38 / Max: 40.14)

Rodinia 3.1 - Test: OpenMP Leukocyte - Seconds, Fewer Is Better
    intel_cpufreq performance : Avg: 47.11 (SE +/- 0.69, N = 15; Min: 42.18 / Max: 52.63)
    intel_cpufreq schedutil   : Avg: 63.29 (SE +/- 1.35, N = 15; Min: 55.02 / Max: 73.3)
    P-State performance       : Avg: 46.36 (SE +/- 0.64, N = 12; Min: 43.26 / Max: 50.25)
    P-State powersave         : Avg: 69.05 (SE +/- 0.69, N = 6; Min: 65.81 / Max: 70.22)

Rodinia 3.1 - Test: OpenMP Streamcluster - Seconds, Fewer Is Better
    intel_cpufreq performance : Avg: 7.727 (SE +/- 0.076, N = 6; Min: 7.5 / Max: 7.95)
    intel_cpufreq schedutil   : Avg: 7.712 (SE +/- 0.044, N = 6; Min: 7.59 / Max: 7.86)
    P-State performance       : Avg: 7.816 (SE +/- 0.078, N = 15; Min: 7.24 / Max: 8.38)
    P-State powersave         : Avg: 7.710 (SE +/- 0.058, N = 15; Min: 7.4 / Max: 8.06)

Rodinia 3.1 - Test: OpenMP HotSpot3D - Seconds, Fewer Is Better
    intel_cpufreq performance : Avg: 104.09 (SE +/- 0.52, N = 3; Min: 103.44 / Max: 105.11)
    intel_cpufreq schedutil   : Avg: 104.18 (SE +/- 0.27, N = 3; Min: 103.89 / Max: 104.71)
    P-State performance       : Avg: 104.19 (SE +/- 0.01, N = 3; Min: 104.17 / Max: 104.19)
    P-State powersave         : Avg: 104.34 (SE +/- 0.44, N = 3; Min: 103.74 / Max: 105.2)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms - days/ns, Fewer Is Better
    intel_cpufreq performance : Avg: 0.27051 (SE +/- 0.00021, N = 3)
    intel_cpufreq schedutil   : Avg: 0.27227 (SE +/- 0.00043, N = 3)
    P-State performance       : Avg: 0.27168 (SE +/- 0.00055, N = 3)
    P-State powersave         : Avg: 0.27093 (SE +/- 0.00040, N = 3)
    (Min/Avg/Max all round to 0.27 for the four configurations.)
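NAMD reports days/ns, the wall-clock days needed to simulate one nanosecond, so lower is better; the more familiar ns/day figure is simply its reciprocal. A quick conversion of the averages above (values copied from the table; the helper name is my own):

```python
def ns_per_day(days_per_ns: float) -> float:
    """Convert NAMD's days/ns metric to ns/day (its reciprocal)."""
    return 1.0 / days_per_ns

# Average days/ns results from the ATPase run above
results = {
    "intel_cpufreq performance": 0.27051,
    "intel_cpufreq schedutil":   0.27227,
    "P-State performance":       0.27168,
    "P-State powersave":         0.27093,
}
for name, d in results.items():
    print(f"{name}: {ns_per_day(d):.3f} ns/day")  # roughly 3.67-3.70 ns/day for all four
```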

ArrayFire

ArrayFire is a GPU and CPU numeric processing library; this test uses the built-in CPU and OpenCL ArrayFire benchmarks. Learn more via the OpenBenchmarking.org test page.

ArrayFire 3.7 - Test: BLAS CPU - GFLOPS, More Is Better
    intel_cpufreq performance : Avg: 4574.83 (SE +/- 11.74, N = 4; Min: 4545.38 / Max: 4602.84)
    intel_cpufreq schedutil   : Avg: 4510.29 (SE +/- 36.10, N = 4; Min: 4426.25 / Max: 4580.19)
    P-State performance       : Avg: 4539.99 (SE +/- 24.52, N = 4; Min: 4478.24 / Max: 4597.81)
    P-State powersave         : Avg: 4538.69 (SE +/- 10.80, N = 4; Min: 4506.93 / Max: 4554.9)
1. (CXX) g++ options: -rdynamic

ArrayFire 3.7 - Test: Conjugate Gradient CPU - ms, Fewer Is Better
    intel_cpufreq performance : Avg: 4.005 (SE +/- 0.128, N = 15; Min: 3.52 / Max: 5.54)
    intel_cpufreq schedutil   : Avg: 6.454 (SE +/- 0.088, N = 15; Min: 5.43 / Max: 6.89)
    P-State performance       : Avg: 6.131 (SE +/- 0.068, N = 15; Min: 5.33 / Max: 6.39)
    P-State powersave         : Avg: 4.292 (SE +/- 0.067, N = 15; Min: 3.88 / Max: 4.71)

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2 - Input: C240 Buckyball - Seconds, Fewer Is Better
    intel_cpufreq performance : 1882.2
    intel_cpufreq schedutil   : 1880.0
    P-State performance       : 1888.2
    P-State powersave         : 1882.3
1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lz -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 30M - Seconds, Fewer Is Better
    intel_cpufreq performance : Avg: 14.49 (SE +/- 0.01, N = 3)
    intel_cpufreq schedutil   : Avg: 15.07 (SE +/- 0.14, N = 7)
    P-State performance       : Avg: 14.79 (SE +/- 0.15, N = 15)
    P-State powersave         : Avg: 14.92 (SE +/- 0.06, N = 3)
1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm