Core i9 10980XE Ryzen 9 3990X - Pop OS Skylake Opt Benchmark

Benchmarks by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2007151-NE-2007118NE34
Test categories represented in this comparison:

AV1 3 Tests
Bioinformatics 2 Tests
C++ Boost Tests 2 Tests
C/C++ Compiler Tests 11 Tests
CPU Massive 21 Tests
Creator Workloads 9 Tests
Database Test Suite 2 Tests
Encoding 3 Tests
Fortran Tests 4 Tests
HPC - High Performance Computing 16 Tests
Imaging 2 Tests
Machine Learning 6 Tests
Molecular Dynamics 3 Tests
MPI Benchmarks 5 Tests
Multi-Core 15 Tests
NVIDIA GPU Compute 3 Tests
OpenMPI Tests 7 Tests
Programmer / Developer System Benchmarks 5 Tests
Python 7 Tests
Raytracing 2 Tests
Renderers 2 Tests
Scientific Computing 6 Tests
Server 2 Tests
Server CPU Tests 16 Tests
Single-Threaded 4 Tests
Video Encoding 3 Tests
Common Workstation Benchmarks 3 Tests


Run Management

  Result Identifier    Date Run        Test Duration
  Default              July 11 2020    5 Hours, 46 Minutes
  Optimized            July 11 2020    5 Hours, 21 Minutes
  Optimized Round 2    July 15 2020    5 Hours, 19 Minutes
  Average                              5 Hours, 29 Minutes



System details (the same hardware and software stack was used for all three runs):

  Processor: AMD Ryzen Threadripper 3990X 64-Core @ 2.90GHz (64 Cores / 128 Threads)
  Motherboard: System76 Thelio Major (F4c Z5 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 126GB
  Disk: Samsung SSD 970 EVO Plus 500GB
  Graphics: AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 8GB (1750/875MHz)
  Audio: AMD Navi 10 HDMI Audio
  Monitor: LG Ultra HD
  Network: Intel I211 + Intel Wi-Fi 6 AX200
  OS: Pop 20.04
  Kernel: 5.4.0-7634-generic (x86_64)
  Desktop: GNOME Shell 3.36.3
  Display Server: X Server 1.20.8
  Display Driver: amdgpu 19.1.0
  OpenGL: 4.6 Mesa 20.0.8 (LLVM 10.0.0)
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 3840x2160

System Logs / Notes:

  - Kernel parameter: snd_usb_audio.ignore_ctl_error=1
  - Compiler configuration, Default: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Compiler configuration, Optimized: identical to Default except that --with-arch=skylake is added (--with-tune=generic retained)
  - Compiler configuration, Optimized Round 2: identical to Optimized except that --with-tune=skylake replaces --with-tune=generic
  - Scaling Governor: acpi-cpufreq ondemand
  - CPU Microcode: 0x8301025
  - Python: Python 2.7.18rc1 + Python 3.8.2
  - Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + tsx_async_abort: Not affected
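The practical difference between the three runs is therefore the GCC configuration: the Optimized toolchains default to Skylake code generation (--with-arch=skylake, plus --with-tune=skylake for Round 2), which behaves as if -march=skylake / -mtune=skylake were passed on every compile. A minimal sketch of what that changes for a typical hot loop (the file and function names are hypothetical, chosen only for illustration, and this is not code from any of the benchmarks below):

    /* saxpy.c - hypothetical example loop, not from any benchmark above.
     *
     * Built by the "Default" toolchain (generic x86-64), -O3 auto-vectorizes
     * this with SSE2 at best; a GCC configured with --with-arch=skylake acts
     * as if -march=skylake were always given, so the same -O3 build may use
     * AVX2/FMA instructions and Skylake tuning instead.
     *
     *   gcc -O3 -c saxpy.c                                 (toolchain default arch)
     *   gcc -O3 -march=skylake -mtune=skylake -c saxpy.c   (explicit equivalent)
     */
    void saxpy(float *restrict y, const float *restrict x, float a, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }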

[Result Overview chart - Default vs. Optimized vs. Optimized Round 2, normalized (scale roughly 100% to 122%), covering: Himeno Benchmark, Redis, dav1d, glibc bench, SVT-AV1, Darmstadt Automotive Parallel Heterogeneous Suite, LAMMPS Molecular Dynamics Simulator, Timed MrBayes Analysis, YafaRay, Coremark, BRL-CAD, Rodinia, SQLite Speedtest, Mlpack Benchmark, POV-Ray, Cython benchmark, ACES DGEMM, AOM AV1, Numpy Benchmark, Hugin, PyPerformance, Scikit-Learn, GROMACS, PyBench, PlaidML, Stockfish, Numenta Anomaly Benchmark, OCRMyPDF, AI Benchmark Alpha, NAS Parallel Benchmarks, Montage Astronomical Image Mosaic Engine, John The Ripper, Zstd Compression, High Performance Conjugate Gradient, CloverLeaf.]

[Condensed side-by-side results table: every individual test/option with its Default, Optimized, and Optimized Round 2 values. The per-test results are charted individually in the sections that follow.]

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various inference benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPU (FPS, More Is Better)
  Optimized Round 2: 3.46  (SE +/- 0.00, N = 3; Min: 3.46 / Avg: 3.46 / Max: 3.46)
  Optimized:         3.44  (SE +/- 0.00, N = 3; Min: 3.44 / Avg: 3.44 / Max: 3.45)
  Default:           3.44  (SE +/- 0.00, N = 3; Min: 3.44 / Avg: 3.44 / Max: 3.45)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Labs focused on supercomputer testing with more modern, real-world workloads than HPCC. Learn more via the OpenBenchmarking.org test page.
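For orientation, the kernel class HPCG stresses is the classic conjugate-gradient loop: repeated matrix-vector products, dot products, and vector updates. The sketch below is not the HPCG code (which solves a sparse 27-point stencil problem with a multigrid preconditioner); it is a minimal, dense, unpreconditioned CG solve on a tiny assumed test matrix, shown only to illustrate the iteration being timed:

    /* Minimal conjugate-gradient sketch (dense, no preconditioner).  The
     * matrix and right-hand side below are arbitrary SPD test data. */
    #include <stdio.h>
    #include <math.h>

    #define N 4

    static double dot(const double *a, const double *b) {
        double s = 0.0;
        for (int i = 0; i < N; i++) s += a[i] * b[i];
        return s;
    }

    static void matvec(const double A[N][N], const double *x, double *y) {
        for (int i = 0; i < N; i++) {
            y[i] = 0.0;
            for (int j = 0; j < N; j++) y[i] += A[i][j] * x[j];
        }
    }

    int main(void) {
        double A[N][N] = {{4,1,0,0},{1,4,1,0},{0,1,4,1},{0,0,1,4}};
        double b[N] = {1,2,3,4}, x[N] = {0}, r[N], p[N], Ap[N];

        matvec(A, x, r);                       /* r = b - A*x, p = r */
        for (int i = 0; i < N; i++) { r[i] = b[i] - r[i]; p[i] = r[i]; }
        double rs_old = dot(r, r);

        for (int it = 0; it < 100 && sqrt(rs_old) > 1e-10; it++) {
            matvec(A, p, Ap);                  /* the SpMV-equivalent step */
            double alpha = rs_old / dot(p, Ap);
            for (int i = 0; i < N; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
            double rs_new = dot(r, r);
            for (int i = 0; i < N; i++) p[i] = r[i] + (rs_new / rs_old) * p[i];
            rs_old = rs_new;
        }
        for (int i = 0; i < N; i++) printf("x[%d] = %f\n", i, x[i]);
        return 0;
    }

(Build with something like gcc -O2 cg.c -lm.)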

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
  Optimized Round 2: 9.06206  (SE +/- 0.00356, N = 3; Min: 9.06 / Avg: 9.06 / Max: 9.07)
  Optimized:         9.05253  (SE +/- 0.00205, N = 3; Min: 9.05 / Avg: 9.05 / Max: 9.06)
  Default:           9.04979  (SE +/- 0.00135, N = 3; Min: 9.05 / Avg: 9.05 / Max: 9.05)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, More Is Better)
  Optimized Round 2: 3138
  Default:           3129
  Optimized:         3120

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, More Is Better)
  Optimized Round 2: 1200
  Optimized:         1197
  Default:           1197

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, More Is Better)
  Optimized Round 2: 1938
  Default:           1932
  Optimized:         1923

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various inference benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU (FPS, More Is Better)
  Optimized:         6.31  (SE +/- 0.02, N = 3; Min: 6.28 / Avg: 6.31 / Max: 6.34)
  Optimized Round 2: 6.29  (SE +/- 0.04, N = 3; Min: 6.21 / Avg: 6.29 / Max: 6.35)
  Default:           6.28  (SE +/- 0.01, N = 3; Min: 6.27 / Avg: 6.28 / Max: 6.29)

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, More Is Better)
  Optimized Round 2: 378.32  (SE +/- 0.79, N = 3; Min: 376.84 / Avg: 378.32 / Max: 379.56)
  Optimized:         375.83  (SE +/- 0.40, N = 3; Min: 375.06 / Avg: 375.83 / Max: 376.36)
  Default:           373.17  (SE +/- 0.91, N = 3; Min: 372.18 / Avg: 373.17 / Max: 374.99)

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric (More Is Better)
  Optimized Round 2: 834120
  Optimized:         817529
  Default:           816934
  1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lGL -lGLdispatch -lX11 -lpthread -ldl -luuid -lm

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various inference benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: Inception V3 - Device: CPU (FPS, More Is Better)
  Optimized Round 2: 11.22  (SE +/- 0.07, N = 3; Min: 11.08 / Avg: 11.22 / Max: 11.29)
  Default:           11.15  (SE +/- 0.03, N = 3; Min: 11.1 / Avg: 11.15 / Max: 11.21)
  Optimized:         11.06  (SE +/- 0.03, N = 3; Min: 11.03 / Avg: 11.06 / Max: 11.11)

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p (Frames Per Second, More Is Better)
  Optimized Round 2: 0.134  (SE +/- 0.000, N = 3; Min: 0.13 / Avg: 0.13 / Max: 0.13)
  Optimized:         0.130  (SE +/- 0.000, N = 3; Min: 0.13 / Avg: 0.13 / Max: 0.13)
  Default:           0.128  (SE +/- 0.000, N = 3; Min: 0.13 / Avg: 0.13 / Max: 0.13)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various inference benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: Mobilenet - Device: CPU (FPS, More Is Better)
  Optimized:         14.98  (SE +/- 0.08, N = 3; Min: 14.85 / Avg: 14.98 / Max: 15.11)
  Optimized Round 2: 14.95  (SE +/- 0.06, N = 3; Min: 14.83 / Avg: 14.95 / Max: 15.05)
  Default:           14.70  (SE +/- 0.04, N = 3; Min: 14.63 / Avg: 14.7 / Max: 14.76)

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.
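A point-Jacobi sweep updates every grid point from its neighbors' previous-iteration values and repeats until the solution stops changing. The sketch below is not the Himeno code (which sweeps a 3-D 19-point stencil over a large grid); it is a minimal 1-D analogue with an assumed constant source term, shown only to illustrate the iteration pattern being measured:

    /* Minimal point-Jacobi sketch for the 1-D Poisson problem -u'' = f with
     * zero boundary values.  Grid size, source term, and tolerance are
     * arbitrary assumptions for illustration. */
    #include <stdio.h>
    #include <math.h>

    #define N 64

    int main(void) {
        double u[N] = {0}, unew[N] = {0}, f[N], h = 1.0 / (N - 1);

        for (int i = 0; i < N; i++) f[i] = 1.0;   /* constant source term */

        for (int iter = 0; iter < 10000; iter++) {
            double diff = 0.0;
            for (int i = 1; i < N - 1; i++) {
                /* new value from the neighbors' old values */
                unew[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i]);
                diff = fmax(diff, fabs(unew[i] - u[i]));
            }
            for (int i = 1; i < N - 1; i++) u[i] = unew[i];
            if (diff < 1e-8) { printf("converged after %d sweeps\n", iter); break; }
        }
        printf("u[N/2] = %f\n", u[N / 2]);
        return 0;
    }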

Himeno Benchmark 3.0 - Poisson Pressure Solver (MFLOPS, More Is Better)
  Optimized Round 2: 4534.73  (SE +/- 12.48, N = 3;  Min: 4512.28 / Avg: 4534.73 / Max: 4555.39)
  Default:           4076.63  (SE +/- 34.51, N = 15; Min: 3811.66 / Avg: 4076.63 / Max: 4245.9)
  Optimized:         3508.10  (SE +/- 47.31, N = 3;  Min: 3447.63 / Avg: 3508.1 / Max: 3601.37)
  1. (CC) gcc options: -O3 -mavx2

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1 - Total Time For Sample Scene (Seconds, Fewer Is Better)
  Optimized Round 2: 53.65  (SE +/- 0.33, N = 3;  Min: 53.18 / Avg: 53.65 / Max: 54.28)
  Default:           54.04  (SE +/- 0.61, N = 15; Min: 51.13 / Avg: 54.04 / Max: 57.04)
  Optimized:         54.99  (SE +/- 0.56, N = 3;  Min: 53.87 / Avg: 54.99 / Max: 55.55)
  1. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, Fewer Is Better)
  Optimized Round 2: 85.32  (SE +/- 0.67, N = 3; Min: 84.62 / Avg: 85.32 / Max: 86.65)
  Optimized:         86.14  (SE +/- 0.50, N = 3; Min: 85.15 / Avg: 86.14 / Max: 86.78)
  Default:           86.86  (SE +/- 1.02, N = 6; Min: 84.68 / Avg: 86.85 / Max: 91.58)
  1. (CXX) g++ options: -O2 -lOpenCL

Scikit-Learn

Scikit-learn is a Python module for machine learning. Learn more via the OpenBenchmarking.org test page.

Scikit-Learn 0.22.1 (Seconds, Fewer Is Better)
  Optimized Round 2: 106.51  (SE +/- 0.09, N = 3; Min: 106.43 / Avg: 106.51 / Max: 106.69)
  Optimized:         106.74  (SE +/- 0.29, N = 3; Min: 106.25 / Avg: 106.74 / Max: 107.27)
  Default:           107.56  (SE +/- 0.69, N = 3; Min: 106.18 / Avg: 107.56 / Max: 108.38)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup (Milliseconds, Fewer Is Better)
  Optimized:         12.0
  Default:           12.2  (SE +/- 0.03, N = 3; Min: 12.1 / Avg: 12.17 / Max: 12.2)
  Optimized Round 2: 12.3  (SE +/- 0.00, N = 3; Min: 12.3 / Avg: 12.3 / Max: 12.3)

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, Fewer Is Better)
  Default:           93.73  (SE +/- 0.32, N = 3; Min: 93.1 / Avg: 93.73 / Max: 94.16)
  Optimized Round 2: 94.69  (SE +/- 0.96, N = 3; Min: 93.25 / Avg: 94.69 / Max: 96.5)
  Optimized:         96.31  (SE +/- 0.77, N = 3; Min: 94.93 / Avg: 96.31 / Max: 97.58)
  1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, providing OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of autonomous-driving workloads. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute, More Is Better)
  Default:           21228.85  (SE +/- 222.73, N = 3; Min: 20791.25 / Avg: 21228.85 / Max: 21519.8)
  Optimized:         21080.52  (SE +/- 230.98, N = 3; Min: 20619.12 / Avg: 21080.52 / Max: 21330.83)
  Optimized Round 2: 21047.58  (SE +/- 183.10, N = 3; Min: 20785.77 / Avg: 21047.58 / Max: 21400.22)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various inference benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG19 - Device: CPU (FPS, More Is Better)
  Optimized Round 2: 32.46  (SE +/- 0.12, N = 3; Min: 32.25 / Avg: 32.46 / Max: 32.68)
  Optimized:         31.96  (SE +/- 0.29, N = 3; Min: 31.53 / Avg: 31.96 / Max: 32.51)
  Default:           31.47  (SE +/- 0.29, N = 3; Min: 31.14 / Avg: 31.47 / Max: 32.05)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, Fewer Is Better)
  Optimized:         444
  Optimized Round 2: 444
  Default:           445  (Min: 443 / Avg: 444.67 / Max: 446)
  (One of the two 444 results recorded Min: 444 / Avg: 444.33 / Max: 445; the reported standard errors were SE +/- 0.33 and SE +/- 0.88, N = 3.)

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0 - Mosaic of M17, K band, 1.5 deg x 1.5 deg (Seconds, Fewer Is Better)
  Optimized Round 2: 73.98  (SE +/- 0.17, N = 3; Min: 73.69 / Avg: 73.98 / Max: 74.29)
  Optimized:         74.17  (SE +/- 0.19, N = 3; Min: 73.81 / Avg: 74.17 / Max: 74.47)
  Default:           74.20  (SE +/- 0.13, N = 3; Min: 74.03 / Avg: 74.2 / Max: 74.44)
  1. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds, Fewer Is Better)
  Default:           70.96  (SE +/- 0.25, N = 3; Min: 70.66 / Avg: 70.96 / Max: 71.45)
  Optimized:         71.62  (SE +/- 0.34, N = 3; Min: 70.94 / Avg: 71.62 / Max: 72.05)
  Optimized Round 2: 72.07  (SE +/- 0.16, N = 3; Min: 71.75 / Avg: 72.07 / Max: 72.29)

Numenta Anomaly Benchmark 1.1 - Detector: EXPoSE (Seconds, Fewer Is Better)
  Default:           32.27  (SE +/- 0.06, N = 3;  Min: 32.15 / Avg: 32.27 / Max: 32.33)
  Optimized:         32.49  (SE +/- 0.09, N = 3;  Min: 32.37 / Avg: 32.49 / Max: 32.66)
  Optimized Round 2: 32.77  (SE +/- 0.37, N = 13; Min: 32.15 / Avg: 32.77 / Max: 37.17)

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various inference benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (FPS, More Is Better)
  Default:           37.56  (SE +/- 0.08, N = 3; Min: 37.44 / Avg: 37.56 / Max: 37.7)
  Optimized Round 2: 37.29  (SE +/- 0.51, N = 3; Min: 36.7 / Avg: 37.29 / Max: 38.31)
  Optimized:         37.21  (SE +/- 0.10, N = 3; Min: 37.07 / Avg: 37.21 / Max: 37.39)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
  Optimized Round 2: 206.94  (SE +/- 0.48, N = 3; Min: 206.25 / Avg: 206.94 / Max: 207.87; frame-rate MIN: 147.43 / MAX: 330.5)
  Optimized:         206.68  (SE +/- 0.53, N = 3; Min: 205.62 / Avg: 206.68 / Max: 207.23; frame-rate MIN: 146.94 / MAX: 330.56)
  Default:           169.37  (SE +/- 0.19, N = 3; Min: 169.01 / Avg: 169.37 / Max: 169.68; frame-rate MIN: 121.34 / MAX: 268.39)
  1. (CC) gcc options: -pthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
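For context, speedtest1 runs a long, fixed series of SQL operations (inserts, selects, updates, and so on) and reports the total time; the size parameter scales the row counts up, here to 1,000. The sketch below is not speedtest1 itself, just a minimal sqlite3 C API example of the kind of transactional insert work involved (the table name and row count are arbitrary assumptions):

    /* Minimal sqlite3 sketch: a batch of inserts inside one transaction. */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db;
        char *err = NULL;

        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

        sqlite3_exec(db, "CREATE TABLE t(a INTEGER, b TEXT);", NULL, NULL, &err);
        sqlite3_exec(db, "BEGIN;", NULL, NULL, &err);
        for (int i = 0; i < 1000; i++) {
            char sql[128];
            snprintf(sql, sizeof sql, "INSERT INTO t VALUES(%d, 'row-%d');", i, i);
            sqlite3_exec(db, sql, NULL, NULL, &err);
        }
        sqlite3_exec(db, "COMMIT;", NULL, NULL, &err);

        printf("inserted 1000 rows\n");
        sqlite3_close(db);
        return 0;
    }

(Build with something like gcc -O2 speed_sketch.c -lsqlite3.)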

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
  Default:           63.87  (SE +/- 0.22, N = 3; Min: 63.62 / Avg: 63.87 / Max: 64.31)
  Optimized Round 2: 64.01  (SE +/- 0.32, N = 3; Min: 63.51 / Avg: 64.01 / Max: 64.62)
  Optimized:         65.08  (SE +/- 0.25, N = 3; Min: 64.6 / Avg: 65.08 / Max: 65.4)
  1. (CC) gcc options: -O2 -ldl -lz -lpthread

GROMACS

This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.1 - Water Benchmark (Ns Per Day, More Is Better)
  Default:           3.890  (SE +/- 0.066, N = 3; Min: 3.82 / Avg: 3.89 / Max: 4.02)
  Optimized Round 2: 3.889  (SE +/- 0.062, N = 3; Min: 3.82 / Avg: 3.89 / Max: 4.01)
  Optimized:         3.853  (SE +/- 0.054, N = 4; Min: 3.79 / Avg: 3.85 / Max: 4.01)
  1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: MD5 (Real C/S, More Is Better)
  Default:           5125333  (SE +/- 28386.23, N = 3; Min: 5094000 / Avg: 5125333.33 / Max: 5182000)
  Optimized Round 2: 5116333  (SE +/- 27180.47, N = 3; Min: 5082000 / Avg: 5116333.33 / Max: 5170000)
  Optimized:         5113000  (SE +/- 28827.07, N = 3; Min: 5077000 / Avg: 5113000 / Max: 5170000)
  1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -pthread -lm -lz -ldl -lcrypt

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better)
  Default:           299
  Optimized:         299
  Optimized Round 2: 300

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.
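DGEMM is the dense double-precision matrix-matrix multiply C = A x B, and a multi-threaded DGEMM benchmark reports the floating-point rate the cores sustain on it (2*N^3 operations for square N x N matrices). The sketch below is a minimal OpenMP rendering of that kernel, not the ACES DGEMM source (which uses its own tuned loops and problem size); the matrix size and fill values are arbitrary assumptions:

    /* Minimal OpenMP DGEMM sketch: C = A*B, timed, GFLOP/s reported. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N 512

    int main(void) {
        double *A = malloc(sizeof(double) * N * N);
        double *B = malloc(sizeof(double) * N * N);
        double *C = calloc((size_t)N * N, sizeof(double));

        for (long i = 0; i < (long)N * N; i++) { A[i] = 1.0; B[i] = 2.0; }

        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            for (int k = 0; k < N; k++) {
                double aik = A[i * N + k];
                for (int j = 0; j < N; j++)
                    C[i * N + j] += aik * B[k * N + j];
            }
        double t1 = omp_get_wtime();

        /* 2*N^3 floating-point operations for a square matrix multiply. */
        printf("%.2f GFLOP/s\n", 2.0 * N * N * N / (t1 - t0) / 1e9);
        free(A); free(B); free(C);
        return 0;
    }

(Build with something like gcc -O3 -fopenmp dgemm_sketch.c; production code would normally call an optimized BLAS instead.)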

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, More Is Better)
  Optimized:         17.43  (SE +/- 0.15, N = 15; Min: 16.15 / Avg: 17.43 / Max: 18.25)
  Optimized Round 2: 17.27  (SE +/- 0.21, N = 5;  Min: 16.53 / Avg: 17.27 / Max: 17.7)
  Default:           17.17  (SE +/- 0.24, N = 15; Min: 15.76 / Avg: 17.17 / Max: 18.38)
  1. (CC) gcc options: -O3 -march=native -fopenmp

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better)
  Optimized:         41.05  (SE +/- 0.12, N = 3; Min: 40.82 / Avg: 41.05 / Max: 41.23)
  Optimized Round 2: 42.11  (SE +/- 0.50, N = 5; Min: 41.29 / Avg: 42.11 / Max: 44.06)
  Default:           43.74  (SE +/- 0.20, N = 3; Min: 43.34 / Avg: 43.74 / Max: 43.98)
  1. (CXX) g++ options: -O2 -lOpenCL

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 9 - Total Time (Nodes Per Second, More Is Better)
  Optimized Round 2: 154644018  (SE +/- 945939.33, N = 3; Min: 153432642 / Avg: 154644018 / Max: 156508209)
  Default:           154146652  (SE +/- 860075.69, N = 3; Min: 152589676 / Avg: 154146651.67 / Max: 155558433)
  Optimized:         153428568  (SE +/- 793489.01, N = 3; Min: 151843910 / Avg: 153428568.33 / Max: 154295180)
  1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++11 -pedantic -O3 -msse -msse3 -mpopcnt -flto

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better)
  Default:           235
  Optimized:         235
  Optimized Round 2: 235
  (Two of the runs recorded spreads of Min: 234 / Avg: 234.67 / Max: 235, SE +/- 0.33, N = 3, and Min: 234 / Avg: 235 / Max: 236, SE +/- 0.58, N = 3.)

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better)
  Default:           45.49  (SE +/- 0.34, N = 3; Min: 45.01 / Avg: 45.49 / Max: 46.16)
  Optimized Round 2: 45.58  (SE +/- 0.47, N = 3; Min: 44.72 / Avg: 45.58 / Max: 46.36)
  Optimized:         46.02  (SE +/- 0.48, N = 3; Min: 45.09 / Avg: 46.02 / Max: 46.71)

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, More Is Better)
  Default:           67739.78  (SE +/- 25.73, N = 3; Min: 67688.33 / Avg: 67739.78 / Max: 67766.4)
  Optimized Round 2: 67722.73  (SE +/- 15.08, N = 3; Min: 67692.56 / Avg: 67722.73 / Max: 67737.96)
  Optimized:         67707.34  (SE +/- 21.90, N = 3; Min: 67667.33 / Avg: 67707.34 / Max: 67742.8)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi  2. Open MPI 4.0.3

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
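The compression level is the main knob here: level 3 (the zstd default) favors speed, while level 19 trades far more CPU time for a higher ratio, which is why the two result graphs below differ so widely. A minimal sketch of the one-shot libzstd API that the level parameter maps to (the payload string is an arbitrary assumption; the benchmark itself compresses an Ubuntu ISO via the zstd tool):

    /* Minimal libzstd sketch: compress an in-memory buffer at a chosen level. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zstd.h>

    int main(void) {
        const char *src = "example payload example payload example payload";
        size_t src_size = strlen(src) + 1;

        size_t bound = ZSTD_compressBound(src_size);   /* worst-case output size */
        void *dst = malloc(bound);

        /* 3 is the default/fast level; 19 is the slow, high-ratio setting
         * used in the "Compression Level: 19" result below. */
        size_t written = ZSTD_compress(dst, bound, src, src_size, 19);
        if (ZSTD_isError(written)) {
            fprintf(stderr, "zstd error: %s\n", ZSTD_getErrorName(written));
            return 1;
        }
        printf("%zu -> %zu bytes\n", src_size, written);
        free(dst);
        return 0;
    }

(Build with something like gcc -O2 zstd_sketch.c -lzstd.)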

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better)
  Optimized Round 2: 82.8  (SE +/- 0.15, N = 3; Min: 82.5 / Avg: 82.8 / Max: 83)
  Optimized:         82.4  (SE +/- 0.03, N = 3; Min: 82.4 / Avg: 82.43 / Max: 82.5)
  Default:           82.2  (SE +/- 0.03, N = 3; Min: 82.1 / Avg: 82.17 / Max: 82.2)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 5.0.5 - Test: SET (Requests Per Second, More Is Better)
  Optimized Round 2: 1942514.94  (SE +/- 23065.50, N = 6;  Min: 1845018.5 / Avg: 1942514.94 / Max: 2012072.38)
  Default:           1926576.28  (SE +/- 26705.87, N = 15; Min: 1703577.5 / Avg: 1926576.28 / Max: 2044989.75)
  Optimized:         1879274.37  (SE +/- 19531.99, N = 15; Min: 1751313.5 / Avg: 1879274.37 / Max: 1980198)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Cython benchmark

This is a stress benchmark that measures the time consumed by Cython code. Learn more via the OpenBenchmarking.org test page.

Cython benchmark 0.27 (Seconds, Fewer Is Better)
  Optimized Round 2: 40.14  (SE +/- 0.11, N = 3; Min: 40.02 / Avg: 40.14 / Max: 40.37)
  Default:           40.69  (SE +/- 0.07, N = 3; Min: 40.57 / Avg: 40.69 / Max: 40.81)
  Optimized:         40.76  (SE +/- 0.31, N = 3; Min: 40.42 / Avg: 40.76 / Max: 41.38)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, Fewer Is Better)
  Optimized:         39.75  (SE +/- 0.51, N = 3; Min: 38.73 / Avg: 39.75 / Max: 40.32)
  Default:           39.86  (SE +/- 0.23, N = 3; Min: 39.45 / Avg: 39.86 / Max: 40.23)
  Optimized Round 2: 40.13  (SE +/- 0.23, N = 3; Min: 39.74 / Avg: 40.13 / Max: 40.52)
  1. (CXX) g++ options: -O2 -lOpenCL

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 5.0.5 - Test: GET (Requests Per Second, More Is Better)
  Optimized Round 2: 2977648.44  (SE +/- 37976.12, N = 4;  Min: 2881844.25 / Avg: 2977648.44 / Max: 3067484.5)
  Optimized:         2774720.92  (SE +/- 50011.51, N = 15; Min: 2222222.25 / Avg: 2774720.92 / Max: 2932551.5)
  Default:           2704398.12  (SE +/- 54993.70, N = 15; Min: 2237136.5 / Avg: 2704398.12 / Max: 2932551.5)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, Fewer Is Better)
  Optimized Round 2: 44.4  (SE +/- 0.13, N = 3; Min: 44.3 / Avg: 44.43 / Max: 44.7)
  Optimized:         44.5  (SE +/- 0.00, N = 3; Min: 44.5 / Avg: 44.5 / Max: 44.5)
  Default:           46.4  (SE +/- 0.09, N = 3; Min: 46.3 / Avg: 46.43 / Max: 46.6)

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, More Is Better)
  Optimized:         18.23  (SE +/- 0.05, N = 3; Min: 18.14 / Avg: 18.23 / Max: 18.32)
  Optimized Round 2: 18.20  (SE +/- 0.04, N = 3; Min: 18.15 / Avg: 18.2 / Max: 18.28)
  Default:           18.05  (SE +/- 0.10, N = 3; Min: 17.86 / Avg: 18.05 / Max: 18.16)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various inference benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPU (FPS, More Is Better)
  Optimized Round 2: 822.03  (SE +/- 1.77, N = 3; Min: 819.81 / Avg: 822.03 / Max: 825.52)
  Default:           821.02  (SE +/- 1.55, N = 3; Min: 817.95 / Avg: 821.02 / Max: 822.93)
  Optimized:         819.42  (SE +/- 1.34, N = 3; Min: 817.71 / Avg: 819.42 / Max: 822.06)

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better)
  Optimized Round 2: 65583.53  (SE +/- 26.57, N = 3; Min: 65534.15 / Avg: 65583.53 / Max: 65625.22)
  Optimized:         65560.84  (SE +/- 7.16, N = 3;  Min: 65549.24 / Avg: 65560.84 / Max: 65573.91)
  Default:           65289.01  (SE +/- 52.68, N = 3; Min: 65191.21 / Avg: 65289.01 / Max: 65371.85)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi  2. Open MPI 4.0.3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better)
  Optimized:         158
  Optimized Round 2: 158
  Default:           159

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.
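These glibc bench results report the average time per call, in nanoseconds, for individual library routines such as exp. The sketch below is not the glibc benchmark harness (which replays its own fixed input traces); it is a minimal clock_gettime timing loop over an assumed input pattern, shown only to illustrate what a "nanoseconds per call" figure means:

    /* Minimal per-call timing sketch for a libm routine (exp). */
    #include <stdio.h>
    #include <math.h>
    #include <time.h>

    int main(void) {
        const long iters = 10 * 1000 * 1000;
        volatile double sink = 0.0;          /* keep the calls from being elided */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++)
            sink += exp((double)(i % 1000) * 1e-3);   /* assumed input pattern */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("exp: %.2f ns per call\n", ns / iters);
        (void)sink;
        return 0;
    }

(Build with something like gcc -O2 exp_timing.c -lm.)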

glibc bench 1.0 - Benchmark: exp (nanoseconds, Fewer Is Better)
  Optimized:         5.18552  (SE +/- 0.02078, N = 3; Min: 5.16 / Avg: 5.19 / Max: 5.23)
  Default:           5.18962  (SE +/- 0.00032, N = 3; Min: 5.19 / Avg: 5.19 / Max: 5.19)
  Optimized Round 2: 5.20760  (SE +/- 0.08351, N = 3; Min: 5.04 / Avg: 5.21 / Max: 5.31)

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: Blowfish (Real C/S, More Is Better)
  Optimized:         88894  (SE +/- 396.97, N = 3; Min: 88367 / Avg: 88894.33 / Max: 89672)
  Default:           88783  (SE +/- 387.67, N = 3; Min: 88290 / Avg: 88783.33 / Max: 89548)
  Optimized Round 2: 88576  (SE +/- 371.83, N = 3; Min: 88166 / Avg: 88575.67 / Max: 89318)
  1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -pthread -lm -lz -ldl -lcrypt

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)
  Default:           100.0  (SE +/- 0.03, N = 3; Min: 99.9 / Avg: 99.97 / Max: 100)
  Optimized:         101.0
  Optimized Round 2: 102.0  (SE +/- 0.33, N = 3; Min: 101 / Avg: 101.67 / Max: 102)

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better)
  Default:           16.4  (SE +/- 0.03, N = 3; Min: 16.4 / Avg: 16.43 / Max: 16.5)
  Optimized:         16.7  (SE +/- 0.03, N = 3; Min: 16.6 / Avg: 16.67 / Max: 16.7)
  Optimized Round 2: 16.7  (SE +/- 0.03, N = 3; Min: 16.7 / Avg: 16.73 / Max: 16.8)

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, More Is Better)
  Optimized:         5027.83  (SE +/- 9.51, N = 3;  Min: 5010.97 / Avg: 5027.83 / Max: 5043.89)
  Optimized Round 2: 5002.32  (SE +/- 14.13, N = 3; Min: 4986.64 / Avg: 5002.32 / Max: 5030.51)
  Default:           4999.43  (SE +/- 7.97, N = 3;  Min: 4987.57 / Avg: 4999.43 / Max: 5014.59)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi  2. Open MPI 4.0.3

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better)
  Default:           7205.8  (SE +/- 44.91, N = 3; Min: 7150.8 / Avg: 7205.8 / Max: 7294.8)
  Optimized:         7163.3  (SE +/- 4.71, N = 3;  Min: 7157.8 / Avg: 7163.33 / Max: 7172.7)
  Optimized Round 2: 7135.0  (SE +/- 60.74, N = 3; Min: 7036.3 / Avg: 7135.03 / Max: 7245.7)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
  Optimized Round 2: 2383854.80  (SE +/- 32870.25, N = 4; Min: 2345718.61 / Avg: 2383854.8 / Max: 2482303.89)
  Optimized:         2352886.81  (SE +/- 4570.17, N = 3;  Min: 2347331.74 / Avg: 2352886.81 / Max: 2361950.45)
  Default:           2332502.50  (SE +/- 16958.53, N = 3; Min: 2301330.46 / Avg: 2332502.5 / Max: 2359664.49)
  1. (CC) gcc options: -O2 -lrt" -lrt

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better)
  Optimized Round 2: 3.94  (SE +/- 0.00, N = 3; Min: 3.94 / Avg: 3.94 / Max: 3.95)
  Default:           3.90  (SE +/- 0.00, N = 3; Min: 3.9 / Avg: 3.9 / Max: 3.9)
  Optimized:         3.89  (SE +/- 0.00, N = 3; Min: 3.89 / Avg: 3.89 / Max: 3.89)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds, Fewer Is Better)
  Default:           25.77  (SE +/- 0.26, N = 3; Min: 25.33 / Avg: 25.77 / Max: 26.24)
  Optimized Round 2: 25.79  (SE +/- 0.02, N = 3; Min: 25.75 / Avg: 25.79 / Max: 25.83)
  Optimized:         25.86  (SE +/- 0.21, N = 3; Min: 25.62 / Avg: 25.86 / Max: 26.28)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better)
  Optimized:         414
  Optimized Round 2: 414
  Default:           434  (SE +/- 1.45, N = 3; Min: 432 / Avg: 434.33 / Max: 437)
  (One of the two 414 results recorded SE +/- 0.33, N = 3; Min: 413 / Avg: 413.67 / Max: 414.)

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, providing OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of autonomous-driving workloads. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, More Is Better)
  Default:           968.12  (SE +/- 2.68, N = 3; Min: 965.01 / Avg: 968.12 / Max: 973.45)
  Optimized Round 2: 967.70  (SE +/- 0.83, N = 3; Min: 966.05 / Avg: 967.7 / Max: 968.57)
  Optimized:         915.04  (SE +/- 4.60, N = 3; Min: 909.19 / Avg: 915.04 / Max: 924.12)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Mlpack Benchmark

This test runs the mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, Fewer Is Better)
  Optimized:         20.63  (SE +/- 0.01, N = 3; Min: 20.62 / Avg: 20.63 / Max: 20.64)
  Default:           20.68  (SE +/- 0.05, N = 3; Min: 20.61 / Avg: 20.68 / Max: 20.78)
  Optimized Round 2: 21.01  (SE +/- 0.23, N = 3; Min: 20.76 / Avg: 21.01 / Max: 21.48)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better)
  Optimized:         101
  Optimized Round 2: 102
  Default:           109  (SE +/- 0.33, N = 3; Min: 108 / Avg: 108.67 / Max: 109)

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better)
  Optimized Round 2: 22.5  (SE +/- 0.03, N = 3; Min: 22.5 / Avg: 22.53 / Max: 22.6)
  Optimized:         22.6  (SE +/- 0.00, N = 3; Min: 22.6 / Avg: 22.6 / Max: 22.6)
  Default:           22.8  (SE +/- 0.00, N = 3; Min: 22.8 / Avg: 22.8 / Max: 22.8)

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: sqrt (nanoseconds, Fewer Is Better)
  Default:           2.25125  (SE +/- 0.00090, N = 3;  Min: 2.25 / Avg: 2.25 / Max: 2.25)
  Optimized Round 2: 2.25330  (SE +/- 0.00370, N = 3;  Min: 2.25 / Avg: 2.25 / Max: 2.26)
  Optimized:         2.27632  (SE +/- 0.01708, N = 13; Min: 2.25 / Avg: 2.28 / Max: 2.48)

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
  Optimized:         930  (SE +/- 2.19, N = 3; Min: 927 / Avg: 929.67 / Max: 934)
  Default:           931  (SE +/- 2.65, N = 3; Min: 926 / Avg: 931 / Max: 935)
  Optimized Round 2: 938  (SE +/- 4.58, N = 3; Min: 932 / Avg: 938 / Max: 947)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better)
  Default:           104
  Optimized:         105
  Optimized Round 2: 105

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better)
  Default:           104
  Optimized:         105  (SE +/- 0.33, N = 3; Min: 104 / Avg: 104.67 / Max: 105)
  Optimized Round 2: 107

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: sincos (nanoseconds, Fewer Is Better)
  Optimized Round 2: 11.53  (SE +/- 0.11, N = 3; Min: 11.36 / Avg: 11.53 / Max: 11.74)
  Optimized:         11.65  (SE +/- 0.17, N = 3; Min: 11.35 / Avg: 11.65 / Max: 11.96)
  Default:           12.37  (SE +/- 0.01, N = 3; Min: 12.36 / Avg: 12.37 / Max: 12.37)

glibc bench 1.0 - Benchmark: sin (nanoseconds, Fewer Is Better)
  Optimized:         42.46  (SE +/- 0.33, N = 3; Min: 41.95 / Avg: 42.46 / Max: 43.07)
  Default:           42.56  (SE +/- 0.01, N = 3; Min: 42.54 / Avg: 42.56 / Max: 42.57)
  Optimized Round 2: 42.94  (SE +/- 0.01, N = 3; Min: 42.92 / Avg: 42.94 / Max: 42.95)

glibc bench 1.0 - Benchmark: cos (nanoseconds, Fewer Is Better)
  Default:           42.92  (SE +/- 0.01, N = 3; Min: 42.9 / Avg: 42.91 / Max: 42.94)
  Optimized:         42.95  (SE +/- 0.28, N = 3; Min: 42.39 / Avg: 42.95 / Max: 43.26)
  Optimized Round 2: 43.17  (SE +/- 0.07, N = 3; Min: 43.03 / Avg: 43.17 / Max: 43.24)

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 4 Two-Pass (Frames Per Second, More Is Better)
  Optimized Round 2: 2.56  (SE +/- 0.01, N = 3; Min: 2.55 / Avg: 2.56 / Max: 2.57)
  Default:           2.53  (SE +/- 0.00, N = 3; Min: 2.53 / Avg: 2.53 / Max: 2.53)
  Optimized:         2.51  (SE +/- 0.00, N = 3; Min: 2.51 / Avg: 2.51 / Max: 2.51)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, providing OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of autonomous-driving workloads. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute, More Is Better)
  Default:           1183.68  (SE +/- 3.08, N = 3;  Min: 1177.69 / Avg: 1183.68 / Max: 1187.92)
  Optimized:         1168.86  (SE +/- 3.50, N = 3;  Min: 1164.7 / Avg: 1168.86 / Max: 1175.81)
  Optimized Round 2: 1089.02  (SE +/- 15.06, N = 3; Min: 1058.97 / Avg: 1089.02 / Max: 1105.78)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 8 Realtime (Frames Per Second, More Is Better)
  Optimized:         35.62  (SE +/- 0.07, N = 3; Min: 35.51 / Avg: 35.62 / Max: 35.74)
  Optimized Round 2: 35.40  (SE +/- 0.05, N = 3; Min: 35.31 / Avg: 35.4 / Max: 35.49)
  Default:           34.50  (SE +/- 0.05, N = 3; Min: 34.4 / Avg: 34.5 / Max: 34.55)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, More Is Better)
  Optimized Round 2: 28571.47  (SE +/- 9.61, N = 3;  Min: 28553.23 / Avg: 28571.47 / Max: 28585.83)
  Optimized:         28564.59  (SE +/- 13.18, N = 3; Min: 28538.33 / Avg: 28564.59 / Max: 28579.63)
  Default:           28563.02  (SE +/- 7.08, N = 3;  Min: 28549.11 / Avg: 28563.02 / Max: 28572.23)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi  2. Open MPI 4.0.3

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs whose text is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 9.6.0+dfsg - Processing 60 Page PDF Document (Seconds, fewer is better)
  Optimized Round 2:  14.78  (SE +/- 0.12, N = 3; range 14.57 - 14.98)
  Default:            14.82  (SE +/- 0.02, N = 3; range 14.79 - 14.85)
  Optimized:          14.87  (SE +/- 0.03, N = 3; range 14.81 - 14.92)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy (Seconds, fewer is better)
  Default:            12.96  (SE +/- 0.07, N = 3; range 12.84 - 13.07)
  Optimized:          13.17  (SE +/- 0.15, N = 3; range 12.91 - 13.41)
  Optimized Round 2:  13.28  (SE +/- 0.09, N = 3; range 13.12 - 13.43)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p (FPS, more is better)
  Default:            882.23  (SE +/- 2.94, N = 3; range 876.9 - 887.04; MIN: 487.91 / MAX: 1141.18)
  Optimized Round 2:  857.65  (SE +/- 3.55, N = 3; range 850.88 - 862.92; MIN: 454.83 / MAX: 1098.81)
  Optimized:          857.09  (SE +/- 2.15, N = 3; range 854.34 - 861.34; MIN: 480.14 / MAX: 1093.45)
  1. (CC) gcc options: -pthread

SVT-AV1

This is a test of the Intel Open Visual Cloud's Scalable Video Technology SVT-AV1 encoder, a CPU-based, multi-threaded video encoder for the AV1 format, run against a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, more is better)
  Optimized Round 2:  9.966  (SE +/- 0.068, N = 3; range 9.83 - 10.04)
  Optimized:          9.876  (SE +/- 0.028, N = 3; range 9.83 - 9.93)
  Default:            9.598  (SE +/- 0.062, N = 3; range 9.51 - 9.72)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time (Seconds, fewer is better)
  Default:            9.050  (SE +/- 0.065, N = 3; range 8.98 - 9.18)
  Optimized:          9.051  (SE +/- 0.043, N = 3; range 9.00 - 9.14)
  Optimized Round 2:  9.203  (SE +/- 0.055, N = 3; range 9.12 - 9.31)
  1. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lX11 -lIlmImf -lImath -lHalf -lIex -lIexMath -lIlmThread -lpthread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.
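
The glibc bench results that follow report the average time, in nanoseconds, for a single call into a glibc routine (cos, pthread_once, ffsll, and so on). A crude way to obtain that kind of per-call figure outside the benchmark harness is a simple timing loop; the C sketch below (hypothetical file call_cost.c; not glibc's own benchtests code) illustrates the idea for cos():

    /* Rough per-call timing sketch; illustrative only, not glibc's benchtests.
     * Build (assumed): gcc -O2 call_cost.c -lm -o call_cost */
    #include <math.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 10000000L

    int main(void)
    {
        struct timespec t0, t1;
        volatile double sink = 0.0;   /* volatile keeps the loop from being optimized away */

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < ITERS; i++)
            sink += cos(1e-6 * (double)i);   /* varying argument defeats constant folding */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
        printf("~%.2f ns per cos() call (checksum %f)\n", ns / ITERS, sink);
        return 0;
    }

Such a loop measures the call overhead plus the routine itself, so results will vary with compiler flags and the libm implementation in use.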

glibc bench 1.0 - Benchmark: pthread_once (nanoseconds, fewer is better)
  Default:            1.78641  (SE +/- 0.00153, N = 3; range 1.78 - 1.79)
  Optimized:          1.78953  (SE +/- 0.00158, N = 3; range 1.79 - 1.79)
  Optimized Round 2:  1.79088  (SE +/- 0.00076, N = 3; range 1.79 - 1.79)

glibc bench 1.0 - Benchmark: ffsll (nanoseconds, fewer is better)
  Optimized Round 2:  1.78964  (SE +/- 0.00018, N = 3; range 1.79 - 1.79)
  Default:            1.78978  (SE +/- 0.00006, N = 3; range 1.79 - 1.79)
  Optimized:          2.02441  (SE +/- 0.00238, N = 3; range 2.02 - 2.03)

glibc bench 1.0 - Benchmark: atanh (nanoseconds, fewer is better)
  Optimized Round 2:  9.53828   (SE +/- 0.04337, N = 3; range 9.49 - 9.62)
  Optimized:          9.92325   (SE +/- 0.06812, N = 3; range 9.79 - 10.00)
  Default:            10.27080  (SE +/- 0.01297, N = 3; range 10.25 - 10.29)

glibc bench 1.0 - Benchmark: asinh (nanoseconds, fewer is better)
  Optimized Round 2:  8.10173  (SE +/- 0.00187, N = 3; range 8.10 - 8.10)
  Optimized:          8.37798  (SE +/- 0.03742, N = 3; range 8.33 - 8.45)
  Default:            8.77616  (SE +/- 0.07902, N = 3; range 8.66 - 8.93)

glibc bench 1.0 - Benchmark: tanh (nanoseconds, fewer is better)
  Optimized Round 2:  10.75  (SE +/- 0.03, N = 3; range 10.70 - 10.79)
  Default:            10.78  (SE +/- 0.00, N = 3; range 10.77 - 10.78)
  Optimized:          10.91  (SE +/- 0.15, N = 3; range 10.68 - 11.20)

glibc bench 1.0 - Benchmark: sinh (nanoseconds, fewer is better)
  Optimized Round 2:  7.46234  (SE +/- 0.04836, N = 3; range 7.37 - 7.52)
  Optimized:          7.56152  (SE +/- 0.00451, N = 3; range 7.56 - 7.57)
  Default:            7.80609  (SE +/- 0.00239, N = 3; range 7.80 - 7.81)

glibc bench 1.0 - Benchmark: modf (nanoseconds, fewer is better)
  Optimized Round 2:  2.02729  (SE +/- 0.00349, N = 3; range 2.02 - 2.03)
  Default:            2.02794  (SE +/- 0.00054, N = 3; range 2.03 - 2.03)
  Optimized:          2.02817  (SE +/- 0.00065, N = 3; range 2.03 - 2.03)

glibc bench 1.0 - Benchmark: log2 (nanoseconds, fewer is better)
  Optimized:          4.05586  (SE +/- 0.00364, N = 3; range 4.05 - 4.06)
  Optimized Round 2:  4.13786  (SE +/- 0.00287, N = 3; range 4.13 - 4.14)
  Default:            5.95632  (SE +/- 0.01640, N = 3; range 5.94 - 5.99)

glibc bench 1.0 - Benchmark: ffs (nanoseconds, fewer is better)
  Optimized:          1.79262  (SE +/- 0.00136, N = 3; range 1.79 - 1.79)
  Default:            2.02383  (SE +/- 0.00032, N = 3; range 2.02 - 2.02)
  Optimized Round 2:  2.02413  (SE +/- 0.00074, N = 3; range 2.02 - 2.03)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 4K (FPS, more is better)
  Default:            379.97  (SE +/- 0.42, N = 3; range 379.13 - 380.46; MIN: 147.77 / MAX: 417.62)
  Optimized Round 2:  377.33  (SE +/- 0.88, N = 3; range 375.59 - 378.36; MIN: 142.61 / MAX: 416)
  Optimized:          375.64  (SE +/- 1.89, N = 3; range 373.14 - 379.34; MIN: 139.22 / MAX: 417.31)
  1. (CC) gcc options: -pthread

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s, more is better)
  Default:            47235.72  (SE +/- 25.58, N = 3; range 47193.75 - 47282.04)
  Optimized:          47137.99  (SE +/- 317.18, N = 3; range 46535.24 - 47610.65)
  Optimized Round 2:  47122.15  (SE +/- 153.42, N = 3; range 46829.37 - 47348.05)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

Rodinia

Rodinia is a suite focused on accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile currently utilizes select OpenCL, NVIDIA CUDA, and OpenMP test binaries. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, fewer is better)
  Optimized:          8.019  (SE +/- 0.032, N = 3; range 7.96 - 8.07)
  Default:            8.119  (SE +/- 0.020, N = 3; range 8.08 - 8.14)
  Optimized Round 2:  8.188  (SE +/- 0.031, N = 3; range 8.14 - 8.24)
  1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, fewer is better)
  Optimized Round 2:  6.575  (SE +/- 0.026, N = 3; range 6.53 - 6.62)
  Optimized:          6.637  (SE +/- 0.060, N = 3; range 6.52 - 6.72)
  Default:            6.705  (SE +/- 0.095, N = 4; range 6.53 - 6.95)
  1. (CXX) g++ options: -O2 -lOpenCL

SVT-AV1

This is a test of the Intel Open Visual Cloud's Scalable Video Technology SVT-AV1 encoder, a CPU-based, multi-threaded video encoder for the AV1 format, run against a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, more is better)
  Optimized Round 2:  100.86  (SE +/- 0.48, N = 3; range 99.91 - 101.36)
  Optimized:          98.23   (SE +/- 0.22, N = 3; range 97.83 - 98.58)
  Default:            97.10   (SE +/- 0.35, N = 3; range 96.44 - 97.62)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, more is better)
  Optimized Round 2:  26578.29  (SE +/- 7.88, N = 3; range 26563.65 - 26590.67)
  Default:            26534.14  (SE +/- 40.15, N = 3; range 26469.30 - 26607.57)
  Optimized:          26530.56  (SE +/- 10.23, N = 3; range 26512.24 - 26547.59)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
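
The "Windowed Gaussian" detector timed below scores each incoming point against statistics of a recent window of the series. As a simplified illustration of that general idea only (a plain sliding-window z-score, not NAB's actual detector implementation), a small C sketch (hypothetical file zscore.c) might be:

    /* Sliding-window z-score anomaly scoring; illustrative only, not NAB code.
     * Build (assumed): gcc -O2 zscore.c -lm -o zscore */
    #include <math.h>
    #include <stdio.h>

    #define WINDOW 32

    /* Score a new sample against the mean/stddev of the previous WINDOW samples:
     * the larger the z-score, the more anomalous the point. */
    static double anomaly_score(const double *hist, int n, double x)
    {
        double mean = 0.0, var = 0.0;
        for (int i = 0; i < n; i++)
            mean += hist[i];
        mean /= n;
        for (int i = 0; i < n; i++)
            var += (hist[i] - mean) * (hist[i] - mean);
        double sd = sqrt(var / n) + 1e-9;   /* guard against a zero-variance window */
        return fabs(x - mean) / sd;
    }

    int main(void)
    {
        double hist[WINDOW];
        for (int i = 0; i < WINDOW; i++)
            hist[i] = sin(0.1 * i);         /* smooth, "normal" history */

        printf("ordinary point: %.2f, spike: %.2f\n",
               anomaly_score(hist, WINDOW, sin(0.1 * WINDOW)),
               anomaly_score(hist, WINDOW, 10.0));
        return 0;
    }

NAB itself is Python-based, so the timings reported below come from its own detector and scoring pipeline rather than from a tight C loop like this one.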

Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (Seconds, fewer is better)
  Optimized:          6.059  (SE +/- 0.020, N = 3; range 6.03 - 6.10)
  Optimized Round 2:  6.095  (SE +/- 0.029, N = 3; range 6.05 - 6.15)
  Default:            6.213  (SE +/- 0.018, N = 3; range 6.18 - 6.24)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 1080p (FPS, more is better)
  Optimized Round 2:  885.22  (SE +/- 1.39, N = 3; range 882.60 - 887.30; MIN: 286.93 / MAX: 1010.66)
  Optimized:          881.35  (SE +/- 0.73, N = 3; range 879.90 - 882.20; MIN: 290.02 / MAX: 1005.96)
  Default:            873.54  (SE +/- 5.28, N = 3; range 865.33 - 883.41; MIN: 263.80 / MAX: 1009.60)
  1. (CC) gcc options: -pthread

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s, more is better)
  Optimized Round 2:  4894.09  (SE +/- 43.01, N = 3; range 4837.97 - 4978.61)
  Optimized:          4894.03  (SE +/- 31.84, N = 3; range 4845.67 - 4954.09)
  Default:            4818.15  (SE +/- 21.84, N = 3; range 4780.51 - 4856.16)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 9Jan2020 - Model: Rhodopsin Protein (ns/day, more is better)
  Optimized:          24.33  (SE +/- 0.41, N = 3; range 23.70 - 25.10)
  Optimized Round 2:  24.25  (SE +/- 0.19, N = 3; range 23.90 - 24.57)
  Default:            23.62  (SE +/- 0.13, N = 3; range 23.44 - 23.88)
  1. (CXX) g++ options: -O3 -rdynamic -ljpeg -lpng -lz -lfftw3 -lm

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm8192.in input file. Learn more via the OpenBenchmarking.org test page.

CloverLeaf - Lagrangian-Eulerian Hydrodynamics (Seconds, fewer is better)
  Default:            0.40  (SE +/- 0.00, N = 3; range 0.40 - 0.41)
  Optimized:          0.40  (SE +/- 0.01, N = 3; range 0.39 - 0.40)
  Optimized Round 2:  0.40  (SE +/- 0.00, N = 3; range 0.40 - 0.41)
  1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp