9400f-sep

Intel Core i5-9400F testing with a MSI B360M GAMING PLUS (MS-7B19) v1.0 (1.10 BIOS) and MSI NVIDIA NV106 1GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009281-FI-9400FSEP853

Test Runs:

  Result Identifier    Date                 Test Duration
  Linux 5.9-rc2        September 27 2020    6 Hours, 12 Minutes
  Run 2                September 28 2020    5 Hours, 24 Minutes
  Linux 5.9-rc7        September 28 2020    7 Hours, 15 Minutes


9400f-sep - OpenBenchmarking.org / Phoronix Test Suite

  Processor: Intel Core i5-9400F @ 4.10GHz (6 Cores)
  Motherboard: MSI B360M GAMING PLUS (MS-7B19) v1.0 (1.10 BIOS)
  Chipset: Intel Cannon Lake PCH
  Memory: 16GB
  Disk: 256GB SAMSUNG MZVPW256HEGL-000H7
  Graphics: MSI NVIDIA NV106 1GB
  Audio: Realtek ALC887-VD
  Monitor: G237HL
  Network: Intel I219-V
  OS: Ubuntu 20.04
  Kernels: 5.9.0-050900rc2-generic (x86_64) 20200823 / 5.9.0-050900rc7daily20200928-generic (x86_64) 20200927
  Desktop: GNOME Shell 3.36.0
  Display Server: X Server 1.20.7
  Display Driver: modesetting 1.20.7
  OpenGL: 4.3 Mesa 20.0.2
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 1920x1080

System Notes:
  - Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: intel_pstate powersave
  - CPU Microcode: 0xca
  - Python 3.8.2
  - Security: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT disabled + mds: Mitigation of Clear buffers; SMT disabled + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: disabled RSB filling + srbds: Vulnerable: No microcode + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; relative performance of Linux 5.9-rc2, Run 2, and Linux 5.9-rc7 on a 100%-110% scale): OpenCV, Kripke, Timed MAFFT Alignment, Mlpack Benchmark, Apache CouchDB, AOM AV1, GPAW, Incompact3D, LeelaChessZero, LibRaw, BYTE Unix Benchmark, LAMMPS Molecular Dynamics Simulator, eSpeak-NG Speech Engine, Monte Carlo Simulations of Ionised Nebulae, GROMACS, FFTE, NAMD, Dolfyn, WebP Image Encode, Hierarchical INTegration, Mobile Neural Network, NCNN, Timed HMMer Search, Caffe, TNN
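The overview chart expresses each run's result as a percentage relative to the other runs. A minimal sketch of that normalization, assuming a simple baseline-relative calculation (the `relative_percent` helper is illustrative, not part of the Phoronix Test Suite API), using the LeelaChessZero BLAS figures from this file as sample inputs:

```python
# Sketch: normalize a "More Is Better" result to a baseline percentage,
# as the Result Overview chart does. relative_percent is an illustrative
# helper, not a Phoronix Test Suite function.

def relative_percent(value: float, baseline: float) -> float:
    """Express value as a percentage of baseline (100% == identical)."""
    return value / baseline * 100

# LeelaChessZero BLAS nodes/sec from this result file (higher is better).
results = {"Linux 5.9-rc2": 1004, "Run 2": 1006, "Linux 5.9-rc7": 989}
baseline = min(results.values())  # normalize against the slowest run

for run, nps in results.items():
    print(f"{run}: {relative_percent(nps, baseline):.1f}%")
```

With these inputs the fastest run lands at roughly 101.7% of the slowest, consistent with the narrow 100%-110% scale of the overview.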

[Condensed results summary table: every test configuration (LeelaChessZero, NAMD, Dolfyn, FFTE, Timed HMMer Search, Incompact3D, Timed MAFFT Alignment, Mocassin, LAMMPS, WebP Image Encode, BYTE Unix Benchmark, LibRaw, AOM AV1, eSpeak-NG, Apache CouchDB, GROMACS, Caffe, GPAW, Mobile Neural Network, NCNN, TNN, Hierarchical INTegration, Mlpack Benchmark, Kripke, OpenCV, InfluxDB) across the three runs; the individual results are broken out per test below.]

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: BLAS (Nodes Per Second; more is better):
  Linux 5.9-rc2: 1004  (SE +/- 0.58, N = 3; Min: 1003 / Avg: 1004 / Max: 1005)
  Run 2:         1006  (SE +/- 1.67, N = 3; Min: 1003 / Avg: 1006.33 / Max: 1008)
  Linux 5.9-rc7: 989   (SE +/- 6.44, N = 3; Min: 981 / Avg: 989.33 / Max: 1002)
  Compiler: (CXX) g++ options: -flto -pthread
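The SE figures reported throughout this file are standard errors of the mean over N runs. As a check, assuming the plain sample-standard-deviation definition, the Linux 5.9-rc2 BLAS numbers reproduce the reported SE +/- 0.58:

```python
import statistics
from math import sqrt

# Recompute the standard error of the mean for the Linux 5.9-rc2 BLAS result.
# With N = 3 and Min 1003 / Avg 1004 / Max 1005, the middle sample must be
# 3 * avg - min - max = 1004.
samples = [1003, 1004, 1005]

se = statistics.stdev(samples) / sqrt(len(samples))  # sample stdev / sqrt(N)
print(f"SE +/- {se:.2f}, N = {len(samples)}")  # matches the reported 0.58
```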

LeelaChessZero 0.26, Backend: Eigen (Nodes Per Second; more is better):
  Linux 5.9-rc2: 953  (SE +/- 2.91, N = 3; Min: 948 / Avg: 953.33 / Max: 958)
  Run 2:         938  (SE +/- 4.91, N = 3; Min: 932 / Avg: 938.33 / Max: 948)
  Linux 5.9-rc7: 949  (SE +/- 2.08, N = 3; Min: 946 / Avg: 949 / Max: 953)
  Compiler: (CXX) g++ options: -flto -pthread

LeelaChessZero 0.26, Backend: Random (Nodes Per Second; more is better):
  Linux 5.9-rc2: 253411  (SE +/- 849.90, N = 3; Min: 251745 / Avg: 253410.67 / Max: 254537)
  Linux 5.9-rc7: 254263  (SE +/- 182.72, N = 3; Min: 253997 / Avg: 254263 / Max: 254613)
  Compiler: (CXX) g++ options: -flto -pthread

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns; fewer is better):
  Linux 5.9-rc2: 2.86251  (SE +/- 0.00067, N = 3; Min: 2.86 / Avg: 2.86 / Max: 2.86)
  Run 2:         2.86905  (SE +/- 0.00965, N = 3; Min: 2.86 / Avg: 2.87 / Max: 2.89)
  Linux 5.9-rc7: 2.87464  (SE +/- 0.00569, N = 3; Min: 2.86 / Avg: 2.87 / Max: 2.88)
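NAMD reports days/ns, i.e. how many wall-clock days one simulated nanosecond takes, which is why fewer is better here. Inverting gives the more familiar ns/day throughput; a quick conversion of the values above:

```python
# Convert NAMD's days/ns metric (lower is better) into ns/day
# (higher is better) for the three runs in this result file.
days_per_ns = {
    "Linux 5.9-rc2": 2.86251,
    "Run 2": 2.86905,
    "Linux 5.9-rc7": 2.87464,
}

for run, d in days_per_ns.items():
    print(f"{run}: {1 / d:.4f} ns/day")  # e.g. rc2 -> about 0.3493 ns/day
```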

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code using modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527, Computational Fluid Dynamics (Seconds; fewer is better):
  Linux 5.9-rc2: 20.23  (SE +/- 0.01, N = 3; Min: 20.22 / Avg: 20.23 / Max: 20.24)
  Run 2:         20.19  (SE +/- 0.08, N = 3; Min: 20.09 / Avg: 20.19 / Max: 20.35)
  Linux 5.9-rc7: 20.27  (SE +/- 0.04, N = 3; Min: 20.19 / Avg: 20.27 / Max: 20.32)

FFTE

FFTE 7.0, N=256, 3D Complex FFT Routine (MFLOPS; more is better):
  Linux 5.9-rc2: 23721.23  (SE +/- 69.82, N = 3; Min: 23587.86 / Avg: 23721.23 / Max: 23823.75)
  Run 2:         23639.49  (SE +/- 114.50, N = 3; Min: 23504.99 / Avg: 23639.49 / Max: 23867.24)
  Linux 5.9-rc7: 23615.22  (SE +/- 151.87, N = 3; Min: 23399.29 / Avg: 23615.22 / Max: 23908.18)
  Compiler: (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1, Pfam Database Search (Seconds; fewer is better):
  Linux 5.9-rc2: 117.81  (SE +/- 0.03, N = 3; Min: 117.74 / Avg: 117.81 / Max: 117.85)
  Run 2:         117.68  (SE +/- 0.06, N = 3; Min: 117.6 / Avg: 117.68 / Max: 117.8)
  Linux 5.9-rc7: 117.81  (SE +/- 0.13, N = 3; Min: 117.64 / Avg: 117.81 / Max: 118.07)
  Compiler: (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17, Input: Cylinder (Seconds; fewer is better):
  Linux 5.9-rc2: 490.35  (SE +/- 5.96, N = 3; Min: 483.89 / Avg: 490.35 / Max: 502.27)
  Run 2:         484.57  (SE +/- 0.45, N = 3; Min: 483.76 / Avg: 484.57 / Max: 485.29)
  Linux 5.9-rc7: 484.95  (SE +/- 0.19, N = 3; Min: 484.57 / Avg: 484.95 / Max: 485.15)
  Compiler: (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471, Multiple Sequence Alignment - LSU RNA (Seconds; fewer is better):
  Linux 5.9-rc2: 11.51  (SE +/- 0.10, N = 3; Min: 11.39 / Avg: 11.51 / Max: 11.71)
  Run 2:         11.25  (SE +/- 0.02, N = 3; Min: 11.21 / Avg: 11.25 / Max: 11.28)
  Linux 5.9-rc7: 11.25  (SE +/- 0.03, N = 3; Min: 11.22 / Avg: 11.25 / Max: 11.31)
  Compiler: (CC) gcc options: -std=c99 -O3 -lm -lpthread

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN, the Monte Carlo Simulations of Ionised Nebulae code, is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24, Input: Dust 2D tau100.0 (Seconds; fewer is better):
  Linux 5.9-rc2: 240
  Run 2:         241
  Linux 5.9-rc7: 241
  Compiler: (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 24Aug2020, Model: Rhodopsin Protein (ns/day; more is better):
  Linux 5.9-rc2: 4.329  (SE +/- 0.019, N = 3; Min: 4.29 / Avg: 4.33 / Max: 4.36)
  Run 2:         4.341  (SE +/- 0.028, N = 3; Min: 4.29 / Avg: 4.34 / Max: 4.38)
  Linux 5.9-rc7: 4.356  (SE +/- 0.021, N = 3; Min: 4.32 / Avg: 4.36 / Max: 4.39)
  Compiler: (CXX) g++ options: -O3 -pthread -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Default (Encode Time - Seconds; fewer is better):
  Linux 5.9-rc2: 1.658  (SE +/- 0.000, N = 3; Min: 1.66 / Avg: 1.66 / Max: 1.66)
  Run 2:         1.659  (SE +/- 0.001, N = 3; Min: 1.66 / Avg: 1.66 / Max: 1.66)
  Linux 5.9-rc7: 1.659  (SE +/- 0.004, N = 3; Min: 1.66 / Avg: 1.66 / Max: 1.67)
  Compiler: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

WebP Image Encode 1.1, Encode Settings: Quality 100 (Encode Time - Seconds; fewer is better):
  Linux 5.9-rc2: 2.605  (SE +/- 0.001, N = 3; Min: 2.6 / Avg: 2.61 / Max: 2.61)
  Run 2:         2.600  (SE +/- 0.001, N = 3; Min: 2.6 / Avg: 2.6 / Max: 2.6)
  Linux 5.9-rc7: 2.605  (SE +/- 0.003, N = 3; Min: 2.6 / Avg: 2.6 / Max: 2.61)
  Compiler: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time - Seconds; fewer is better):
  Linux 5.9-rc2: 18.84  (SE +/- 0.08, N = 3; Min: 18.7 / Avg: 18.84 / Max: 18.98)
  Run 2:         18.68  (SE +/- 0.07, N = 3; Min: 18.59 / Avg: 18.68 / Max: 18.81)
  Linux 5.9-rc7: 18.60  (SE +/- 0.06, N = 3; Min: 18.53 / Avg: 18.6 / Max: 18.72)
  Compiler: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

WebP Image Encode 1.1, Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds; fewer is better):
  Linux 5.9-rc2: 7.766  (SE +/- 0.005, N = 3; Min: 7.76 / Avg: 7.77 / Max: 7.77)
  Run 2:         7.759  (SE +/- 0.002, N = 3; Min: 7.76 / Avg: 7.76 / Max: 7.76)
  Linux 5.9-rc7: 7.777  (SE +/- 0.007, N = 3; Min: 7.77 / Avg: 7.78 / Max: 7.79)
  Compiler: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds; fewer is better):
  Linux 5.9-rc2: 44.02  (SE +/- 0.09, N = 3; Min: 43.92 / Avg: 44.02 / Max: 44.19)
  Run 2:         43.70  (SE +/- 0.15, N = 3; Min: 43.47 / Avg: 43.7 / Max: 43.99)
  Linux 5.9-rc7: 43.61  (SE +/- 0.04, N = 3; Min: 43.54 / Avg: 43.61 / Max: 43.69)
  Compiler: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark, here running its Dhrystone 2 computational test. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6, Computational Test: Dhrystone 2 (LPS; more is better):
  Linux 5.9-rc2: 40265449.6  (SE +/- 62534.72, N = 3; Min: 40140399.7 / Avg: 40265449.57 / Max: 40329890.2)
  Run 2:         39996685.5  (SE +/- 183683.54, N = 3; Min: 39647892.3 / Avg: 39996685.53 / Max: 40270964)
  Linux 5.9-rc7: 40160217.6  (SE +/- 164482.54, N = 3; Min: 39833377.3 / Avg: 40160217.57 / Max: 40355965.7)

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20, Post-Processing Benchmark (Mpix/sec; more is better):
  Linux 5.9-rc2: 26.99  (SE +/- 0.01, N = 3; Min: 26.97 / Avg: 26.99 / Max: 27.01)
  Run 2:         27.20  (SE +/- 0.06, N = 3; Min: 27.09 / Avg: 27.2 / Max: 27.27)
  Linux 5.9-rc7: 27.24  (SE +/- 0.01, N = 3; Min: 27.23 / Avg: 27.24 / Max: 27.25)
  Compiler: (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 0 Two-Pass (Frames Per Second; more is better):
  Linux 5.9-rc2: 0.25  (SE +/- 0.00, N = 3; Min: 0.25 / Avg: 0.25 / Max: 0.25)
  Run 2:         0.26  (SE +/- 0.00, N = 3; Min: 0.25 / Avg: 0.26 / Max: 0.26)
  Linux 5.9-rc7: 0.25  (SE +/- 0.00, N = 3; Min: 0.25 / Avg: 0.25 / Max: 0.25)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0, Encoder Mode: Speed 4 Two-Pass (Frames Per Second; more is better):
  Linux 5.9-rc2: 2.15  (SE +/- 0.01, N = 3; Min: 2.14 / Avg: 2.15 / Max: 2.16)
  Run 2:         2.17  (SE +/- 0.00, N = 3; Min: 2.16 / Avg: 2.17 / Max: 2.17)
  Linux 5.9-rc7: 2.17  (SE +/- 0.00, N = 3; Min: 2.16 / Avg: 2.17 / Max: 2.17)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0, Encoder Mode: Speed 6 Realtime (Frames Per Second; more is better):
  Linux 5.9-rc2: 17.61  (SE +/- 0.02, N = 3; Min: 17.58 / Avg: 17.61 / Max: 17.64)
  Run 2:         17.68  (SE +/- 0.01, N = 3; Min: 17.67 / Avg: 17.68 / Max: 17.7)
  Linux 5.9-rc7: 17.72  (SE +/- 0.02, N = 3; Min: 17.68 / Avg: 17.72 / Max: 17.74)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0, Encoder Mode: Speed 6 Two-Pass (Frames Per Second; more is better):
  Linux 5.9-rc2: 3.42  (SE +/- 0.01, N = 3; Min: 3.41 / Avg: 3.42 / Max: 3.43)
  Run 2:         3.43  (SE +/- 0.00, N = 3; Min: 3.42 / Avg: 3.43 / Max: 3.43)
  Linux 5.9-rc7: 3.43  (SE +/- 0.01, N = 3; Min: 3.42 / Avg: 3.43 / Max: 3.44)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0, Encoder Mode: Speed 8 Realtime (Frames Per Second; more is better):
  Linux 5.9-rc2: 39.53  (SE +/- 0.03, N = 3; Min: 39.46 / Avg: 39.53 / Max: 39.56)
  Run 2:         39.60  (SE +/- 0.05, N = 3; Min: 39.5 / Avg: 39.6 / Max: 39.67)
  Linux 5.9-rc7: 39.66  (SE +/- 0.06, N = 3; Min: 39.6 / Avg: 39.66 / Max: 39.77)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis (Seconds; fewer is better):
  Linux 5.9-rc2: 30.29  (SE +/- 0.28, N = 4; Min: 29.46 / Avg: 30.29 / Max: 30.59)
  Run 2:         30.12  (SE +/- 0.06, N = 4; Min: 29.95 / Avg: 30.11 / Max: 30.26)
  Linux 5.9-rc7: 30.20  (SE +/- 0.09, N = 4; Min: 30.05 / Avg: 30.2 / Max: 30.47)
  Compiler: (CC) gcc options: -O2 -std=c99

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.1.1, Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds; fewer is better):
  Linux 5.9-rc2: 122.13  (SE +/- 0.84, N = 3; Min: 120.6 / Avg: 122.13 / Max: 123.48)
  Run 2:         123.02  (SE +/- 0.65, N = 3; Min: 121.73 / Avg: 123.02 / Max: 123.72)
  Linux 5.9-rc7: 120.68  (SE +/- 0.36, N = 3; Min: 120.21 / Avg: 120.68 / Max: 121.39)
  Compiler: (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3, Water Benchmark (Ns Per Day; more is better):
  Linux 5.9-rc2: 0.649  (SE +/- 0.004, N = 3; Min: 0.64 / Avg: 0.65 / Max: 0.65)
  Run 2:         0.652  (SE +/- 0.004, N = 3; Min: 0.65 / Avg: 0.65 / Max: 0.66)
  Linux 5.9-rc7: 0.651  (SE +/- 0.001, N = 3; Min: 0.65 / Avg: 0.65 / Max: 0.65)
  Compiler: (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds; fewer is better):
  Linux 5.9-rc2: 32083  (SE +/- 71.84, N = 3; Min: 31939 / Avg: 32082.67 / Max: 32156)
  Run 2:         31950  (SE +/- 36.07, N = 3; Min: 31910 / Avg: 31950 / Max: 32022)
  Linux 5.9-rc7: 31998  (SE +/- 61.16, N = 3; Min: 31877 / Avg: 31998 / Max: 32074)
  Compiler: (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
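Assuming the reported milliseconds cover the full iteration count (the usual reading of this test profile), dividing by the iteration count gives a per-iteration figure that is easier to compare across the 100- and 200-iteration configurations:

```python
# Per-iteration time, assuming the reported milliseconds span all
# 100 AlexNet iterations of this test profile configuration.
totals_ms = {"Linux 5.9-rc2": 32083, "Run 2": 31950, "Linux 5.9-rc7": 31998}
iterations = 100

for run, total in totals_ms.items():
    print(f"{run}: {total / iterations:.2f} ms/iteration")
```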

Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds; fewer is better):
  Linux 5.9-rc2: 65074  (SE +/- 601.48, N = 15; Min: 63936 / Avg: 65074.4 / Max: 71809)
  Run 2:         63773  (SE +/- 48.38, N = 3; Min: 63699 / Avg: 63773 / Max: 63864)
  Linux 5.9-rc7: 63836  (SE +/- 82.25, N = 3; Min: 63679 / Avg: 63836 / Max: 63957)
  Compiler: (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds; fewer is better):
  Linux 5.9-rc2: 80042  (SE +/- 248.10, N = 3; Min: 79742 / Avg: 80041.67 / Max: 80534)
  Run 2:         79953  (SE +/- 100.13, N = 3; Min: 79758 / Avg: 79953 / Max: 80090)
  Linux 5.9-rc7: 79862  (SE +/- 28.18, N = 3; Min: 79808 / Avg: 79862 / Max: 79903)
  Compiler: (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds; fewer is better):
  Linux 5.9-rc2: 159857  (SE +/- 44.77, N = 3; Min: 159804 / Avg: 159857 / Max: 159946)
  Run 2:         159899  (SE +/- 123.90, N = 3; Min: 159716 / Avg: 159898.67 / Max: 160135)
  Linux 5.9-rc7: 159607  (SE +/- 80.95, N = 3; Min: 159507 / Avg: 159606.67 / Max: 159767)
  Compiler: (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 20.1, Input: Carbon Nanotube (Seconds; fewer is better):
  Linux 5.9-rc2: 514.77  (SE +/- 0.44, N = 3; Min: 514.06 / Avg: 514.77 / Max: 515.56)
  Run 2:         509.36  (SE +/- 1.57, N = 3; Min: 506.25 / Avg: 509.36 / Max: 511.33)
  Linux 5.9-rc7: 512.03  (SE +/- 1.80, N = 3; Min: 509.07 / Avg: 512.03 / Max: 515.3)
  Compiler: (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: SqueezeNetV1.0 (ms; fewer is better):
  Linux 5.9-rc2: 5.848  (SE +/- 0.021, N = 3; Min: 5.81 / Avg: 5.85 / Max: 5.88; in-run MIN: 5.73 / MAX: 16.53)
  Run 2:         5.838  (SE +/- 0.034, N = 3; Min: 5.77 / Avg: 5.84 / Max: 5.88; in-run MIN: 5.72 / MAX: 17.66)
  Linux 5.9-rc7: 5.828  (SE +/- 0.031, N = 3; Min: 5.77 / Avg: 5.83 / Max: 5.87; in-run MIN: 5.73 / MAX: 17.38)
  Compiler: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17, Model: resnet-v2-50 (ms; fewer is better):
  Linux 5.9-rc2: 26.71  (SE +/- 0.03, N = 3; Min: 26.66 / Avg: 26.71 / Max: 26.74; in-run MIN: 26.53 / MAX: 36.39)
  Run 2:         26.73  (SE +/- 0.01, N = 3; Min: 26.71 / Avg: 26.73 / Max: 26.75; in-run MIN: 26.56 / MAX: 35.29)
  Linux 5.9-rc7: 26.64  (SE +/- 0.04, N = 3; Min: 26.6 / Avg: 26.64 / Max: 26.73; in-run MIN: 26.46 / MAX: 36)
  Compiler: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17, Model: MobileNetV2_224 (ms; fewer is better):
  Linux 5.9-rc2: 3.627  (SE +/- 0.026, N = 3; Min: 3.58 / Avg: 3.63 / Max: 3.66; in-run MIN: 3.5 / MAX: 14.42)
  Run 2:         3.642  (SE +/- 0.028, N = 3; Min: 3.6 / Avg: 3.64 / Max: 3.69; in-run MIN: 3.51 / MAX: 15.02)
  Linux 5.9-rc7: 3.639  (SE +/- 0.029, N = 3; Min: 3.58 / Avg: 3.64 / Max: 3.67; in-run MIN: 3.51 / MAX: 14.3)
  Compiler: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17, Model: mobilenet-v1-1.0 (ms; fewer is better):
  Linux 5.9-rc2: 4.014  (SE +/- 0.003, N = 3; Min: 4.01 / Avg: 4.01 / Max: 4.02; in-run MIN: 3.96 / MAX: 5.33)
  Run 2:         4.001  (SE +/- 0.002, N = 3; Min: 4 / Avg: 4 / Max: 4; in-run MIN: 3.95 / MAX: 5.17)
  Linux 5.9-rc7: 3.992  (SE +/- 0.003, N = 3; Min: 3.99 / Avg: 3.99 / Max: 4; in-run MIN: 3.94 / MAX: 5.49)
  Compiler: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, fewer is better)

  Run             Avg     SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   37.15   +/- 0.12   3   36.95 / 37.37     36.83 / 46.99
  Run 2           37.05   +/- 0.02   3   37.01 / 37.08     36.88 / 46.75
  Linux 5.9-rc7   37.08   +/- 0.01   3   37.07 / 37.09     36.95 / 46.44

  Compiler: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
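Each result in this file is reported as an average over N trials together with a standard error (SE). The SE is the sample standard deviation of the trial values divided by the square root of N. A minimal stdlib sketch, using trial timings reconstructed to be consistent with the reported min/avg/max of the Linux 5.9-rc2 resnet-v2-50 row (the middle trial value is an assumption):

```python
import math

def standard_error(trials):
    """Standard error of the mean: sample stddev / sqrt(N)."""
    n = len(trials)
    mean = sum(trials) / n
    # Sample variance uses Bessel's correction (N - 1 in the denominator).
    var = sum((t - mean) ** 2 for t in trials) / (n - 1)
    return math.sqrt(var) / math.sqrt(n)

# Reconstructed trial timings (ms); only the min, avg, and max are reported
# in the results, so the middle value here is hypothetical.
trials = [26.66, 26.74, 26.73]
avg = sum(trials) / len(trials)
se = standard_error(trials)
print(f"{avg:.2f} ms, SE +/- {se:.2f}, N = {len(trials)}")
# → 26.71 ms, SE +/- 0.03, N = 3
```

This reproduces the headline figure of the resnet-v2-50 table above, which is why a tiny SE (e.g. 0.01) signals a very stable result even when the per-sample min/max range is wide.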

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, fewer is better)

  Run             Avg     SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   19.96   +/- 0.04   3   19.91 / 20.04     19.83 / 29.34
  Run 2           19.90   +/- 0.02   3   19.87 / 19.94     19.81 / 25.82
  Linux 5.9-rc7   19.86   +/- 0.00   3   19.86 / 19.87     19.8 / 21.24

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mobilenet (ms, fewer is better)

  Run             Avg     SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   22.59   +/- 0.01   3   22.57 / 22.61     22.45 / 24.84
  Run 2           22.61   +/- 0.04   3   22.56 / 22.68     22.47 / 24.91
  Linux 5.9-rc7   22.54   +/- 0.04   3   22.47 / 22.58     22.4 / 23.32

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)

  Run             Avg    SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   6.30   +/- 0.05   3   6.2 / 6.37        6.13 / 17.25
  Run 2           6.28   +/- 0.03   3   6.22 / 6.32       6.13 / 15.53
  Linux 5.9-rc7   6.34   +/- 0.01   3   6.31 / 6.36       6.15 / 17.33

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)

  Run             Avg    SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   5.31   +/- 0.05   3   5.21 / 5.37       5.19 / 16.2
  Run 2           5.31   +/- 0.03   3   5.25 / 5.36       5.21 / 15.2
  Linux 5.9-rc7   5.34   +/- 0.01   3   5.33 / 5.35       5.19 / 16.13

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)

  Run             Avg    SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   3.83   +/- 0.04   3   3.75 / 3.9        3.73 / 11.24
  Run 2           3.79   +/- 0.02   3   3.75 / 3.83       3.73 / 8.72
  Linux 5.9-rc7   3.80   +/- 0.02   3   3.78 / 3.83       3.71 / 11.95

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, fewer is better)

  Run             Avg    SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   4.99   +/- 0.04   3   4.92 / 5.04       4.88 / 16.5
  Run 2           4.99   +/- 0.06   3   4.89 / 5.08       4.85 / 15.87
  Linux 5.9-rc7   5.00   +/- 0.03   3   4.95 / 5.04       4.87 / 16.43

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)

  Run             Avg    SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   8.23   +/- 0.01   3   8.22 / 8.25       8.19 / 8.89
  Run 2           8.24   +/- 0.00   3   8.24 / 8.24       8.2 / 8.32
  Linux 5.9-rc7   8.24   +/- 0.01   3   8.22 / 8.26       8.19 / 15.38

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: blazeface (ms, fewer is better)

  Run             Avg    SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   1.44   +/- 0.01   3   1.43 / 1.45       1.4 / 2.57
  Run 2           1.45   +/- 0.01   3   1.42 / 1.47       1.41 / 1.53
  Linux 5.9-rc7   1.43   +/- 0.01   3   1.42 / 1.44       1.4 / 2.11

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: googlenet (ms, fewer is better)

  Run             Avg     SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   21.28   +/- 0.01   3   21.27 / 21.31     21.13 / 22.62
  Run 2           21.31   +/- 0.02   3   21.28 / 21.34     21.17 / 23.62
  Linux 5.9-rc7   21.28   +/- 0.01   3   21.27 / 21.29     21.03 / 29.17

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: vgg16 (ms, fewer is better)

  Run             Avg     SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   89.26   +/- 0.03   3   89.21 / 89.31     89 / 97.78
  Run 2           88.60   +/- 0.09   3   88.44 / 88.73     88.31 / 98
  Linux 5.9-rc7   88.54   +/- 0.03   3   88.49 / 88.6      88.32 / 96.97

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, fewer is better)

  Run             Avg     SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   21.69   +/- 0.02   3   21.66 / 21.71     21.57 / 22.93
  Run 2           21.71   +/- 0.03   3   21.68 / 21.77     21.6 / 22.03
  Linux 5.9-rc7   21.66   +/- 0.01   3   21.64 / 21.68     21.58 / 22.79

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: alexnet (ms, fewer is better)

  Run             Avg     SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   22.00   +/- 0.01   3   21.97 / 22.02     21.91 / 31.96
  Run 2           21.98   +/- 0.00   3   21.97 / 21.98     21.9 / 23.16
  Linux 5.9-rc7   21.97   +/- 0.01   3   21.96 / 21.98     21.85 / 23.45

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, fewer is better)

  Run             Avg     SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   40.02   +/- 0.02   3   39.99 / 40.05     39.79 / 41.21
  Run 2           39.91   +/- 0.05   3   39.85 / 40.01     39.69 / 47.23
  Linux 5.9-rc7   39.94   +/- 0.04   3   39.87 / 39.99     39.73 / 48.87

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)

  Run             Avg     SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   36.98   +/- 0.03   3   36.93 / 37.02     36.82 / 38.24
  Run 2           36.99   +/- 0.00   3   36.99 / 36.99     36.88 / 39.1
  Linux 5.9-rc7   36.98   +/- 0.02   3   36.95 / 37.01     36.87 / 38.2

  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better)

  Run             Avg      SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   326.24   +/- 0.72   3   325.36 / 327.67   324.62 / 342.63
  Run 2           325.09   +/- 0.18   3   324.91 / 325.45   323.61 / 353.52
  Linux 5.9-rc7   324.79   +/- 0.14   3   324.59 / 325.07   323.57 / 326

  Compiler: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)

  Run             Avg      SE         N   Trial Min / Max   Sample Min / Max
  Linux 5.9-rc2   330.22   +/- 0.10   3   330.11 / 330.42   329.35 / 331.15
  Run 2           330.06   +/- 0.11   3   329.84 / 330.2    329.13 / 331.7
  Linux 5.9-rc7   329.99   +/- 0.24   3   329.51 / 330.32   328.95 / 331.25

  Compiler: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, more is better)

  Run             Avg            SE              N   Trial Min / Max
  Linux 5.9-rc2   407938209.35   +/- 568366.74   3   407201074.77 / 409056174.2
  Run 2           406942592.15   +/- 488732.80   3   406112935.2 / 407805004.43
  Linux 5.9-rc7   407244100.97   +/- 402208.96   3   406687814.99 / 408025459.19

  Compiler: (CC) gcc options: -O3 -march=native -lm

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.
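These scripts time individual library calls with a wall-clock timer and report the elapsed Seconds (fewer is better); the scikit_ica case, for instance, times a scikit-learn FastICA fit. A minimal sketch of that timing pattern, using a stand-in workload so the example does not depend on scikit-learn:

```python
import time

def time_workload(workload, repeats=3):
    """Run a callable `repeats` times and return per-run wall-clock seconds,
    mirroring how the benchmark scripts report Seconds (fewer is better)."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return timings

# Stand-in workload; the real scikit_ica script would instead fit something
# like sklearn.decomposition.FastICA on a generated matrix.
def workload():
    sum(i * i for i in range(100_000))

timings = time_workload(workload)
print(min(timings), sum(timings) / len(timings), max(timings))
```

The min/avg/max of those per-run timings is exactly what the result tables below summarize; note that the number of repeats varies per test here (N = 3 to 12), since the harness re-runs noisier tests more often.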

Mlpack Benchmark - Benchmark: scikit_ica (Seconds, fewer is better)

  Run             Avg     SE         N   Trial Min / Max
  Linux 5.9-rc2   49.62   +/- 0.50   8   48.14 / 51.97
  Run 2           48.45   +/- 0.51   3   47.92 / 49.46
  Linux 5.9-rc7   51.01   +/- 0.53   3   49.95 / 51.57

Mlpack Benchmark - Benchmark: scikit_qda (Seconds, fewer is better)

  Run             Avg     SE         N    Trial Min / Max
  Linux 5.9-rc2   83.48   +/- 3.59   9    76.13 / 109.5
  Run 2           79.42   +/- 1.24   3    77.95 / 81.89
  Linux 5.9-rc7   80.95   +/- 2.47   12   75.55 / 99.09

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, fewer is better)

  Run             Avg     SE         N   Trial Min / Max
  Linux 5.9-rc2   26.39   +/- 0.01   3   26.37 / 26.4
  Run 2           26.34   +/- 0.01   3   26.33 / 26.36
  Linux 5.9-rc7   26.35   +/- 0.01   3   26.34 / 26.38

Mlpack Benchmark - Benchmark: scikit_linearridgeregression (Seconds, fewer is better)

  Run             Avg    SE         N   Trial Min / Max
  Linux 5.9-rc2   3.13   +/- 0.05   3   3.05 / 3.23
  Run 2           3.43   +/- 0.04   7   3.27 / 3.57
  Linux 5.9-rc7   3.44   +/- 0.06   3   3.38 / 3.56
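When many per-test results like these are collapsed into one score per kernel, the result viewer offers an overall geometric mean, which, unlike an arithmetic mean, is not dominated by whichever test has the largest raw numbers. A stdlib sketch with hypothetical normalized scores (the values below are illustrative, not taken from this result file):

```python
import math

def geometric_mean(values):
    """nth root of the product, computed in log space for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-test scores normalized against a baseline run (= 1.0).
scores = [1.02, 0.98, 1.05]
print(round(geometric_mean(scores), 4))
# → 1.0163
```

Normalizing each test to a baseline before averaging is what makes an ms-scale NCNN result and a QUIPs-scale HINT result comparable in a single summary figure.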

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code developed by LLNL. Its primary purpose is to research how data layout, programming paradigms, and architectures affect the implementation and performance of Sn transport. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, more is better)

  Run             Avg        SE              N   Trial Min / Max
  Linux 5.9-rc2   11243443   +/- 139687.61   3   11001090 / 11484980
  Run 2           11098707   +/- 113034.19   3   10873640 / 11229650
  Linux 5.9-rc7   11507903   +/- 167415.40   9   10643630 / 12115120

  Compiler: (CXX) g++ options: -O3 -fopenmp

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.4 - Test: DNN - Deep Neural Network (ms, fewer is better)

  Run             Avg    SE          N    Trial Min / Max
  Linux 5.9-rc2   2975   +/- 39.34   15   2689 / 3221
  Run 2           3071   +/- 97.65   15   2684 / 4127
  Linux 5.9-rc7   3188   +/- 178.95  15   2689 / 5455

  Compiler: (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
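The parameters encoded in each result title (concurrent streams, batch size, tag cardinality, points per series) correspond to flags of the Inch load generator. A small illustrative helper showing that mapping; the flag names assume Inch's documented -c/-b/-t/-p options, and the helper itself is hypothetical, not part of the test profile:

```python
# Hypothetical helper translating a result title's parameters into an
# inch invocation; flag names assume the documented inch CLI.
def inch_command(streams, batch_size, tags, points_per_series):
    return [
        "inch",
        "-c", str(streams),           # concurrent write streams
        "-b", str(batch_size),        # points per write batch
        "-t", tags,                   # tag cardinality, e.g. "2,5000,1"
        "-p", str(points_per_series), # points per series
    ]

print(" ".join(inch_command(64, 10000, "2,5000,1", 10000)))
# → inch -c 64 -b 10000 -t 2,5000,1 -p 10000
```

Running that command against a live InfluxDB instance would reproduce the shape of the 64-stream workload reported below.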

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better)

  Run             Avg         SE            N   Trial Min / Max
  Linux 5.9-rc2   1095047.3   +/- 2125.05   3   1090798.1 / 1097248.6
  Linux 5.9-rc7   1098588.9   +/- 1070.82   3   1097109.2 / 1100669.6

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better)

  Run             Avg         SE            N   Trial Min / Max
  Linux 5.9-rc2   1460364.5   +/- 3124.16   3   1454152.4 / 1464052.1
  Linux 5.9-rc7   1464209.7   +/- 3346.81   3   1459732.5 / 1470757.5