Core i7 5775C Perf In September 2020

Intel Core i7-5775C testing with a MSI Z97-G45 GAMING (MS-7821) v1.0 (V2.9 BIOS) and MSI Intel Iris Pro 6200 3GB on Ubuntu 18.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009259-FI-COREI757768
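For convenience, the whole comparison workflow is that one command from a terminal; a minimal sketch (the Phoronix Test Suite interactively handles downloading and installing any missing test profiles) is:

  # compare the local system against this published result file
  phoronix-test-suite benchmark 2009259-FI-COREI757768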
This result file includes tests from the following suites/categories:

AV1: 3 Tests
Timed Code Compilation: 3 Tests
C/C++ Compiler Tests: 6 Tests
Compression Tests: 3 Tests
CPU Massive: 13 Tests
Creator Workloads: 13 Tests
Encoding: 3 Tests
Fortran Tests: 3 Tests
HPC - High Performance Computing: 12 Tests
Imaging: 4 Tests
Java: 3 Tests
Machine Learning: 5 Tests
Molecular Dynamics: 4 Tests
MPI Benchmarks: 5 Tests
Multi-Core: 13 Tests
NVIDIA GPU Compute: 4 Tests
OpenMPI Tests: 5 Tests
Programmer / Developer System Benchmarks: 5 Tests
Python Tests: 3 Tests
Scientific Computing: 7 Tests
Server CPU Tests: 8 Tests
Video Encoding: 3 Tests

Test Runs

Run 1: September 23 2020 (Test Duration: 12 Hours, 56 Minutes)
Run 2: September 24 2020 (Test Duration: 13 Hours, 1 Minute)
Run 3: September 24 2020 (Test Duration: 12 Hours, 20 Minutes)
Average Test Duration: 12 Hours, 46 Minutes


Core i7 5775C Perf In September 2020 - System Details

Processor: Intel Core i7-5775C @ 3.70GHz (4 Cores / 8 Threads)
Motherboard: MSI Z97-G45 GAMING (MS-7821) v1.0 (V2.9 BIOS)
Chipset: Intel Broadwell-U DMI
Memory: 16GB
Disk: 120GB CT120BX100SSD1
Graphics: MSI Intel Iris Pro 6200 3GB (1150MHz)
Audio: Intel Broadwell-U Audio
Monitor: VA2431
Network: Qualcomm Atheros Killer E220x
OS: Ubuntu 18.10
Kernel: 5.0.0-999-generic (x86_64) 20190223
Desktop: GNOME Shell 3.30.2
Display Server: X Server 1.20.1
Display Driver: modesetting 1.20.1
OpenGL: 4.5 Mesa 19.2.0-devel (git-2631fd3 2019-07-24 cosmic-oibaf-ppa)
Vulkan: 1.1.102
Compiler: GCC 8.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Notes:
- GCC configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- CPU Microcode: 0x20
- Java: OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.10.1)
- Python: Python 2.7.16 + Python 3.6.8
- Security: l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling

Result Overview (Run 1 vs. Run 2 vs. Run 3, normalized results ranging roughly 100% to 111%). Tests covered: OpenCV, Build2, NeatBench, oneDNN, Java Gradle Build, System GZIP Decompression, C-Blosc, Timed Linux Kernel Compilation, LibRaw, InfluxDB, Hugin, Mobile Neural Network, MPV, eSpeak-NG Speech Engine, GNU Octave Benchmark, GPAW, GLmark2, Monte Carlo Simulations of Ionised Nebulae, Zstd Compression, NCNN, NAMD, libavif avifenc, Stress-NG, Montage Astronomical Image Mosaic Engine, Timed Apache Compilation, BRL-CAD, AOM AV1, TensorFlow Lite, LAMMPS Molecular Dynamics Simulator, GROMACS, OCRMyPDF, SVT-AV1, ASTC Encoder, Incompact3D, DaCapo Benchmark, Renaissance, LuxCoreRender

Detailed Results Table: the condensed side-by-side table of all test results for Run 1, Run 2, and Run 3 is presented per test in the individual result sections that follow.

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04, Resolution: 1920 x 1080 (Score, more is better)
Run 1: 985 | Run 2: 982 | Run 3: 988

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0 Beta 5, Compressor: blosclz (MB/s, more is better)
Run 1: 6788.0 | Run 2: 6787.7 | Run 3: 6887.9

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better)
Run 1: 4.10818 | Run 2: 4.09223 | Run 3: 4.09524

Incompact3D

Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations, together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17, Input: Cylinder (Seconds, fewer is better)
Run 1: 769.23 | Run 2: 768.57 | Run 3: 766.92

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN, the Monte Carlo Simulations of Ionised Nebulae code, is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24, Input: Dust 2D tau100.0 (Seconds, fewer is better)
Run 1: 296 | Run 2: 295 | Run 3: 294

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 24Aug2020, Model: Rhodopsin Protein (ns/day, more is better)
Run 1: 2.864 | Run 2: 2.866 | Run 3: 2.870

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build, Gradle Build: Reactor (Seconds, fewer is better)
Run 1: 265.78 | Run 2: 260.30 | Run 3: 260.04

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: H2 (msec, fewer is better)
Run 1: 3356 | Run 2: 3364 | Run 3: 3332

DaCapo Benchmark 9.12-MR1, Java Test: Jython (msec, fewer is better)
Run 1: 5257 | Run 2: 5242 | Run 3: 5228

DaCapo Benchmark 9.12-MR1, Java Test: Tradesoap (msec, fewer is better)
Run 1: 7852 | Run 2: 7871 | Run 3: 7942

DaCapo Benchmark 9.12-MR1, Java Test: Tradebeans (msec, fewer is better)
Run 1: 4573 | Run 2: 4621 | Run 3: 4638

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service, Scala, and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0, Test: Scala Dotty (ms, fewer is better)
Run 1: 1974.92 | Run 2: 1964.48 | Run 3: 1999.39

Renaissance 0.10.0, Test: Random Forest (ms, fewer is better)
Run 1: 2138.86 | Run 2: 2135.79 | Run 3: 2106.81

Renaissance 0.10.0, Test: Apache Spark ALS (ms, fewer is better)
Run 1: 2664.26 | Run 2: 2671.80 | Run 3: 2620.76

Renaissance 0.10.0, Test: Apache Spark Bayes (ms, fewer is better)
Run 1: 3779.50 | Run 2: 3989.70 | Run 3: 3909.15

Renaissance 0.10.0, Test: Savina Reactors.IO (ms, fewer is better)
Run 1: 19921.23 | Run 2: 19713.40 | Run 3: 19384.78

Renaissance 0.10.0, Test: Apache Spark PageRank (ms, fewer is better)
Run 1: 4865.50 | Run 2: 4803.63 | Run 3: 4712.37

Renaissance 0.10.0, Test: In-Memory Database Shootout (ms, fewer is better)
Run 1: 3735.45 | Run 2: 3700.01 | Run 3: 3787.31

Renaissance 0.10.0, Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better)
Run 1: 9284.60 | Run 2: 9409.74 | Run 3: 9242.49

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
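For context on what the two levels below mean outside the test harness, the reference zstd command-line tool exposes the same compression levels directly. This is only an illustrative sketch with a placeholder file name, not the exact invocation used by the test profile:

  # level 3 (fast) and level 19 (slow, high ratio); sample.iso is a placeholder input
  zstd -3 -k sample.iso -o sample-l3.zst
  zstd -19 -k sample.iso -o sample-l19.zst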

Zstd Compression 1.4.5, Compression Level: 3 (MB/s, more is better)
Run 1: 2494.7 | Run 2: 2498.2 | Run 3: 2502.0

Zstd Compression 1.4.5, Compression Level: 19 (MB/s, more is better)
Run 1: 23.1 | Run 2: 23.2 | Run 3: 23.3

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20, Post-Processing Benchmark (Mpix/sec, more is better)
Run 1: 28.13 | Run 2: 28.24 | Run 3: 27.95

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5, Harness: IP Batch 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
Run 1: 8.26351 | Run 2: 8.30212 | Run 3: 8.31908

oneDNN 1.5, Harness: IP Batch All - Data Type: f32 - Engine: CPU (ms, fewer is better)
Run 1: 120.15 | Run 2: 120.30 | Run 3: 118.53

oneDNN 1.5, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
Run 1: 21.16 | Run 2: 20.38 | Run 3: 20.85

oneDNN 1.5, Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
Run 1: 10.07 | Run 2: 10.06 | Run 3: 10.12

oneDNN 1.5, Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
Run 1: 15.96 | Run 2: 15.70 | Run 3: 16.45

oneDNN 1.5, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
Run 1: 558.90 | Run 2: 535.21 | Run 3: 577.33

oneDNN 1.5, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
Run 1: 175.57 | Run 2: 169.76 | Run 3: 176.76

oneDNN 1.5, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
Run 1: 3.38848 | Run 2: 3.66649 | Run 3: 3.71663

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 0 Two-Pass (Frames Per Second, more is better)
Run 1: 0.2 | Run 2: 0.2 | Run 3: 0.2

AOM AV1 2.0, Encoder Mode: Speed 4 Two-Pass (Frames Per Second, more is better)
Run 1: 1.65 | Run 2: 1.66 | Run 3: 1.66

AOM AV1 2.0, Encoder Mode: Speed 6 Realtime (Frames Per Second, more is better)
Run 1: 13.44 | Run 2: 13.47 | Run 3: 13.47

AOM AV1 2.0, Encoder Mode: Speed 6 Two-Pass (Frames Per Second, more is better)
Run 1: 2.64 | Run 2: 2.66 | Run 3: 2.65

AOM AV1 2.0, Encoder Mode: Speed 8 Realtime (Frames Per Second, more is better)
Run 1: 33.82 | Run 2: 33.67 | Run 3: 33.83

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8, Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, more is better)
Run 1: 1.496 | Run 2: 1.498 | Run 3: 1.498

SVT-AV1 0.8, Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, more is better)
Run 1: 12.19 | Run 2: 12.22 | Run 3: 12.21

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3, Scene: DLSC (M samples/sec, more is better)
Run 1: 0.67 | Run 2: 0.67 | Run 3: 0.67

libavif avifenc

This is a test of the AOMedia libavif library, measuring the time to encode a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.
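The encoder speed settings in the results below correspond to avifenc's speed option; as a rough sketch only (the file names are placeholders and this is not necessarily the test profile's exact command line):

  # speed ranges from 0 (slowest, best compression) to 10 (fastest)
  avifenc --speed 8 input.jpg output.avif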

libavif avifenc 0.7.3, Encoder Speed: 0 (Seconds, fewer is better)
Run 1: 195.32 | Run 2: 195.31 | Run 3: 195.39

libavif avifenc 0.7.3, Encoder Speed: 2 (Seconds, fewer is better)
Run 1: 115.30 | Run 2: 115.44 | Run 3: 114.99

libavif avifenc 0.7.3, Encoder Speed: 8 (Seconds, fewer is better)
Run 1: 8.517 | Run 2: 8.492 | Run 3: 8.491

libavif avifenc 0.7.3, Encoder Speed: 10 (Seconds, fewer is better)
Run 1: 7.877 | Run 2: 7.789 | Run 3: 7.797

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41, Time To Compile (Seconds, fewer is better)
Run 1: 30.58 | Run 2: 30.60 | Run 3: 30.52

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
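The profile builds the kernel in its default configuration; a hand-rolled approximation on an extracted Linux 5.4 source tree (the steps and job count are assumptions, not the test profile's exact procedure) would be:

  # inside the linux-5.4 source directory
  make defconfig
  time make -j"$(nproc)"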

Timed Linux Kernel Compilation 5.4, Time To Compile (Seconds, fewer is better)
Run 1: 177.85 | Run 2: 177.88 | Run 3: 180.37

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.12, Time To Compile (Seconds, fewer is better)
Run 1: 215.35 | Run 2: 209.97 | Run 3: 208.53

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis (Seconds, fewer is better)
Run 1: 40.64 | Run 2: 40.76 | Run 3: 40.46

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0, Mosaic of M17, K band, 1.5 deg x 1.5 deg (Seconds, fewer is better)
Run 1: 92.33 | Run 2: 92.24 | Run 3: 92.09

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.
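In essence this is just timing a gzip/tar extraction; a minimal sketch, with the tarball name as a placeholder for the Qt5 source package, is:

  # qt-everywhere-src.tar.gz is a placeholder file name
  time tar -xzf qt-everywhere-src.tar.gz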

System GZIP Decompression (Seconds, fewer is better)
Run 1: 3.447 | Run 2: 3.385 | Run 3: 3.381

MPV

MPV is an open-source, cross-platform media player. This test profile measures the frame rate that can be achieved when playback runs unsynchronized, in a desynchronized mode. Learn more via the OpenBenchmarking.org test page.
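A comparable unsynchronized decode run can be reproduced with mpv directly; the flags below are standard mpv options but not necessarily the exact set used by the test profile, and the input file name is a placeholder:

  # decode as fast as possible, ignoring normal playback timing and audio
  mpv --untimed --no-audio input.mp4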

MPV, Video Input: Big Buck Bunny Sunflower 4K - Decode: Software Only (FPS, more is better)
Run 1: 353.18 | Run 2: 349.48 | Run 3: 344.54

MPV, Video Input: Big Buck Bunny Sunflower 1080p - Decode: Software Only (FPS, more is better)
Run 1: 902.29 | Run 2: 911.23 | Run 3: 911.49

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package running on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.1, Water Benchmark (Ns Per Day, more is better)
Run 1: 0.485 | Run 2: 0.484 | Run 3: 0.485

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: SqueezeNet (Microseconds, fewer is better)
Run 1: 541870 | Run 2: 540887 | Run 3: 540902

TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, fewer is better)
Run 1: 7825710 | Run 2: 7809863 | Run 3: 7810147

TensorFlow Lite 2020-08-23, Model: NASNet Mobile (Microseconds, fewer is better)
Run 1: 387372 | Run 2: 386514 | Run 3: 386168

TensorFlow Lite 2020-08-23, Model: Mobilenet Float (Microseconds, fewer is better)
Run 1: 366589 | Run 2: 365619 | Run 3: 365631

TensorFlow Lite 2020-08-23, Model: Mobilenet Quant (Microseconds, fewer is better)
Run 1: 354486 | Run 2: 353741 | Run 3: 353786

TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, fewer is better)
Run 1: 7077587 | Run 2: 7063550 | Run 3: 7064830

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Fast (Seconds, fewer is better)
Run 1: 8.40 | Run 2: 8.41 | Run 3: 8.42

ASTC Encoder 2.0, Preset: Medium (Seconds, fewer is better)
Run 1: 10.93 | Run 2: 10.92 | Run 3: 10.92

ASTC Encoder 2.0, Preset: Thorough (Seconds, fewer is better)
Run 1: 72.66 | Run 2: 72.61 | Run 3: 72.61

ASTC Encoder 2.0, Preset: Exhaustive (Seconds, fewer is better)
Run 1: 589.16 | Run 2: 588.40 | Run 3: 588.48

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin, Panorama Photo Assistant + Stitching Time (Seconds, fewer is better)
Run 1: 81.91 | Run 2: 82.23 | Run 3: 82.74

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 6.2.4, Processing 60 Page PDF Document (Seconds, fewer is better)
Run 1: 56.16 | Run 2: 56.08 | Run 3: 56.12

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 4.4.1 (Seconds, fewer is better)
Run 1: 8.707 | Run 2: 8.653 | Run 3: 8.710

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
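Each result below exercises a single stress-ng stressor class; individual stressors can also be run directly, for example (a sketch, not the test profile's exact arguments):

  # run the CPU and matrix stressors for 60 seconds each and report bogo-ops/s
  stress-ng --cpu 0 --timeout 60s --metrics-brief
  stress-ng --matrix 0 --timeout 60s --metrics-brief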

Stress-NG 0.11.07, Test: MMAP (Bogo Ops/s, more is better)
Run 1: 54.22 | Run 2: 54.32 | Run 3: 54.19

Stress-NG 0.11.07, Test: NUMA (Bogo Ops/s, more is better)
Run 1: 85.99 | Run 2: 85.43 | Run 3: 85.43

Stress-NG 0.11.07, Test: MEMFD (Bogo Ops/s, more is better)
Run 1: 286.58 | Run 2: 297.06 | Run 3: 293.65

Stress-NG 0.11.07, Test: Atomic (Bogo Ops/s, more is better)
Run 1: 202671.08 | Run 2: 205022.47 | Run 3: 207144.00

Stress-NG 0.11.07, Test: Crypto (Bogo Ops/s, more is better)
Run 1: 817.97 | Run 2: 818.90 | Run 3: 816.73

Stress-NG 0.11.07, Test: Malloc (Bogo Ops/s, more is better)
Run 1: 31337895.18 | Run 2: 32002407.75 | Run 3: 31529776.24

Stress-NG 0.11.07, Test: RdRand (Bogo Ops/s, more is better)
Run 1: 240337.19 | Run 2: 240331.88 | Run 3: 240319.26

Stress-NG 0.11.07, Test: Forking (Bogo Ops/s, more is better)
Run 1: 37547.26 | Run 2: 37723.77 | Run 3: 37590.58

Stress-NG 0.11.07, Test: SENDFILE (Bogo Ops/s, more is better)
Run 1: 46822.85 | Run 2: 46865.51 | Run 3: 46879.66

Stress-NG 0.11.07, Test: CPU Cache (Bogo Ops/s, more is better)
Run 1: 45.27 | Run 2: 44.21 | Run 3: 45.91

Stress-NG 0.11.07, Test: CPU Stress (Bogo Ops/s, more is better)
Run 1: 1751.85 | Run 2: 1741.82 | Run 3: 1770.49

Stress-NG 0.11.07, Test: Semaphores (Bogo Ops/s, more is better)
Run 1: 743806.03 | Run 2: 744929.81 | Run 3: 744313.54

Stress-NG 0.11.07, Test: Matrix Math (Bogo Ops/s, more is better)
Run 1: 19391.27 | Run 2: 19265.77 | Run 3: 19207.24

Stress-NG 0.11.07, Test: Vector Math (Bogo Ops/s, more is better)
Run 1: 25159.22 | Run 2: 25153.83 | Run 3: 25103.46

Stress-NG 0.11.07, Test: Memory Copying (Bogo Ops/s, more is better)
Run 1: 2591.44 | Run 2: 2617.52 | Run 3: 2396.66

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Socket ActivityRun 3Run 2Run 113002600390052006500SE +/- 72.33, N = 3SE +/- 102.63, N = 3SE +/- 57.67, N = 36125.056030.966030.691. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Socket ActivityRun 3Run 2Run 111002200330044005500Min: 5988.78 / Avg: 6125.05 / Max: 6235.24Min: 5857.82 / Avg: 6030.96 / Max: 6213.02Min: 5927.86 / Avg: 6030.69 / Max: 6127.361. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Context SwitchingRun 3Run 2Run 1400K800K1200K1600K2000KSE +/- 16739.57, N = 3SE +/- 14863.93, N = 3SE +/- 19078.01, N = 31728927.621698242.991693803.991. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Context SwitchingRun 3Run 2Run 1300K600K900K1200K1500KMin: 1711607.8 / Avg: 1728927.62 / Max: 1762399.97Min: 1676524.32 / Avg: 1698242.99 / Max: 1726681.68Min: 1660581.53 / Avg: 1693803.99 / Max: 1726666.761. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Glibc C String FunctionsRun 3Run 2Run 1100K200K300K400K500KSE +/- 2883.26, N = 3SE +/- 7777.27, N = 3SE +/- 384.03, N = 3458759.57445829.21453045.741. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Glibc C String FunctionsRun 3Run 2Run 180K160K240K320K400KMin: 453021.44 / Avg: 458759.57 / Max: 462123.59Min: 430296.05 / Avg: 445829.21 / Max: 454301.93Min: 452278.85 / Avg: 453045.74 / Max: 453465.961. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Glibc Qsort Data SortingRun 3Run 2Run 11326395265SE +/- 0.11, N = 3SE +/- 0.13, N = 3SE +/- 0.28, N = 358.4358.6058.691. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Glibc Qsort Data SortingRun 3Run 2Run 11224364860Min: 58.27 / Avg: 58.43 / Max: 58.63Min: 58.37 / Avg: 58.6 / Max: 58.83Min: 58.33 / Avg: 58.69 / Max: 59.231. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: System V Message PassingRun 3Run 2Run 11.4M2.8M4.2M5.6M7MSE +/- 98775.41, N = 5SE +/- 20978.35, N = 3SE +/- 110965.05, N = 36385777.226453494.076263395.171. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: System V Message PassingRun 3Run 2Run 11.1M2.2M3.3M4.4M5.5MMin: 5991954.87 / Avg: 6385777.22 / Max: 6505142.68Min: 6414494.89 / Avg: 6453494.07 / Max: 6486394.07Min: 6042101.86 / Avg: 6263395.17 / Max: 6388591.011. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc
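
For anyone wanting to reproduce an individual Stress-NG figure outside the Phoronix Test Suite, a minimal Python sketch is shown below; it assumes a stress-ng binary on the PATH and uses only standard stress-ng options (--cpu, --timeout, --metrics-brief). The output parsing is a simplification, since the exact metrics line format varies by stress-ng version.

```python
import re
import subprocess

# Run the CPU stressor on all online CPUs for 60 seconds; --metrics-brief
# makes stress-ng print a bogo-ops summary table at the end of the run.
cmd = ["stress-ng", "--cpu", "0", "--timeout", "60s", "--metrics-brief"]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)

# Grab any summary line mentioning the cpu stressor; the exact column layout
# of the metrics table differs between stress-ng versions, so just print it.
output = result.stdout + result.stderr
for line in output.splitlines():
    if re.search(r"\bcpu\b", line) and "stress-ng" in line:
        print(line.strip())
```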

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
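
For context on what the GPAW test spends its time doing, below is a minimal, hypothetical ASE + GPAW calculation in Python. It runs a small water molecule rather than the carbon nanotube input used by the benchmark, and assumes the gpaw and ase packages are installed.

```python
from ase.build import molecule
from gpaw import GPAW, PW

# Build a small, fully periodic test system (the benchmark itself uses a
# much larger carbon nanotube input, which is why it takes ~650 seconds here).
atoms = molecule('H2O')
atoms.center(vacuum=3.0)
atoms.pbc = True  # plane-wave mode needs periodic boundary conditions

# Plane-wave basis with the PBE exchange-correlation functional.
calc = GPAW(mode=PW(300), xc='PBE', txt='gpaw-out.txt')
atoms.calc = calc

# The self-consistent DFT ground-state calculation is the timed portion.
energy = atoms.get_potential_energy()
print(f"Potential energy: {energy:.4f} eV")
```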

GPAW 20.1 - Input: Carbon Nanotube (Seconds, Fewer Is Better)
  Run 1: 653.11 (SE +/- 0.92, N = 3; Min: 651.35 / Max: 654.45)
  Run 2: 648.61 (SE +/- 0.96, N = 3; Min: 646.97 / Max: 650.3)
  Run 3: 648.33 (SE +/- 0.86, N = 3; Min: 647.11 / Max: 649.97)
  (CC) gcc options: -pthread -shared -lxc -lblas -lmpi

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
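
As a rough illustration of the inference work these timings measure, below is a minimal sketch using MNN's Python bindings (the pip "MNN" package); the model path is a placeholder, and the classic Interpreter/session API shown here is an assumption about the installed binding, not part of the benchmark harness.

```python
import numpy as np
import MNN  # pip package "MNN" (pymnn); assumed to provide the session API below

# Load a converted .mnn model -- the file name is a placeholder.
interpreter = MNN.Interpreter("squeezenet_v1.0.mnn")
session = interpreter.createSession()
input_tensor = interpreter.getSessionInput(session)

# Feed a dummy NCHW float32 image; 1x3x224x224 matches SqueezeNetV1.0.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
tmp = MNN.Tensor((1, 3, 224, 224), MNN.Halide_Type_Float,
                 data, MNN.Tensor_DimensionType_Caffe)
input_tensor.copyFrom(tmp)

# A single forward pass -- the "ms" figures below average many such calls.
interpreter.runSession(session)
output = interpreter.getSessionOutput(session)
print("output shape:", output.getShape())
```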

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  Run 1: 9.271 (SE +/- 0.069, N = 3; Min: 9.16 / Max: 9.4) MIN: 9.11 / MAX: 10.35
  Run 2: 9.390 (SE +/- 0.008, N = 3; Min: 9.38 / Max: 9.4) MIN: 9.34 / MAX: 10.45
  Run 3: 9.399 (SE +/- 0.014, N = 3; Min: 9.37 / Max: 9.42) MIN: 9.34 / MAX: 22.37

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, Fewer Is Better)
  Run 1: 44.34 (SE +/- 0.23, N = 3; Min: 44 / Max: 44.77) MIN: 43.74 / MAX: 91.68
  Run 2: 44.85 (SE +/- 0.11, N = 3; Min: 44.63 / Max: 45) MIN: 44.55 / MAX: 73.46
  Run 3: 44.90 (SE +/- 0.20, N = 3; Min: 44.63 / Max: 45.29) MIN: 44.57 / MAX: 57.52

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, Fewer Is Better)
  Run 1: 4.956 (SE +/- 0.003, N = 3; Min: 4.95 / Max: 4.96) MIN: 4.92 / MAX: 5.83
  Run 2: 4.967 (SE +/- 0.018, N = 3; Min: 4.93 / Max: 4.99) MIN: 4.91 / MAX: 6.37
  Run 3: 4.968 (SE +/- 0.023, N = 3; Min: 4.94 / Max: 5.01) MIN: 4.92 / MAX: 6.4

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
  Run 1: 6.928 (SE +/- 0.009, N = 3; Min: 6.91 / Max: 6.94) MIN: 6.88 / MAX: 7.79
  Run 2: 6.949 (SE +/- 0.018, N = 3; Min: 6.91 / Max: 6.97) MIN: 6.9 / MAX: 19.59
  Run 3: 6.935 (SE +/- 0.009, N = 3; Min: 6.92 / Max: 6.95) MIN: 6.9 / MAX: 19.71

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, Fewer Is Better)
  Run 1: 54.38 (SE +/- 0.40, N = 3; Min: 53.83 / Max: 55.16) MIN: 53.57 / MAX: 66.62
  Run 2: 54.89 (SE +/- 0.07, N = 3; Min: 54.78 / Max: 55.02) MIN: 54.66 / MAX: 73.33
  Run 3: 54.71 (SE +/- 0.07, N = 3; Min: 54.58 / Max: 54.83) MIN: 54.42 / MAX: 67.42

All Mobile Neural Network results: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and embedded platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: squeezenet_int8 (ms, Fewer Is Better)
  Run 1: 25.32 (SE +/- 0.03, N = 3; Min: 25.27 / Max: 25.36) MIN: 25.06 / MAX: 26.6
  Run 2: 25.53 (SE +/- 0.16, N = 3; Min: 25.35 / Max: 25.85) MIN: 25.2 / MAX: 106.62
  Run 3: 25.38 (SE +/- 0.07, N = 3; Min: 25.23 / Max: 25.47) MIN: 25.08 / MAX: 38.1

NCNN 20200916 - Target: CPU - Model: mobilenet_v3 (ms, Fewer Is Better)
  Run 1: 6.00 (SE +/- 0.01, N = 3; Min: 5.98 / Max: 6.02) MIN: 5.94 / MAX: 6.48
  Run 2: 5.99 (SE +/- 0.02, N = 3; Min: 5.96 / Max: 6.02) MIN: 5.94 / MAX: 6.2
  Run 3: 6.02 (SE +/- 0.03, N = 3; Min: 5.95 / Max: 6.06) MIN: 5.93 / MAX: 6.12

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, Fewer Is Better)
  Run 1: 4.92 (SE +/- 0.03, N = 3; Min: 4.88 / Max: 4.97) MIN: 4.86 / MAX: 5.52
  Run 2: 4.95 (SE +/- 0.03, N = 3; Min: 4.9 / Max: 4.99) MIN: 4.89 / MAX: 9.04
  Run 3: 4.93 (SE +/- 0.03, N = 3; Min: 4.87 / Max: 4.97) MIN: 4.86 / MAX: 5.04

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  Run 1: 6.57 (SE +/- 0.02, N = 3; Min: 6.54 / Max: 6.6) MIN: 6.5 / MAX: 7.61
  Run 2: 6.55 (SE +/- 0.02, N = 3; Min: 6.53 / Max: 6.59) MIN: 6.5 / MAX: 9.36
  Run 3: 6.65 (SE +/- 0.02, N = 3; Min: 6.61 / Max: 6.68) MIN: 6.59 / MAX: 6.71

NCNN 20200916 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  Run 1: 2.20 (SE +/- 0.01, N = 3; Min: 2.18 / Max: 2.22) MIN: 2.16 / MAX: 2.8
  Run 2: 2.19 (SE +/- 0.03, N = 3; Min: 2.13 / Max: 2.25) MIN: 2.12 / MAX: 2.46
  Run 3: 2.23 (SE +/- 0.03, N = 3; Min: 2.18 / Max: 2.26) MIN: 2.17 / MAX: 2.43

NCNN 20200916 - Target: CPU - Model: googlenet_int8 (ms, Fewer Is Better)
  Run 1: 71.78 (SE +/- 0.11, N = 3; Min: 71.6 / Max: 71.99) MIN: 71.24 / MAX: 84.94
  Run 2: 71.54 (SE +/- 0.07, N = 3; Min: 71.4 / Max: 71.62) MIN: 71.13 / MAX: 84.34
  Run 3: 71.58 (SE +/- 0.09, N = 3; Min: 71.49 / Max: 71.75) MIN: 71.35 / MAX: 75.03

NCNN 20200916 - Target: CPU - Model: vgg16_int8 (ms, Fewer Is Better)
  Run 1: 243.01 (SE +/- 0.98, N = 3; Min: 242.01 / Max: 244.97) MIN: 239.48 / MAX: 317.06
  Run 2: 240.49 (SE +/- 0.25, N = 3; Min: 240 / Max: 240.82) MIN: 238.92 / MAX: 251.71
  Run 3: 240.88 (SE +/- 1.27, N = 3; Min: 239.53 / Max: 243.41) MIN: 237.74 / MAX: 295.55

NCNN 20200916 - Target: CPU - Model: resnet18_int8 (ms, Fewer Is Better)
  Run 1: 39.46 (SE +/- 0.04, N = 3; Min: 39.41 / Max: 39.54) MIN: 39.21 / MAX: 41.03
  Run 2: 39.43 (SE +/- 0.03, N = 3; Min: 39.36 / Max: 39.47) MIN: 39.25 / MAX: 41.06
  Run 3: 39.39 (SE +/- 0.01, N = 3; Min: 39.37 / Max: 39.4) MIN: 39.26 / MAX: 40.72

NCNN 20200916 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
  Run 1: 24.89 (SE +/- 0.16, N = 3; Min: 24.68 / Max: 25.21) MIN: 24.39 / MAX: 26.77
  Run 2: 24.20 (SE +/- 0.22, N = 3; Min: 23.86 / Max: 24.61) MIN: 23.76 / MAX: 25.06
  Run 3: 24.33 (SE +/- 0.31, N = 3; Min: 23.85 / Max: 24.9) MIN: 23.74 / MAX: 27.78

NCNN 20200916 - Target: CPU - Model: resnet50_int8 (ms, Fewer Is Better)
  Run 1: 133.02 (SE +/- 0.08, N = 3; Min: 132.88 / Max: 133.14) MIN: 132.43 / MAX: 145.46
  Run 2: 132.94 (SE +/- 0.14, N = 3; Min: 132.67 / Max: 133.12) MIN: 132.44 / MAX: 146.41
  Run 3: 132.87 (SE +/- 0.25, N = 3; Min: 132.45 / Max: 133.32) MIN: 132.31 / MAX: 135.34

NCNN 20200916 - Target: CPU - Model: mobilenetv2_yolov3 (ms, Fewer Is Better)
  Run 1: 26.87 (SE +/- 0.04, N = 3; Min: 26.79 / Max: 26.91) MIN: 26.43 / MAX: 27.62
  Run 2: 26.66 (SE +/- 0.15, N = 3; Min: 26.51 / Max: 26.96) MIN: 25.93 / MAX: 40.64
  Run 3: 26.49 (SE +/- 0.30, N = 3; Min: 25.92 / Max: 26.91) MIN: 25.81 / MAX: 27.66

All NCNN results: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video software, run on the CPU with optional GPU (OpenCL / CUDA) acceleration. Learn more via the OpenBenchmarking.org test page.

NeatBench 5 - Acceleration: CPU (FPS, More Is Better)
  Run 1: 8.16 (SE +/- 0.06, N = 3; Min: 8.05 / Max: 8.25)
  Run 2: 8.11 (SE +/- 0.02, N = 3; Min: 8.09 / Max: 8.14)
  Run 3: 7.89 (SE +/- 0.07, N = 3; Min: 7.75 / Max: 7.98)

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
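
The VGR figure below comes from BRL-CAD's bundled benchmark, which ray-traces a set of reference scenes. A hypothetical way to drive it from Python is sketched below; the "benchmark" script name and its no-argument behaviour are assumptions about a typical BRL-CAD install, not something this result file documents.

```python
import shutil
import subprocess

# Locate BRL-CAD's benchmark driver; on a typical install it lives in the
# BRL-CAD bin directory as a script simply named "benchmark" (an assumption).
benchmark = shutil.which("benchmark")
if benchmark is None:
    raise SystemExit("BRL-CAD 'benchmark' script not found on PATH")

# Running it with no arguments is assumed to execute the standard suite,
# which ray-traces the reference scenes and reports a VGR summary figure.
proc = subprocess.run([benchmark], capture_output=True, text=True)
for line in proc.stdout.splitlines():
    if "VGR" in line:
        print(line.strip())
```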

BRL-CAD 7.30.8 - VGR Performance Metric (More Is Better)
  Run 1: 45474
  Run 2: 45358
  Run 3: 45365
  (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
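
The DNN result below exercises OpenCV's dnn module. As a rough illustration of the kind of work being timed (not the perf-test harness itself), here is a minimal Python sketch using opencv-python with a placeholder ONNX model path.

```python
import time
import cv2
import numpy as np

# Load any classification network exported to ONNX -- the path is a placeholder.
net = cv2.dnn.readNetFromONNX("model.onnx")

# A dummy 224x224 BGR image packed into a 4-D NCHW blob.
image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0 / 255, size=(224, 224))

net.setInput(blob)
start = time.perf_counter()
out = net.forward()
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"forward() took {elapsed_ms:.1f} ms, output shape {out.shape}")
```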

OpenCV 4.4 - Test: DNN - Deep Neural Network (ms, Fewer Is Better)
  Run 1: 5441 (SE +/- 336.20, N = 12; Min: 4545 / Max: 8803)
  Run 2: 4928 (SE +/- 68.30, N = 3; Min: 4852 / Max: 5064)
  Run 3: 5494 (SE +/- 173.70, N = 15; Min: 4766 / Max: 6825)
  (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database, optimized for fast, high-availability storage for IoT and other use cases. The InfluxDB test profile uses InfluxDB Inch to facilitate the benchmarks. Learn more via the OpenBenchmarking.org test page.
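
The figures below were generated with InfluxDB Inch. As a simplified Python analogue of that write workload (not the Inch tool itself), the sketch below assumes the "influxdb" 1.x client package and a local InfluxDB 1.8 instance; names and sizes loosely mirror the batch-size and tag parameters in the results.

```python
from influxdb import InfluxDBClient  # pip package "influxdb" (1.x client)

client = InfluxDBClient(host="localhost", port=8086, database="benchmark")
client.create_database("benchmark")

# One batch of 10,000 points with a small tag set, loosely mirroring the
# Batch Size / Tags parameters swept in the results below.
points = [
    {
        "measurement": "m0",
        "tags": {"tag0": f"value-{i % 2}", "tag1": f"value-{i % 5000}"},
        "fields": {"v0": float(i)},
    }
    for i in range(10000)
]
client.write_points(points, batch_size=10000)
print("wrote", len(points), "points")
```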

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Run 1: 855281.8 (SE +/- 12233.47, N = 3; Min: 842841.5 / Max: 879747.6)
  Run 2: 852344.4 (SE +/- 3868.34, N = 3; Min: 844708.8 / Max: 857241.7)
  Run 3: 838679.5 (SE +/- 3971.45, N = 3; Min: 831364.2 / Max: 845017.1)

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Run 1: 909598.9 (SE +/- 5150.33, N = 3; Min: 901634.1 / Max: 919238.1)
  Run 2: 915807.8 (SE +/- 3710.45, N = 3; Min: 909007 / Max: 921780)
  Run 3: 903216.0 (SE +/- 2328.20, N = 3; Min: 900601.8 / Max: 907860.2)

InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Run 1: 971205.0 (SE +/- 4110.61, N = 3; Min: 963174.6 / Max: 976745.3)
  Run 2: 982263.6 (SE +/- 3299.07, N = 3; Min: 975671.7 / Max: 985807.9)
  Run 3: 982578.8 (SE +/- 4766.81, N = 3; Min: 973965.7 / Max: 990424.9)

107 Results Shown

GLmark2
C-Blosc
NAMD
Incompact3D
Monte Carlo Simulations of Ionised Nebulae
LAMMPS Molecular Dynamics Simulator
Java Gradle Build
DaCapo Benchmark:
  H2
  Jython
  Tradesoap
  Tradebeans
Renaissance:
  Scala Dotty
  Rand Forest
  Apache Spark ALS
  Apache Spark Bayes
  Savina Reactors.IO
  Apache Spark PageRank
  In-Memory Database Shootout
  Akka Unbalanced Cobwebbed Tree
Zstd Compression:
  3
  19
LibRaw
oneDNN:
  IP Batch 1D - f32 - CPU
  IP Batch All - f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch deconv_1d - f32 - CPU
  Deconvolution Batch deconv_3d - f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
AOM AV1:
  Speed 0 Two-Pass
  Speed 4 Two-Pass
  Speed 6 Realtime
  Speed 6 Two-Pass
  Speed 8 Realtime
SVT-AV1:
  Enc Mode 4 - 1080p
  Enc Mode 8 - 1080p
LuxCoreRender
libavif avifenc:
  0
  2
  8
  10
Timed Apache Compilation
Timed Linux Kernel Compilation
Build2
eSpeak-NG Speech Engine
Montage Astronomical Image Mosaic Engine
System GZIP Decompression
MPV:
  Big Buck Bunny Sunflower 4K - Software Only
  Big Buck Bunny Sunflower 1080p - Software Only
GROMACS
TensorFlow Lite:
  SqueezeNet
  Inception V4
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
  Inception ResNet V2
ASTC Encoder:
  Fast
  Medium
  Thorough
  Exhaustive
Hugin
OCRMyPDF
GNU Octave Benchmark
Stress-NG:
  MMAP
  NUMA
  MEMFD
  Atomic
  Crypto
  Malloc
  RdRand
  Forking
  SENDFILE
  CPU Cache
  CPU Stress
  Semaphores
  Matrix Math
  Vector Math
  Memory Copying
  Socket Activity
  Context Switching
  Glibc C String Functions
  Glibc Qsort Data Sorting
  System V Message Passing
GPAW
Mobile Neural Network:
  SqueezeNetV1.0
  resnet-v2-50
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3
NCNN:
  CPU - squeezenet_int8
  CPU - mobilenet_v3
  CPU - squeezenet
  CPU - mnasnet
  CPU - blazeface
  CPU - googlenet_int8
  CPU - vgg16_int8
  CPU - resnet18_int8
  CPU - alexnet
  CPU - resnet50_int8
  CPU - mobilenetv2_yolov3
NeatBench
BRL-CAD
OpenCV
InfluxDB:
  4 - 10000 - 2,5000,1 - 10000
  64 - 10000 - 2,5000,1 - 10000
  1024 - 10000 - 2,5000,1 - 10000