Ubuntu 20.10 Ryzen 7 1800X

AMD Ryzen 7 1800X Eight-Core testing with a MSI X370 XPOWER GAMING TITANIUM (MS-7A31) v1.0 (1.F0 BIOS) and llvmpipe on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2010116-FI-UBUNTU20155
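For reference, a minimal sketch of the relevant Phoronix Test Suite invocations; the pts/lczero profile name is just an example of one individual test from this file, not a required step:

  phoronix-test-suite benchmark 2010116-FI-UBUNTU20155   # reproduce and compare against this result file
  phoronix-test-suite install pts/lczero                  # or install a single test profile
  phoronix-test-suite run pts/lczero                      # and run just that test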
Test categories represented in this result file:

C/C++ Compiler Tests 10 Tests
Compression Tests 3 Tests
CPU Massive 14 Tests
Creator Workloads 8 Tests
Database Test Suite 6 Tests
Fortran Tests 5 Tests
Game Development 2 Tests
HPC - High Performance Computing 16 Tests
Imaging 4 Tests
Common Kernel Benchmarks 4 Tests
Machine Learning 7 Tests
Molecular Dynamics 5 Tests
MPI Benchmarks 4 Tests
Multi-Core 8 Tests
NVIDIA GPU Compute 4 Tests
OpenCL 2 Tests
OpenMPI Tests 5 Tests
Programmer / Developer System Benchmarks 5 Tests
Python 3 Tests
Scientific Computing 8 Tests
Server 7 Tests
Server CPU Tests 8 Tests
Single-Threaded 5 Tests
Speech 2 Tests
Telephony 2 Tests
Common Workstation Benchmarks 2 Tests

Run Management

Result Identifier    Date               Test Duration
Run 1                October 10 2020    8 Hours, 3 Minutes
Run 2                October 10 2020    6 Hours, 28 Minutes
Run 3                October 11 2020    6 Hours, 55 Minutes
Average                                 7 Hours, 9 Minutes



Ubuntu 20.10 Ryzen 7 1800X - OpenBenchmarking.org - Phoronix Test Suite

Processor: AMD Ryzen 7 1800X Eight-Core @ 3.60GHz (8 Cores / 16 Threads)
Motherboard: MSI X370 XPOWER GAMING TITANIUM (MS-7A31) v1.0 (1.F0 BIOS)
Chipset: AMD 17h
Memory: 8GB
Disk: Samsung SSD 950 PRO 256GB
Graphics: llvmpipe
Audio: AMD Baffin HDMI/DP
Network: Intel I211
OS: Ubuntu 20.10
Kernel: 5.8.0-21-generic (x86_64)
Desktop: GNOME Shell 3.38.0
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 3.3 Mesa 20.1.8 (LLVM 10.0.1 256 bits)
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1024x768

Ubuntu 20.10 Ryzen 7 1800X Benchmarks - System Logs
- Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Processor Notes: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x8001137
- Python Notes: Python 3.8.6
- Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; relative performance of Run 1 / Run 2 / Run 3, 100% to 118% scale) covering: LeelaChessZero, OpenCV, ASTC Encoder, eSpeak-NG Speech Engine, LibRaw, LAMMPS Molecular Dynamics Simulator, NAMD, Dolfyn, Mobile Neural Network, TensorFlow Lite, Rodinia, GROMACS, System GZIP Decompression, FFTE, Timed HMMer Search, Incompact3D, SQLite Speedtest, RNNoise, WebP Image Encode, GIMP, BYTE Unix Benchmark, Facebook RocksDB, KeyDB, Monte Carlo Simulations of Ionised Nebulae, TNN, PyPerformance, PHPBench, Darktable, Zstd Compression, Hierarchical INTegration, InfluxDB, Blender, PostgreSQL pgbench, Mlpack Benchmark, XZ Compression, LevelDB, PyBench.

Ubuntu 20.10 Ryzen 7 1800X - side-by-side result summary table (all tests, Run 1 / Run 2 / Run 3); the individual results are charted per test below.

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: BLAS (Nodes Per Second, More Is Better)
Run 1: 409, Run 2: 451, Run 3: 481
1. (CXX) g++ options: -flto -pthread

LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second, More Is Better)
Run 1: 412, Run 2: 450, Run 3: 484

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better)
Run 1: 274.97
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, Fewer Is Better)
Run 1: 99.29, Run 2: 99.39, Run 3: 109.45

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, Fewer Is Better)
Run 1: 147.41, Run 2: 147.79, Run 3: 148.17

Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, Fewer Is Better)
Run 1: 27.54, Run 2: 27.68, Run 3: 27.49

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, Fewer Is Better)
Run 1: 20.25, Run 2: 20.27, Run 3: 20.20

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
Run 1: 2.68532, Run 2: 2.72291, Run 3: 2.77686

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code using modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, Fewer Is Better)
Run 1: 18.56, Run 2: 18.60, Run 3: 19.18

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, More Is Better)
Run 1: 35639.84, Run 2: 34981.10, Run 3: 35017.00
1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.
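A rough sketch of the kind of hmmsearch invocation behind this workload; the Pfam profile and Sevenless sequence file names here are assumptions, not the exact files the test profile ships:

  hmmsearch Pfam-A.hmm sevenless.fasta   # search the profile HMM database against the protein sequence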

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, Fewer Is Better)
Run 1: 129.44, Run 2: 130.82, Run 3: 131.74
1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17 - Input: Cylinder (Seconds, Fewer Is Better)
Run 1: 444.91, Run 2: 448.49, Run 3: 453.31
1. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0 (Seconds, Fewer Is Better)
Run 1: 244, Run 2: 247, Run 3: 246
1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lrt -lz

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
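The Rhodopsin Protein model corresponds to the in.rhodo input shipped in the LAMMPS bench/ directory; a hedged sketch of how such a run might be launched (binary name and MPI rank count are assumptions):

  mpirun -np 16 lmp -in in.rhodo   # run the rhodopsin benchmark across 16 MPI ranks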

LAMMPS Molecular Dynamics Simulator 24Aug2020 - Model: Rhodopsin Protein (ns/day, More Is Better)
Run 1: 5.061, Run 2: 5.035, Run 3: 4.878
1. (CXX) g++ options: -O3 -pthread -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
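The encode settings below map roughly to cwebp command lines along these lines; the input file name is hypothetical and the exact flag combinations used by the test profile are assumptions:

  cwebp sample.jpg -o out.webp                          # Default
  cwebp -q 100 sample.jpg -o out.webp                   # Quality 100
  cwebp -lossless sample.jpg -o out.webp                # Quality 100, Lossless
  cwebp -q 100 -m 6 sample.jpg -o out.webp              # Quality 100, Highest Compression
  cwebp -lossless -q 100 -m 6 sample.jpg -o out.webp    # Quality 100, Lossless, Highest Compression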

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, Fewer Is Better)
Run 1: 1.491, Run 2: 1.483, Run 3: 1.491
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better)
Run 1: 2.350, Run 2: 2.386, Run 3: 2.382

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better)
Run 1: 20.34, Run 2: 20.52, Run 3: 20.50

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
Run 1: 8.042, Run 2: 8.113, Run 3: 8.128

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better)
Run 1: 41.06, Run 2: 41.11, Run 3: 42.71

BYTE Unix Benchmark

This is a test of BYTE. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, More Is Better)
Run 1: 39259546.4, Run 2: 39084297.9, Run 3: 39590161.2

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
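As a sketch, the two compression levels reported below correspond to invocations like the following (the ISO file name is hypothetical):

  zstd -f -3 ubuntu.iso -o ubuntu.iso.zst    # Compression Level: 3
  zstd -f -19 ubuntu.iso -o ubuntu.iso.zst   # Compression Level: 19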

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better)
Run 1: 3077.6, Run 2: 3065.8, Run 3: 3056.7
1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better)
Run 1: 24.5, Run 2: 24.3, Run 3: 24.2

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
Run 1: 28.82, Run 2: 28.27, Run 3: 27.65
1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 10.0 - Time To Compile (Seconds, Fewer Is Better)
Run 1: 929.23

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
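Roughly equivalent to a level-9 xz run over the image, for example (keeping the original file with -k):

  xz -9 -k ubuntu-16.04.3-server-i386.img   # compress at level 9, keep the input image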

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, Fewer Is Better)
Run 1: 41.79, Run 2: 41.74, Run 3: 41.91
1. (CC) gcc options: -pthread -fvisibility=hidden -O2

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
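A minimal sketch of the underlying operation (the input text file name is hypothetical):

  espeak-ng -f outline-of-science.txt -w speech.wav   # synthesize the text to a WAV file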

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
Run 1: 31.93, Run 2: 33.61, Run 3: 33.47
1. (CC) gcc options: -O2 -std=c99

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better)
Run 1: 20.31, Run 2: 20.40, Run 3: 20.63
1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.
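In essence this is the equivalent of timing a plain gzip decompression, for example (the tarball name is hypothetical):

  time gzip -dc qt-everywhere-src.tar.gz > /dev/null   # decompress to stdout, discard the output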

System GZIP Decompression (Seconds, Fewer Is Better)
Run 1: 3.045, Run 2: 3.104, Run 3: 3.071

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.
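The benchmark names reported below correspond to LevelDB's bundled db_bench workloads; a hedged example invocation (all other flags left at their defaults, workload list chosen to match the results shown):

  ./db_bench --benchmarks=fillseq,fillsync,fillrandom,overwrite,readrandom,readhot,seekrandom,deleterandom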

LevelDB 1.22 - Benchmark: Hot Read (Microseconds Per Op, Fewer Is Better)
Run 1: 7.371, Run 2: 7.497, Run 3: 7.340
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Fill Sync (MB/s, More Is Better)
Run 1: 0.2, Run 2: 0.2, Run 3: 0.2

LevelDB 1.22 - Benchmark: Fill Sync (Microseconds Per Op, Fewer Is Better)
Run 1: 8812.09, Run 2: 8677.47, Run 3: 8792.87

LevelDB 1.22 - Benchmark: Overwrite (MB/s, More Is Better)
Run 1: 21.5, Run 2: 21.4, Run 3: 21.4

LevelDB 1.22 - Benchmark: Overwrite (Microseconds Per Op, Fewer Is Better)
Run 1: 82.45, Run 2: 82.59, Run 3: 82.64

LevelDB 1.22 - Benchmark: Random Fill (MB/s, More Is Better)
Run 1: 21.4, Run 2: 21.5, Run 3: 21.2

LevelDB 1.22 - Benchmark: Random Fill (Microseconds Per Op, Fewer Is Better)
Run 1: 82.63, Run 2: 82.54, Run 3: 83.36

LevelDB 1.22 - Benchmark: Random Read (Microseconds Per Op, Fewer Is Better)
Run 1: 7.385, Run 2: 7.328, Run 3: 7.403

LevelDB 1.22 - Benchmark: Seek Random (Microseconds Per Op, Fewer Is Better)
Run 1: 11.09, Run 2: 11.14, Run 3: 11.10

LevelDB 1.22 - Benchmark: Random Delete (Microseconds Per Op, Fewer Is Better)
Run 1: 75.10, Run 2: 75.07, Run 3: 75.61

LevelDB 1.22 - Benchmark: Sequential Fill (MB/s, More Is Better)
Run 1: 21.8, Run 2: 22.0, Run 3: 21.8

LevelDB 1.22 - Benchmark: Sequential Fill (Microseconds Per Op, Fewer Is Better)
Run 1: 81.00, Run 2: 80.58, Run 3: 81.28

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.
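A rough sketch of a memtier-benchmark run against a local KeyDB instance; the host, port, and load parameters here are assumptions, not the exact settings of this test profile:

  memtier_benchmark -s 127.0.0.1 -p 6379 --protocol=redis --threads=4 --clients=50 --ratio=1:1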

KeyDB 6.0.16 (Ops/sec, More Is Better)
Run 1: 485532.12, Run 2: 489796.27, Run 3: 483724.60
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, More Is Better)
Run 1: 0.598, Run 2: 0.585, Run 3: 0.588
1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better)
Run 1: 228610, Run 2: 231857, Run 3: 238544

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better)
Run 1: 3504960, Run 2: 3540693, Run 3: 3595360

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, Fewer Is Better)
Run 1: 210227, Run 2: 211102, Run 3: 211680

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
Run 1: 159954, Run 2: 161886, Run 3: 163642

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
Run 1: 164444, Run 2: 166186, Run 3: 168522

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
Run 1: 3164443, Run 2: 3186710, Run 3: 3255800

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
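The scaling factor / client / mode combinations below map onto pgbench roughly as follows; the database name and run duration are assumptions:

  pgbench -i -s 100 benchdb          # initialize with scaling factor 100
  pgbench -c 50 -S -T 60 benchdb     # 50 clients, read-only (SELECT-only) workload
  pgbench -c 50 -T 60 benchdb        # 50 clients, default read/write (TPC-B-like) workload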

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only (TPS, More Is Better)
Run 1: 17132, Run 2: 17714, Run 3: 17763
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
Run 1: 0.058, Run 2: 0.057, Run 3: 0.056

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write (TPS, More Is Better)
Run 1: 409, Run 2: 420, Run 3: 422

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
Run 1: 2.448, Run 2: 2.381, Run 3: 2.373

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, More Is Better)
Run 1: 214510, Run 2: 213274, Run 3: 214026

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
Run 1: 0.233, Run 2: 0.234, Run 3: 0.234

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, More Is Better)
Run 1: 511, Run 2: 509, Run 3: 505

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
Run 1: 97.96, Run 2: 98.29, Run 3: 98.98

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only (TPS, More Is Better)
Run 1: 15937, Run 2: 16002, Run 3: 15981

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
Run 1: 0.063, Run 2: 0.063, Run 3: 0.063

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write (TPS, More Is Better)
Run 1: 397, Run 2: 395, Run 3: 390

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
Run 1: 2.520, Run 2: 2.529, Run 3: 2.561

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only (TPS, More Is Better)
Run 1: 180880, Run 2: 180922, Run 3: 180843

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
Run 1: 0.277, Run 2: 0.277, Run 3: 0.276

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, More Is Better)
Run 1: 6057, Run 2: 5651, Run 3: 5538

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
Run 1: 8.260, Run 2: 8.855, Run 3: 9.034

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
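A hedged example of compressing a texture with astcenc 2.0 at one of the presets exercised here; the block size and file names are assumptions:

  astcenc -cl texture.png texture.astc 6x6 -medium   # LDR compression, 6x6 blocks, medium preset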

ASTC Encoder 2.0 - Preset: Fast (Seconds, Fewer Is Better)
Run 1: 6.55, Run 2: 6.60, Run 3: 6.63
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

ASTC Encoder 2.0 - Preset: Medium (Seconds, Fewer Is Better)
Run 1: 9.76, Run 2: 9.85, Run 3: 7.62

ASTC Encoder 2.0 - Preset: Thorough (Seconds, Fewer Is Better)
Run 1: 32.66, Run 2: 33.24, Run 3: 33.80

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, Fewer Is Better)
Run 1: 269.37, Run 2: 273.88, Run 3: 278.66

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
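Conceptually this is the speedtest1 program run with a larger problem size, along the lines of (database path hypothetical):

  ./speedtest1 --size 1000 /tmp/speedtest.db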

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
Run 1: 69.55, Run 2: 70.66, Run 3: 69.86
1. (CC) gcc options: -O2 -ldl -lz -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Boat - Acceleration: CPU-only (Seconds, Fewer Is Better)
Run 1: 15.28, Run 2: 15.36, Run 3: 15.44

Darktable 3.2.1 - Test: Masskrug - Acceleration: CPU-only (Seconds, Fewer Is Better)
Run 1: 9.849, Run 2: 9.853, Run 3: 9.861

Darktable 3.2.1 - Test: Server Rack - Acceleration: CPU-only (Seconds, Fewer Is Better)
Run 1: 0.191, Run 2: 0.197, Run 3: 0.196

Darktable 3.2.1 - Test: Server Room - Acceleration: CPU-only (Seconds, Fewer Is Better)
Run 1: 8.867, Run 2: 8.914, Run 3: 8.903

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, otherwise on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: resize (Seconds, Fewer Is Better)
Run 1: 9.403, Run 2: 9.555, Run 3: 9.509

GIMP 2.10.18 - Test: rotate (Seconds, Fewer Is Better)
Run 1: 13.11, Run 2: 13.27, Run 3: 13.32

GIMP 2.10.18 - Test: auto-levels (Seconds, Fewer Is Better)
Run 1: 14.02, Run 2: 14.18, Run 3: 14.21

GIMP 2.10.18 - Test: unsharp-mask (Seconds, Fewer Is Better)
Run 1: 16.53, Run 2: 16.62, Run 3: 16.75

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
Run 1: 10.18, Run 2: 10.54, Run 3: 10.24
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: resnet-v2-50Run 1Run 2Run 31326395265SE +/- 0.02, N = 3SE +/- 0.47, N = 3SE +/- 0.11, N = 358.5560.1358.56MIN: 57.43 / MAX: 81.26MIN: 59.4 / MAX: 70.84MIN: 57.55 / MAX: 83.021. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: resnet-v2-50Run 1Run 2Run 31224364860Min: 58.53 / Avg: 58.55 / Max: 58.58Min: 59.54 / Avg: 60.13 / Max: 61.06Min: 58.34 / Avg: 58.56 / Max: 58.681. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: MobileNetV2_224Run 1Run 2Run 31.2742.5483.8225.0966.37SE +/- 0.006, N = 3SE +/- 0.013, N = 3SE +/- 0.065, N = 35.6105.6625.571MIN: 5.57 / MAX: 16.3MIN: 5.6 / MAX: 6.48MIN: 5.41 / MAX: 6.391. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: MobileNetV2_224Run 1Run 2Run 3246810Min: 5.6 / Avg: 5.61 / Max: 5.62Min: 5.64 / Avg: 5.66 / Max: 5.68Min: 5.45 / Avg: 5.57 / Max: 5.661. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: mobilenet-v1-1.0Run 1Run 2Run 33691215SE +/- 0.04, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 311.2911.2811.29MIN: 11.18 / MAX: 11.78MIN: 11.21 / MAX: 13.21MIN: 11.18 / MAX: 20.161. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: mobilenet-v1-1.0Run 1Run 2Run 33691215Min: 11.21 / Avg: 11.29 / Max: 11.36Min: 11.25 / Avg: 11.28 / Max: 11.33Min: 11.22 / Avg: 11.29 / Max: 11.371. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: inception-v3Run 1Run 2Run 31428425670SE +/- 0.05, N = 3SE +/- 0.42, N = 3SE +/- 0.24, N = 361.0564.8761.10MIN: 60.76 / MAX: 69.97MIN: 63.93 / MAX: 75.96MIN: 60.59 / MAX: 71.791. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: inception-v3Run 1Run 2Run 31326395265Min: 60.96 / Avg: 61.05 / Max: 61.14Min: 64.13 / Avg: 64.87 / Max: 65.57Min: 60.74 / Avg: 61.1 / Max: 61.541. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  Run 1: 288.21 +/- 0.60 | Run 2: 290.01 +/- 0.73 | Run 3: 295.05 +/- 0.93 (SE, N = 3)

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
  Run 1: 259.67 +/- 1.21 | Run 2: 259.73 +/- 0.64 | Run 3: 257.96 +/- 0.23 (SE, N = 3)

All TNN results built with: (CXX) g++ -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage that is based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
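The test names below (Random Fill, Sequential Fill, Random Read, Random Fill Sync, Read While Writing) describe the key access pattern each workload drives against the store. A rough sketch of those patterns, using a plain dictionary as a stand-in rather than RocksDB's actual API or on-disk behaviour:

import random

# Toy in-memory stand-in for an embedded key-value store such as RocksDB.
# This only illustrates the access patterns behind the workload names,
# not RocksDB's API, durability guarantees, or storage engine.
store = {}
KEYS = 100_000
VALUE = b"x" * 100

def sequential_fill():
    for key in range(KEYS):                       # keys written in ascending order
        store[key] = VALUE

def random_fill():
    for key in random.sample(range(KEYS), KEYS):  # keys written in random order
        store[key] = VALUE

def random_read():
    for _ in range(KEYS):                         # point lookups of random keys
        store.get(random.randrange(KEYS))

def read_while_writing():
    for _ in range(KEYS):                         # interleaved writes and reads
        store[random.randrange(KEYS)] = VALUE
        store.get(random.randrange(KEYS))

sequential_fill()
random_read()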

Facebook RocksDB 6.3.6 - Test: Random Fill (Op/s, More Is Better)
  Run 1: 782219 +/- 3040.96 | Run 2: 786001 +/- 2633.23 | Run 3: 789160 +/- 5686.60 (SE, N = 3)

Facebook RocksDB 6.3.6 - Test: Random Read (Op/s, More Is Better)
  Run 1: 42277280 +/- 363150.40 | Run 2: 42338483 +/- 505609.10 | Run 3: 42619989 +/- 569277.52 (SE, N = 3)

Facebook RocksDB 6.3.6 - Test: Sequential Fill (Op/s, More Is Better)
  Run 1: 883444 +/- 9402.70 | Run 2: 877058 +/- 1521.70 | Run 3: 892949 +/- 14899.07 (SE, N = 3)

Facebook RocksDB 6.3.6 - Test: Random Fill Sync (Op/s, More Is Better)
  Run 1: 1771 +/- 6.69 | Run 2: 1752 | Run 3: 1741 +/- 5.33 (SE where reported, N = 3)

Facebook RocksDB 6.3.6 - Test: Read While Writing (Op/s, More Is Better)
  Run 1: 1551783 +/- 13433.23 | Run 2: 1571352 +/- 20461.74 | Run 3: 1634793 +/- 22803.37 (SE, N = 3)

All RocksDB results built with: (CXX) g++ -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Blender

Blender is an open-source 3D creation software project. This test measures Blender's Cycles renderer against various sample blend files. GPU compute via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  Run 1: 204.87 +/- 1.77 | Run 2: 204.61 +/- 0.69 | Run 3: 203.13 +/- 0.58 (SE, N = 3)

PyBench

This test profile reports the total of the average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, and this total provides a rough estimate of Python's average performance on a given system. The test profile runs PyBench for 20 rounds each time. Learn more via the OpenBenchmarking.org test page.
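Conceptually, each PyBench sub-test times a small Python workload over repeated rounds and reports the average per round; the total below sums those averages. A minimal stand-in sketch of that measurement style (the workload and round count mirror the idea only, not PyBench's actual test code):

import time

def nested_for_loops():
    # Tiny workload in the spirit of PyBench's NestedForLoops test (not its actual code).
    total = 0
    for _ in range(100):
        for j in range(100):
            total += j
    return total

ROUNDS = 20
round_times_ms = []
for _ in range(ROUNDS):
    start = time.perf_counter()
    nested_for_loops()
    round_times_ms.append((time.perf_counter() - start) * 1000.0)

print(f"average over {ROUNDS} rounds: {sum(round_times_ms) / ROUNDS:.3f} ms")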

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
  Run 1: 1141 +/- 2.52 | Run 2: 1136 +/- 7.67 | Run 3: 1134 +/- 0.88 (SE, N = 3)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
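Each PyPerformance benchmark reports the time to run a fixed Python workload. As a rough stand-in for the kind of measurement involved (this uses the standard-library timeit module with an arbitrary floating-point loop, not PyPerformance's pyperf harness or its actual workloads):

import timeit

def float_workload():
    # Arbitrary floating-point loop; PyPerformance's 'float' benchmark uses its own,
    # larger workload under the pyperf harness.
    acc = 0.0
    for i in range(100_000):
        acc += (i * 0.5) ** 0.5
    return acc

best = min(timeit.repeat(float_workload, number=10, repeat=5))
print(f"{best / 10 * 1000:.2f} ms per iteration (best of 5 repeats)")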

PyPerformance 1.0.0 (Milliseconds, Fewer Is Better)
  Benchmark            Run 1    Run 2    Run 3
  go                   302      301      300
  2to3                 387      386      386
  chaos                139      138      138
  float                138      137      137
  nbody                160      143      143
  pathlib              20.9     20.9     20.9
  raytrace             607      605      607
  json_loads           31.2     31.4     31.0
  crypto_pyaes         132      130      130
  regex_compile        198      198      199
  python_startup       9.19     9.19     9.21
  django_template      69.4     69.2     69.8
  pickle_pure_python   588      589      589
  (N = 3 for every result; reported per-run standard errors were small, at most +/- 2.19 ms on raytrace.)

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.
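HINT repeatedly refines upper and lower bounds on a definite integral and scores the machine by how quickly the quality of those bounds improves (QUIPs). A toy sketch of the bounding idea, assuming the (1-x)/(1+x) integrand commonly cited for HINT; the real kernel, data types, and QUIPs accounting differ:

def f(x):
    # Decreasing integrand on [0, 1]; assumed here to be the (1-x)/(1+x) form associated with HINT.
    return (1.0 - x) / (1.0 + x)

def bounds(subdivisions):
    # Rectangle bounds on the integral of a decreasing function over [0, 1].
    lower = upper = 0.0
    width = 1.0 / subdivisions
    for i in range(subdivisions):
        a, b = i * width, (i + 1) * width
        lower += f(b) * width   # right endpoint under-estimates a decreasing f
        upper += f(a) * width   # left endpoint over-estimates it
    return lower, upper

for n in (2, 8, 32, 128):
    lo, hi = bounds(n)
    quality = 1.0 / (hi - lo)   # toy "quality": reciprocal of the remaining error band
    print(f"n={n:4d}  lower={lo:.6f}  upper={hi:.6f}  quality={quality:.1f}")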

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, More Is Better)
  Run 1: 332414029.16 +/- 780060.67 | Run 2: 329959321.78 +/- 656689.81 | Run 3: 333079563.69 +/- 887469.08 (SE, N = 3)
  Built with: (CC) gcc -O3 -march=native -lm

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. The number of iterations used is 1,000,000. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better)
  Run 1: 534789 +/- 2332.91 | Run 2: 539598 +/- 1310.91 | Run 3: 533967 +/- 8526.20 (SE, N = 3)

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.
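The scikit_* results below time scikit-learn routines on fixed datasets. A minimal sketch of what the scikit_ica case measures in spirit (the dataset shape and parameters here are arbitrary, not those used by the Mlpack benchmark scripts):

import time
import numpy as np
from sklearn.decomposition import FastICA

# Arbitrary synthetic data; the Mlpack benchmark scripts use their own fixed datasets.
rng = np.random.default_rng(0)
X = rng.normal(size=(20_000, 32))

start = time.perf_counter()
FastICA(n_components=16, max_iter=200, random_state=0).fit(X)
print(f"scikit-learn FastICA fit: {time.perf_counter() - start:.2f} s")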

Mlpack Benchmark - Benchmark: scikit_ica (Seconds, Fewer Is Better)
  Run 1: 53.93 +/- 0.13 | Run 2: 55.31 +/- 0.61 | Run 3: 55.56 +/- 0.85 (SE, N = 3)

Mlpack Benchmark - Benchmark: scikit_qda (Seconds, Fewer Is Better)
  Run 1: 77.08 +/- 1.27 (N = 12) | Run 2: 76.39 +/- 0.29 (N = 3) | Run 3: 76.52 +/- 0.03 (N = 3) (SE)

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, Fewer Is Better)
  Run 1: 20.66 +/- 0.08 | Run 2: 20.70 +/- 0.23 | Run 3: 20.55 +/- 0.17 (SE, N = 3)

Mlpack Benchmark - Benchmark: scikit_linearridgeregression (Seconds, Fewer Is Better)
  Run 1: 4.00 +/- 0.02 | Run 2: 4.02 +/- 0.01 | Run 3: 4.03 +/- 0.02 (SE, N = 3)

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
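The DNN result below times OpenCV's dnn module. A rough illustration of timing a single forward pass with that module (the model path and input size are placeholders; OpenCV's built-in performance tests use their own bundled models and harness):

import time
import numpy as np
import cv2

# Placeholder ONNX model; OpenCV's built-in performance tests ship and load their own models.
net = cv2.dnn.readNetFromONNX("model.onnx")

image = np.zeros((224, 224, 3), dtype=np.uint8)
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0 / 255.0)
net.setInput(blob)

start = time.perf_counter()
net.forward()
print(f"forward pass: {(time.perf_counter() - start) * 1000:.1f} ms")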

OpenCV 4.4 - Test: DNN - Deep Neural Network (ms, Fewer Is Better)
  Run 1: 11783 +/- 241.81 (N = 12) | Run 2: 11724 +/- 125.16 (N = 3) | Run 3: 11748 +/- 187.32 (N = 15) (SE)
  Built with: (CXX) g++ -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

InfluxDB

This is a benchmark of InfluxDB, an open-source time-series database optimized for fast, high-availability storage for IoT and other use cases. The InfluxDB test profile uses InfluxDB Inch to facilitate the benchmarks. Learn more via the OpenBenchmarking.org test page.
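Inch exercises the write path by posting batches of points in InfluxDB line protocol while varying concurrent streams, batch size, tag cardinality, and points per series. A minimal sketch of that write path against an assumed local InfluxDB 1.x instance (endpoint, database name, and measurement are placeholders, not part of this result file):

import time
import requests

# Assumed local InfluxDB 1.x write endpoint and database name; adjust for your own setup.
WRITE_URL = "http://localhost:8086/write?db=benchmark"

def write_batch(batch_size=10_000):
    now = time.time_ns()
    # One line-protocol point per line: measurement,tags fields timestamp.
    lines = "\n".join(
        f"cpu,host=host{i % 2},region=r{i % 5000} value={i}i {now + i}"
        for i in range(batch_size)
    )
    resp = requests.post(WRITE_URL, data=lines)
    resp.raise_for_status()
    return batch_size

write_batch()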

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Run 1: 976154.0 +/- 1242.96 | Run 2: 965287.9 +/- 1092.13 | Run 3: 957894.8 +/- 1502.67 (SE, N = 3)

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Run 1: 1103143.0 +/- 622.22 | Run 2: 1098456.8 +/- 1153.13 | Run 3: 1098121.6 +/- 3542.50 (SE, N = 3)

InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Run 1: 1114050.6 +/- 2076.71 | Run 2: 1112563.8 +/- 2336.66 | Run 3: 1109424.7 +/- 1427.98 (SE, N = 3)

114 Results Shown

LeelaChessZero:
  BLAS
  Eigen
Rodinia:
  OpenMP LavaMD
  OpenMP HotSpot3D
  OpenMP Leukocyte
  OpenMP CFD Solver
  OpenMP Streamcluster
NAMD
Dolfyn
FFTE
Timed HMMer Search
Incompact3D
Monte Carlo Simulations of Ionised Nebulae
LAMMPS Molecular Dynamics Simulator
WebP Image Encode:
  Default
  Quality 100
  Quality 100, Lossless
  Quality 100, Highest Compression
  Quality 100, Lossless, Highest Compression
BYTE Unix Benchmark
Zstd Compression:
  3
  19
LibRaw
Timed LLVM Compilation
XZ Compression
eSpeak-NG Speech Engine
RNNoise
System GZIP Decompression
LevelDB:
  Hot Read
  Fill Sync
  Fill Sync
  Overwrite
  Overwrite
  Rand Fill
  Rand Fill
  Rand Read
  Seek Rand
  Rand Delete
  Seq Fill
  Seq Fill
KeyDB
GROMACS
TensorFlow Lite:
  SqueezeNet
  Inception V4
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
  Inception ResNet V2
PostgreSQL pgbench:
  1 - 1 - Read Only
  1 - 1 - Read Only - Average Latency
  1 - 1 - Read Write
  1 - 1 - Read Write - Average Latency
  1 - 50 - Read Only
  1 - 50 - Read Only - Average Latency
  1 - 50 - Read Write
  1 - 50 - Read Write - Average Latency
  100 - 1 - Read Only
  100 - 1 - Read Only - Average Latency
  100 - 1 - Read Write
  100 - 1 - Read Write - Average Latency
  100 - 50 - Read Only
  100 - 50 - Read Only - Average Latency
  100 - 50 - Read Write
  100 - 50 - Read Write - Average Latency
ASTC Encoder:
  Fast
  Medium
  Thorough
  Exhaustive
SQLite Speedtest
Darktable:
  Boat - CPU-only
  Masskrug - CPU-only
  Server Rack - CPU-only
  Server Room - CPU-only
GIMP:
  resize
  rotate
  auto-levels
  unsharp-mask
Mobile Neural Network:
  SqueezeNetV1.0
  resnet-v2-50
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3
TNN:
  CPU - MobileNet v2
  CPU - SqueezeNet v1.1
Facebook RocksDB:
  Rand Fill
  Rand Read
  Seq Fill
  Rand Fill Sync
  Read While Writing
Blender
PyBench
PyPerformance:
  go
  2to3
  chaos
  float
  nbody
  pathlib
  raytrace
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
  django_template
  pickle_pure_python
Hierarchical INTegration
PHPBench
Mlpack Benchmark:
  scikit_ica
  scikit_qda
  scikit_svm
  scikit_linearridgeregression
OpenCV
InfluxDB:
  4 - 10000 - 2,5000,1 - 10000
  64 - 10000 - 2,5000,1 - 10000
  1024 - 10000 - 2,5000,1 - 10000