Xeon E3-1280 v5 Oct

Intel Xeon E3-1280 v5 testing with an MSI Z170A SLI PLUS (MS-7998) v1.0 (2.A0 BIOS) and ASUS AMD Radeon HD 7850 / R7 265 R9 270 1024SP on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2010026-FI-XEONE312819
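
For reference, the same tool can also re-run an individual test profile from this suite; a minimal sketch of the workflow (the pts/pgbench profile name is an illustrative example of one of the profiles included in this file):

    # compare against this result file
    phoronix-test-suite benchmark 2010026-FI-XEONE312819
    # run just one of the included test profiles
    phoronix-test-suite benchmark pts/pgbench
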
Test categories represented in this result file:

Bioinformatics 2 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests 3 Tests
Chess Test Suite 2 Tests
C/C++ Compiler Tests 8 Tests
CPU Massive 10 Tests
Creator Workloads 5 Tests
Database Test Suite 4 Tests
Fortran Tests 5 Tests
HPC - High Performance Computing 18 Tests
Imaging 2 Tests
Machine Learning 8 Tests
Molecular Dynamics 4 Tests
MPI Benchmarks 4 Tests
Multi-Core 5 Tests
NVIDIA GPU Compute 3 Tests
OpenMPI Tests 4 Tests
Programmer / Developer System Benchmarks 2 Tests
Python Tests 5 Tests
Scientific Computing 10 Tests
Server 4 Tests
Server CPU Tests 3 Tests
Single-Threaded 4 Tests
Speech 3 Tests
Telephony 3 Tests


Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
1
September 30 2020
  13 Hours, 50 Minutes
2
October 01 2020
  12 Hours, 17 Minutes
3
October 01 2020
  12 Hours, 59 Minutes
Invert Hiding All Results Option
  13 Hours, 2 Minutes



Xeon E3-1280 v5 Oct - System Details (identical for runs 1, 2 and 3):

Processor: Intel Xeon E3-1280 v5 @ 4.00GHz (4 Cores / 8 Threads)
Motherboard: MSI Z170A SLI PLUS (MS-7998) v1.0 (2.A0 BIOS)
Chipset: Intel Xeon E3-1200 v5/E3-1500
Memory: 32GB
Disk: 256GB TOSHIBA RD400
Graphics: ASUS AMD Radeon HD 7850 / R7 265 R9 270 1024SP
Audio: Realtek ALC1150
Monitor: VA2431
Network: Intel I219-V
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc2daily20200826-generic (x86_64) 20200825
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.5 Mesa 20.0.8 (LLVM 10.0.0)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xdc
Graphics Details: GLAMOR
Python Details: Python 3.8.2
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

Result Overview (Phoronix Test Suite; runs 1-3, normalized 100%-104%): PostgreSQL pgbench, FFTE, Apache CouchDB, eSpeak-NG Speech Engine, Mobile Neural Network, KeyDB, Mlpack Benchmark, Kripke, InfluxDB, NCNN, LeelaChessZero, TNN, Timed MAFFT Alignment, BYTE Unix Benchmark, Zstd Compression, LAMMPS Molecular Dynamics Simulator, DeepSpeech, WebP Image Encode, LibRaw, RNNoise, MPV, Dolfyn, TSCP, Timed HMMer Search, NAMD, Hierarchical INTegration, Incompact3D, Monte Carlo Simulations of Ionised Nebulae, Timed LLVM Compilation, Caffe, GPAW


GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.
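
For a rough standalone approximation of this run (assuming glmark2's standard command-line options; the resolution matches the result below):

    # run the glmark2 suite in a 1920x1080 window
    glmark2 --size 1920x1080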

GLmark2 2020.04 - Resolution: 1920 x 1080 (Score, More Is Better): 1: 2982

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: BLAS (Nodes Per Second, More Is Better): 1: 753, 2: 766, 3: 770
LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second, More Is Better): 1: 680, 2: 685, 3: 683
LeelaChessZero 0.26 - Backend: Random (Nodes Per Second, More Is Better): 1: 207336, 2: 207014, 3: 206946
(CXX) g++ options: -flto -pthread

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better): 1: 3.80757, 2: 3.80980, 3: 3.80919

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code employing modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos that are bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, Fewer Is Better): 1: 20.95, 2: 20.92, 3: 20.91

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3-dimensional sequences of length (2^p)*(3^q)*(5^r); the N=256 routine benchmarked below satisfies this constraint since 256 = 2^8. Learn more via the OpenBenchmarking.org test page.

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, More Is Better): 1: 20211.79, 2: 19804.66, 3: 19709.17
(F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, Fewer Is Better): 1: 122.44, 2: 122.54, 3: 122.52
(CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17 - Input: Cylinder (Seconds, Fewer Is Better): 1: 734.54, 2: 734.33, 3: 737.05
(F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, Fewer Is Better): 1: 12.08, 2: 12.15, 3: 12.14
(CC) gcc options: -std=c99 -O3 -lm -lpthread

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0 (Seconds, Fewer Is Better): 1: 300, 2: 300, 3: 300
(F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 24Aug2020 - Model: 20k Atoms (ns/day, More Is Better): 1: 2.877, 2: 2.887, 3: 2.876
LAMMPS Molecular Dynamics Simulator 24Aug2020 - Model: Rhodopsin Protein (ns/day, More Is Better): 1: 2.796, 2: 2.803, 3: 2.799
(CXX) g++ options: -O3 -pthread -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
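
The encode settings used below map roughly onto cwebp flags as follows (input/output file names are placeholders, and the exact flag combinations used by the test profile are an assumption here):

    cwebp sample.jpg -o out.webp                        # Default
    cwebp -q 100 sample.jpg -o out.webp                 # Quality 100
    cwebp -q 100 -m 6 sample.jpg -o out.webp            # Quality 100, Highest Compression
    cwebp -q 100 -lossless -m 6 sample.jpg -o out.webp  # Quality 100, Lossless, Highest Compression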

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, Fewer Is Better): 1: 1.706, 2: 1.706, 3: 1.707
WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better): 1: 2.677, 2: 2.674, 3: 2.677
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better): 1: 18.80, 2: 18.88, 3: 18.95
WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better): 1: 7.972, 2: 7.985, 3: 7.986
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better): 1: 46.86, 2: 46.99, 3: 47.08
(CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark, here running its Dhrystone 2 computational test. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, More Is Better): 1: 39309015.6, 2: 39143549.8, 3: 39339662.1

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
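
A comparable standalone measurement can be taken with the zstd CLI's built-in benchmark mode; a minimal sketch, assuming the sample file name:

    zstd -b3 ubuntu-20.04.iso    # in-memory benchmark at compression level 3
    zstd -b19 ubuntu-20.04.iso   # in-memory benchmark at compression level 19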

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better): 1: 2254.4, 2: 2257.0, 3: 2252.7
Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better): 1: 19.1, 2: 19.2, 3: 19.1
(CC) gcc options: -O3 -pthread -lz -llzma

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better): 1: 28.19, 2: 28.13, 3: 28.11
(CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance (Nodes Per Second, More Is Better): 1: 1161980, 2: 1163456, 3: 1162474
(CC) gcc options: -O3 -march=native

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.
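
The build being timed is an ordinary LLVM release build; a hedged sketch of such a build from an LLVM 10.0 source checkout (generator and job count are illustrative choices, not necessarily what the test profile uses):

    cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ../llvm
    time ninja -j 8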

Timed LLVM Compilation 10.0 - Time To Compile (Seconds, Fewer Is Better): 1: 1490.86, 2: 1490.62, 3: 1490.68

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, Fewer Is Better): 1: 77.70, 2: 77.94, 3: 77.70

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better): 1: 31.66, 2: 31.49, 3: 31.97
(CC) gcc options: -O2 -std=c99

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better): 1: 27.17, 2: 27.18, 3: 27.25
(CC) gcc options: -O2 -pedantic -fvisibility=hidden

MPV

MPV is an open-source, cross-platform media player. This test profile measures the frame-rate that can be achieved with unsynchronized (desynchronized) playback. Learn more via the OpenBenchmarking.org test page.
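
A hand-run approximation of this desynchronized decode test might look like the following (the clip file name is a placeholder; --untimed and --no-audio are standard mpv options):

    mpv --untimed --no-audio bbb_sunflower_2160p.mp4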

MPV - Video Input: Big Buck Bunny Sunflower 4K - Decode: Software Only (FPS, More Is Better): 1: 377.56, 2: 377.03, 3: 375.64
MPV - Video Input: Big Buck Bunny Sunflower 1080p - Decode: Software Only (FPS, More Is Better): 1: 1180.75, 2: 1177.43, 3: 1181.91
mpv 0.32.0

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
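
The bulk-insert pattern exercised here corresponds to CouchDB's _bulk_docs endpoint; a minimal sketch against a local server (database name and documents are placeholders):

    curl -X POST http://127.0.0.1:5984/benchdb/_bulk_docs \
         -H 'Content-Type: application/json' \
         -d '{"docs": [{"_id": "doc-1", "value": 1}, {"_id": "doc-2", "value": 2}]}'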

Apache CouchDB 3.1.1 - Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds, Fewer Is Better): 1: 147.52, 2: 149.15, 3: 150.37
(CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.
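
Since the load is generated with memtier-benchmark, a comparable manual run against a local KeyDB instance might look like this (host, port and load parameters are illustrative, not the test profile's exact settings):

    memtier_benchmark -s 127.0.0.1 -p 6379 -P redis -t 4 -c 50 -n 10000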

KeyDB 6.0.16 (Ops/sec, More Is Better): 1: 385851.58, 2: 382973.15, 3: 382203.75
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
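
The scaling factor / client / mode combinations reported below correspond to standard pgbench options; a minimal sketch of one such configuration (database name and run duration are placeholders):

    pgbench -i -s 100 benchdb            # initialize with scaling factor 100
    pgbench -c 50 -j 4 -T 60 -S benchdb  # 50 clients, read-only (select-only)
    pgbench -c 50 -j 4 -T 60 benchdb     # 50 clients, default read/write workload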

PostgreSQL pgbench 13.0 (TPS: More Is Better; Average Latency in ms: Fewer Is Better)

Scaling Factor - Clients - Mode               1         2         3
1 - 1 - Read Only (TPS)                       20536     20618     20556
1 - 1 - Read Only - Average Latency           0.049     0.048     0.049
1 - 1 - Read Write (TPS)                      209       201       198
1 - 1 - Read Write - Average Latency          4.796     4.987     5.063
1 - 50 - Read Only (TPS)                      114854    114466    113306
1 - 50 - Read Only - Average Latency          0.436     0.437     0.442
1 - 100 - Read Only (TPS)                     110927    109654    107694
1 - 100 - Read Only - Average Latency         0.902     0.912     0.929
1 - 250 - Read Only (TPS)                     92077     90359     91934
1 - 250 - Read Only - Average Latency         2.718     2.769     2.724
1 - 50 - Read Write (TPS)                     284       272       262
1 - 50 - Read Write - Average Latency         175.93    183.65    190.86
1 - 100 - Read Write (TPS)                    278       256       251
1 - 100 - Read Write - Average Latency        359.56    390.03    399.09
1 - 250 - Read Write (TPS)                    265       245       241
1 - 250 - Read Write - Average Latency        942.92    1019.73   1037.52
100 - 1 - Read Only (TPS)                     20240     20229     20241
100 - 1 - Read Only - Average Latency         0.049     0.049     0.049
100 - 1 - Read Write (TPS)                    196       191       184
100 - 1 - Read Write - Average Latency        5.109     5.231     5.424
100 - 50 - Read Only (TPS)                    97849     97079     97011
100 - 50 - Read Only - Average Latency        0.511     0.515     0.515
100 - 100 - Read Only (TPS)                   95282     94493     94424
100 - 100 - Read Only - Average Latency       1.050     1.058     1.060
100 - 250 - Read Only (TPS)                   81251     82741     83711
100 - 250 - Read Only - Average Latency       3.079     3.022     2.990
100 - 50 - Read Write (TPS)                   3636      3459      3365
100 - 50 - Read Write - Average Latency       13.76     14.47     14.87
100 - 100 - Read Write (TPS)                  4579      4429      4240
100 - 100 - Read Write - Average Latency      21.99     22.68     23.75
100 - 250 - Read Write (TPS)                  4824      4592      4501
100 - 250 - Read Write - Average Latency      51.84     54.62     55.78

(CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
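
Caffe's built-in timing mode produces numbers of this kind; a minimal sketch (the model path follows the stock Caffe repository layout, which is an assumption here):

    caffe time --model=models/bvlc_alexnet/deploy.prototxt --iterations=100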

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better): 1: 65515, 2: 65541, 3: 65520
Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better): 1: 130722, 2: 131091, 3: 130891
Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better): 1: 161333, 2: 161133, 3: 161112
Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better): 1: 323152, 2: 322330, 3: 323334
(CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 20.1 - Input: Carbon Nanotube (Seconds, Fewer Is Better): 1: 602.62, 2: 604.07, 3: 602.97
(CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better): 1: 10.04, 2: 10.06, 3: 10.10
Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, Fewer Is Better): 1: 56.11, 2: 55.81, 3: 55.95
Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, Fewer Is Better): 1: 5.590, 2: 5.570, 3: 5.576
Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better): 1: 8.022, 2: 7.674, 3: 7.683
Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, Fewer Is Better): 1: 61.63, 2: 61.65, 3: 61.63
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
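
For reference, a hedged sketch of a single forward pass through the ncnn Python bindings follows; the param/bin file names and the "data"/"prob" blob names are assumptions borrowed from the stock SqueezeNet example, while the test profile itself times the compiled C++ benchmark.

    # Hedged sketch of one forward pass through the ncnn Python bindings.
    # File names, blob names and the 3x227x227 input are illustrative assumptions.
    import numpy as np
    import ncnn

    net = ncnn.Net()
    net.load_param("squeezenet_v1.1.param")
    net.load_model("squeezenet_v1.1.bin")

    data = np.random.rand(3, 227, 227).astype(np.float32)   # dummy CHW input
    mat_in = ncnn.Mat(data)                                  # wrap the array as an ncnn.Mat

    ex = net.create_extractor()
    ex.input("data", mat_in)
    ret, mat_out = ex.extract("prob")                        # ret == 0 on success
    print(ret, np.array(mat_out).shape)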

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, Fewer Is Better)
  1: 24.92   SE +/- 0.08, N = 3   Min: 24.84 / Avg: 24.92 / Max: 25.08   MIN: 24.77 / MAX: 26.68
  2: 24.86   SE +/- 0.01, N = 3   Min: 24.85 / Avg: 24.86 / Max: 24.88   MIN: 24.78 / MAX: 25.75
  3: 24.89   SE +/- 0.03, N = 3   Min: 24.85 / Avg: 24.89 / Max: 24.94   MIN: 24.77 / MAX: 37.72

NCNN 20200916 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  1: 28.87   SE +/- 0.05, N = 3   Min: 28.81 / Avg: 28.87 / Max: 28.97   MIN: 28.75 / MAX: 29.64
  2: 28.76   SE +/- 0.01, N = 3   Min: 28.75 / Avg: 28.76 / Max: 28.78   MIN: 28.7 / MAX: 29.48
  3: 28.76   SE +/- 0.01, N = 3   Min: 28.74 / Avg: 28.76 / Max: 28.78   MIN: 28.67 / MAX: 30.45

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  1: 7.83   SE +/- 0.06, N = 3   Min: 7.77 / Avg: 7.83 / Max: 7.94   MIN: 7.7 / MAX: 27.26
  2: 7.78   SE +/- 0.00, N = 3   Min: 7.78 / Avg: 7.78 / Max: 7.79   MIN: 7.72 / MAX: 9.38
  3: 7.78   SE +/- 0.01, N = 3   Min: 7.77 / Avg: 7.78 / Max: 7.79   MIN: 7.71 / MAX: 9.57

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  1: 6.50   SE +/- 0.00, N = 3   Min: 6.5 / Avg: 6.5 / Max: 6.51     MIN: 6.44 / MAX: 8.75
  2: 6.53   SE +/- 0.02, N = 3   Min: 6.5 / Avg: 6.53 / Max: 6.57    MIN: 6.44 / MAX: 18.87
  3: 6.52   SE +/- 0.00, N = 3   Min: 6.52 / Avg: 6.52 / Max: 6.52   MIN: 6.44 / MAX: 8.06

NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  1: 4.56   SE +/- 0.00, N = 3   Min: 4.56 / Avg: 4.56 / Max: 4.57   MIN: 4.51 / MAX: 7.21
  2: 4.57   SE +/- 0.01, N = 3   Min: 4.55 / Avg: 4.57 / Max: 4.58   MIN: 4.53 / MAX: 6.2
  3: 4.58   SE +/- 0.01, N = 3   Min: 4.56 / Avg: 4.58 / Max: 4.59   MIN: 4.52 / MAX: 6.22

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  1: 6.56   SE +/- 0.01, N = 3   Min: 6.55 / Avg: 6.56 / Max: 6.58   MIN: 6.51 / MAX: 8.26
  2: 6.56   SE +/- 0.00, N = 3   Min: 6.55 / Avg: 6.56 / Max: 6.56   MIN: 6.51 / MAX: 9.26
  3: 6.57   SE +/- 0.00, N = 3   Min: 6.57 / Avg: 6.57 / Max: 6.57   MIN: 6.53 / MAX: 8.19

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  1: 10.46   SE +/- 0.01, N = 3   Min: 10.45 / Avg: 10.46 / Max: 10.47   MIN: 10.42 / MAX: 10.78
  2: 10.45   SE +/- 0.00, N = 3   Min: 10.45 / Avg: 10.45 / Max: 10.46   MIN: 10.41 / MAX: 13.2
  3: 10.46   SE +/- 0.01, N = 3   Min: 10.45 / Avg: 10.46 / Max: 10.48   MIN: 10.42 / MAX: 11.3

NCNN 20200916 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  1: 2.09   SE +/- 0.00, N = 3   Min: 2.09 / Avg: 2.09 / Max: 2.09   MIN: 2.07 / MAX: 2.19
  2: 2.09   SE +/- 0.00, N = 3   Min: 2.09 / Avg: 2.09 / Max: 2.09   MIN: 2.07 / MAX: 2.91
  3: 2.10   SE +/- 0.00, N = 3   Min: 2.1 / Avg: 2.1 / Max: 2.11     MIN: 2.08 / MAX: 2.33

NCNN 20200916 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  1: 23.14   SE +/- 0.02, N = 3   Min: 23.12 / Avg: 23.14 / Max: 23.18   MIN: 23.04 / MAX: 34.42
  2: 23.14   SE +/- 0.03, N = 3   Min: 23.1 / Avg: 23.14 / Max: 23.19    MIN: 23.03 / MAX: 36.37
  3: 23.18   SE +/- 0.03, N = 3   Min: 23.13 / Avg: 23.18 / Max: 23.23   MIN: 23.07 / MAX: 33.92

NCNN 20200916 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  1: 103.72   SE +/- 10.25, N = 3   Min: 93.36 / Avg: 103.72 / Max: 124.22   MIN: 93.04 / MAX: 2721.22
  2: 93.30    SE +/- 0.03, N = 3    Min: 93.25 / Avg: 93.3 / Max: 93.33      MIN: 93.07 / MAX: 106.12
  3: 93.29    SE +/- 0.03, N = 3    Min: 93.24 / Avg: 93.29 / Max: 93.35     MIN: 93.1 / MAX: 105.92

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  1: 23.52   SE +/- 0.02, N = 3   Min: 23.49 / Avg: 23.52 / Max: 23.56   MIN: 23.4 / MAX: 25.9
  2: 23.50   SE +/- 0.01, N = 3   Min: 23.49 / Avg: 23.5 / Max: 23.52    MIN: 23.38 / MAX: 24.04
  3: 23.50   SE +/- 0.01, N = 3   Min: 23.49 / Avg: 23.5 / Max: 23.51    MIN: 23.39 / MAX: 25.33

NCNN 20200916 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
  1: 22.64   SE +/- 0.03, N = 3   Min: 22.59 / Avg: 22.64 / Max: 22.67   MIN: 22.54 / MAX: 24.97
  2: 22.64   SE +/- 0.01, N = 3   Min: 22.63 / Avg: 22.64 / Max: 22.66   MIN: 22.56 / MAX: 24.94
  3: 22.64   SE +/- 0.02, N = 3   Min: 22.62 / Avg: 22.64 / Max: 22.67   MIN: 22.56 / MAX: 25.21

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  1: 47.02   SE +/- 0.01, N = 3   Min: 47 / Avg: 47.02 / Max: 47.04      MIN: 46.88 / MAX: 49.96
  2: 47.07   SE +/- 0.03, N = 3   Min: 47.02 / Avg: 47.07 / Max: 47.13   MIN: 46.89 / MAX: 57.05
  3: 47.05   SE +/- 0.03, N = 3   Min: 47.01 / Avg: 47.05 / Max: 47.1    MIN: 46.9 / MAX: 49.62

NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  1: 40.09   SE +/- 0.06, N = 3   Min: 40.03 / Avg: 40.09 / Max: 40.21   MIN: 39.88 / MAX: 53.45
  2: 39.92   SE +/- 0.01, N = 3   Min: 39.9 / Avg: 39.92 / Max: 39.94    MIN: 39.8 / MAX: 41.77
  3: 39.92   SE +/- 0.02, N = 3   Min: 39.9 / Avg: 39.92 / Max: 39.95    MIN: 39.78 / MAX: 42.74

All NCNN results: 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  1: 357.32   SE +/- 0.43, N = 3   Min: 356.87 / Avg: 357.32 / Max: 358.17   MIN: 355.94 / MAX: 359.3
  2: 357.85   SE +/- 0.33, N = 3   Min: 357.22 / Avg: 357.85 / Max: 358.35   MIN: 356.17 / MAX: 360.63
  3: 357.36   SE +/- 0.32, N = 3   Min: 356.96 / Avg: 357.36 / Max: 357.99   MIN: 356.03 / MAX: 359.25

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
  1: 342.39   SE +/- 5.07, N = 3   Min: 336.93 / Avg: 342.39 / Max: 352.52   MIN: 335.31 / MAX: 400.55
  2: 337.74   SE +/- 0.07, N = 3   Min: 337.67 / Avg: 337.74 / Max: 337.88   MIN: 336.52 / MAX: 340.1
  3: 337.82   SE +/- 0.10, N = 3   Min: 337.71 / Avg: 337.82 / Max: 338.02   MIN: 337.14 / MAX: 341.29

All TNN results: 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, More Is Better)
  1: 390197794.57   SE +/- 345454.84, N = 3   Min: 389536541.09 / Avg: 390197794.57 / Max: 390701842.79
  2: 390087861.34   SE +/- 938991.49, N = 3   Min: 388215001.25 / Avg: 390087861.34 / Max: 391144338.7
  3: 390272679.15   SE +/- 897687.70, N = 3   Min: 388563788.18 / Avg: 390272679.15 / Max: 391603882.8
  1. (CC) gcc options: -O3 -march=native -lm

Mlpack Benchmark

The Mlpack benchmark consists of scripts for benchmarking machine learning libraries. Learn more via the OpenBenchmarking.org test page.
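
The scikit_* entries below time scikit-learn implementations of the named algorithms. As a rough illustration of what scikit_ica measures, the following snippet times a FastICA fit on synthetic data; the matrix size and component count are assumptions, not the benchmark's actual dataset.

    # Illustrative timing of scikit-learn's FastICA, in the spirit of scikit_ica.
    # The synthetic data shape and component count are assumptions for illustration.
    import time
    import numpy as np
    from sklearn.decomposition import FastICA

    X = np.random.RandomState(0).rand(20000, 50)
    ica = FastICA(n_components=20, random_state=0, max_iter=200)

    start = time.time()
    ica.fit(X)
    print(f"FastICA fit time: {time.time() - start:.2f} s")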

Mlpack Benchmark - Benchmark: scikit_ica (Seconds, Fewer Is Better)
  1: 67.01   SE +/- 1.13, N = 3   Min: 64.75 / Avg: 67.01 / Max: 68.17
  2: 65.29   SE +/- 0.48, N = 3   Min: 64.68 / Avg: 65.29 / Max: 66.23
  3: 65.54   SE +/- 0.57, N = 3   Min: 64.68 / Avg: 65.54 / Max: 66.61

Mlpack Benchmark - Benchmark: scikit_qda (Seconds, Fewer Is Better)
  1: 87.89   SE +/- 1.04, N = 3   Min: 85.86 / Avg: 87.89 / Max: 89.23
  2: 86.60   SE +/- 0.15, N = 3   Min: 86.32 / Avg: 86.6 / Max: 86.84
  3: 87.89   SE +/- 0.37, N = 3   Min: 87.16 / Avg: 87.89 / Max: 88.37

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, Fewer Is Better)
  1: 27.66   SE +/- 0.01, N = 3   Min: 27.65 / Avg: 27.66 / Max: 27.68
  2: 27.70   SE +/- 0.01, N = 3   Min: 27.68 / Avg: 27.7 / Max: 27.72
  3: 27.70   SE +/- 0.02, N = 3   Min: 27.66 / Avg: 27.7 / Max: 27.73

Mlpack Benchmark - Benchmark: scikit_linearridgeregression (Seconds, Fewer Is Better)
  1: 4.32   SE +/- 0.03, N = 3    Min: 4.29 / Avg: 4.32 / Max: 4.38
  2: 4.33   SE +/- 0.01, N = 3    Min: 4.32 / Avg: 4.33 / Max: 4.35
  3: 4.39   SE +/- 0.04, N = 12   Min: 4.21 / Avg: 4.39 / Max: 4.58

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, More Is Better)
  1: 16736247   SE +/- 104643.97, N = 3   Min: 16627490 / Avg: 16736246.67 / Max: 16945480
  2: 16595780   SE +/- 52933.71, N = 3    Min: 16508330 / Avg: 16595780 / Max: 16691180
  3: 16583477   SE +/- 52236.76, N = 3    Min: 16484570 / Avg: 16583476.67 / Max: 16662070
  1. (CXX) g++ options: -O3 -fopenmp

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use cases. The InfluxDB test profile makes use of InfluxDB Inch to facilitate the benchmarks. Learn more via the OpenBenchmarking.org test page.
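
The snippet below is a hedged sketch of the kind of tagged, batched point writes such a workload issues, using the InfluxDB 1.x Python client; host, database, measurement and tag names are assumptions, and the test profile itself generates load with the inch tool rather than this client.

    # Hedged sketch of batched, tagged writes against InfluxDB 1.x.
    # Host, database, measurement and tag names are assumed for illustration.
    from influxdb import InfluxDBClient

    client = InfluxDBClient(host="localhost", port=8086, database="benchmark")
    client.create_database("benchmark")

    points = [
        {
            "measurement": "m0",
            "tags": {"tag0": "value0", "tag1": "value1"},
            "fields": {"v0": float(i)},
        }
        for i in range(10000)            # one 10,000-point batch, matching Batch Size: 10000
    ]
    client.write_points(points, batch_size=10000)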

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  1: 889455.4   SE +/- 3581.92, N = 3   Min: 882318.2 / Avg: 889455.4 / Max: 893558.5
  2: 876094.3   SE +/- 3852.84, N = 3   Min: 871245.9 / Avg: 876094.3 / Max: 883705.3
  3: 872007.4   SE +/- 3829.04, N = 3   Min: 864594.9 / Avg: 872007.37 / Max: 877379.8

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  1: 1016169.1   SE +/- 4142.78, N = 3   Min: 1007966.6 / Avg: 1016169.13 / Max: 1021283.7
  2: 1013846.1   SE +/- 3135.08, N = 3   Min: 1009157.1 / Avg: 1013846.07 / Max: 1019795.6
  3: 1012647.3   SE +/- 4099.63, N = 3   Min: 1005596.5 / Avg: 1012647.33 / Max: 1019797

InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  1: 1027433.3   SE +/- 2049.73, N = 3   Min: 1023399.6 / Avg: 1027433.27 / Max: 1030083.6
  2: 1028323.2   SE +/- 1131.06, N = 3   Min: 1026334.4 / Avg: 1028323.23 / Max: 1030251.1
  3: 1023332.0   SE +/- 1680.62, N = 3   Min: 1020068.9 / Avg: 1023332 / Max: 1025661.8

98 Results Shown

GLmark2
LeelaChessZero:
  BLAS
  Eigen
  Rand
NAMD
Dolfyn
FFTE
Timed HMMer Search
Incompact3D
Timed MAFFT Alignment
Monte Carlo Simulations of Ionised Nebulae
LAMMPS Molecular Dynamics Simulator:
  20k Atoms
  Rhodopsin Protein
WebP Image Encode:
  Default
  Quality 100
  Quality 100, Lossless
  Quality 100, Highest Compression
  Quality 100, Lossless, Highest Compression
BYTE Unix Benchmark
Zstd Compression:
  3
  19
LibRaw
TSCP
Timed LLVM Compilation
DeepSpeech
eSpeak-NG Speech Engine
RNNoise
MPV:
  Big Buck Bunny Sunflower 4K - Software Only
  Big Buck Bunny Sunflower 1080p - Software Only
Apache CouchDB
KeyDB
PostgreSQL pgbench:
  1 - 1 - Read Only
  1 - 1 - Read Only - Average Latency
  1 - 1 - Read Write
  1 - 1 - Read Write - Average Latency
  1 - 50 - Read Only
  1 - 50 - Read Only - Average Latency
  1 - 100 - Read Only
  1 - 100 - Read Only - Average Latency
  1 - 250 - Read Only
  1 - 250 - Read Only - Average Latency
  1 - 50 - Read Write
  1 - 50 - Read Write - Average Latency
  100 - 1 - Read Only
  100 - 1 - Read Only - Average Latency
  1 - 100 - Read Write
  1 - 100 - Read Write - Average Latency
  1 - 250 - Read Write
  1 - 250 - Read Write - Average Latency
  100 - 1 - Read Write
  100 - 1 - Read Write - Average Latency
  100 - 50 - Read Only
  100 - 50 - Read Only - Average Latency
  100 - 100 - Read Only
  100 - 100 - Read Only - Average Latency
  100 - 250 - Read Only
  100 - 250 - Read Only - Average Latency
  100 - 50 - Read Write
  100 - 50 - Read Write - Average Latency
  100 - 100 - Read Write
  100 - 100 - Read Write - Average Latency
  100 - 250 - Read Write
  100 - 250 - Read Write - Average Latency
Caffe:
  AlexNet - CPU - 100
  AlexNet - CPU - 200
  GoogleNet - CPU - 100
  GoogleNet - CPU - 200
GPAW
Mobile Neural Network:
  SqueezeNetV1.0
  resnet-v2-50
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3
NCNN:
  CPU - squeezenet
  CPU - mobilenet
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - shufflenet-v2
  CPU - mnasnet
  CPU - efficientnet-b0
  CPU - blazeface
  CPU - googlenet
  CPU - vgg16
  CPU - resnet18
  CPU - alexnet
  CPU - resnet50
  CPU - yolov4-tiny
TNN:
  CPU - MobileNet v2
  CPU - SqueezeNet v1.1
Hierarchical INTegration
Mlpack Benchmark:
  scikit_ica
  scikit_qda
  scikit_svm
  scikit_linearridgeregression
Kripke
InfluxDB:
  4 - 10000 - 2,5000,1 - 10000
  64 - 10000 - 2,5000,1 - 10000
  1024 - 10000 - 2,5000,1 - 10000