Core i3 7100 September

Intel Core i3-7100 testing with a Gigabyte B250M-DS3H-CF (F9 BIOS) and Gigabyte Intel HD 630 3GB on Ubuntu 19.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009230-FI-COREI371045
Tests in this result file span the following categories:

C/C++ Compiler Tests: 3 tests
Compression Tests: 3 tests
CPU Massive: 3 tests
Creator Workloads: 5 tests
Fortran Tests: 3 tests
HPC - High Performance Computing: 8 tests
Imaging: 3 tests
Machine Learning: 4 tests
Molecular Dynamics: 3 tests
MPI Benchmarks: 3 tests
Multi-Core: 4 tests
OpenMPI Tests: 3 tests
Scientific Computing: 4 tests
Server CPU Tests: 2 tests
Single-Threaded: 2 tests


Test Runs

Result Identifier   Date                Test Duration
Core i3 7100        September 22 2020   7 Hours, 49 Minutes
v5.9                September 23 2020   7 Hours, 45 Minutes
v5.9 Try 2          September 23 2020   7 Hours, 35 Minutes

Average test duration across the three runs: 7 Hours, 43 Minutes


Core i3 7100 September Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: Intel Core i3-7100 @ 3.90GHz (2 Cores / 4 Threads)
Motherboard: Gigabyte B250M-DS3H-CF (F9 BIOS)
Chipset: Intel Xeon E3-1200 v6/7th + B250
Memory: 8GB
Disk: 250GB Western Digital WDS250G1B0A-
Graphics: Gigabyte Intel HD 630 3GB (1100MHz)
Audio: Realtek ALC887-VD
Monitor: VA2431
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 19.10
Kernel: 5.9.0-050900rc1daily20200822-generic (x86_64) 20200821
Desktop: GNOME Shell 3.34.1
Display Server: X Server 1.20.5
Display Driver: modesetting 1.20.5
OpenGL: 4.5 Mesa 19.2.8
Compiler: GCC 9.2.1 20191008
File-System: ext4
Screen Resolution: 1920x1080

System Logs / Notes
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- CPU Microcode: 0xd6
- Python 2.7.17rc1 + Python 3.7.5
- Security: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite / OpenBenchmarking.org): relative performance of the Core i3 7100, v5.9, and v5.9 Try 2 runs, spanning roughly 100% to 106%, across NAMD, Incompact3D, eSpeak-NG Speech Engine, Mobile Neural Network, Monte Carlo Simulations of Ionised Nebulae, LibRaw, GLmark2, OSBench, Zstd Compression, LAMMPS Molecular Dynamics Simulator, WebP Image Encode, NCNN, System GZIP Decompression, dcraw, AOM AV1, TensorFlow Lite, System ZLIB Decompression, and OpenCV.

Core i3 7100 September - combined results table (OpenBenchmarking.org): per-test values for the Core i3 7100, v5.9, and v5.9 Try 2 runs across all of the benchmarks above; the same results are broken out individually, with error data, in the sections that follow.

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 - Resolution: 1920 x 1080 (Score, More Is Better)
  v5.9 Try 2:   480
  v5.9:         478
  Core i3 7100: 477

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives such as the time to create threads and processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.
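
The units below are the average cost of one such primitive operation. As an illustration of what "us Per Event" means, a minimal thread-creation micro-benchmark in C could be structured as below. This is a sketch in the spirit of the Create Threads test, not the actual OSBench source, and it assumes a Linux/glibc toolchain (built with something like: gcc -O2 create_threads.c -lpthread).

/* Sketch of a "create threads" micro-benchmark: spawn and join many
 * short-lived threads and report the average cost per event in
 * microseconds. Illustrative only; not the OSBench implementation. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define EVENTS 10000

static void *worker(void *arg) { return arg; }

static double now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
}

int main(void)
{
    double start = now_us();
    for (int i = 0; i < EVENTS; i++) {
        pthread_t t;
        if (pthread_create(&t, NULL, worker, NULL) != 0)
            return 1;
        pthread_join(t, NULL);
    }
    printf("%.2f us per thread create/join\n", (now_us() - start) / EVENTS);
    return 0;
}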

All OSBench results below were compiled with: (CC) gcc options: -lm; N = 3 for each run.

Test: Create Files (us Per Event, Fewer Is Better)
  v5.9 Try 2:   16.45  (SE +/- 0.02, range 16.41 - 16.47)
  v5.9:         16.48  (SE +/- 0.03, range 16.44 - 16.54)
  Core i3 7100: 16.46  (SE +/- 0.05, range 16.37 - 16.52)

Test: Create Threads (us Per Event, Fewer Is Better)
  v5.9 Try 2:   14.09  (SE +/- 0.02, range 14.05 - 14.12)
  v5.9:         14.25  (SE +/- 0.04, range 14.17 - 14.30)
  Core i3 7100: 14.30  (SE +/- 0.06, range 14.19 - 14.38)

Test: Launch Programs (us Per Event, Fewer Is Better)
  v5.9 Try 2:   117.87  (SE +/- 1.75, range 115.16 - 121.15)
  v5.9:         119.99  (SE +/- 0.82, range 119.10 - 121.63)
  Core i3 7100: 116.08  (SE +/- 0.65, range 115.43 - 117.38)

Test: Create Processes (us Per Event, Fewer Is Better)
  v5.9 Try 2:   33.80  (SE +/- 0.14, range 33.57 - 34.07)
  v5.9:         33.29  (SE +/- 0.07, range 33.16 - 33.38)
  Core i3 7100: 34.07  (SE +/- 0.54, range 33.33 - 35.11)

Test: Memory Allocations (Ns Per Event, Fewer Is Better)
  v5.9 Try 2:   81.31  (SE +/- 0.04, range 81.26 - 81.38)
  v5.9:         81.32  (SE +/- 0.02, range 81.29 - 81.36)
  Core i3 7100: 83.00  (SE +/- 0.01, range 82.98 - 83.02)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
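
For scale, days/ns is simply the inverse of the more common ns/day figure: the roughly 7.1 days/ns results below work out to about 1 / 7.1 ≈ 0.14 ns of simulated time per day of wall-clock time.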

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
  v5.9 Try 2:   7.12481  (SE +/- 0.01481, N = 3, range 7.10 - 7.15)
  v5.9:         7.11332  (SE +/- 0.00743, N = 3, range 7.10 - 7.13)
  Core i3 7100: 7.52863  (SE +/- 0.10701, N = 3, range 7.31 - 7.64)

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17 - Input: Cylinder (Seconds, Fewer Is Better)
  v5.9 Try 2:   1313.17  (SE +/- 0.54, N = 3, range 1312.60 - 1314.25)
  v5.9:         1314.70  (SE +/- 2.19, N = 3, range 1311.93 - 1319.02)
  Core i3 7100: 1346.34  (SE +/- 2.53, N = 3, range 1341.88 - 1350.63)
  1. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0 (Seconds, Fewer Is Better)
  v5.9 Try 2:   416  (SE +/- 0.88, N = 3, range 414 - 417)
  v5.9:         414
  Core i3 7100: 415  (SE +/- 0.58, N = 3, range 414 - 416)
  1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 24Aug2020 - Model: Rhodopsin Protein (ns/day, More Is Better)
  v5.9 Try 2:   1.532  (SE +/- 0.006, N = 3, range 1.52 - 1.54)
  v5.9:         1.525  (SE +/- 0.002, N = 3, range 1.52 - 1.53)
  Core i3 7100: 1.524  (SE +/- 0.001, N = 3, range 1.52 - 1.53)
  1. (CXX) g++ options: -O3 -pthread -lm

WebP Image Encode

This is a test of Google's libwebp using the cwebp image encode utility with a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
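
The encode settings named below (quality 100, lossless, highest compression) are cwebp options; expressed through libwebp's simple encoding API they map roughly onto the sketch below. This is an illustrative assumption rather than the test profile's actual code, which drives the cwebp command-line tool, and the RGB buffer stands in for the decoded JPEG (built with something like: gcc webp_sketch.c -lwebp).

/* Sketch: "Quality 100" and "Lossless" style encodes via libwebp's simple API.
 * Illustrative only; the input buffer is a zero-filled placeholder. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <webp/encode.h>

int main(void)
{
    int width = 6000, height = 4000, stride = width * 3;
    uint8_t *rgb = calloc((size_t)stride * height, 1);
    uint8_t *out = NULL;

    /* Lossy encode at quality factor 100. */
    size_t lossy = WebPEncodeRGB(rgb, width, height, stride, 100.0f, &out);
    WebPFree(out);

    /* Lossless encode of the same buffer. */
    size_t lossless = WebPEncodeLosslessRGB(rgb, width, height, stride, &out);
    WebPFree(out);

    printf("lossy: %zu bytes, lossless: %zu bytes\n", lossy, lossless);
    free(rgb);
    return 0;
}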

WebP Image Encode 1.1 (Encode Time - Seconds, Fewer Is Better; N = 3 for each run)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Encode Settings: Default
  v5.9 Try 2:   1.726  (SE +/- 0.001, range 1.72 - 1.73)
  v5.9:         1.724  (SE +/- 0.001, range 1.72 - 1.73)
  Core i3 7100: 1.725  (SE +/- 0.001, range 1.72 - 1.73)

Encode Settings: Quality 100
  v5.9 Try 2:   2.713  (SE +/- 0.001)
  v5.9:         2.714  (SE +/- 0.002, range 2.71 - 2.72)
  Core i3 7100: 2.716  (SE +/- 0.001)

Encode Settings: Quality 100, Lossless
  v5.9 Try 2:   22.71  (SE +/- 0.03, range 22.67 - 22.77)
  v5.9:         22.62  (SE +/- 0.05, range 22.55 - 22.71)
  Core i3 7100: 22.82  (SE +/- 0.01, range 22.81 - 22.84)

Encode Settings: Quality 100, Highest Compression
  v5.9 Try 2:   8.213  (SE +/- 0.001)
  v5.9:         8.217  (SE +/- 0.008, range 8.20 - 8.23)
  Core i3 7100: 8.254  (SE +/- 0.013, range 8.23 - 8.27)

Encode Settings: Quality 100, Lossless, Highest Compression
  v5.9 Try 2:   54.24  (SE +/- 0.02, range 54.21 - 54.27)
  v5.9:         54.01  (SE +/- 0.01, range 54.00 - 54.03)
  Core i3 7100: 54.57  (SE +/- 0.28, range 54.28 - 55.13)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
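
Level 3 favours throughput while level 19 trades speed for compression ratio. A hedged sketch of how such a measurement could be taken with the zstd simple API is shown below; it is not the actual test harness, and it uses a synthetic zero-filled buffer in place of the Ubuntu ISO (built with something like: gcc zstd_sketch.c -lzstd).

/* Sketch: time ZSTD_compress() at levels 3 and 19 and report MB/s.
 * Illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zstd.h>

static double compress_mbps(const void *src, size_t src_size, int level)
{
    size_t bound = ZSTD_compressBound(src_size);
    void *dst = malloc(bound);
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    size_t written = ZSTD_compress(dst, bound, src, src_size, level);
    clock_gettime(CLOCK_MONOTONIC, &b);
    free(dst);
    if (ZSTD_isError(written))
        return -1.0;
    double secs = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    return (src_size / 1e6) / secs;   /* MB/s, matching the result units below */
}

int main(void)
{
    size_t src_size = 64 * 1024 * 1024;   /* placeholder input buffer */
    void *src = calloc(1, src_size);
    printf("level 3:  %.1f MB/s\n", compress_mbps(src, src_size, 3));
    printf("level 19: %.1f MB/s\n", compress_mbps(src, src_size, 19));
    free(src);
    return 0;
}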

Zstd Compression 1.4.5 (MB/s, More Is Better; N = 3 for each run)
1. (CC) gcc options: -O3 -pthread -lz -llzma

Compression Level: 3
  v5.9 Try 2:   1549.0  (SE +/- 4.02, range 1541.4 - 1555.1)
  v5.9:         1548.6  (SE +/- 4.45, range 1540.0 - 1554.8)
  Core i3 7100: 1544.3  (SE +/- 7.95, range 1528.4 - 1552.6)

Compression Level: 19
  v5.9 Try 2:   12.6  (SE +/- 0.03, range 12.6 - 12.7)
  v5.9:         12.7  (SE +/- 0.00)
  Core i3 7100: 12.6  (SE +/- 0.03, range 12.6 - 12.7)

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
  v5.9 Try 2:   19.03  (SE +/- 0.01, N = 3, range 19.02 - 19.04)
  v5.9:         18.91  (SE +/- 0.04, N = 3, range 18.83 - 18.97)
  Core i3 7100: 18.88  (SE +/- 0.06, N = 3, range 18.81 - 18.99)
  1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 (Frames Per Second, More Is Better; N = 3 for each run)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Encoder Mode: Speed 0 Two-Pass
  v5.9 Try 2:   0.16  (SE +/- 0.00)
  v5.9:         0.16  (SE +/- 0.00)
  Core i3 7100: 0.16  (SE +/- 0.00)

Encoder Mode: Speed 4 Two-Pass
  v5.9 Try 2:   1.45  (SE +/- 0.00, range 1.44 - 1.45)
  v5.9:         1.45  (SE +/- 0.00)
  Core i3 7100: 1.45  (SE +/- 0.00)

Encoder Mode: Speed 6 Realtime
  v5.9 Try 2:   10.23  (SE +/- 0.00, range 10.22 - 10.23)
  v5.9:         10.22  (SE +/- 0.01, range 10.21 - 10.23)
  Core i3 7100: 10.23  (SE +/- 0.00, range 10.22 - 10.23)

Encoder Mode: Speed 6 Two-Pass
  v5.9 Try 2:   2.33  (SE +/- 0.00)
  v5.9:         2.33  (SE +/- 0.00, range 2.32 - 2.33)
  Core i3 7100: 2.33  (SE +/- 0.00)

Encoder Mode: Speed 8 Realtime
  v5.9 Try 2:   29.79  (SE +/- 0.09, range 29.61 - 29.89)
  v5.9:         29.90  (SE +/- 0.06, range 29.78 - 29.99)
  Core i3 7100: 29.88  (SE +/- 0.04, range 29.81 - 29.96)

dcraw

This test times how long it takes to convert several high-resolution RAW NEF image files to PPM image format using dcraw. Learn more via the OpenBenchmarking.org test page.

dcraw - RAW To PPM Image Conversion (Seconds, Fewer Is Better)
  v5.9 Try 2:   42.71  (SE +/- 0.03, N = 3, range 42.66 - 42.77)
  v5.9:         42.72  (SE +/- 0.02, N = 3, range 42.70 - 42.75)
  Core i3 7100: 42.70  (SE +/- 0.01, N = 3, range 42.68 - 42.72)
  1. (CC) gcc options: -lm

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
  v5.9 Try 2:   31.56  (SE +/- 0.26, N = 4, range 30.80 - 31.97)
  v5.9:         32.20  (SE +/- 0.16, N = 4, range 31.83 - 32.52)
  Core i3 7100: 32.11  (SE +/- 0.29, N = 4, range 31.35 - 32.75)
  1. (CC) gcc options: -O2 -std=c99

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.

System GZIP Decompression (Seconds, Fewer Is Better)
  v5.9 Try 2:   3.301  (SE +/- 0.038, N = 13, range 3.26 - 3.75)
  v5.9:         3.296  (SE +/- 0.034, N = 13, range 3.26 - 3.71)
  Core i3 7100: 3.295  (SE +/- 0.034, N = 14, range 3.26 - 3.74)

System ZLIB Decompression

This test measures the time to decompress a Linux kernel tarball using ZLIB. Learn more via the OpenBenchmarking.org test page.
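
A rough sketch of this kind of measurement using zlib's gzread() interface is shown below; it is not the test profile's own code, and the input file name is a placeholder for whatever kernel tarball is used (built with something like: gcc zlib_sketch.c -lz).

/* Sketch: time a gzip/zlib decompression pass and report elapsed
 * milliseconds, as in the results below. Illustrative only. */
#include <stdio.h>
#include <time.h>
#include <zlib.h>

int main(void)
{
    static char buf[1 << 20];                  /* 1 MiB read chunks */
    gzFile f = gzopen("linux.tar.gz", "rb");   /* placeholder file name */
    if (!f) {
        fprintf(stderr, "could not open input\n");
        return 1;
    }
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    long long total = 0;
    int n;
    while ((n = gzread(f, buf, sizeof buf)) > 0)
        total += n;                            /* data discarded; only timing matters */
    clock_gettime(CLOCK_MONOTONIC, &b);
    gzclose(f);
    double ms = (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    printf("decompressed %lld bytes in %.0f ms\n", total, ms);
    return 0;
}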

System ZLIB Decompression 1.2.7 (ms, Fewer Is Better)
  v5.9 Try 2:   1919.28  (SE +/- 8.09, N = 10, range 1903.88 - 1987.14)
  v5.9:         1918.09  (SE +/- 8.07, N = 10, range 1904.87 - 1989.94)
  Core i3 7100: 1920.46  (SE +/- 10.94, N = 10, range 1904.14 - 2018.19)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
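
The figures below are average per-inference times in microseconds. For orientation, a minimal timing loop over TensorFlow Lite's C API might look like the sketch below; the model path and thread count are placeholder assumptions, and this is not the test profile's own harness.

/* Sketch: average inference time with the TensorFlow Lite C API.
 * Illustrative only; inputs are left as zeros. */
#include <stdio.h>
#include <time.h>
#include "tensorflow/lite/c/c_api.h"

int main(void)
{
    TfLiteModel *model = TfLiteModelCreateFromFile("mobilenet.tflite"); /* placeholder */
    if (!model) {
        fprintf(stderr, "model not found\n");
        return 1;
    }
    TfLiteInterpreterOptions *opts = TfLiteInterpreterOptionsCreate();
    TfLiteInterpreterOptionsSetNumThreads(opts, 4);                     /* assumption */
    TfLiteInterpreter *interp = TfLiteInterpreterCreate(model, opts);
    if (!interp)
        return 1;
    TfLiteInterpreterAllocateTensors(interp);

    const int runs = 50;
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < runs; i++)
        TfLiteInterpreterInvoke(interp);
    clock_gettime(CLOCK_MONOTONIC, &b);

    double us = ((b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) / 1e3) / runs;
    printf("average inference: %.0f microseconds\n", us);

    TfLiteInterpreterDelete(interp);
    TfLiteInterpreterOptionsDelete(opts);
    TfLiteModelDelete(model);
    return 0;
}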

TensorFlow Lite 2020-08-23 (Microseconds, Fewer Is Better; N = 3 for each run)

Model: SqueezeNet
  v5.9 Try 2:   915627  (SE +/- 123.30, range 915489 - 915873)
  v5.9:         915608  (SE +/- 125.13, range 915451 - 915855)
  Core i3 7100: 915396  (SE +/- 17.04, range 915366 - 915425)

Model: Inception V4
  v5.9 Try 2:   13230000  (SE +/- 1153.26, range 13227800 - 13231700)
  v5.9:         13231567  (SE +/- 523.87, range 13230900 - 13232600)
  Core i3 7100: 13230467  (SE +/- 120.19, range 13230300 - 13230700)

Model: NASNet Mobile
  v5.9 Try 2:   655734  (SE +/- 111.30, range 655555 - 655938)
  v5.9:         655610  (SE +/- 70.90, range 655468 - 655686)
  Core i3 7100: 655242  (SE +/- 216.58, range 654892 - 655638)

Model: Mobilenet Float
  v5.9 Try 2:   623466  (SE +/- 50.94, range 623367 - 623537)
  v5.9:         623386  (SE +/- 52.17, range 623284 - 623456)
  Core i3 7100: 623710  (SE +/- 358.36, range 623319 - 624426)

Model: Mobilenet Quant
  v5.9 Try 2:   641005  (SE +/- 82.78, range 640856 - 641142)
  v5.9:         640951  (SE +/- 92.16, range 640777 - 641091)
  Core i3 7100: 640759  (SE +/- 113.86, range 640596 - 640978)

Model: Inception ResNet V2
  v5.9 Try 2:   11973867  (SE +/- 600.93, range 11972700 - 11974700)
  v5.9:         11974733  (SE +/- 762.31, range 11973300 - 11975900)
  Core i3 7100: 11975567  (SE +/- 328.30, range 11975100 - 11976200)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 (ms, Fewer Is Better; N = 3 for each run)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Model: SqueezeNetV1.0
  v5.9 Try 2:   15.03  (SE +/- 0.01, range 15.01 - 15.06; per-run MIN 14.94 / MAX 32.93)
  v5.9:         15.07  (SE +/- 0.03, range 15.03 - 15.14; per-run MIN 14.95 / MAX 55.49)
  Core i3 7100: 15.13  (SE +/- 0.02, range 15.09 - 15.16; per-run MIN 15.00 / MAX 33.36)

Model: resnet-v2-50
  v5.9 Try 2:   66.18  (SE +/- 0.02, range 66.15 - 66.21; per-run MIN 65.93 / MAX 98.62)
  v5.9:         66.13  (SE +/- 0.06, range 66.02 - 66.23; per-run MIN 65.86 / MAX 84.63)
  Core i3 7100: 66.74  (SE +/- 0.04, range 66.66 - 66.80; per-run MIN 66.42 / MAX 125.08)

Model: MobileNetV2_224
  v5.9 Try 2:   7.690  (SE +/- 0.011, range 7.67 - 7.71; per-run MIN 7.59 / MAX 25.97)
  v5.9:         7.706  (SE +/- 0.015, range 7.69 - 7.74; per-run MIN 7.61 / MAX 10.62)
  Core i3 7100: 7.784  (SE +/- 0.017, range 7.76 - 7.82; per-run MIN 7.64 / MAX 48.48)

Model: mobilenet-v1-1.0
  v5.9 Try 2:   10.84  (SE +/- 0.02, range 10.81 - 10.87; per-run MIN 10.75 / MAX 37.07)
  v5.9:         10.81  (SE +/- 0.01, range 10.80 - 10.82; per-run MIN 10.74 / MAX 29.24)
  Core i3 7100: 10.87  (SE +/- 0.01, range 10.87 - 10.88; per-run MIN 10.80 / MAX 11.16)

Model: inception-v3
  v5.9 Try 2:   88.21  (SE +/- 0.03, range 88.15 - 88.26; per-run MIN 87.73 / MAX 106.12)
  v5.9:         88.26  (SE +/- 0.04, range 88.19 - 88.34; per-run MIN 87.84 / MAX 106.52)
  Core i3 7100: 89.55  (SE +/- 0.19, range 89.23 - 89.88; per-run MIN 88.66 / MAX 281.78)

NCNN
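
NCNN is a high-performance neural network inference framework developed by Tencent, optimized for running deep learning models efficiently on CPUs and mobile devices; the results below use its bundled benchmark models on the CPU target.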

NCNN 20200916, Target: CPU (ms, Fewer Is Better; N = 3 for each run)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Model: squeezenet_int8
  v5.9 Try 2:   31.37  (SE +/- 0.02, range 31.34 - 31.41; per-run MIN 31.24 / MAX 41.78)
  v5.9:         31.36  (SE +/- 0.01, range 31.35 - 31.37; per-run MIN 31.20 / MAX 41.44)
  Core i3 7100: 31.41  (SE +/- 0.04, range 31.34 - 31.45; per-run MIN 31.27 / MAX 41.76)

Model: mobilenet_v3
  v5.9 Try 2:   9.06  (SE +/- 0.01, range 9.04 - 9.08; per-run MIN 8.98 / MAX 19.24)
  v5.9:         9.04  (SE +/- 0.01, range 9.02 - 9.05; per-run MIN 8.97 / MAX 10.58)
  Core i3 7100: 9.10  (SE +/- 0.01, range 9.09 - 9.11; per-run MIN 9.04 / MAX 9.41)

Model: squeezenet
  v5.9 Try 2:   6.24  (SE +/- 0.01, range 6.22 - 6.26; per-run MIN 6.18 / MAX 6.47)
  v5.9:         6.23  (SE +/- 0.01, range 6.22 - 6.24; per-run MIN 6.18 / MAX 6.49)
  Core i3 7100: 6.27  (SE +/- 0.02, range 6.25 - 6.31; per-run MIN 6.22 / MAX 6.57)

Model: mnasnet
  v5.9 Try 2:   9.69  (SE +/- 0.02, range 9.66 - 9.72; per-run MIN 9.62 / MAX 11.53)
  v5.9:         9.66  (SE +/- 0.02, range 9.63 - 9.70; per-run MIN 9.61 / MAX 20.07)
  Core i3 7100: 9.75  (SE +/- 0.00, range 9.74 - 9.75; per-run MIN 9.71 / MAX 10.48)

Model: blazeface
  v5.9 Try 2:   2.61  (SE +/- 0.00, range 2.61 - 2.62; per-run MIN 2.59 / MAX 2.65)
  v5.9:         2.62  (SE +/- 0.01, range 2.61 - 2.64; per-run MIN 2.59 / MAX 2.66)
  Core i3 7100: 2.62  (SE +/- 0.00, range 2.61 - 2.62; per-run MIN 2.60 / MAX 2.72)

Model: googlenet_int8
  v5.9 Try 2:   86.45  (SE +/- 0.04, range 86.38 - 86.50; per-run MIN 86.22 / MAX 96.83)
  v5.9:         86.51  (SE +/- 0.08, range 86.42 - 86.68; per-run MIN 86.20 / MAX 143.65)
  Core i3 7100: 86.59  (SE +/- 0.01, range 86.57 - 86.61; per-run MIN 86.39 / MAX 97.01)

Model: vgg16_int8
  v5.9 Try 2:   382.10  (SE +/- 1.16, range 380.18 - 384.20; per-run MIN 379.28 / MAX 395.55)
  v5.9:         382.21  (SE +/- 1.60, range 379.29 - 384.79; per-run MIN 378.27 / MAX 413.65)
  Core i3 7100: 381.44  (SE +/- 0.15, range 381.14 - 381.63; per-run MIN 380.34 / MAX 393.15)

Model: resnet18_int8
  v5.9 Try 2:   59.02  (SE +/- 0.03, range 58.97 - 59.05; per-run MIN 58.86 / MAX 68.64)
  v5.9:         58.97  (SE +/- 0.02, range 58.94 - 58.99; per-run MIN 58.84 / MAX 60.70)
  Core i3 7100: 59.17  (SE +/- 0.04, range 59.10 - 59.25; per-run MIN 58.97 / MAX 69.50)

Model: alexnet
  v5.9 Try 2:   26.14  (SE +/- 0.03, range 26.09 - 26.19; per-run MIN 25.88 / MAX 33.67)
  v5.9:         26.02  (SE +/- 0.01, range 26.00 - 26.04; per-run MIN 25.79 / MAX 36.44)
  Core i3 7100: 26.15  (SE +/- 0.01, range 26.14 - 26.17; per-run MIN 25.99 / MAX 34.84)

Model: resnet50_int8
  v5.9 Try 2:   189.50  (SE +/- 0.04, range 189.44 - 189.57; per-run MIN 189.17 / MAX 199.59)
  v5.9:         189.46  (SE +/- 0.04, range 189.41 - 189.53; per-run MIN 189.15 / MAX 198.97)
  Core i3 7100: 189.78  (SE +/- 0.07, range 189.69 - 189.92; per-run MIN 189.33 / MAX 251.14)

Model: mobilenetv2_yolov3
  v5.9 Try 2:   42.82  (SE +/- 0.03, range 42.77 - 42.88; per-run MIN 42.64 / MAX 53.51)
  v5.9:         42.79  (SE +/- 0.02, range 42.77 - 42.83; per-run MIN 42.66 / MAX 45.31)
  Core i3 7100: 42.94  (SE +/- 0.02, range 42.92 - 42.98; per-run MIN 42.82 / MAX 52.91)

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.4 (ms, Fewer Is Better)
1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

Test: Features 2D
  v5.9 Try 2:   189856  (SE +/- 2284.45, N = 12, range 180782 - 205108)
  v5.9:         190033  (SE +/- 2927.06, N = 12, range 180881 - 207360)
  Core i3 7100: 192711  (SE +/- 3170.64, N = 12, range 181809 - 218671)

Test: Object Detection
  v5.9 Try 2:   54645  (SE +/- 669.60, N = 4, range 53327 - 56471)
  v5.9:         53252  (SE +/- 668.37, N = 15, range 50288 - 58833)
  Core i3 7100: 52981  (SE +/- 613.72, N = 15, range 48218 - 59389)

Test: DNN - Deep Neural Network
  v5.9 Try 2:   7193  (SE +/- 74.60, N = 3, range 7101 - 7341)
  v5.9:         7306  (SE +/- 102.33, N = 3, range 7109 - 7452)
  Core i3 7100: 7154  (SE +/- 48.67, N = 3, range 7059 - 7219)