Core i3 7100 September

Intel Core i3-7100 testing with a Gigabyte B250M-DS3H-CF (F9 BIOS) and Gigabyte Intel HD 630 3GB on Ubuntu 19.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009230-FI-COREI371045
Run Management

  Result Identifier   Date                 Test Duration
  Core i3 7100        September 22 2020    7 Hours, 49 Minutes
  v5.9                September 23 2020    7 Hours, 45 Minutes
  v5.9 Try 2          September 23 2020    7 Hours, 35 Minutes
  (Average)                                7 Hours, 43 Minutes

System Details

  Processor: Intel Core i3-7100 @ 3.90GHz (2 Cores / 4 Threads)
  Motherboard: Gigabyte B250M-DS3H-CF (F9 BIOS)
  Chipset: Intel Xeon E3-1200 v6/7th + B250
  Memory: 8GB
  Disk: 250GB Western Digital WDS250G1B0A-
  Graphics: Gigabyte Intel HD 630 3GB (1100MHz)
  Audio: Realtek ALC887-VD
  Monitor: VA2431
  Network: Realtek RTL8111/8168/8411
  OS: Ubuntu 19.10
  Kernel: 5.9.0-050900rc1daily20200822-generic (x86_64) 20200821
  Desktop: GNOME Shell 3.34.1
  Display Server: X Server 1.20.5
  Display Driver: modesetting 1.20.5
  OpenGL: 4.5 Mesa 19.2.8
  Compiler: GCC 9.2.1 20191008
  File-System: ext4
  Screen Resolution: 1920x1080

System Logs

  - Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: intel_pstate powersave
  - CPU Microcode: 0xd6
  - Python 2.7.17rc1 + Python 3.7.5
  - Security mitigations: itlb_multihit: KVM: Mitigation of VMX disabled; l1tf: Mitigation of PTE Inversion, VMX: conditional cache flushes, SMT vulnerable; mds: Mitigation of Clear buffers, SMT vulnerable; meltdown: Mitigation of PTI; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling; srbds: Mitigation of Microcode; tsx_async_abort: Not affected

Result Overview (Core i3 7100 vs. v5.9 vs. v5.9 Try 2; relative performance spans roughly 100%-106%): NAMD, Incompact3D, eSpeak-NG Speech Engine, Mobile Neural Network, Monte Carlo Simulations of Ionised Nebulae, LibRaw, GLmark2, OSBench, Zstd Compression, LAMMPS Molecular Dynamics Simulator, WebP Image Encode, NCNN, System GZIP Decompression, dcraw, AOM AV1, TensorFlow Lite, OpenCV, System ZLIB Decompression

(Condensed results table: individual per-test results are presented below.)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better):
  Core i3 7100: 7.52863  (SE +/- 0.10701, N = 3; Min 7.31 / Avg 7.53 / Max 7.64)
  v5.9:         7.11332  (SE +/- 0.00743, N = 3; Min 7.1 / Avg 7.11 / Max 7.13)
  v5.9 Try 2:   7.12481  (SE +/- 0.01481, N = 3; Min 7.1 / Avg 7.12 / Max 7.15)
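Each result above is the mean of N runs together with its standard error (SE), i.e. the sample standard deviation of the runs divided by the square root of N. A minimal sketch of that computation, using hypothetical per-run NAMD times chosen to be consistent with the reported Min/Max (the actual per-run values are not published in this file):

```python
from math import sqrt
from statistics import mean, stdev

def standard_error(runs):
    """Sample standard deviation of the runs divided by sqrt(N)."""
    return stdev(runs) / sqrt(len(runs))

# Hypothetical per-run times (days/ns), consistent with Min 7.31 / Max 7.64
runs = [7.31, 7.63, 7.64]
print(round(mean(runs), 2), round(standard_error(runs), 5))
```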

OSBench

OSBench is a collection of micro-benchmarks for measuring operating-system primitives, such as the time to create threads and processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.

OSBench - Test: Launch Programs (us Per Event, fewer is better):
  Core i3 7100: 116.08  (SE +/- 0.65, N = 3; Min 115.43 / Avg 116.08 / Max 117.38)
  v5.9:         119.99  (SE +/- 0.82, N = 3; Min 119.1 / Avg 119.99 / Max 121.63)
  v5.9 Try 2:   117.87  (SE +/- 1.75, N = 3; Min 115.16 / Avg 117.87 / Max 121.15)
  Flags: (CC) gcc options: -lm
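This is not OSBench's own code (OSBench is written in C), but the "us Per Event" idea can be sketched in a few lines: time a batch of operations on one OS primitive and divide by the count. Here the primitive is thread creation:

```python
import threading
import time

def us_per_event(n=200):
    """Create and join n no-op threads; return microseconds per create/join event."""
    start = time.perf_counter()
    for _ in range(n):
        t = threading.Thread(target=lambda: None)
        t.start()
        t.join()
    return (time.perf_counter() - start) / n * 1e6

print(f"{us_per_event():.1f} us per thread create/join")
```

Averaging over a batch amortizes timer overhead, which matters when a single event takes only a few microseconds.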

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.4 - Test: Object Detection (ms, fewer is better):
  Core i3 7100: 52981  (SE +/- 613.72, N = 15; Min 48218 / Avg 52981 / Max 59389)
  v5.9:         53252  (SE +/- 668.37, N = 15; Min 50288 / Avg 53252.13 / Max 58833)
  v5.9 Try 2:   54645  (SE +/- 669.60, N = 4; Min 53327 / Avg 54645 / Max 56471)
  Flags: (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17 - Input: Cylinder (Seconds, fewer is better):
  Core i3 7100: 1346.34  (SE +/- 2.53, N = 3; Min 1341.88 / Avg 1346.34 / Max 1350.63)
  v5.9:         1314.70  (SE +/- 2.19, N = 3; Min 1311.93 / Avg 1314.7 / Max 1319.02)
  v5.9 Try 2:   1313.17  (SE +/- 0.54, N = 3; Min 1312.6 / Avg 1313.17 / Max 1314.25)
  Flags: (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

OSBench

OSBench is a collection of micro-benchmarks for measuring operating-system primitives, such as the time to create threads and processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.

OSBench - Test: Create Processes (us Per Event, fewer is better):
  Core i3 7100: 34.07  (SE +/- 0.54, N = 3; Min 33.33 / Avg 34.07 / Max 35.11)
  v5.9:         33.29  (SE +/- 0.07, N = 3; Min 33.16 / Avg 33.29 / Max 33.38)
  v5.9 Try 2:   33.80  (SE +/- 0.14, N = 3; Min 33.57 / Avg 33.8 / Max 34.07)
  Flags: (CC) gcc options: -lm

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.4 - Test: DNN - Deep Neural Network (ms, fewer is better):
  Core i3 7100: 7154  (SE +/- 48.67, N = 3; Min 7059 / Avg 7154.33 / Max 7219)
  v5.9:         7306  (SE +/- 102.33, N = 3; Min 7109 / Avg 7306.33 / Max 7452)
  v5.9 Try 2:   7193  (SE +/- 74.60, N = 3; Min 7101 / Avg 7193.33 / Max 7341)
  Flags: (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

OSBench

OSBench is a collection of micro-benchmarks for measuring operating-system primitives, such as the time to create threads and processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.

OSBench - Test: Memory Allocations (Ns Per Event, fewer is better):
  Core i3 7100: 83.00  (SE +/- 0.01, N = 3; Min 82.98 / Avg 83 / Max 83.02)
  v5.9:         81.32  (SE +/- 0.02, N = 3; Min 81.29 / Avg 81.32 / Max 81.36)
  v5.9 Try 2:   81.31  (SE +/- 0.04, N = 3; Min 81.26 / Avg 81.31 / Max 81.38)
  Flags: (CC) gcc options: -lm

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, fewer is better):
  Core i3 7100: 32.11  (SE +/- 0.29, N = 4; Min 31.35 / Avg 32.11 / Max 32.75)
  v5.9:         32.20  (SE +/- 0.16, N = 4; Min 31.83 / Avg 32.2 / Max 32.52)
  v5.9 Try 2:   31.56  (SE +/- 0.26, N = 4; Min 30.8 / Avg 31.56 / Max 31.97)
  Flags: (CC) gcc options: -O2 -std=c99

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, fewer is better):
  Core i3 7100: 89.55  (SE +/- 0.19, N = 3; Min 89.23 / Avg 89.55 / Max 89.88; sample MIN 88.66 / MAX 281.78)
  v5.9:         88.26  (SE +/- 0.04, N = 3; Min 88.19 / Avg 88.26 / Max 88.34; sample MIN 87.84 / MAX 106.52)
  v5.9 Try 2:   88.21  (SE +/- 0.03, N = 3; Min 88.15 / Avg 88.21 / Max 88.26; sample MIN 87.73 / MAX 106.12)
  Flags: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.4 - Test: Features 2D (ms, fewer is better):
  Core i3 7100: 192711  (SE +/- 3170.64, N = 12; Min 181809 / Avg 192710.75 / Max 218671)
  v5.9:         190033  (SE +/- 2927.06, N = 12; Min 180881 / Avg 190033.08 / Max 207360)
  v5.9 Try 2:   189856  (SE +/- 2284.45, N = 12; Min 180782 / Avg 189855.83 / Max 205108)
  Flags: (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

OSBench

OSBench is a collection of micro-benchmarks for measuring operating-system primitives, such as the time to create threads and processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.

OSBench - Test: Create Threads (us Per Event, fewer is better):
  Core i3 7100: 14.30  (SE +/- 0.06, N = 3; Min 14.19 / Avg 14.3 / Max 14.38)
  v5.9:         14.25  (SE +/- 0.04, N = 3; Min 14.17 / Avg 14.25 / Max 14.3)
  v5.9 Try 2:   14.09  (SE +/- 0.02, N = 3; Min 14.05 / Avg 14.09 / Max 14.12)
  Flags: (CC) gcc options: -lm

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, fewer is better):
  Core i3 7100: 7.784  (SE +/- 0.017, N = 3; Min 7.76 / Avg 7.78 / Max 7.82; sample MIN 7.64 / MAX 48.48)
  v5.9:         7.706  (SE +/- 0.015, N = 3; Min 7.69 / Avg 7.71 / Max 7.74; sample MIN 7.61 / MAX 10.62)
  v5.9 Try 2:   7.690  (SE +/- 0.011, N = 3; Min 7.67 / Avg 7.69 / Max 7.71; sample MIN 7.59 / MAX 25.97)
  Flags: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better):
  Core i3 7100: 54.57  (SE +/- 0.28, N = 3; Min 54.28 / Avg 54.57 / Max 55.13)
  v5.9:         54.01  (SE +/- 0.01, N = 3; Min 54 / Avg 54.01 / Max 54.03)
  v5.9 Try 2:   54.24  (SE +/- 0.02, N = 3; Min 54.21 / Avg 54.24 / Max 54.27)
  Flags: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

NCNN

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, fewer is better):
  Core i3 7100: 9.75  (SE +/- 0.00, N = 3; Min 9.74 / Avg 9.75 / Max 9.75; sample MIN 9.71 / MAX 10.48)
  v5.9:         9.66  (SE +/- 0.02, N = 3; Min 9.63 / Avg 9.66 / Max 9.7; sample MIN 9.61 / MAX 20.07)
  v5.9 Try 2:   9.69  (SE +/- 0.02, N = 3; Min 9.66 / Avg 9.69 / Max 9.72; sample MIN 9.62 / MAX 11.53)
  Flags: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, fewer is better):
  Core i3 7100: 66.74  (SE +/- 0.04, N = 3; Min 66.66 / Avg 66.74 / Max 66.8; sample MIN 66.42 / MAX 125.08)
  v5.9:         66.13  (SE +/- 0.06, N = 3; Min 66.02 / Avg 66.13 / Max 66.23; sample MIN 65.86 / MAX 84.63)
  v5.9 Try 2:   66.18  (SE +/- 0.02, N = 3; Min 66.15 / Avg 66.18 / Max 66.21; sample MIN 65.93 / MAX 98.62)
  Flags: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, fewer is better):
  Core i3 7100: 22.82  (SE +/- 0.01, N = 3; Min 22.81 / Avg 22.82 / Max 22.84)
  v5.9:         22.62  (SE +/- 0.05, N = 3; Min 22.55 / Avg 22.62 / Max 22.71)
  v5.9 Try 2:   22.71  (SE +/- 0.03, N = 3; Min 22.67 / Avg 22.71 / Max 22.77)
  Flags: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better):
  Core i3 7100: 18.88  (SE +/- 0.06, N = 3; Min 18.81 / Avg 18.88 / Max 18.99)
  v5.9:         18.91  (SE +/- 0.04, N = 3; Min 18.83 / Avg 18.91 / Max 18.97)
  v5.9 Try 2:   19.03  (SE +/- 0.01, N = 3; Min 19.02 / Avg 19.03 / Max 19.04)
  Flags: (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Zstd Compression

This test measures the speed of compressing a sample file (an Ubuntu ISO) with Zstd at the given compression level, reported in MB/s. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, more is better):
  Core i3 7100: 12.6  (SE +/- 0.03, N = 3; Min 12.6 / Avg 12.63 / Max 12.7)
  v5.9:         12.7  (SE +/- 0.00, N = 3; Min 12.7 / Avg 12.7 / Max 12.7)
  v5.9 Try 2:   12.6  (SE +/- 0.03, N = 3; Min 12.6 / Avg 12.63 / Max 12.7)
  Flags: (CC) gcc options: -O3 -pthread -lz -llzma
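An MB/s figure like the one above is input bytes divided by wall-clock compression time. A minimal sketch of that calculation, using the stdlib zlib module as a stand-in compressor (Zstd bindings are not part of the Python standard library):

```python
import os
import time
import zlib

def compress_mb_per_s(data, level):
    """Compress data once; return throughput as input megabytes per second."""
    start = time.perf_counter()
    zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return len(data) / elapsed / 1e6

# Mixed compressible/incompressible payload as a tiny stand-in for an ISO image
data = (b"ubuntu" * 100_000) + os.urandom(600_000)
for level in (1, 6, 9):
    print(f"zlib level {level}: {compress_mb_per_s(data, level):.1f} MB/s")
```

Higher levels trade throughput for ratio, which is why the level-19 numbers above sit two orders of magnitude below the level-3 numbers later in this file.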

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, fewer is better):
  Core i3 7100: 15.13  (SE +/- 0.02, N = 3; Min 15.09 / Avg 15.13 / Max 15.16; sample MIN 15 / MAX 33.36)
  v5.9:         15.07  (SE +/- 0.03, N = 3; Min 15.03 / Avg 15.07 / Max 15.14; sample MIN 14.95 / MAX 55.49)
  v5.9 Try 2:   15.03  (SE +/- 0.01, N = 3; Min 15.01 / Avg 15.03 / Max 15.06; sample MIN 14.94 / MAX 32.93)
  Flags: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN 20200916 - Target: CPU - Model: mobilenet_v3 (ms, fewer is better):
  Core i3 7100: 9.10  (SE +/- 0.01, N = 3; Min 9.09 / Avg 9.1 / Max 9.11; sample MIN 9.04 / MAX 9.41)
  v5.9:         9.04  (SE +/- 0.01, N = 3; Min 9.02 / Avg 9.04 / Max 9.05; sample MIN 8.97 / MAX 10.58)
  v5.9 Try 2:   9.06  (SE +/- 0.01, N = 3; Min 9.04 / Avg 9.06 / Max 9.08; sample MIN 8.98 / MAX 19.24)
  Flags: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, fewer is better):
  Core i3 7100: 6.27  (SE +/- 0.02, N = 3; Min 6.25 / Avg 6.27 / Max 6.31; sample MIN 6.22 / MAX 6.57)
  v5.9:         6.23  (SE +/- 0.01, N = 3; Min 6.22 / Avg 6.23 / Max 6.24; sample MIN 6.18 / MAX 6.49)
  v5.9 Try 2:   6.24  (SE +/- 0.01, N = 3; Min 6.22 / Avg 6.24 / Max 6.26; sample MIN 6.18 / MAX 6.47)
  Flags: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 - Resolution: 1920 x 1080 (Score, more is better):
  Core i3 7100: 477
  v5.9:         478
  v5.9 Try 2:   480

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, fewer is better):
  Core i3 7100: 10.87  (SE +/- 0.01, N = 3; Min 10.87 / Avg 10.87 / Max 10.88; sample MIN 10.8 / MAX 11.16)
  v5.9:         10.81  (SE +/- 0.01, N = 3; Min 10.8 / Avg 10.81 / Max 10.82; sample MIN 10.74 / MAX 29.24)
  v5.9 Try 2:   10.84  (SE +/- 0.02, N = 3; Min 10.81 / Avg 10.84 / Max 10.87; sample MIN 10.75 / MAX 37.07)
  Flags: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 24Aug2020 - Model: Rhodopsin Protein (ns/day, more is better):
  Core i3 7100: 1.524  (SE +/- 0.001, N = 3; Min 1.52 / Avg 1.52 / Max 1.53)
  v5.9:         1.525  (SE +/- 0.002, N = 3; Min 1.52 / Avg 1.52 / Max 1.53)
  v5.9 Try 2:   1.532  (SE +/- 0.006, N = 3; Min 1.52 / Avg 1.53 / Max 1.54)
  Flags: (CXX) g++ options: -O3 -pthread -lm

NCNN

NCNN 20200916 - Target: CPU - Model: alexnet (ms, fewer is better):
  Core i3 7100: 26.15  (SE +/- 0.01, N = 3; Min 26.14 / Avg 26.15 / Max 26.17; sample MIN 25.99 / MAX 34.84)
  v5.9:         26.02  (SE +/- 0.01, N = 3; Min 26 / Avg 26.02 / Max 26.04; sample MIN 25.79 / MAX 36.44)
  v5.9 Try 2:   26.14  (SE +/- 0.03, N = 3; Min 26.09 / Avg 26.14 / Max 26.19; sample MIN 25.88 / MAX 33.67)
  Flags: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, fewer is better):
  Core i3 7100: 8.254  (SE +/- 0.013, N = 3; Min 8.23 / Avg 8.25 / Max 8.27)
  v5.9:         8.217  (SE +/- 0.008, N = 3; Min 8.2 / Avg 8.22 / Max 8.23)
  v5.9 Try 2:   8.213  (SE +/- 0.001, N = 3; Min 8.21 / Avg 8.21 / Max 8.21)
  Flags: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0 (Seconds, fewer is better):
  Core i3 7100: 415  (SE +/- 0.58, N = 3; Min 414 / Avg 415 / Max 416)
  v5.9:         414
  v5.9 Try 2:   416  (SE +/- 0.88, N = 3; Min 414 / Avg 415.67 / Max 417)
  Flags: (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

NCNN

NCNN 20200916 - Target: CPU - Model: blazeface (ms, fewer is better):
  Core i3 7100: 2.62  (SE +/- 0.00, N = 3; Min 2.61 / Avg 2.62 / Max 2.62; sample MIN 2.6 / MAX 2.72)
  v5.9:         2.62  (SE +/- 0.01, N = 3; Min 2.61 / Avg 2.62 / Max 2.64; sample MIN 2.59 / MAX 2.66)
  v5.9 Try 2:   2.61  (SE +/- 0.00, N = 3; Min 2.61 / Avg 2.61 / Max 2.62; sample MIN 2.59 / MAX 2.65)
  Flags: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 8 Realtime (Frames Per Second, more is better):
  Core i3 7100: 29.88  (SE +/- 0.04, N = 3; Min 29.81 / Avg 29.88 / Max 29.96)
  v5.9:         29.90  (SE +/- 0.06, N = 3; Min 29.78 / Avg 29.9 / Max 29.99)
  v5.9 Try 2:   29.79  (SE +/- 0.09, N = 3; Min 29.61 / Avg 29.79 / Max 29.89)
  Flags: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

NCNN

NCNN 20200916 - Target: CPU - Model: mobilenetv2_yolov3 (ms, fewer is better):
  Core i3 7100: 42.94  (SE +/- 0.02, N = 3; Min 42.92 / Avg 42.94 / Max 42.98; sample MIN 42.82 / MAX 52.91)
  v5.9:         42.79  (SE +/- 0.02, N = 3; Min 42.77 / Avg 42.79 / Max 42.83; sample MIN 42.66 / MAX 45.31)
  v5.9 Try 2:   42.82  (SE +/- 0.03, N = 3; Min 42.77 / Avg 42.82 / Max 42.88; sample MIN 42.64 / MAX 53.51)
  Flags: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet18_int8 (ms, fewer is better):
  Core i3 7100: 59.17  (SE +/- 0.04, N = 3; Min 59.1 / Avg 59.17 / Max 59.25; sample MIN 58.97 / MAX 69.5)
  v5.9:         58.97  (SE +/- 0.02, N = 3; Min 58.94 / Avg 58.97 / Max 58.99; sample MIN 58.84 / MAX 60.7)
  v5.9 Try 2:   59.02  (SE +/- 0.03, N = 3; Min 58.97 / Avg 59.02 / Max 59.05; sample MIN 58.86 / MAX 68.64)
  Flags: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Zstd Compression

This test measures the speed of compressing a sample file (an Ubuntu ISO) with Zstd at the given compression level, reported in MB/s. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, more is better):
  Core i3 7100: 1544.3  (SE +/- 7.95, N = 3; Min 1528.4 / Avg 1544.3 / Max 1552.6)
  v5.9:         1548.6  (SE +/- 4.45, N = 3; Min 1540 / Avg 1548.63 / Max 1554.8)
  v5.9 Try 2:   1549.0  (SE +/- 4.02, N = 3; Min 1541.4 / Avg 1548.97 / Max 1555.1)
  Flags: (CC) gcc options: -O3 -pthread -lz -llzma

OSBench

OSBench is a collection of micro-benchmarks for measuring operating-system primitives, such as the time to create threads and processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.

OSBench - Test: Create Files (us Per Event, fewer is better):
  Core i3 7100: 16.46  (SE +/- 0.05, N = 3; Min 16.37 / Avg 16.46 / Max 16.52)
  v5.9:         16.48  (SE +/- 0.03, N = 3; Min 16.44 / Avg 16.48 / Max 16.54)
  v5.9 Try 2:   16.45  (SE +/- 0.02, N = 3; Min 16.41 / Avg 16.45 / Max 16.47)
  Flags: (CC) gcc options: -lm

NCNN

NCNN 20200916 - Target: CPU - Model: vgg16_int8 (ms, Fewer Is Better)
  Core i3 7100: 381.44 (SE +/- 0.15, N = 3; Min: 381.14 / Avg: 381.44 / Max: 381.63; MIN: 380.34 / MAX: 393.15)
  v5.9: 382.21 (SE +/- 1.60, N = 3; Min: 379.29 / Avg: 382.21 / Max: 384.79; MIN: 378.27 / MAX: 413.65)
  v5.9 Try 2: 382.10 (SE +/- 1.16, N = 3; Min: 380.18 / Avg: 382.1 / Max: 384.2; MIN: 379.28 / MAX: 395.55)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.
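A minimal sketch of this kind of measurement: gzip a payload once, then time only the decompression step in seconds. A synthetic buffer stands in for the Qt5 source tarball used by the real test:

```python
import gzip
import time

# ~3.8 MB of repetitive data standing in for the gzipped tarball.
payload = b"qtbase source tree " * 200_000
blob = gzip.compress(payload)

# Time only the decompression, as the test does.
start = time.perf_counter()
restored = gzip.decompress(blob)
elapsed = time.perf_counter() - start

print(f"{elapsed:.3f} seconds")
```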

System GZIP Decompression (Seconds, Fewer Is Better)
  Core i3 7100: 3.295 (SE +/- 0.034, N = 14; Min: 3.26 / Avg: 3.3 / Max: 3.74)
  v5.9: 3.296 (SE +/- 0.034, N = 13; Min: 3.26 / Avg: 3.3 / Max: 3.71)
  v5.9 Try 2: 3.301 (SE +/- 0.038, N = 13; Min: 3.26 / Avg: 3.3 / Max: 3.75)

NCNN

NCNN 20200916 - Target: CPU - Model: resnet50_int8 (ms, Fewer Is Better)
  Core i3 7100: 189.78 (SE +/- 0.07, N = 3; Min: 189.69 / Avg: 189.78 / Max: 189.92; MIN: 189.33 / MAX: 251.14)
  v5.9: 189.46 (SE +/- 0.04, N = 3; Min: 189.41 / Avg: 189.46 / Max: 189.53; MIN: 189.15 / MAX: 198.97)
  v5.9 Try 2: 189.50 (SE +/- 0.04, N = 3; Min: 189.44 / Avg: 189.5 / Max: 189.57; MIN: 189.17 / MAX: 199.59)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: googlenet_int8 (ms, Fewer Is Better)
  Core i3 7100: 86.59 (SE +/- 0.01, N = 3; Min: 86.57 / Avg: 86.59 / Max: 86.61; MIN: 86.39 / MAX: 97.01)
  v5.9: 86.51 (SE +/- 0.08, N = 3; Min: 86.42 / Avg: 86.51 / Max: 86.68; MIN: 86.2 / MAX: 143.65)
  v5.9 Try 2: 86.45 (SE +/- 0.04, N = 3; Min: 86.38 / Avg: 86.45 / Max: 86.5; MIN: 86.22 / MAX: 96.83)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: squeezenet_int8 (ms, Fewer Is Better)
  Core i3 7100: 31.41 (SE +/- 0.04, N = 3; Min: 31.34 / Avg: 31.41 / Max: 31.45; MIN: 31.27 / MAX: 41.76)
  v5.9: 31.36 (SE +/- 0.01, N = 3; Min: 31.35 / Avg: 31.36 / Max: 31.37; MIN: 31.2 / MAX: 41.44)
  v5.9 Try 2: 31.37 (SE +/- 0.02, N = 3; Min: 31.34 / Avg: 31.37 / Max: 31.41; MIN: 31.24 / MAX: 41.78)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

System ZLIB Decompression

This test measures the time to decompress a Linux kernel tarball using ZLIB. Learn more via the OpenBenchmarking.org test page.
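A minimal sketch of this measurement pattern: repeat the decompression N times and report the mean in milliseconds with its standard error, mirroring the "SE +/- ..., N = 10" annotations below. Synthetic data stands in for the kernel tarball:

```python
import statistics
import time
import zlib

# ~4 MB of compressible data standing in for the kernel tarball.
data = zlib.compress(b"linux kernel sources " * 200_000)

# Time N = 10 decompression runs, in milliseconds each.
times_ms = []
for _ in range(10):
    start = time.perf_counter()
    zlib.decompress(data)
    times_ms.append((time.perf_counter() - start) * 1e3)

avg = statistics.mean(times_ms)
se = statistics.stdev(times_ms) / len(times_ms) ** 0.5
print(f"{avg:.2f} ms (SE +/- {se:.2f}, N = {len(times_ms)})")
```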

System ZLIB Decompression 1.2.7 (ms, Fewer Is Better)
  Core i3 7100: 1920.46 (SE +/- 10.94, N = 10; Min: 1904.14 / Avg: 1920.46 / Max: 2018.19)
  v5.9: 1918.09 (SE +/- 8.07, N = 10; Min: 1904.87 / Avg: 1918.09 / Max: 1989.94)
  v5.9 Try 2: 1919.28 (SE +/- 8.09, N = 10; Min: 1903.88 / Avg: 1919.28 / Max: 1987.14)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, Fewer Is Better)
  Core i3 7100: 1.725 (SE +/- 0.001, N = 3; Min: 1.72 / Avg: 1.73 / Max: 1.73)
  v5.9: 1.724 (SE +/- 0.001, N = 3; Min: 1.72 / Avg: 1.72 / Max: 1.73)
  v5.9 Try 2: 1.726 (SE +/- 0.001, N = 3; Min: 1.72 / Avg: 1.73 / Max: 1.73)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better)
  Core i3 7100: 2.716 (SE +/- 0.001, N = 3; Min: 2.72 / Avg: 2.72 / Max: 2.72)
  v5.9: 2.714 (SE +/- 0.002, N = 3; Min: 2.71 / Avg: 2.71 / Max: 2.72)
  v5.9 Try 2: 2.713 (SE +/- 0.001, N = 3; Min: 2.71 / Avg: 2.71 / Max: 2.71)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, More Is Better)
  Core i3 7100: 10.23 (SE +/- 0.00, N = 3; Min: 10.22 / Avg: 10.23 / Max: 10.23)
  v5.9: 10.22 (SE +/- 0.01, N = 3; Min: 10.21 / Avg: 10.22 / Max: 10.23)
  v5.9 Try 2: 10.23 (SE +/- 0.00, N = 3; Min: 10.22 / Avg: 10.23 / Max: 10.23)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
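The figures below are average inference times in microseconds. A minimal harness for that kind of number looks like the sketch below; `run_inference` is a hypothetical placeholder for an actual TensorFlow Lite interpreter invocation, which is not shown here:

```python
import time

def run_inference():
    # Placeholder for interpreter.invoke(); pretend the model takes ~1 ms.
    time.sleep(0.001)

# Average the cost of N inference calls and report microseconds,
# matching the "Microseconds, Fewer Is Better" unit below.
N = 5
start = time.perf_counter()
for _ in range(N):
    run_inference()
elapsed = time.perf_counter() - start

print(f"{elapsed / N * 1e6:.0f} microseconds per inference")
```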

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, Fewer Is Better)
  Core i3 7100: 655242 (SE +/- 216.58, N = 3; Min: 654892 / Avg: 655242 / Max: 655638)
  v5.9: 655610 (SE +/- 70.90, N = 3; Min: 655468 / Avg: 655609.67 / Max: 655686)
  v5.9 Try 2: 655734 (SE +/- 111.30, N = 3; Min: 655555 / Avg: 655733.67 / Max: 655938)

dcraw

This test times how long it takes to convert several high-resolution RAW NEF image files to PPM image format using dcraw. Learn more via the OpenBenchmarking.org test page.

dcraw - RAW To PPM Image Conversion (Seconds, Fewer Is Better)
  Core i3 7100: 42.70 (SE +/- 0.01, N = 3; Min: 42.68 / Avg: 42.7 / Max: 42.72)
  v5.9: 42.72 (SE +/- 0.02, N = 3; Min: 42.7 / Avg: 42.72 / Max: 42.75)
  v5.9 Try 2: 42.71 (SE +/- 0.03, N = 3; Min: 42.66 / Avg: 42.71 / Max: 42.77)
  1. (CC) gcc options: -lm

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
  Core i3 7100: 623710 (SE +/- 358.36, N = 3; Min: 623319 / Avg: 623710.33 / Max: 624426)
  v5.9: 623386 (SE +/- 52.17, N = 3; Min: 623284 / Avg: 623386 / Max: 623456)
  v5.9 Try 2: 623466 (SE +/- 50.94, N = 3; Min: 623367 / Avg: 623465.67 / Max: 623537)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
  Core i3 7100: 640759 (SE +/- 113.86, N = 3; Min: 640596 / Avg: 640758.67 / Max: 640978)
  v5.9: 640951 (SE +/- 92.16, N = 3; Min: 640777 / Avg: 640950.67 / Max: 641091)
  v5.9 Try 2: 641005 (SE +/- 82.78, N = 3; Min: 640856 / Avg: 641005 / Max: 641142)

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better)
  Core i3 7100: 915396 (SE +/- 17.04, N = 3; Min: 915366 / Avg: 915396 / Max: 915425)
  v5.9: 915608 (SE +/- 125.13, N = 3; Min: 915451 / Avg: 915607.67 / Max: 915855)
  v5.9 Try 2: 915627 (SE +/- 123.30, N = 3; Min: 915489 / Avg: 915627 / Max: 915873)

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
  Core i3 7100: 11975567 (SE +/- 328.30, N = 3; Min: 11975100 / Avg: 11975566.67 / Max: 11976200)
  v5.9: 11974733 (SE +/- 762.31, N = 3; Min: 11973300 / Avg: 11974733.33 / Max: 11975900)
  v5.9 Try 2: 11973867 (SE +/- 600.93, N = 3; Min: 11972700 / Avg: 11973866.67 / Max: 11974700)

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better)
  Core i3 7100: 13230467 (SE +/- 120.19, N = 3; Min: 13230300 / Avg: 13230466.67 / Max: 13230700)
  v5.9: 13231567 (SE +/- 523.87, N = 3; Min: 13230900 / Avg: 13231566.67 / Max: 13232600)
  v5.9 Try 2: 13230000 (SE +/- 1153.26, N = 3; Min: 13227800 / Avg: 13230000 / Max: 13231700)

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better)
  Core i3 7100: 2.33 (SE +/- 0.00, N = 3; Min: 2.33 / Avg: 2.33 / Max: 2.33)
  v5.9: 2.33 (SE +/- 0.00, N = 3; Min: 2.32 / Avg: 2.33 / Max: 2.33)
  v5.9 Try 2: 2.33 (SE +/- 0.00, N = 3; Min: 2.33 / Avg: 2.33 / Max: 2.33)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 4 Two-Pass (Frames Per Second, More Is Better)
  Core i3 7100: 1.45 (SE +/- 0.00, N = 3; Min: 1.45 / Avg: 1.45 / Max: 1.45)
  v5.9: 1.45 (SE +/- 0.00, N = 3; Min: 1.45 / Avg: 1.45 / Max: 1.45)
  v5.9 Try 2: 1.45 (SE +/- 0.00, N = 3; Min: 1.44 / Avg: 1.45 / Max: 1.45)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 0 Two-Pass (Frames Per Second, More Is Better)
  Core i3 7100: 0.16 (SE +/- 0.00, N = 3; Min: 0.16 / Avg: 0.16 / Max: 0.16)
  v5.9: 0.16 (SE +/- 0.00, N = 3; Min: 0.16 / Avg: 0.16 / Max: 0.16)
  v5.9 Try 2: 0.16 (SE +/- 0.00, N = 3; Min: 0.16 / Avg: 0.16 / Max: 0.16)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread