Core i5 2500K October 2020

Intel Core i5-2500K testing with a SAPPHIRE Pure Black P67 Hydra (4.6.4 BIOS) and ASUS AMD Radeon HD 4890 1GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2010120-FI-COREI525006
Tests in this result file fall within the following categories:

C/C++ Compiler Tests: 5 tests
Compression Tests: 2 tests
CPU Massive: 6 tests
Creator Workloads: 5 tests
Database Test Suite: 3 tests
Fortran Tests: 5 tests
HPC - High Performance Computing: 15 tests
Imaging: 2 tests
Machine Learning: 7 tests
Molecular Dynamics: 5 tests
MPI Benchmarks: 4 tests
Multi-Core: 5 tests
NVIDIA GPU Compute: 3 tests
OpenMPI Tests: 4 tests
Python Tests: 4 tests
Scientific Computing: 8 tests
Server: 3 tests
Server CPU Tests: 2 tests
Single-Threaded: 3 tests
Speech: 2 tests
Telephony: 2 tests

Test Runs

  Result Identifier   Date              Test Duration
  Linux 5.4           October 11 2020   10 Hours, 10 Minutes
  Repeat              October 11 2020   9 Hours, 14 Minutes
  Repeat 2            October 12 2020   9 Hours, 37 Minutes



Core i5 2500K October 2020 Benchmarks: System Details

  Processor:         Intel Core i5-2500K @ 3.70GHz (4 Cores)
  Motherboard:       SAPPHIRE Pure Black P67 Hydra (4.6.4 BIOS)
  Chipset:           Intel 2nd Generation Core DRAM
  Memory:            3072MB
  Disk:              120GB SanDisk SDSSDA12
  Graphics:          ASUS AMD Radeon HD 4890 1GB
  Audio:             Realtek ALC892
  Monitor:           DELL S2409W
  Network:           Marvell 88E8057 PCI-E
  OS:                Ubuntu 20.04
  Kernel:            5.4.0-40-generic (x86_64)
  Desktop:           GNOME Shell 3.36.4
  Display Server:    X Server 1.20.8
  Display Driver:    modesetting 1.20.8
  OpenGL:            3.3 Mesa 20.0.8 (LLVM 10.0.0)
  Compiler:          GCC 9.3.0
  File-System:       ext4
  Screen Resolution: 1920x1080

System Logs / Notes
  - Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: intel_pstate powersave - CPU Microcode: 0x2f
  - GLAMOR
  - OpenJDK Runtime Environment (build 11.0.8+10-post-Ubuntu-0ubuntu120.04) - Python 2.7.18rc1 + Python 3.8.5
  - Security mitigations: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT disabled + mds: Mitigation of Clear buffers; SMT disabled + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; Linux 5.4 vs. Repeat vs. Repeat 2, normalized to a 100% baseline) covering: Zstd Compression, BYTE Unix Benchmark, KeyDB, GLmark2, Sunflow Rendering System, Incompact3D, LAMMPS Molecular Dynamics Simulator, GROMACS, OpenVINO, RNNoise, Hierarchical INTegration, Apache CouchDB, TensorFlow Lite, eSpeak-NG Speech Engine, FFTE, Dolfyn, NAMD, Mlpack Benchmark, InfluxDB, LibRaw, Timed HMMer Search, System GZIP Decompression, WebP Image Encode, NCNN, Monte Carlo Simulations of Ionised Nebulae, Caffe, TNN.

The consolidated results table (every test with its Linux 5.4, Repeat, and Repeat 2 values) is broken out test by test in the sections below.

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
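For reference (added context, not taken from this result file), the system such a solver integrates is the incompressible Navier-Stokes equations plus scalar transport, which can be written as:

    % Incompressible Navier-Stokes with one generic scalar transport equation
    \begin{aligned}
    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
      &= -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f}, \\
    \nabla\cdot\mathbf{u} &= 0, \\
    \frac{\partial \phi}{\partial t} + (\mathbf{u}\cdot\nabla)\phi
      &= \kappa\,\nabla^{2}\phi,
    \end{aligned}

where u is the velocity field, p the pressure, nu the kinematic viscosity, f a body force, and phi a transported scalar with diffusivity kappa.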

Incompact3D 2020-09-17, Input: Cylinder (Seconds, fewer is better)
  Linux 5.4: 1039.58 (SE +/- 1.15, N = 3; min 1038.11 / max 1041.84)
  Repeat:    1042.57 (SE +/- 1.14, N = 3; min 1040.43 / max 1044.33)
  Repeat 2:  1053.22 (SE +/- 10.54, N = 3; min 1041.35 / max 1074.23)
  (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

GROMACS

This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3, Water Benchmark (Ns Per Day, more is better)
  Linux 5.4: 0.270 (SE +/- 0.001, N = 3; min 0.27 / max 0.27)
  Repeat:    0.271 (SE +/- 0.001, N = 3; min 0.27 / max 0.27)
  Repeat 2:  0.272 (SE +/- 0.000, N = 3; min 0.27 / max 0.27)
  (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24, Input: Dust 2D tau100.0 (Seconds, fewer is better)
  Linux 5.4: 407
  Repeat:    407
  Repeat 2:  407
  (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
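As a rough illustration of this kind of tagged, batched write workload (a hedged sketch only: this is not the InfluxDB Inch tool the profile drives, and the host, database name, and tag layout are placeholders), points can be written to an InfluxDB 1.x server with its Python client:

    # Hedged sketch: batch-writing tagged points to InfluxDB 1.x, loosely
    # mirroring what a write benchmark does. Assumes a local server and the
    # `influxdb` Python client; all names below are illustrative only.
    from influxdb import InfluxDBClient

    client = InfluxDBClient(host="localhost", port=8086, database="stress")
    client.create_database("stress")

    batch = [
        {
            "measurement": "m0",
            "tags": {"tag0": "value-%d" % (i % 2), "tag1": "value-%d" % (i % 5000)},
            "fields": {"v0": float(i)},
        }
        for i in range(10000)  # one batch of 10,000 points, as in this profile
    ]
    client.write_points(batch)  # returns True on success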

InfluxDB 1.8.2, Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better; no result was recorded for the Repeat run)
  Linux 5.4: 62220.9 (SE +/- 7730.84, N = 11; min 2305.2 / max 89658.1)
  Repeat 2:  62090.5 (SE +/- 7769.83, N = 12; min 7622.9 / max 111813.8)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
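A minimal sketch of the level trade-off this test exposes, using the python-zstandard bindings rather than the zstd tool the profile actually times; the input file path is a placeholder:

    # Hedged sketch: comparing Zstd level 3 vs. level 19 throughput from Python.
    # Uses the `zstandard` bindings, not the zstd binary this profile builds;
    # the sample file is a placeholder.
    import time
    import zstandard

    data = open("sample.iso", "rb").read()  # placeholder sample file

    for level in (3, 19):
        cctx = zstandard.ZstdCompressor(level=level)
        start = time.time()
        compressed = cctx.compress(data)
        elapsed = time.time() - start
        print("level %2d: %6.1f MB/s, ratio %.2f"
              % (level, len(data) / elapsed / 1e6, len(data) / len(compressed)))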

Zstd Compression 1.4.5, Compression Level: 3 (MB/s, more is better)
  Linux 5.4: 24.7 (SE +/- 2.92, N = 12; min 19.5 / max 55)
  Repeat:    18.4 (SE +/- 0.15, N = 3; min 18.1 / max 18.6)
  Repeat 2:  18.6 (SE +/- 0.06, N = 3; min 18.5 / max 18.7)
  (CC) gcc options: -O3 -pthread -lz -llzma

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU (ms, fewer is better; mean +/- SE, N = 3 for every result)
  Model                      Linux 5.4        Repeat           Repeat 2
  yolov4-tiny                110.54 +/- 0.14  110.95 +/- 0.15  110.83 +/- 0.09
  resnet50                   124.39 +/- 0.24  124.43 +/- 0.16  124.36 +/- 0.16
  alexnet                    34.84 +/- 0.03   34.84 +/- 0.01   34.89 +/- 0.08
  resnet18                   52.86 +/- 0.11   52.88 +/- 0.09   52.99 +/- 0.06
  vgg16                      452.41 +/- 1.14  454.27 +/- 2.81  453.16 +/- 0.53
  googlenet                  55.51 +/- 0.01   55.41 +/- 0.07   55.46 +/- 0.03
  blazeface                  2.60 +/- 0.01    2.59 +/- 0.00    2.59 +/- 0.00
  efficientnet-b0            89.21 +/- 0.03   90.13 +/- 0.82   89.65 +/- 0.42
  mnasnet                    75.43 +/- 0.18   74.76 +/- 0.72   74.37 +/- 0.44
  shufflenet-v2              7.73 +/- 0.00    7.74 +/- 0.01    7.73 +/- 0.01
  mobilenet-v3 (CPU-v3-v3)   43.48 +/- 0.05   43.11 +/- 0.08   43.33 +/- 0.19
  mobilenet-v2 (CPU-v2-v2)   17.12 +/- 0.01   17.09 +/- 0.01   17.12 +/- 0.01
  mobilenet                  60.20 +/- 0.07   60.19 +/- 0.06   60.13 +/- 0.02
  squeezenet                 47.80 +/- 0.08   47.79 +/- 0.09   48.05 +/- 0.06
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better)
  Linux 5.4: 6.17238 (SE +/- 0.00576, N = 3; min 6.16 / max 6.18)
  Repeat:    6.15314 (SE +/- 0.01749, N = 3; min 6.12 / max 6.18)
  Repeat 2:  6.16095 (SE +/- 0.01114, N = 3; min 6.14 / max 6.18)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5, Compression Level: 19 (MB/s, more is better)
  Linux 5.4: 11.4 (SE +/- 0.03, N = 3; min 11.4 / max 11.5)
  Repeat:    11.1 (SE +/- 0.12, N = 3; min 10.9 / max 11.3)
  Repeat 2:  11.0 (SE +/- 0.10, N = 3; min 10.9 / max 11.2)
  (CC) gcc options: -O3 -pthread -lz -llzma

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better)
  Linux 5.4: 269249 (SE +/- 64.59, N = 3; min 269120 / max 269323)
  Repeat:    269175 (SE +/- 108.31, N = 3; min 268997 / max 269371)
  Repeat 2:  269022 (SE +/- 129.66, N = 3; min 268814 / max 269260)
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
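A hedged sketch of one bulk-insert round against CouchDB's _bulk_docs endpoint, which is the operation this profile exercises; the URL, credentials, and database name are placeholders:

    # Hedged sketch: one bulk-insert round of 1000 documents via _bulk_docs.
    # Server URL, credentials, and database name are placeholders.
    import requests

    base = "http://admin:password@localhost:5984"
    db = "pts-bulk-test"

    requests.put(f"{base}/{db}")  # attempt to create the database (an error response just means it already exists)

    docs = [{"_id": f"doc-{i}", "payload": "x" * 100} for i in range(1000)]
    resp = requests.post(f"{base}/{db}/_bulk_docs", json={"docs": docs})
    resp.raise_for_status()
    print(len(resp.json()), "documents accepted")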

Apache CouchDB 3.1.1, Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds, fewer is better)
  Linux 5.4: 250.70 (SE +/- 1.63, N = 3; min 247.62 / max 253.17)
  Repeat:    250.10 (SE +/- 0.98, N = 3; min 248.48 / max 251.85)
  Repeat 2:  249.54 (SE +/- 1.09, N = 3; min 248.16 / max 251.69)
  (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark, here running its Dhrystone 2 computational test. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6, Computational Test: Dhrystone 2 (LPS, more is better)
  Linux 5.4: 29229484.4 (SE +/- 468902.28, N = 9; min 26052848.2 / max 30523175.1)
  Repeat:    29936822.9 (SE +/- 497912.48, N = 3; min 28942278.5 / max 30477816.8)
  Repeat 2:  29778323.9 (SE +/- 377603.30, N = 5; min 28839489.8 / max 30491216.7)

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1, Pfam Database Search (Seconds, fewer is better)
  Linux 5.4: 176.14 (SE +/- 0.12, N = 3; min 175.99 / max 176.39)
  Repeat:    175.90 (SE +/- 0.21, N = 3; min 175.49 / max 176.19)
  Repeat 2:  175.71 (SE +/- 0.10, N = 3; min 175.6 / max 175.91)
  (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
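A hedged sketch of measuring average inference time with the TensorFlow Lite Python interpreter; the .tflite model path is a placeholder, and the test profile itself uses the native TensorFlow Lite benchmark tooling rather than this script:

    # Hedged sketch: average single-inference latency with tf.lite.
    # The model path is a placeholder.
    import time
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="mobilenet_float.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    dummy = np.random.random_sample(inp["shape"]).astype(inp["dtype"])
    times = []
    for _ in range(50):
        interpreter.set_tensor(inp["index"], dummy)
        start = time.time()
        interpreter.invoke()
        times.append(time.time() - start)

    print("average inference time: %.0f microseconds"
          % (1e6 * sum(times) / len(times)))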

TensorFlow Lite 2020-08-23 (Microseconds, fewer is better; mean +/- SE, N = 3)
  Model                  Linux 5.4            Repeat               Repeat 2
  Inception ResNet V2    8485727 +/- 971.09   8485080 +/- 151.33   8486053 +/- 1248.36
  Inception V4           9377547 +/- 451.68   9377130 +/- 115.47   9376883 +/- 406.99

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0, Test: FLOAT (QUIPs, more is better)
  Linux 5.4: 288531427.65 (SE +/- 890150.21, N = 3; min 286855244.64 / max 289889047.86)
  Repeat:    289622712.19 (SE +/- 354272.49, N = 3; min 289205445.94 / max 290327272.38)
  Repeat 2:  290073981.78 (SE +/- 107944.20, N = 3; min 289897167.03 / max 290269665.71)
  (CC) gcc options: -O3 -march=native -lm

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2, Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better; mean +/- SE, N = 3)
  Concurrent Streams     Linux 5.4              Repeat                 Repeat 2
  4                      711718.3 +/- 901.06    711633.7 +/- 522.34    709465.1 +/- 706.90
  64                     727591.2 +/- 1068.88   731426.5 +/- 57.15     731592.4 +/- 2240.21

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Acceleration: CPU (Milli-Seconds, fewer is better; mean +/- SE, N = 3)
  Model / Iterations            Linux 5.4           Repeat              Repeat 2
  GoogleNet - Iterations: 100   134664 +/- 118.19   134594 +/- 81.84    134540 +/- 93.15
  AlexNet - Iterations: 200     121726 +/- 122.56   121939 +/- 251.34   121723 +/- 191.87
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04, Resolution: 1920 x 1080 (Score, more is better)
  Linux 5.4: 1057
  Repeat:    1054
  Repeat 2:  1070

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Device: CPU (latency in ms, fewer is better; throughput in FPS, more is better; N = 3; latency shown as mean +/- SE, FPS standard errors were 0.00 throughout)
  Model                        Linux 5.4                         Repeat                            Repeat 2
  Person Detection 0106 FP16   11181.46 +/- 11.57 ms, 0.36 FPS   11198.11 +/- 16.61 ms, 0.35 FPS   11225.27 +/- 11.01 ms, 0.35 FPS
  Person Detection 0106 FP32   11208.62 +/- 3.88 ms, 0.35 FPS    11154.63 +/- 14.24 ms, 0.35 FPS   11260.83 +/- 21.47 ms, 0.35 FPS
  Face Detection 0106 FP16     7313.03 +/- 8.64 ms, 0.55 FPS     7335.97 +/- 18.11 ms, 0.54 FPS    7319.10 +/- 6.49 ms, 0.54 FPS
  Face Detection 0106 FP32     7308.46 +/- 0.94 ms, 0.55 FPS     7337.22 +/- 14.44 ms, 0.54 FPS    7326.69 +/- 14.19 ms, 0.55 FPS
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.
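The scikit_ica and scikit_svm entries below time scikit-learn routines; a minimal sketch of what a scikit_ica-style measurement looks like (the data shape and component count are arbitrary stand-ins, not the benchmark's own dataset):

    # Hedged sketch: wall-clock time to fit scikit-learn's FastICA on
    # synthetic data, in the spirit of the scikit_ica benchmark script.
    import time
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.RandomState(0)
    X = rng.standard_normal((20000, 30))  # placeholder dataset

    start = time.time()
    FastICA(n_components=10, random_state=0).fit(X)
    print("FastICA fit: %.2f seconds" % (time.time() - start))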

Mlpack Benchmark, Benchmark: scikit_ica (Seconds, fewer is better)
  Linux 5.4: 74.94 (SE +/- 0.05, N = 3; min 74.86 / max 75.02)
  Repeat:    75.00 (SE +/- 0.14, N = 3; min 74.81 / max 75.28)
  Repeat 2:  74.90 (SE +/- 0.07, N = 3; min 74.77 / max 75.00)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Mobilenet Float (Microseconds, fewer is better)
  Linux 5.4: 441277 (SE +/- 53.29, N = 3; min 441181 / max 441365)
  Repeat:    441309 (SE +/- 16.17, N = 3; min 441281 / max 441337)
  Repeat 2:  450901 (SE +/- 5926.42, N = 5; min 441113 / max 467678)

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.
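Because KeyDB speaks the Redis protocol, a trivial load loop can be written with redis-py; this is only an illustrative sketch, not the memtier-benchmark driver the profile uses, and the host, port, and key layout are placeholders:

    # Hedged sketch: naive synchronous SET/GET loop against a KeyDB server
    # via redis-py. Not memtier-benchmark; purely illustrative.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)

    start = time.time()
    ops = 0
    for i in range(100000):
        key = "key:%d" % (i % 1000)
        r.set(key, "x" * 32)
        r.get(key)
        ops += 2
    print("%.0f ops/sec" % (ops / (time.time() - start)))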

KeyDB 6.0.16 (Ops/sec, more is better)
  Linux 5.4: 175757.68 (SE +/- 2962.51, N = 3; min 171664.11 / max 181514.10)
  Repeat:    179397.95 (SE +/- 2361.06, N = 3; min 175475.27 / max 183635.95)
  Repeat 2:  175828.26 (SE +/- 1595.64, N = 3; min 172811.38 / max 178237.83)
  (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System. The Sunflow Rendering System is an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

Sunflow Rendering System 0.07.2, Global Illumination + Image Synthesis (Seconds, fewer is better)
  Linux 5.4: 3.680 (SE +/- 0.046, N = 5; min 3.58 / max 3.85)
  Repeat:    3.658 (SE +/- 0.043, N = 3; min 3.59 / max 3.74)
  Repeat 2:  3.628 (SE +/- 0.036, N = 8; min 3.47 / max 3.75)

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20, Post-Processing Benchmark (Mpix/sec, more is better)
  Linux 5.4: 19.00 (SE +/- 0.01, N = 3; min 18.99 / max 19.02)
  Repeat:    19.01 (SE +/- 0.02, N = 3; min 18.97 / max 19.03)
  Repeat 2:  18.97 (SE +/- 0.01, N = 3; min 18.96 / max 18.98)
  (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better)
  Linux 5.4: 61003 (SE +/- 80.23, N = 3; min 60871 / max 61148)
  Repeat:    60949 (SE +/- 27.70, N = 3; min 60909 / max 61002)
  Repeat 2:  60915 (SE +/- 106.47, N = 3; min 60713 / max 61074)
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 (Microseconds, fewer is better; mean +/- SE, N = 3)
  Model             Linux 5.4           Repeat              Repeat 2
  SqueezeNet        650796 +/- 114.12   650591 +/- 32.51    650550 +/- 56.54
  NASNet Mobile     463952 +/- 83.80    464063 +/- 133.08   463886 +/- 28.14
  Mobilenet Quant   452562 +/- 74.31    452612 +/- 39.35    452642 +/- 119.87

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Device: CPU (latency in ms, fewer is better; throughput in FPS, more is better; N = 3; latency standard errors were 0.00, FPS shown as mean +/- SE)
  Model                                     Linux 5.4                    Repeat                       Repeat 2
  Age Gender Recognition Retail 0013 FP16   2.88 ms, 1370.54 +/- 1.39    2.88 ms, 1370.86 +/- 1.30    2.88 ms, 1368.42 +/- 0.73
  Age Gender Recognition Retail 0013 FP32   2.88 ms, 1368.69 +/- 1.44    2.88 ms, 1368.56 +/- 0.95    2.89 ms, 1370.14 +/- 0.71
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
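Pillow's WebP writer exposes settings that roughly correspond to the cwebp options swept in these results (quality, lossless mode, and compression effort via method); a hedged sketch with a placeholder input image:

    # Hedged sketch: WebP encode settings via Pillow, approximating the
    # cwebp configurations used by this profile. Input path is a placeholder.
    from PIL import Image

    img = Image.open("sample_6000x4000.jpg")

    img.save("default.webp", "WEBP")                               # Default
    img.save("q100.webp", "WEBP", quality=100)                     # Quality 100
    img.save("q100_hc.webp", "WEBP", quality=100, method=6)        # Quality 100, Highest Compression
    img.save("lossless.webp", "WEBP", lossless=True, quality=100)  # Quality 100, Lossless
    img.save("lossless_hc.webp", "WEBP", lossless=True, quality=100, method=6)  # Lossless, Highest Compression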

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better)
  Linux 5.4: 58.02 (SE +/- 0.10, N = 3; min 57.89 / max 58.22)
  Repeat:    58.06 (SE +/- 0.10, N = 3; min 57.89 / max 58.24)
  Repeat 2:  58.00 (SE +/- 0.08, N = 3; min 57.85 / max 58.12)
  (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis (Seconds, fewer is better)
  Linux 5.4: 43.15 (SE +/- 0.14, N = 4; min 42.76 / max 43.40)
  Repeat:    43.10 (SE +/- 0.13, N = 4; min 42.71 / max 43.26)
  Repeat 2:  43.26 (SE +/- 0.06, N = 4; min 43.14 / max 43.42)
  (CC) gcc options: -O2 -std=c99

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark, Benchmark: scikit_svm (Seconds, fewer is better)
  Linux 5.4: 25.41 (SE +/- 0.10, N = 3; min 25.27 / max 25.61)
  Repeat:    25.32 (SE +/- 0.03, N = 3; min 25.28 / max 25.38)
  Repeat 2:  25.48 (SE +/- 0.12, N = 3; min 25.35 / max 25.71)

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, fewer is better)
  Linux 5.4: 29.93 (SE +/- 0.15, N = 3; min 29.64 / max 30.09)
  Repeat:    29.76 (SE +/- 0.18, N = 3; min 29.41 / max 29.97)
  Repeat 2:  29.80 (SE +/- 0.11, N = 3; min 29.57 / max 29.94)
  (CC) gcc options: -O2 -pedantic -fvisibility=hidden

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code built on modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos that are bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527, Computational Fluid Dynamics (Seconds, fewer is better)
  Linux 5.4: 26.62 (SE +/- 0.01, N = 3; min 26.60 / max 26.65)
  Repeat:    26.71 (SE +/- 0.04, N = 3; min 26.64 / max 26.76)
  Repeat 2:  26.63 (SE +/- 0.04, N = 3; min 26.55 / max 26.67)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: MobileNet v2 (ms, fewer is better)
  Linux 5.4: 353.71 (SE +/- 0.40, N = 3; min 353.22 / max 354.50)
  Repeat:    354.32 (SE +/- 0.54, N = 3; min 353.50 / max 355.35)
  Repeat 2:  353.67 (SE +/- 0.24, N = 3; min 353.23 / max 354.07)
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time - Seconds, fewer is better)
  Linux 5.4: 23.29 (SE +/- 0.07, N = 3; min 23.17 / max 23.41)
  Repeat:    23.36 (SE +/- 0.07, N = 3; min 23.24 / max 23.48)
  Repeat 2:  23.30 (SE +/- 0.13, N = 3; min 23.08 / max 23.52)
  (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
  Linux 5.4: 329.11 (SE +/- 1.06, N = 3; min 327.05 / max 330.56)
  Repeat:    329.80 (SE +/- 0.95, N = 3; min 327.98 / max 331.16)
  Repeat 2:  330.19 (SE +/- 0.14, N = 3; min 329.93 / max 330.40)
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.
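A minimal sketch of the same idea from Python's standard library, timing gzip decompression of a tarball (the file name is a placeholder; the actual profile shells out to the system gzip):

    # Hedged sketch: time gzip decompression of a tarball with the stdlib.
    # The tarball name is a placeholder.
    import gzip
    import shutil
    import time

    start = time.time()
    with gzip.open("qt-everywhere-src.tar.gz", "rb") as src, \
            open("qt-everywhere-src.tar", "wb") as dst:
        shutil.copyfileobj(src, dst)
    print("decompressed in %.3f seconds" % (time.time() - start))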

System GZIP Decompression (Seconds, fewer is better)
  Linux 5.4: 3.587 (SE +/- 0.033, N = 13; min 3.55 / max 3.99)
  Repeat:    3.589 (SE +/- 0.033, N = 13; min 3.55 / max 3.99)
  Repeat 2:  3.591 (SE +/- 0.037, N = 14; min 3.55 / max 4.07)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, fewer is better)
  Linux 5.4: 9.897 (SE +/- 0.014, N = 3; min 9.88 / max 9.92)
  Repeat:    9.867 (SE +/- 0.008, N = 3; min 9.85 / max 9.88)
  Repeat 2:  9.885 (SE +/- 0.016, N = 3; min 9.87 / max 9.92)
  (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 24Aug2020, Model: Rhodopsin Protein (ns/day, more is better)
  Linux 5.4: 2.154 (SE +/- 0.009, N = 3; min 2.14 / max 2.16)
  Repeat:    2.133 (SE +/- 0.011, N = 3; min 2.12 / max 2.16)
  Repeat 2:  2.147 (SE +/- 0.011, N = 3; min 2.13 / max 2.16)
  (CXX) g++ options: -O3 -pthread -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better)
  Linux 5.4: 3.132 (SE +/- 0.002, N = 3; min 3.13 / max 3.14)
  Repeat:    3.136 (SE +/- 0.002, N = 3; min 3.13 / max 3.14)
  Repeat 2:  3.136 (SE +/- 0.001, N = 3; min 3.13 / max 3.14)
  (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
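A hedged sketch of the kind of transform being timed, using NumPy's FFT on a smaller 128^3 grid to keep memory modest (FFTE itself runs its own Fortran routines at N=256); the 5*N*log2(N) figure used below is the conventional flop estimate for a complex FFT:

    # Hedged sketch: 3D complex FFT with NumPy and a rough MFLOPS estimate.
    # Grid size is reduced from FFTE's N=256 to keep memory use modest.
    import time
    import numpy as np

    n = 128  # lengths of the form (2^p)*(3^q)*(5^r) are the fast cases
    grid = np.random.rand(n, n, n) + 1j * np.random.rand(n, n, n)

    start = time.time()
    np.fft.fftn(grid)
    elapsed = time.time() - start

    flops = 5.0 * n**3 * np.log2(float(n**3))  # conventional complex-FFT estimate
    print("approx %.0f MFLOPS" % (flops / elapsed / 1e6))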

FFTE 7.0, N=256, 3D Complex FFT Routine (MFLOPS, more is better)
  Linux 5.4: 13380.64 (SE +/- 25.32, N = 3; min 13331.59 / max 13416.09)
  Repeat:    13398.73 (SE +/- 19.21, N = 3; min 13362.21 / max 13427.33)
  Repeat 2:  13424.43 (SE +/- 35.71, N = 3; min 13376.70 / max 13494.31)
  (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Default (Encode Time - Seconds, fewer is better)
  Linux 5.4: 2.075 (SE +/- 0.001, N = 3; min 2.07 / max 2.08)
  Repeat:    2.080 (SE +/- 0.001, N = 3; min 2.08 / max 2.08)
  Repeat 2:  2.076 (SE +/- 0.001, N = 3; min 2.07 / max 2.08)
  (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff