Xeon Broadwell September 2020

Intel Xeon E5-2609 v4 testing with an MSI X99A RAIDER (MS-7885) v5.0 (P.50 BIOS) motherboard and llvmpipe graphics on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009272-FI-XEONBROAD92
This result file includes tests from the following categories:

  AV1: 2 Tests
  BLAS (Basic Linear Algebra Subprograms): 2 Tests
  Timed Code Compilation: 2 Tests
  C/C++ Compiler Tests: 6 Tests
  Compression: 2 Tests
  CPU Massive: 9 Tests
  Creator Workloads: 11 Tests
  Database Test Suite: 3 Tests
  Encoding: 2 Tests
  Fortran: 3 Tests
  HPC - High Performance Computing: 14 Tests
  Imaging: 6 Tests
  Machine Learning: 7 Tests
  Molecular Dynamics: 4 Tests
  MPI Benchmarks: 5 Tests
  Multi-Core: 10 Tests
  NVIDIA GPU Compute: 4 Tests
  OCR: 2 Tests
  OpenMPI: 6 Tests
  Programmer / Developer System Benchmarks: 3 Tests
  Python: 3 Tests
  Scientific Computing: 6 Tests
  Server: 3 Tests
  Server CPU: 5 Tests
  Single-Threaded: 2 Tests
  Video Encoding: 2 Tests

Test runs:

  Linux 5.4     - September 25 2020 - Run Duration: 14 Hours, 57 Minutes
  Linux 5.8     - September 26 2020 - Run Duration: 14 Hours, 54 Minutes
  Linux 5.9 Git - September 26 2020 - Run Duration: 15 Hours, 30 Minutes
  Average run duration: 15 Hours, 7 Minutes



Xeon Broadwell September 2020 Benchmarks - OpenBenchmarking.org / Phoronix Test Suite

System details:

  Processor:         Intel Xeon E5-2609 v4 @ 1.70GHz (8 Cores)
  Motherboard:       MSI X99A RAIDER (MS-7885) v5.0 (P.50 BIOS)
  Chipset:           Intel Xeon E7 v4/Xeon
  Memory:            16GB
  Disk:              256GB CORSAIR FORCE LX
  Graphics:          llvmpipe
  Audio:             Realtek ALC892
  Network:           Intel I218-V
  OS:                Ubuntu 20.04
  Kernels:           5.4.0-37-generic (x86_64), 5.8.0-050800-generic (x86_64), 5.9.0-050900rc6daily20200926-generic (x86_64) 20200925
  Desktop:           GNOME Shell 3.36.2
  Display Server:    X Server 1.20.8
  Display Driver:    modesetting 1.20.8
  OpenGL:            3.3 Mesa 20.0.4 (LLVM 9.0.1 256 bits)
  Compiler:          GCC 9.3.0
  File-System:       ext4
  Screen Resolution: 1024x768

Compiler notes: GCC configured with --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor notes:
  Linux 5.4: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xb000038
  Linux 5.8: Scaling Governor: intel_cpufreq ondemand - CPU Microcode: 0xb000038
  Linux 5.9 Git: Scaling Governor: intel_cpufreq ondemand - CPU Microcode: 0xb000038

Python notes: Python 3.8.2

Security notes:
  Linux 5.4: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT disabled + mds: Mitigation of Clear buffers; SMT disabled + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: disabled, RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of Clear buffers; SMT disabled
  Linux 5.8: identical to Linux 5.4
  Linux 5.9 Git: as Linux 5.4, except itlb_multihit: KVM: Mitigation of VMX disabled

Result Overview (relative performance of Linux 5.4, Linux 5.8, and Linux 5.9 Git, spanning roughly 100% to 115%), covering: Apache CouchDB, OpenCV, LeelaChessZero, Timed Linux Kernel Compilation, AI Benchmark Alpha, PostgreSQL pgbench, Timed Apache Compilation, GROMACS, Incompact3D, eSpeak-NG Speech Engine, ASTC Encoder, Zstd Compression, Tesseract OCR, LAMMPS Molecular Dynamics Simulator, Rodinia, LibRaw, AOM AV1, OCRMyPDF, Montage Astronomical Image Mosaic Engine, Mobile Neural Network, G'MIC, WebP Image Encode, Hugin, NCNN, NAMD, TensorFlow Lite, InfluxDB, libavif avifenc, System GZIP Decompression, TNN, Monte Carlo Simulations of Ionised Nebulae, and GPAW.
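The overview expresses each kernel's results as percentages of the fastest run, and the Phoronix Test Suite aggregates such per-test ratios with a geometric mean. A minimal sketch of that aggregation, using illustrative ratios rather than the actual per-test data from this file:

```python
import math

# Hypothetical per-test speedup ratios of one kernel vs. a baseline
# (values are illustrative, not taken from the results above).
ratios = [1.04, 0.98, 1.11, 1.00, 1.07]

# Geometric mean: the n-th root of the product, computed via logs
# for numerical stability.
geomean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(f"geometric mean: {geomean:.3f}")
```

The geometric mean is used instead of the arithmetic mean so that a 2x speedup on one test and a 2x slowdown on another cancel out exactly.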

Xeon Broadwell September 2020 - detailed result table (all test identifiers with per-kernel values for Linux 5.4, Linux 5.8, and Linux 5.9 Git; the individual results are broken out per test below).

Apache CouchDB

Apache CouchDB 3.1.1 - Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds, Fewer Is Better)
  Linux 5.4:     233.82 (SE +/- 1.26, N = 3; Min: 231.77 / Avg: 233.82 / Max: 236.11)
  Linux 5.8:     204.87 (SE +/- 1.42, N = 3; Min: 202.27 / Avg: 204.87 / Max: 207.15)
  Linux 5.9 Git: 204.16 (SE +/- 0.91, N = 3; Min: 202.62 / Avg: 204.16 / Max: 205.79)
  1. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD
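The SE figures reported with each result are standard errors of the mean over the N runs. As a sketch for the Linux 5.4 CouchDB result: the samples below are chosen to be consistent with the reported Min/Avg/Max, with the middle run inferred rather than taken from the actual logs:

```python
import math
import statistics

# Three run times in seconds; first and last match the reported
# Min/Max, the middle value is inferred from the reported average.
runs = [231.77, 233.58, 236.11]

mean = statistics.mean(runs)
# Standard error of the mean: sample standard deviation / sqrt(N).
se = statistics.stdev(runs) / math.sqrt(len(runs))
print(f"Avg: {mean:.2f}, SE +/- {se:.2f}")  # matches the reported 233.82, SE +/- 1.26
```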

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.4 - Test: DNN - Deep Neural Network (ms, Fewer Is Better)
  Linux 5.4:     6825 (SE +/- 37.24, N = 3; Min: 6756 / Avg: 6824.67 / Max: 6884)
  Linux 5.8:     6291 (SE +/- 87.29, N = 4; Min: 6073 / Avg: 6290.75 / Max: 6500)
  Linux 5.9 Git: 6420 (SE +/- 51.81, N = 13; Min: 6249 / Avg: 6419.92 / Max: 6898)
  1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write (TPS, More Is Better)
  Linux 5.4:     398 (SE +/- 1.94, N = 3; Min: 394.35 / Avg: 398.05 / Max: 400.91)
  Linux 5.8:     413 (SE +/- 3.02, N = 3; Min: 407.41 / Avg: 413.34 / Max: 417.32)
  Linux 5.9 Git: 419 (SE +/- 2.46, N = 3; Min: 414.26 / Avg: 418.78 / Max: 422.73)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Linux 5.4:     628.28 (SE +/- 3.07, N = 3; Min: 623.77 / Avg: 628.28 / Max: 634.16)
  Linux 5.8:     605.09 (SE +/- 4.46, N = 3; Min: 599.24 / Avg: 605.09 / Max: 613.85)
  Linux 5.9 Git: 597.20 (SE +/- 3.52, N = 3; Min: 591.59 / Avg: 597.2 / Max: 603.69)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
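Because pgbench clients run closed-loop (each client waits for its transaction to finish before issuing the next), average latency is approximately clients / TPS. A quick sanity check against the 250-client read-write numbers above:

```python
# Closed-loop approximation: with C clients each keeping one
# transaction in flight, average latency ~= C / TPS.
clients = 250
tps = 398.05  # Linux 5.4, scaling factor 1, read-write
latency_ms = clients / tps * 1000
print(f"estimated latency: {latency_ms:.1f} ms")  # reported: 628.28 ms
```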

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Linux 5.4:     115.08 (SE +/- 0.30, N = 3; Min: 114.53 / Avg: 115.08 / Max: 115.56)
  Linux 5.8:     110.21 (SE +/- 0.28, N = 3; Min: 109.66 / Avg: 110.21 / Max: 110.59)
  Linux 5.9 Git: 110.57 (SE +/- 0.15, N = 3; Min: 110.27 / Avg: 110.57 / Max: 110.72)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, More Is Better)
  Linux 5.4:     435 (SE +/- 1.13, N = 3; Min: 432.78 / Avg: 434.58 / Max: 436.66)
  Linux 5.8:     454 (SE +/- 1.17, N = 3; Min: 452.22 / Avg: 453.77 / Max: 456.05)
  Linux 5.9 Git: 452 (SE +/- 0.61, N = 3; Min: 451.67 / Avg: 452.28 / Max: 453.51)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, More Is Better)
  Linux 5.4:     423 (SE +/- 1.08, N = 3; Min: 421.37 / Avg: 423.04 / Max: 425.07)
  Linux 5.8:     441 (SE +/- 1.41, N = 3; Min: 439.09 / Avg: 440.69 / Max: 443.49)
  Linux 5.9 Git: 441 (SE +/- 1.28, N = 3; Min: 438.77 / Avg: 440.91 / Max: 443.19)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Linux 5.4:     236.47 (SE +/- 0.61, N = 3; Min: 235.34 / Avg: 236.47 / Max: 237.41)
  Linux 5.8:     227.00 (SE +/- 0.73, N = 3; Min: 225.55 / Avg: 227 / Max: 227.83)
  Linux 5.9 Git: 226.90 (SE +/- 0.66, N = 3; Min: 225.72 / Avg: 226.9 / Max: 228.01)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  Linux 5.4:     4.016 (SE +/- 0.023, N = 3; Min: 3.97 / Avg: 4.02 / Max: 4.04)
  Linux 5.8:     4.148 (SE +/- 0.031, N = 3; Min: 4.11 / Avg: 4.15 / Max: 4.21)
  Linux 5.9 Git: 4.179 (SE +/- 0.031, N = 3; Min: 4.13 / Avg: 4.18 / Max: 4.24)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, More Is Better)
  Linux 5.4:     62275 (SE +/- 358.02, N = 3; Min: 61858.24 / Avg: 62274.93 / Max: 62987.58)
  Linux 5.8:     60297 (SE +/- 446.03, N = 3; Min: 59418.32 / Avg: 60296.82 / Max: 60870.26)
  Linux 5.9 Git: 59853 (SE +/- 439.97, N = 3; Min: 59012.04 / Avg: 59852.79 / Max: 60498.03)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
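The read-only results trend slightly downward on the newer kernels. A small helper to express that as a percentage, using the 250-client read-only TPS values above:

```python
# Relative change of Linux 5.9 Git vs. Linux 5.4 for the
# 250-client read-only result (TPS, higher is better).
baseline = 62275   # Linux 5.4
candidate = 59853  # Linux 5.9 Git
change_pct = (candidate - baseline) / baseline * 100
print(f"{change_pct:+.1f}%")  # about -3.9%
```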

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, More Is Better)
  Linux 5.4:     4021 (SE +/- 45.45, N = 6; Min: 3894.53 / Avg: 4021.15 / Max: 4164.98)
  Linux 5.8:     4154 (SE +/- 45.13, N = 15; Min: 3901.09 / Avg: 4153.71 / Max: 4454.06)
  Linux 5.9 Git: 4088 (SE +/- 49.30, N = 15; Min: 3741.03 / Avg: 4088.22 / Max: 4387.17)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Linux 5.4:     12.45 (SE +/- 0.14, N = 6; Min: 12.01 / Avg: 12.44 / Max: 12.84)
  Linux 5.8:     12.06 (SE +/- 0.13, N = 15; Min: 11.23 / Avg: 12.06 / Max: 12.82)
  Linux 5.9 Git: 12.26 (SE +/- 0.15, N = 15; Min: 11.4 / Avg: 12.26 / Max: 13.37)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  Linux 5.4:     3.408 (SE +/- 0.019, N = 3; Min: 3.37 / Avg: 3.41 / Max: 3.43)
  Linux 5.8:     3.429 (SE +/- 0.056, N = 3; Min: 3.34 / Avg: 3.43 / Max: 3.53)
  Linux 5.9 Git: 3.512 (SE +/- 0.030, N = 3; Min: 3.45 / Avg: 3.51 / Max: 3.55)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only (TPS, More Is Better)
  Linux 5.4:     73377 (SE +/- 404.31, N = 3; Min: 72909.72 / Avg: 73376.81 / Max: 74181.98)
  Linux 5.8:     72973 (SE +/- 1194.78, N = 3; Min: 70819.62 / Avg: 72973.06 / Max: 74946.74)
  Linux 5.9 Git: 71221 (SE +/- 617.91, N = 3; Min: 70554.22 / Avg: 71220.69 / Max: 72455.2)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: BLAS (Nodes Per Second, More Is Better)
  Linux 5.4:     661 (SE +/- 4.93, N = 3; Min: 653 / Avg: 661 / Max: 670)
  Linux 5.8:     643 (SE +/- 7.42, N = 3; Min: 628 / Avg: 642.67 / Max: 652)
  Linux 5.9 Git: 642 (SE +/- 3.79, N = 3; Min: 635 / Avg: 642 / Max: 648)
  1. (CXX) g++ options: -flto -pthread

PostgreSQL pgbench


PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  Linux 5.4:     1.494 (SE +/- 0.013, N = 3; Min: 1.48 / Avg: 1.49 / Max: 1.52)
  Linux 5.8:     1.522 (SE +/- 0.005, N = 3; Min: 1.51 / Avg: 1.52 / Max: 1.53)
  Linux 5.9 Git: 1.537 (SE +/- 0.024, N = 3; Min: 1.51 / Avg: 1.54 / Max: 1.58)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  Linux 5.4:     0.665 (SE +/- 0.008, N = 3; Min: 0.65 / Avg: 0.67 / Max: 0.68)
  Linux 5.8:     0.673 (SE +/- 0.002, N = 3; Min: 0.67 / Avg: 0.67 / Max: 0.68)
  Linux 5.9 Git: 0.684 (SE +/- 0.008, N = 3; Min: 0.67 / Avg: 0.68 / Max: 0.7)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, More Is Better)
  Linux 5.4:     66968 (SE +/- 567.75, N = 3; Min: 65859.37 / Avg: 66968.06 / Max: 67734.8)
  Linux 5.8:     65706 (SE +/- 206.58, N = 3; Min: 65426.9 / Avg: 65706.21 / Max: 66109.51)
  Linux 5.9 Git: 65138 (SE +/- 1003.14, N = 3; Min: 63149.76 / Avg: 65138.08 / Max: 66364.18)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, More Is Better)
  Linux 5.4:     75185 (SE +/- 933.23, N = 3; Min: 73643.47 / Avg: 75185 / Max: 76867.1)
  Linux 5.8:     74274 (SE +/- 240.03, N = 3; Min: 73950.66 / Avg: 74274.42 / Max: 74743.26)
  Linux 5.9 Git: 73149 (SE +/- 843.48, N = 3; Min: 71747.16 / Avg: 73148.63 / Max: 74662.56)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  Linux 5.4:     0.727 (SE +/- 0.004, N = 3; Min: 0.72 / Avg: 0.73 / Max: 0.74)
  Linux 5.8:     0.746 (SE +/- 0.003, N = 3; Min: 0.74 / Avg: 0.75 / Max: 0.75)
  Linux 5.9 Git: 0.738 (SE +/- 0.008, N = 3; Min: 0.72 / Avg: 0.74 / Max: 0.75)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only (TPS, More Is Better)
  Linux 5.4:     68749 (SE +/- 371.62, N = 3; Min: 68027.11 / Avg: 68749.4 / Max: 69262.33)
  Linux 5.8:     67100 (SE +/- 306.54, N = 3; Min: 66547.24 / Avg: 67100.44 / Max: 67605.89)
  Linux 5.9 Git: 67846 (SE +/- 766.55, N = 3; Min: 67077.86 / Avg: 67845.73 / Max: 69378.83)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  Linux 5.4:     34.01 (SE +/- 0.05, N = 3; Min: 33.94 / Avg: 34.01 / Max: 34.1; iteration MIN: 33.84 / MAX: 38.3)
  Linux 5.8:     34.71 (SE +/- 0.03, N = 3; Min: 34.67 / Avg: 34.71 / Max: 34.77; iteration MIN: 34.38 / MAX: 57.88)
  Linux 5.9 Git: 34.65 (SE +/- 0.03, N = 3; Min: 34.61 / Avg: 34.65 / Max: 34.71; iteration MIN: 34.02 / MAX: 55.47)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 8 Realtime (Frames Per Second, More Is Better)
  Linux 5.4:     17.76 (SE +/- 0.01, N = 3; Min: 17.74 / Avg: 17.76 / Max: 17.79)
  Linux 5.8:     17.41 (SE +/- 0.08, N = 3; Min: 17.25 / Avg: 17.41 / Max: 17.5)
  Linux 5.9 Git: 17.49 (SE +/- 0.02, N = 3; Min: 17.46 / Avg: 17.49 / Max: 17.53)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

LeelaChessZero


LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second, More Is Better)
  Linux 5.4:     635 (SE +/- 1.33, N = 3; Min: 632 / Avg: 634.67 / Max: 636)
  Linux 5.8:     623 (SE +/- 9.29, N = 3; Min: 605 / Avg: 623 / Max: 636)
  Linux 5.9 Git: 626 (SE +/- 6.17, N = 3; Min: 619 / Avg: 625.67 / Max: 638)
  1. (CXX) g++ options: -flto -pthread

PostgreSQL pgbench


PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  Linux 5.4:     1.388 (SE +/- 0.009, N = 3; Min: 1.38 / Avg: 1.39 / Max: 1.41)
  Linux 5.8:     1.411 (SE +/- 0.014, N = 7; Min: 1.34 / Avg: 1.41 / Max: 1.45)
  Linux 5.9 Git: 1.387 (SE +/- 0.009, N = 3; Min: 1.37 / Avg: 1.39 / Max: 1.4)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, More Is Better)
  Linux 5.4:     72106 (SE +/- 474.91, N = 3; Min: 71156.79 / Avg: 72106.1 / Max: 72607.6)
  Linux 5.8:     70942 (SE +/- 739.35, N = 7; Min: 69020.57 / Avg: 70941.55 / Max: 74475.9)
  Linux 5.9 Git: 72137 (SE +/- 480.38, N = 3; Min: 71413.66 / Avg: 72136.96 / Max: 73046.27)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

ASTC Encoder

ASTC Encoder (astcenc) targets the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile runs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Fast (Seconds, Fewer Is Better)
  Linux 5.4:     6.66 (SE +/- 0.03, N = 3; Min: 6.6 / Avg: 6.66 / Max: 6.69)
  Linux 5.8:     6.77 (SE +/- 0.10, N = 3; Min: 6.67 / Avg: 6.77 / Max: 6.96)
  Linux 5.9 Git: 6.69 (SE +/- 0.02, N = 3; Min: 6.66 / Avg: 6.69 / Max: 6.73)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
  Linux 5.4:     254.12 (SE +/- 1.05, N = 3; Min: 252.95 / Avg: 254.12 / Max: 256.21)
  Linux 5.8:     254.67 (SE +/- 1.16, N = 3; Min: 253.44 / Avg: 254.67 / Max: 256.99)
  Linux 5.9 Git: 257.51 (SE +/- 2.60, N = 3; Min: 254.89 / Avg: 257.51 / Max: 262.71)

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, More Is Better)
Linux 5.4: 611
Linux 5.8: 603
Linux 5.9 Git: 604

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, More Is Better)
Linux 5.4: 562
Linux 5.8: 557
Linux 5.9 Git: 555

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, More Is Better)
Linux 5.4: 1173
Linux 5.8: 1160
Linux 5.9 Git: 1159
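In these results the Device AI Score is exactly the sum of the Device Training and Device Inference scores for each kernel; whether AI Benchmark defines the score this way in general is not claimed here, but the sums hold for all three rows and can be checked directly:

```python
# (training, inference, reported AI score) per kernel, from the scores above
results = {
    "Linux 5.4": (611, 562, 1173),
    "Linux 5.8": (603, 557, 1160),
    "Linux 5.9 Git": (604, 555, 1159),
}
for kernel, (training, inference, ai_score) in results.items():
    assert training + inference == ai_score, kernel
print("Device AI Score = Training + Inference for all three kernels")
```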

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
Linux 5.4: 8.229 (SE +/- 0.076, N = 3; Min: 8.15 / Max: 8.38; MIN: 8.1 / MAX: 47.32)
Linux 5.8: 8.147 (SE +/- 0.010, N = 3; Min: 8.13 / Max: 8.16; MIN: 8.09 / MAX: 12.1)
Linux 5.9 Git: 8.151 (SE +/- 0.006, N = 3; Min: 8.14 / Max: 8.16; MIN: 8.1 / MAX: 27.7)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better)
Linux 5.4: 56.93 (SE +/- 0.02, N = 3; Min: 56.9 / Max: 56.97)
Linux 5.8: 56.88 (SE +/- 0.03, N = 3; Min: 56.81 / Max: 56.92)
Linux 5.9 Git: 57.43 (SE +/- 0.02, N = 3; Min: 57.39 / Max: 57.46)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, Fewer Is Better)
Linux 5.4: 236.18 (SE +/- 0.43, N = 3; Min: 235.33 / Max: 236.75)
Linux 5.8: 235.19 (SE +/- 0.10, N = 3; Min: 234.98 / Max: 235.33)
Linux 5.9 Git: 237.30 (SE +/- 2.62, N = 3; Min: 234.66 / Max: 242.54)
1. (CXX) g++ options: -O2 -lOpenCL

GROMACS

This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.1 - Water Benchmark (Ns Per Day, More Is Better)
Linux 5.4: 0.457 (SE +/- 0.001, N = 3)
Linux 5.8: 0.456 (SE +/- 0.000, N = 3)
Linux 5.9 Git: 0.453 (SE +/- 0.001, N = 3)
1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm
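Ns Per Day is the simulated time advanced per wall-clock day, so its inverse gives the wall-clock cost per simulated nanosecond. Using the Linux 5.4 figure above as an illustration:

```python
ns_per_day = 0.457                 # Linux 5.4 GROMACS water benchmark result
hours_per_ns = 24.0 / ns_per_day   # wall-clock hours per simulated nanosecond
print(f"{hours_per_ns:.1f} wall-clock hours per simulated ns")  # ~52.5
```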

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
Linux 5.4: 7.32 (SE +/- 0.01, N = 3; Min: 7.3 / Max: 7.34; MIN: 7.25 / MAX: 11.85)
Linux 5.8: 7.27 (SE +/- 0.01, N = 3; Min: 7.26 / Max: 7.29; MIN: 7.22 / MAX: 7.34)
Linux 5.9 Git: 7.31 (SE +/- 0.01, N = 3; Min: 7.29 / Max: 7.33; MIN: 7.25 / MAX: 7.61)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

libavif avifenc

This test of the AOMedia libavif library measures the encoding of a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 10 (Seconds, Fewer Is Better)
Linux 5.4: 14.46 (SE +/- 0.01, N = 3; Min: 14.44 / Max: 14.47)
Linux 5.8: 14.48 (SE +/- 0.01, N = 3; Min: 14.47 / Max: 14.5)
Linux 5.9 Git: 14.55 (SE +/- 0.03, N = 3; Min: 14.51 / Max: 14.6)
1. (CXX) g++ options: -O3 -fPIC

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
Linux 5.4: 26.70 (SE +/- 0.22, N = 3; Min: 26.48 / Max: 27.14; MIN: 26.42 / MAX: 142.89)
Linux 5.8: 26.58 (SE +/- 0.02, N = 3; Min: 26.54 / Max: 26.61; MIN: 26.45 / MAX: 46.03)
Linux 5.9 Git: 26.53 (SE +/- 0.03, N = 3; Min: 26.5 / Max: 26.59; MIN: 26.46 / MAX: 41.19)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better)
Linux 5.4: 6.217 (SE +/- 0.013, N = 3; Min: 6.19 / Max: 6.23)
Linux 5.8: 6.233 (SE +/- 0.008, N = 3; Min: 6.22 / Max: 6.25)
Linux 5.9 Git: 6.194 (SE +/- 0.006, N = 3; Min: 6.18 / Max: 6.2)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, More Is Better)
Linux 5.4: 8.59 (SE +/- 0.02, N = 3; Min: 8.54 / Max: 8.62)
Linux 5.8: 8.64 (SE +/- 0.02, N = 3; Min: 8.61 / Max: 8.68)
Linux 5.9 Git: 8.64 (SE +/- 0.02, N = 3; Min: 8.62 / Max: 8.67)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
Linux 5.4: 79.26 (SE +/- 0.42, N = 4; Min: 78.4 / Max: 80.37)
Linux 5.8: 79.64 (SE +/- 0.29, N = 4; Min: 79.15 / Max: 80.47)
Linux 5.9 Git: 79.68 (SE +/- 0.35, N = 4; Min: 78.86 / Max: 80.3)
1. (CC) gcc options: -O2 -std=c99

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
Linux 5.4: 666.60 (SE +/- 0.55, N = 3; Min: 665.81 / Max: 667.66; MIN: 662.58 / MAX: 678.88)
Linux 5.8: 663.12 (SE +/- 0.51, N = 3; Min: 662.3 / Max: 664.05; MIN: 659.87 / MAX: 692.1)
Linux 5.9 Git: 663.43 (SE +/- 0.55, N = 3; Min: 662.33 / Max: 664.08; MIN: 660.96 / MAX: 681.92)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, Fewer Is Better)
Linux 5.4: 65.01 (SE +/- 0.06, N = 3; Min: 64.89 / Max: 65.08; MIN: 64.6 / MAX: 84.89)
Linux 5.8: 65.04 (SE +/- 0.06, N = 3; Min: 64.94 / Max: 65.14; MIN: 64.8 / MAX: 165.03)
Linux 5.9 Git: 65.35 (SE +/- 0.22, N = 3; Min: 65.06 / Max: 65.78; MIN: 64.79 / MAX: 212.67)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better)
Linux 5.4: 2288.3 (SE +/- 0.33, N = 3; Min: 2287.6 / Max: 2288.6)
Linux 5.8: 2297.9 (SE +/- 2.70, N = 3; Min: 2292.5 / Max: 2300.6)
Linux 5.9 Git: 2299.8 (SE +/- 0.75, N = 3; Min: 2298.3 / Max: 2300.6)
1. (CC) gcc options: -O3 -pthread -lz -llzma
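When comparing identifiers in a table like this, the relevant arithmetic is a simple relative change against a baseline. The figures below are the Zstd level 3 throughputs from this result file:

```python
def percent_change(baseline, value):
    """Relative change of value versus baseline, in percent."""
    return (value - baseline) / baseline * 100.0

# Zstd compression level 3 throughput (MB/s): Linux 5.4 vs. Linux 5.9 Git
delta = percent_change(2288.3, 2299.8)
print(f"Linux 5.9 Git is {delta:.2f}% faster than Linux 5.4")  # ~0.50%
```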

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
Linux 5.4: 683945.0 (SE +/- 1763.98, N = 3; Min: 680811.6 / Max: 686915.7)
Linux 5.8: 683785.9 (SE +/- 2275.53, N = 3; Min: 679277.4 / Max: 686578.1)
Linux 5.9 Git: 687197.1 (SE +/- 4327.34, N = 3; Min: 679672.3 / Max: 694662.3)

libavif avifenc

This test of the AOMedia libavif library measures the encoding of a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 2 (Seconds, Fewer Is Better)
Linux 5.4: 169.66 (SE +/- 0.27, N = 3; Min: 169.38 / Max: 170.2)
Linux 5.8: 169.82 (SE +/- 0.27, N = 3; Min: 169.49 / Max: 170.37)
Linux 5.9 Git: 169.01 (SE +/- 0.24, N = 3; Min: 168.56 / Max: 169.4)
1. (CXX) g++ options: -O3 -fPIC

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
Linux 5.4: 8.44 (SE +/- 0.02, N = 3; Min: 8.42 / Max: 8.48; MIN: 8.33 / MAX: 27.05)
Linux 5.8: 8.40 (SE +/- 0.01, N = 3; Min: 8.39 / Max: 8.41; MIN: 8.29 / MAX: 13.76)
Linux 5.9 Git: 8.44 (SE +/- 0.02, N = 3; Min: 8.41 / Max: 8.46; MIN: 8.32 / MAX: 15.21)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
Linux 5.4: 13.33 (SE +/- 0.04, N = 3; Min: 13.28 / Max: 13.4; MIN: 13.25 / MAX: 32.93)
Linux 5.8: 13.28 (SE +/- 0.01, N = 3; Min: 13.27 / Max: 13.29; MIN: 13.23 / MAX: 14.53)
Linux 5.9 Git: 13.27 (SE +/- 0.00, N = 3; Min: 13.27 / Max: 13.28; MIN: 13.23 / MAX: 14.67)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 24Aug2020 - Model: 20k Atoms (ns/day, More Is Better)
Linux 5.4: 2.701 (SE +/- 0.011, N = 3; Min: 2.68 / Max: 2.71)
Linux 5.8: 2.710 (SE +/- 0.003, N = 3; Min: 2.71 / Max: 2.72)
Linux 5.9 Git: 2.698 (SE +/- 0.010, N = 3; Min: 2.68 / Max: 2.72)
1. (CXX) g++ options: -O3 -pthread -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
Linux 5.4: 71.23 (SE +/- 0.03, N = 3; Min: 71.19 / Max: 71.29; MIN: 71.1 / MAX: 72.55)
Linux 5.8: 71.53 (SE +/- 0.09, N = 3; Min: 71.43 / Max: 71.7; MIN: 71.28 / MAX: 91.6)
Linux 5.9 Git: 71.39 (SE +/- 0.03, N = 3; Min: 71.33 / Max: 71.43; MIN: 71.12 / MAX: 76.76)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better)
Linux 5.4: 25.5 (SE +/- 0.03, N = 3; Min: 25.4 / Max: 25.5)
Linux 5.8: 25.6 (SE +/- 0.03, N = 3; Min: 25.5 / Max: 25.6)
Linux 5.9 Git: 25.6 (SE +/- 0.09, N = 3; Min: 25.4 / Max: 25.7)
1. (CC) gcc options: -O3 -pthread -lz -llzma

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

Tesseract OCR 4.1.1 - Time To OCR 7 Images (Seconds, Fewer Is Better)
Linux 5.4: 58.60 (SE +/- 0.05, N = 3; Min: 58.54 / Max: 58.69)
Linux 5.8: 58.74 (SE +/- 0.09, N = 3; Min: 58.59 / Max: 58.9)
Linux 5.9 Git: 58.83 (SE +/- 0.06, N = 3; Min: 58.75 / Max: 58.95)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better)
Linux 5.4: 86.34 (SE +/- 0.12, N = 3; Min: 86.17 / Max: 86.57)
Linux 5.8: 86.62 (SE +/- 0.03, N = 3; Min: 86.59 / Max: 86.68)
Linux 5.9 Git: 86.67 (SE +/- 0.18, N = 3; Min: 86.31 / Max: 86.92)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0 - Mosaic of M17, K band, 1.5 deg x 1.5 deg (Seconds, Fewer Is Better)
Linux 5.4: 201.46 (SE +/- 0.20, N = 3; Min: 201.07 / Max: 201.68)
Linux 5.8: 201.98 (SE +/- 0.45, N = 3; Min: 201.32 / Max: 202.84)
Linux 5.9 Git: 202.12 (SE +/- 0.22, N = 3; Min: 201.7 / Max: 202.39)
1. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
Linux 5.4: 3.11 (SE +/- 0.00, N = 3; Min: 3.11 / Max: 3.12; MIN: 3.07 / MAX: 3.76)
Linux 5.8: 3.12 (SE +/- 0.02, N = 3; Min: 3.1 / Max: 3.16; MIN: 3.06 / MAX: 3.19)
Linux 5.9 Git: 3.11 (SE +/- 0.00, N = 3; Min: 3.11 / Max: 3.12; MIN: 3.08 / MAX: 3.42)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
Linux 5.4: 11.28 (SE +/- 0.01, N = 3; Min: 11.27 / Max: 11.3; MIN: 11.22 / MAX: 29.64)
Linux 5.8: 11.30 (SE +/- 0.01, N = 3; Min: 11.28 / Max: 11.32; MIN: 11.25 / MAX: 15.69)
Linux 5.9 Git: 11.32 (SE +/- 0.01, N = 3; Min: 11.3 / Max: 11.34; MIN: 11.25 / MAX: 16.04)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, Fewer Is Better)
Linux 5.4: 31.74 (SE +/- 0.01, N = 3; Min: 31.71 / Max: 31.75)
Linux 5.8: 31.83 (SE +/- 0.02, N = 3; Min: 31.8 / Max: 31.85)
Linux 5.9 Git: 31.83 (SE +/- 0.01, N = 3; Min: 31.81 / Max: 31.84)
1. (CXX) g++ options: -O2 -lOpenCL

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17 - Input: Cylinder (Seconds, Fewer Is Better)
Linux 5.4: 872.99 (SE +/- 1.86, N = 3; Min: 869.71 / Max: 876.14)
Linux 5.8: 875.00 (SE +/- 0.99, N = 3; Min: 873.79 / Max: 876.97)
Linux 5.9 Git: 875.55 (SE +/- 2.32, N = 3; Min: 872.86 / Max: 880.18)
1. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
Linux 5.4: 17.53 (SE +/- 0.01, N = 3; Min: 17.52 / Max: 17.55)
Linux 5.8: 17.51 (SE +/- 0.03, N = 3; Min: 17.45 / Max: 17.54)
Linux 5.9 Git: 17.48 (SE +/- 0.09, N = 3; Min: 17.31 / Max: 17.57)
1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, Fewer Is Better)
Linux 5.4: 345.63 (SE +/- 0.00, N = 3; Min: 345.62 / Max: 345.64)
Linux 5.8: 346.61 (SE +/- 0.88, N = 3; Min: 345.7 / Max: 348.37)
Linux 5.9 Git: 346.51 (SE +/- 0.77, N = 3; Min: 345.64 / Max: 348.04)
1. (CXX) g++ options: -O2 -lOpenCL

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
Linux 5.4: 25.20 (SE +/- 0.01, N = 3; Min: 25.19 / Max: 25.21; MIN: 25.14 / MAX: 29.73)
Linux 5.8: 25.27 (SE +/- 0.03, N = 3; Min: 25.24 / Max: 25.32; MIN: 25.19 / MAX: 27.03)
Linux 5.9 Git: 25.27 (SE +/- 0.03, N = 3; Min: 25.23 / Max: 25.32; MIN: 25.17 / MAX: 45.22)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

libavif avifenc

This test of the AOMedia libavif library measures the encoding of a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 8 (Seconds, Fewer Is Better)
Linux 5.4: 15.39 (SE +/- 0.03, N = 3; Min: 15.35 / Max: 15.45)
Linux 5.8: 15.39 (SE +/- 0.01, N = 3; Min: 15.37 / Max: 15.42)
Linux 5.9 Git: 15.43 (SE +/- 0.01, N = 3; Min: 15.41 / Max: 15.45)
1. (CXX) g++ options: -O3 -fPIC

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable, searchable, and copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 9.6.0+dfsg - Processing 60 Page PDF Document (Seconds, Fewer Is Better)
Linux 5.4: 78.80 (SE +/- 0.08, N = 3; Min: 78.64 / Max: 78.91)
Linux 5.8: 79.01 (SE +/- 0.16, N = 3; Min: 78.84 / Max: 79.32)
Linux 5.9 Git: 78.98 (SE +/- 0.14, N = 3; Min: 78.83 / Max: 79.27)

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
Linux 5.4: 742413.4 (SE +/- 1022.51, N = 3; Min: 740374.1 / Max: 743565.8)
Linux 5.8: 743354.9 (SE +/- 972.85, N = 3; Min: 741997.4 / Max: 745240.8)
Linux 5.9 Git: 741552.5 (SE +/- 407.91, N = 3; Min: 740987.1 / Max: 742344.5)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
Linux 5.4: 18.39 (SE +/- 0.01, N = 3; Min: 18.37 / Max: 18.4)
Linux 5.8: 18.43 (SE +/- 0.04, N = 3; Min: 18.39 / Max: 18.5)
Linux 5.9 Git: 18.39 (SE +/- 0.00, N = 3; Min: 18.39 / Max: 18.4)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, Fewer Is Better)
Linux 5.4: 55.38 (SE +/- 0.05, N = 3; Min: 55.29 / Max: 55.47)
Linux 5.8: 55.25 (SE +/- 0.14, N = 3; Min: 55.04 / Max: 55.51)
Linux 5.9 Git: 55.37 (SE +/- 0.13, N = 3; Min: 55.21 / Max: 55.63)
1. (CXX) g++ options: -O2 -lOpenCL

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, Fewer Is Better)
Linux 5.4: 6.080 (SE +/- 0.020, N = 3; Min: 6.06 / Max: 6.12; MIN: 6.02 / MAX: 41.2)
Linux 5.8: 6.066 (SE +/- 0.009, N = 3; Min: 6.05 / Max: 6.08; MIN: 6.02 / MAX: 26.4)
Linux 5.9 Git: 6.067 (SE +/- 0.004, N = 3; Min: 6.06 / Max: 6.07; MIN: 6.02 / MAX: 10.7)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 24Aug2020 - Model: Rhodopsin Protein (ns/day, More Is Better)
Linux 5.4: 2.671 (SE +/- 0.004, N = 3; Min: 2.67 / Max: 2.68)
Linux 5.8: 2.677 (SE +/- 0.003, N = 3; Min: 2.67 / Max: 2.68)
Linux 5.9 Git: 2.671 (SE +/- 0.002, N = 3; Min: 2.67 / Max: 2.68)
1. (CXX) g++ options: -O3 -pthread -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
Linux 5.4: 8.96 (SE +/- 0.02, N = 3; Min: 8.93 / Max: 9; MIN: 8.9 / MAX: 9.11)
Linux 5.8: 8.98 (SE +/- 0.01, N = 3; Min: 8.96 / Max: 8.99; MIN: 8.92 / MAX: 10.49)
Linux 5.9 Git: 8.98 (SE +/- 0.01, N = 3; Min: 8.96 / Max: 9.01; MIN: 8.93 / MAX: 10.38)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0 (Seconds, Fewer Is Better)
Linux 5.4: 462 (SE +/- 0.33, N = 3; Min: 461 / Max: 462)
Linux 5.8: 462
Linux 5.9 Git: 463
1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, Fewer Is Better)
  Linux 5.4:     28.46 (SE +/- 0.04, N = 3; Min: 28.41 / Avg: 28.46 / Max: 28.53; MIN: 28.36 / MAX: 48.38)
  Linux 5.8:     28.51 (SE +/- 0.04, N = 3; Min: 28.45 / Avg: 28.51 / Max: 28.58; MIN: 28.4 / MAX: 29.9)
  Linux 5.9 Git: 28.45 (SE +/- 0.02, N = 3; Min: 28.43 / Avg: 28.45 / Max: 28.48; MIN: 28.37 / MAX: 29.84)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, Fewer Is Better)
  Linux 5.4:     417044 (SE +/- 45.23, N = 3; Min: 416975 / Avg: 417043.67 / Max: 417129)
  Linux 5.8:     417737 (SE +/- 235.80, N = 3; Min: 417417 / Avg: 417737 / Max: 418197)
  Linux 5.9 Git: 417670 (SE +/- 202.63, N = 3; Min: 417401 / Avg: 417670 / Max: 418067)
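The TensorFlow Lite test profile reports an average inference time over repeated passes. A minimal sketch of that measurement pattern follows; the workload is a stand-in, not an actual TensorFlow Lite interpreter:

```python
import time

def average_inference_us(run_once, iterations=50):
    """Time a callable repeatedly and return the mean latency in microseconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        run_once()
    elapsed = time.perf_counter() - start
    return elapsed / iterations * 1e6

# Stand-in workload; a real harness would call the model's invoke step here
latency = average_inference_us(lambda: sum(range(1000)))
print(latency > 0)
```

Averaging over many iterations is what keeps the SE figures small relative to the multi-second totals shown here.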

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, Fewer Is Better)
  Linux 5.4:     52.36 (SE +/- 0.02, N = 3; Min: 52.33 / Avg: 52.36 / Max: 52.4; MIN: 52.23 / MAX: 70.64)
  Linux 5.8:     52.44 (SE +/- 0.03, N = 3; Min: 52.38 / Avg: 52.44 / Max: 52.5; MIN: 52.26 / MAX: 71.79)
  Linux 5.9 Git: 52.41 (SE +/- 0.02, N = 3; Min: 52.38 / Avg: 52.41 / Max: 52.44; MIN: 52.29 / MAX: 72.44)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better)
  Linux 5.4:     577135 (SE +/- 51.82, N = 3; Min: 577035 / Avg: 577135.33 / Max: 577208)
  Linux 5.8:     578019 (SE +/- 67.78, N = 3; Min: 577887 / Avg: 578019.33 / Max: 578111)
  Linux 5.9 Git: 577584 (SE +/- 184.27, N = 3; Min: 577277 / Avg: 577583.67 / Max: 577914)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, Fewer Is Better)
  Linux 5.4:     4.074 (SE +/- 0.004, N = 3; Min: 4.07 / Avg: 4.07 / Max: 4.08)
  Linux 5.8:     4.068 (SE +/- 0.002, N = 3; Min: 4.06 / Avg: 4.07 / Max: 4.07)
  Linux 5.9 Git: 4.073 (SE +/- 0.002, N = 3; Min: 4.07 / Avg: 4.07 / Max: 4.08)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Hugin

Hugin is an open-source, cross-platform panorama photo stitching software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better)
  Linux 5.4:     135.74 (SE +/- 0.55, N = 3; Min: 134.76 / Avg: 135.74 / Max: 136.67)
  Linux 5.8:     135.76 (SE +/- 1.01, N = 3; Min: 133.75 / Avg: 135.76 / Max: 136.96)
  Linux 5.9 Git: 135.56 (SE +/- 0.30, N = 3; Min: 134.98 / Avg: 135.56 / Max: 135.99)

libavif avifenc

This is a test of the AOMedia libavif library, timing the encoding of a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 0 (Seconds, Fewer Is Better)
  Linux 5.4:     288.14 (SE +/- 0.21, N = 3; Min: 287.75 / Avg: 288.14 / Max: 288.47)
  Linux 5.8:     288.18 (SE +/- 0.26, N = 3; Min: 287.7 / Avg: 288.18 / Max: 288.57)
  Linux 5.9 Git: 287.76 (SE +/- 0.21, N = 3; Min: 287.37 / Avg: 287.76 / Max: 288.08)
  1. (CXX) g++ options: -O3 -fPIC

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 20.1 - Input: Carbon Nanotube (Seconds, Fewer Is Better)
  Linux 5.4:     535.09 (SE +/- 0.98, N = 3; Min: 533.34 / Avg: 535.09 / Max: 536.73)
  Linux 5.8:     535.47 (SE +/- 1.12, N = 3; Min: 534.31 / Avg: 535.47 / Max: 537.71)
  Linux 5.9 Git: 534.73 (SE +/- 1.02, N = 3; Min: 533.31 / Avg: 534.73 / Max: 536.71)
  1. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  Linux 5.4:     28.86 (SE +/- 0.03, N = 3; Min: 28.8 / Avg: 28.86 / Max: 28.89; MIN: 28.74 / MAX: 33.45)
  Linux 5.8:     28.89 (SE +/- 0.04, N = 3; Min: 28.84 / Avg: 28.89 / Max: 28.96; MIN: 28.78 / MAX: 34.47)
  Linux 5.9 Git: 28.90 (SE +/- 0.02, N = 3; Min: 28.86 / Avg: 28.9 / Max: 28.93; MIN: 28.78 / MAX: 45.86)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Rodinia

Rodinia is a suite focused on accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile currently utilizes select OpenCL, NVIDIA CUDA, and OpenMP test binaries. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better)
  Linux 5.4:     843.77 (SE +/- 0.79, N = 3; Min: 842.33 / Avg: 843.77 / Max: 845.05)
  Linux 5.8:     842.63 (SE +/- 0.07, N = 3; Min: 842.5 / Avg: 842.63 / Max: 842.71)
  Linux 5.9 Git: 842.88 (SE +/- 0.16, N = 3; Min: 842.7 / Avg: 842.88 / Max: 843.2)
  1. (CXX) g++ options: -O2 -lOpenCL

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  Linux 5.4:     53.34 (SE +/- 0.02, N = 3; Min: 53.29 / Avg: 53.34 / Max: 53.37; MIN: 52.79 / MAX: 56.05)
  Linux 5.8:     53.40 (SE +/- 0.04, N = 3; Min: 53.33 / Avg: 53.4 / Max: 53.48; MIN: 52.34 / MAX: 76.63)
  Linux 5.9 Git: 53.33 (SE +/- 0.02, N = 3; Min: 53.31 / Avg: 53.33 / Max: 53.37; MIN: 52.26 / MAX: 77.75)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Thorough (Seconds, Fewer Is Better)
  Linux 5.4:     100.66 (SE +/- 0.01, N = 3; Min: 100.64 / Avg: 100.66 / Max: 100.69)
  Linux 5.8:     100.76 (SE +/- 0.03, N = 3; Min: 100.72 / Avg: 100.76 / Max: 100.83)
  Linux 5.9 Git: 100.79 (SE +/- 0.02, N = 3; Min: 100.76 / Avg: 100.79 / Max: 100.81)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database, optimized for fast, high-availability storage for IoT and other use cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Linux 5.4:     732223.0 (SE +/- 1047.29, N = 3; Min: 730264.1 / Avg: 732222.97 / Max: 733844.7)
  Linux 5.8:     731570.7 (SE +/- 1935.82, N = 3; Min: 727751 / Avg: 731570.7 / Max: 734027.9)
  Linux 5.9 Git: 731282.1 (SE +/- 1289.71, N = 3; Min: 728771.1 / Avg: 731282.07 / Max: 733048.8)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
  Linux 5.4:     5.00531 (SE +/- 0.00188, N = 3; Min: 5 / Avg: 5.01 / Max: 5.01)
  Linux 5.8:     5.00308 (SE +/- 0.00438, N = 3; Min: 5 / Avg: 5 / Max: 5.01)
  Linux 5.9 Git: 4.99935 (SE +/- 0.00183, N = 3; Min: 5 / Avg: 5 / Max: 5)
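NAMD reports days/ns (lower is better), which is the reciprocal of the ns/day figure used by LAMMPS earlier in this file; the roughly 5.0 days/ns results here correspond to about 0.2 ns/day:

```python
def days_per_ns_to_ns_per_day(days_per_ns):
    # The two units are exact reciprocals: ns/day = 1 / (days/ns)
    return 1.0 / days_per_ns

# Linux 5.4 result from the table above
print(round(days_per_ns_to_ns_per_day(5.00531), 3))  # 0.2
```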

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  Linux 5.4:     9.42 (SE +/- 0.02, N = 3; Min: 9.38 / Avg: 9.42 / Max: 9.44; MIN: 9.3 / MAX: 12.36)
  Linux 5.8:     9.43 (SE +/- 0.01, N = 3; Min: 9.41 / Avg: 9.43 / Max: 9.46; MIN: 9.29 / MAX: 14.55)
  Linux 5.9 Git: 9.43 (SE +/- 0.01, N = 3; Min: 9.42 / Avg: 9.43 / Max: 9.44; MIN: 9.28 / MAX: 14.87)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 3D Elevated Function In Random Colors, 100 Times (Seconds, Fewer Is Better)
  Linux 5.4:     158.37 (SE +/- 0.09, N = 3; Min: 158.2 / Avg: 158.37 / Max: 158.48)
  Linux 5.8:     158.31 (SE +/- 0.08, N = 3; Min: 158.22 / Avg: 158.31 / Max: 158.47)
  Linux 5.9 Git: 158.47 (SE +/- 0.18, N = 3; Min: 158.14 / Avg: 158.47 / Max: 158.74)
  1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better)
  Linux 5.4:     41.14 (SE +/- 0.01, N = 3; Min: 41.13 / Avg: 41.14 / Max: 41.16)
  Linux 5.8:     41.16 (SE +/- 0.02, N = 3; Min: 41.13 / Avg: 41.16 / Max: 41.18)
  Linux 5.9 Git: 41.12 (SE +/- 0.02, N = 3; Min: 41.09 / Avg: 41.12 / Max: 41.17)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  Linux 5.4:     47.01 (SE +/- 0.01, N = 3; Min: 47 / Avg: 47.01 / Max: 47.02; MIN: 46.82 / MAX: 49.69)
  Linux 5.8:     47.05 (SE +/- 0.02, N = 3; Min: 47.03 / Avg: 47.05 / Max: 47.08; MIN: 46.82 / MAX: 52.42)
  Linux 5.9 Git: 47.05 (SE +/- 0.01, N = 3; Min: 47.03 / Avg: 47.05 / Max: 47.08; MIN: 46.87 / MAX: 49.72)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
  Linux 5.4:     7500980 (SE +/- 799.08, N = 3; Min: 7499840 / Avg: 7500980 / Max: 7502520)
  Linux 5.8:     7506603 (SE +/- 539.42, N = 3; Min: 7505930 / Avg: 7506603.33 / Max: 7507670)
  Linux 5.9 Git: 7505033 (SE +/- 362.87, N = 3; Min: 7504330 / Avg: 7505033.33 / Max: 7505540)

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better)
  Linux 5.4:     8290180 (SE +/- 801.81, N = 3; Min: 8288710 / Avg: 8290180 / Max: 8291470)
  Linux 5.8:     8295930 (SE +/- 596.32, N = 3; Min: 8295190 / Avg: 8295930 / Max: 8297110)
  Linux 5.9 Git: 8295183 (SE +/- 909.11, N = 3; Min: 8294210 / Avg: 8295183.33 / Max: 8297000)

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, Fewer Is Better)
  Linux 5.4:     798.09 (SE +/- 0.16, N = 3; Min: 797.85 / Avg: 798.09 / Max: 798.38)
  Linux 5.8:     798.28 (SE +/- 0.12, N = 3; Min: 798.1 / Avg: 798.28 / Max: 798.51)
  Linux 5.9 Git: 798.64 (SE +/- 0.01, N = 3; Min: 798.63 / Avg: 798.64 / Max: 798.65)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

ASTC Encoder 2.0 - Preset: Medium (Seconds, Fewer Is Better)
  Linux 5.4:     16.21 (SE +/- 0.00, N = 3; Min: 16.2 / Avg: 16.21 / Max: 16.21)
  Linux 5.8:     16.22 (SE +/- 0.01, N = 3; Min: 16.2 / Avg: 16.22 / Max: 16.24)
  Linux 5.9 Git: 16.22 (SE +/- 0.02, N = 3; Min: 16.2 / Avg: 16.22 / Max: 16.26)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
  Linux 5.4:     640.79 (SE +/- 0.05, N = 3; Min: 640.72 / Avg: 640.79 / Max: 640.89; MIN: 640.35 / MAX: 641.12)
  Linux 5.8:     640.46 (SE +/- 0.21, N = 3; Min: 640.09 / Avg: 640.46 / Max: 640.8; MIN: 639.61 / MAX: 641.2)
  Linux 5.9 Git: 640.52 (SE +/- 0.21, N = 3; Min: 640.12 / Avg: 640.52 / Max: 640.82; MIN: 639.34 / MAX: 641.24)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
  Linux 5.4:     387769 (SE +/- 136.08, N = 3; Min: 387539 / Avg: 387769 / Max: 388010)
  Linux 5.8:     387899 (SE +/- 81.19, N = 3; Min: 387762 / Avg: 387899 / Max: 388043)
  Linux 5.9 Git: 387939 (SE +/- 80.41, N = 3; Min: 387786 / Avg: 387939.33 / Max: 388058)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
  Linux 5.4:     391962 (SE +/- 46.31, N = 3; Min: 391876 / Avg: 391961.67 / Max: 392035)
  Linux 5.8:     392109 (SE +/- 63.01, N = 3; Min: 391986 / Avg: 392108.67 / Max: 392195)
  Linux 5.9 Git: 392131 (SE +/- 71.27, N = 3; Min: 391989 / Avg: 392131.33 / Max: 392209)

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.

System GZIP Decompression (Seconds, Fewer Is Better)
  Linux 5.4:     7.447 (SE +/- 0.091, N = 3; Min: 7.35 / Avg: 7.45 / Max: 7.63)
  Linux 5.8:     7.448 (SE +/- 0.085, N = 3; Min: 7.35 / Avg: 7.45 / Max: 7.62)
  Linux 5.9 Git: 7.445 (SE +/- 0.087, N = 3; Min: 7.35 / Avg: 7.45 / Max: 7.62)
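The same kind of measurement can be sketched with Python's gzip module; here an in-memory payload stands in for the Qt5 source tarball used by the actual test:

```python
import gzip
import time

# Stand-in payload; the real test decompresses the Qt5 toolkit source package
payload = b"qt5-source-placeholder " * 100_000
compressed = gzip.compress(payload)

# Time only the decompression step, as the test profile does
start = time.perf_counter()
decompressed = gzip.decompress(compressed)
elapsed = time.perf_counter() - start

print(decompressed == payload, elapsed >= 0)
```

On a payload this small the elapsed time is milliseconds; the ~7.4 s results above reflect the much larger real tarball.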

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better)
  Linux 5.4:     1.47 (SE +/- 0.00, N = 3; Min: 1.47 / Avg: 1.47 / Max: 1.47)
  Linux 5.8:     1.47 (SE +/- 0.00, N = 3; Min: 1.46 / Avg: 1.47 / Max: 1.47)
  Linux 5.9 Git: 1.47 (SE +/- 0.00, N = 3; Min: 1.46 / Avg: 1.47 / Max: 1.47)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 4 Two-Pass (Frames Per Second, More Is Better)
  Linux 5.4:     0.92 (SE +/- 0.00, N = 3; Min: 0.92 / Avg: 0.92 / Max: 0.92)
  Linux 5.8:     0.92 (SE +/- 0.00, N = 3; Min: 0.92 / Avg: 0.92 / Max: 0.92)
  Linux 5.9 Git: 0.92 (SE +/- 0.00, N = 3; Min: 0.92 / Avg: 0.92 / Max: 0.92)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 0 Two-Pass (Frames Per Second, More Is Better)
  Linux 5.4:     0.11 (SE +/- 0.00, N = 3; Min: 0.11 / Avg: 0.11 / Max: 0.11)
  Linux 5.8:     0.11 (SE +/- 0.00, N = 3; Min: 0.11 / Avg: 0.11 / Max: 0.11)
  Linux 5.9 Git: 0.11 (SE +/- 0.00, N = 3; Min: 0.11 / Avg: 0.11 / Max: 0.11)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Linux 5.4:     51.47 (SE +/- 0.52, N = 3; Min: 50.46 / Avg: 51.47 / Max: 52.15)
  Linux 5.8:     47.33 (SE +/- 0.68, N = 3; Min: 45.99 / Avg: 47.33 / Max: 48.13)
  Linux 5.9 Git: 52.72 (SE +/- 1.18, N = 15; Min: 46.73 / Avg: 52.72 / Max: 62.34)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, More Is Better)
  Linux 5.4:     4859 (SE +/- 49.45, N = 3; Min: 4795.38 / Avg: 4859.39 / Max: 4956.68)
  Linux 5.8:     5286 (SE +/- 76.53, N = 3; Min: 5195.41 / Avg: 5285.67 / Max: 5437.85)
  Linux 5.9 Git: 4775 (SE +/- 100.33, N = 15; Min: 4011.82 / Avg: 4774.86 / Max: 5351.6)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
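The pgbench average-latency and TPS figures are two views of the same run: with C concurrent clients, average latency is approximately C / TPS (Little's law). A quick sanity check against the Linux 5.4 numbers above (250 clients, ~4859 TPS, reported latency 51.47 ms):

```python
def avg_latency_ms(clients, tps):
    # Little's law: mean latency = concurrency / throughput
    return clients / tps * 1000

print(round(avg_latency_ms(250, 4859), 2))  # 51.45
```

The computed 51.45 ms is within rounding of the reported 51.47 ms average latency.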

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Linux 5.4:     21.09 (SE +/- 0.36, N = 15; Min: 19.14 / Avg: 21.09 / Max: 23.94)
  Linux 5.8:     21.21 (SE +/- 0.06, N = 3; Min: 21.12 / Avg: 21.21 / Max: 21.31)
  Linux 5.9 Git: 20.56 (SE +/- 0.28, N = 13; Min: 19.14 / Avg: 20.55 / Max: 22.35)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, More Is Better)
  Linux 5.4:     4763 (SE +/- 80.37, N = 15; Min: 4177.73 / Avg: 4762.64 / Max: 5226.27)
  Linux 5.8:     4718 (SE +/- 12.53, N = 3; Min: 4693.59 / Avg: 4717.62 / Max: 4735.79)
  Linux 5.9 Git: 4878 (SE +/- 66.48, N = 13; Min: 4476.2 / Avg: 4877.85 / Max: 5227.84)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

103 Results Shown

Apache CouchDB
OpenCV
PostgreSQL pgbench:
  1 - 250 - Read Write
  1 - 250 - Read Write - Average Latency
  1 - 50 - Read Write - Average Latency
  1 - 50 - Read Write
  1 - 100 - Read Write
  1 - 100 - Read Write - Average Latency
  100 - 250 - Read Only - Average Latency
  100 - 250 - Read Only
  100 - 50 - Read Write
  100 - 50 - Read Write - Average Latency
  1 - 250 - Read Only - Average Latency
  1 - 250 - Read Only
LeelaChessZero
PostgreSQL pgbench:
  100 - 100 - Read Only - Average Latency
  1 - 50 - Read Only - Average Latency
  100 - 100 - Read Only
  1 - 50 - Read Only
  100 - 50 - Read Only - Average Latency
  100 - 50 - Read Only
NCNN
AOM AV1
LeelaChessZero
PostgreSQL pgbench:
  1 - 100 - Read Only - Average Latency
  1 - 100 - Read Only
ASTC Encoder
Timed Linux Kernel Compilation
AI Benchmark Alpha:
  Device Training Score
  Device Inference Score
  Device AI Score
Mobile Neural Network
Timed Apache Compilation
Rodinia
GROMACS
NCNN
libavif avifenc
NCNN
WebP Image Encode
AOM AV1
eSpeak-NG Speech Engine
TNN
Mobile Neural Network
Zstd Compression
InfluxDB
libavif avifenc
NCNN:
  CPU-v3-v3 - mobilenet-v3
  CPU - efficientnet-b0
LAMMPS Molecular Dynamics Simulator
NCNN
Zstd Compression
Tesseract OCR
WebP Image Encode
Montage Astronomical Image Mosaic Engine
NCNN
Mobile Neural Network
Rodinia
Incompact3D
LibRaw
Rodinia
NCNN
libavif avifenc
OCRMyPDF
InfluxDB
WebP Image Encode
Rodinia
Mobile Neural Network
LAMMPS Molecular Dynamics Simulator
NCNN
Monte Carlo Simulations of Ionised Nebulae
NCNN
TensorFlow Lite
Mobile Neural Network
TensorFlow Lite
WebP Image Encode
Hugin
libavif avifenc
GPAW
NCNN
Rodinia
NCNN
ASTC Encoder
InfluxDB
NAMD
NCNN
G'MIC
WebP Image Encode
NCNN
TensorFlow Lite:
  Inception ResNet V2
  Inception V4
ASTC Encoder:
  Exhaustive
  Medium
TNN
TensorFlow Lite:
  Mobilenet Float
  Mobilenet Quant
System GZIP Decompression
AOM AV1:
  Speed 6 Two-Pass
  Speed 4 Two-Pass
  Speed 0 Two-Pass
PostgreSQL pgbench:
  100 - 250 - Read Write - Average Latency
  100 - 250 - Read Write
  100 - 100 - Read Write - Average Latency
  100 - 100 - Read Write