Core i7 5775C Perf In September 2020

Intel Core i7-5775C testing with an MSI Z97-G45 GAMING (MS-7821) v1.0 (V2.9 BIOS) motherboard and MSI Intel Iris Pro 6200 3GB graphics on Ubuntu 18.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2009259-FI-COREI757768
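
For reference, a minimal sketch of that comparison workflow on a Debian/Ubuntu-type system (installing the phoronix-test-suite package this way is one common route and an assumption here; only the result identifier itself is taken from this file):

    # Install the Phoronix Test Suite (adjust for your distribution or use the upstream installer)
    sudo apt-get install phoronix-test-suite

    # Fetch this result file and run the same tests locally for a side-by-side comparison
    phoronix-test-suite benchmark 2009259-FI-COREI757768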
Tests in this comparison fall within the following OpenBenchmarking.org categories:

AV1 3 Tests
Timed Code Compilation 3 Tests
C/C++ Compiler Tests 6 Tests
Compression Tests 3 Tests
CPU Massive 13 Tests
Creator Workloads 13 Tests
Encoding 3 Tests
Fortran Tests 3 Tests
HPC - High Performance Computing 12 Tests
Imaging 4 Tests
Java 3 Tests
Machine Learning 5 Tests
Molecular Dynamics 4 Tests
MPI Benchmarks 5 Tests
Multi-Core 13 Tests
NVIDIA GPU Compute 4 Tests
OpenMPI Tests 5 Tests
Programmer / Developer System Benchmarks 5 Tests
Python Tests 3 Tests
Scientific Computing 7 Tests
Server CPU Tests 8 Tests
Video Encoding 3 Tests


Test Runs

Result Identifier | Date | Run Test Duration
Run 1 | September 23 2020 | 12 Hours, 56 Minutes
Run 2 | September 24 2020 | 13 Hours, 1 Minute
Run 3 | September 24 2020 | 12 Hours, 20 Minutes

Average test duration across the three runs: 12 Hours, 46 Minutes



Core i7 5775C Perf In September 2020 - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: Intel Core i7-5775C @ 3.70GHz (4 Cores / 8 Threads)
Motherboard: MSI Z97-G45 GAMING (MS-7821) v1.0 (V2.9 BIOS)
Chipset: Intel Broadwell-U DMI
Memory: 16GB
Disk: 120GB CT120BX100SSD1
Graphics: MSI Intel Iris Pro 6200 3GB (1150MHz)
Audio: Intel Broadwell-U Audio
Monitor: VA2431
Network: Qualcomm Atheros Killer E220x
OS: Ubuntu 18.10
Kernel: 5.0.0-999-generic (x86_64) 20190223
Desktop: GNOME Shell 3.30.2
Display Server: X Server 1.20.1
Display Driver: modesetting 1.20.1
OpenGL: 4.5 Mesa 19.2.0-devel (git-2631fd3 2019-07-24 cosmic-oibaf-ppa)
Vulkan: 1.1.102
Compiler: GCC 8.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Notes:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Processor notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x20
- Java: OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.10.1)
- Python: Python 2.7.16 + Python 3.6.8
- Security: l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling

Result Overview (Phoronix Test Suite / OpenBenchmarking.org; relative performance of Run 1, Run 2 and Run 3 across roughly 100% to 111%) covering: OpenCV, Build2, NeatBench, oneDNN, Java Gradle Build, System GZIP Decompression, C-Blosc, Timed Linux Kernel Compilation, LibRaw, InfluxDB, Hugin, Mobile Neural Network, MPV, eSpeak-NG Speech Engine, GNU Octave Benchmark, GPAW, GLmark2, Monte Carlo Simulations of Ionised Nebulae, Zstd Compression, NCNN, NAMD, libavif avifenc, Stress-NG, Montage Astronomical Image Mosaic Engine, Timed Apache Compilation, BRL-CAD, AOM AV1, TensorFlow Lite, LAMMPS Molecular Dynamics Simulator, GROMACS, OCRMyPDF, SVT-AV1, ASTC Encoder, DaCapo Benchmark, Incompact3D, LuxCoreRender, Renaissance.

[Consolidated results table: each test and configuration in this comparison with its Run 1, Run 2 and Run 3 values. The individual results are presented per test in the sections below.]

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.
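
As a rough illustration only (not the exact command this test profile runs, and with placeholder file names), a two-pass CPU encode with libaom's aomenc at a given speed level looks something like:

    # Speed 0 two-pass encode of a sample clip (a higher --cpu-used value trades quality for speed)
    aomenc --passes=2 --cpu-used=0 --limit=100 -o sample_av1.ivf sample_input.y4m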

AOM AV1 2.0 (OpenBenchmarking.org; Frames Per Second, More Is Better)
  Encoder Mode: Speed 0 Two-Pass - Run 1: 0.2, Run 2: 0.2, Run 3: 0.2 (SE +/- 0.00 each, N = 3)
  Encoder Mode: Speed 4 Two-Pass - Run 1: 1.65, Run 2: 1.66, Run 3: 1.66 (SE +/- 0.00 each, N = 3)
  Encoder Mode: Speed 6 Realtime - Run 1: 13.44 (SE +/- 0.06), Run 2: 13.47 (SE +/- 0.05), Run 3: 13.47 (SE +/- 0.06); N = 3
  Encoder Mode: Speed 6 Two-Pass - Run 1: 2.64 (SE +/- 0.00), Run 2: 2.66 (SE +/- 0.00), Run 3: 2.65 (SE +/- 0.01); N = 3
  Encoder Mode: Speed 8 Realtime - Run 1: 33.82 (SE +/- 0.05), Run 2: 33.67 (SE +/- 0.23), Run 3: 33.83 (SE +/- 0.06); N = 3
  Compiler notes: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Preset: Fast - Run 1: 8.40 (SE +/- 0.04), Run 2: 8.41 (SE +/- 0.01), Run 3: 8.42 (SE +/- 0.01); N = 3
  Preset: Medium - Run 1: 10.93, Run 2: 10.92, Run 3: 10.92 (SE +/- 0.00 each, N = 3)
  Preset: Thorough - Run 1: 72.66, Run 2: 72.61, Run 3: 72.61 (SE +/- 0.01 each, N = 3)
  Preset: Exhaustive - Run 1: 589.16 (SE +/- 0.06), Run 2: 588.40 (SE +/- 0.20), Run 3: 588.48 (SE +/- 0.06); N = 3
  Compiler notes: (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 (OpenBenchmarking.org; VGR Performance Metric, More Is Better)
  VGR Performance Metric - Run 1: 45474, Run 2: 45358, Run 3: 45365
  Compiler notes: (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like functionality. Learn more via the OpenBenchmarking.org test page.

Build2 0.12 (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Time To Compile - Run 1: 215.35 (SE +/- 3.72), Run 2: 209.97 (SE +/- 1.30), Run 3: 208.53 (SE +/- 1.52); N = 3

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0 Beta 5 (OpenBenchmarking.org; MB/s, More Is Better)
  Compressor: blosclz - Run 1: 6788.0 (SE +/- 15.22), Run 2: 6787.7 (SE +/- 11.18), Run 3: 6887.9 (SE +/- 12.72); N = 3
  Compiler notes: (CXX) g++ options: -rdynamic

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 (OpenBenchmarking.org; msec, Fewer Is Better)
  Java Test: H2 - Run 1: 3356 (SE +/- 28.03), Run 2: 3364 (SE +/- 31.72), Run 3: 3332 (SE +/- 28.23); N = 20
  Java Test: Jython - Run 1: 5257 (SE +/- 47.91), Run 2: 5242 (SE +/- 72.34), Run 3: 5228 (SE +/- 55.50); N = 4
  Java Test: Tradesoap - Run 1: 7852 (SE +/- 34.21, N = 4), Run 2: 7871 (SE +/- 81.37, N = 4), Run 3: 7942 (SE +/- 38.63, N = 3)
  Java Test: Tradebeans - Run 1: 4573 (SE +/- 49.10, N = 10), Run 2: 4621 (SE +/- 52.68, N = 3), Run 3: 4638 (SE +/- 104.15, N = 4)

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Text-To-Speech Synthesis - Run 1: 40.64 (SE +/- 0.35, N = 17), Run 2: 40.76 (SE +/- 0.44, N = 4), Run 3: 40.46 (SE +/- 0.37, N = 4)
  Compiler notes: (CC) gcc options: -O2 -std=c99

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 (OpenBenchmarking.org; Score, More Is Better)
  Resolution: 1920 x 1080 - Run 1: 985, Run 2: 982, Run 3: 988

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 4.4.1 (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Run 1: 8.707 (SE +/- 0.037), Run 2: 8.653 (SE +/- 0.039), Run 3: 8.710 (SE +/- 0.042); N = 5

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 20.1 (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Input: Carbon Nanotube - Run 1: 653.11 (SE +/- 0.92), Run 2: 648.61 (SE +/- 0.96), Run 3: 648.33 (SE +/- 0.86); N = 3
  Compiler notes: (CC) gcc options: -pthread -shared -lxc -lblas -lmpi

GROMACS

This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.1 (OpenBenchmarking.org; Ns Per Day, More Is Better)
  Water Benchmark - Run 1: 0.485 (SE +/- 0.001), Run 2: 0.484 (SE +/- 0.002), Run 3: 0.485 (SE +/- 0.002); N = 3
  Compiler notes: (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Panorama Photo Assistant + Stitching Time - Run 1: 81.91 (SE +/- 0.44), Run 2: 82.23 (SE +/- 0.19), Run 3: 82.74 (SE +/- 0.57); N = 3

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17 (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Input: Cylinder - Run 1: 769.23 (SE +/- 0.56), Run 2: 768.57 (SE +/- 0.27), Run 3: 766.92 (SE +/- 0.78); N = 3
  Compiler notes: (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
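
To make the configuration strings below concrete, a generator run of this shape with the inch tool would look roughly like the following; this is a hedged sketch (the exact flags and defaults used by the test profile may differ), with the concurrency, batch size, tag cardinality and points-per-series values taken from the first configuration below:

    # Write-load generation against a local InfluxDB instance (illustrative only)
    inch -c 4 -b 10000 -t 2,5000,1 -p 10000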

InfluxDB 1.8.2 (OpenBenchmarking.org; val/sec, More Is Better)
  Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - Run 1: 855281.8 (SE +/- 12233.47), Run 2: 852344.4 (SE +/- 3868.34), Run 3: 838679.5 (SE +/- 3971.45); N = 3
  Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - Run 1: 909598.9 (SE +/- 5150.33), Run 2: 915807.8 (SE +/- 3710.45), Run 3: 903216.0 (SE +/- 2328.20); N = 3
  Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - Run 1: 971205.0 (SE +/- 4110.61), Run 2: 982263.6 (SE +/- 3299.07), Run 3: 982578.8 (SE +/- 4766.81); N = 3

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Gradle Build: Reactor - Run 1: 265.78 (SE +/- 3.33, N = 7), Run 2: 260.30 (SE +/- 3.55, N = 9), Run 3: 260.04 (SE +/- 4.63, N = 9)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 24Aug2020 (OpenBenchmarking.org; ns/day, More Is Better)
  Model: Rhodopsin Protein - Run 1: 2.864 (SE +/- 0.012), Run 2: 2.866 (SE +/- 0.007), Run 3: 2.870 (SE +/- 0.014); N = 3
  Compiler notes: (CXX) g++ options: -O3 -pthread -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Encoder Speed: 0 - Run 1: 195.32 (SE +/- 0.31), Run 2: 195.31 (SE +/- 0.22), Run 3: 195.39 (SE +/- 0.30); N = 3
  Encoder Speed: 2 - Run 1: 115.30 (SE +/- 0.10), Run 2: 115.44 (SE +/- 0.16), Run 3: 114.99 (SE +/- 0.03); N = 3
  Encoder Speed: 8 - Run 1: 8.517 (SE +/- 0.029), Run 2: 8.492 (SE +/- 0.002), Run 3: 8.491 (SE +/- 0.009); N = 3
  Encoder Speed: 10 - Run 1: 7.877 (SE +/- 0.014), Run 2: 7.789 (SE +/- 0.012), Run 3: 7.797 (SE +/- 0.021); N = 3
  Compiler notes: (CXX) g++ options: -O3 -fPIC

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 (OpenBenchmarking.org; Mpix/sec, More Is Better)
  Post-Processing Benchmark - Run 1: 28.13 (SE +/- 0.01), Run 2: 28.24 (SE +/- 0.04), Run 3: 27.95 (SE +/- 0.10); N = 3
  Compiler notes: (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3 (OpenBenchmarking.org; M samples/sec, More Is Better)
  Scene: DLSC - Run 1: 0.67, Run 2: 0.67, Run 3: 0.67 (SE +/- 0.00 each, N = 3; per-run min 0.65 / max 0.68)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 (OpenBenchmarking.org; ms, Fewer Is Better)
  Model: SqueezeNetV1.0 - Run 1: 9.271 (SE +/- 0.069), Run 2: 9.390 (SE +/- 0.008), Run 3: 9.399 (SE +/- 0.014); N = 3
  Model: resnet-v2-50 - Run 1: 44.34 (SE +/- 0.23), Run 2: 44.85 (SE +/- 0.11), Run 3: 44.90 (SE +/- 0.20); N = 3
  Model: MobileNetV2_224 - Run 1: 4.956 (SE +/- 0.003), Run 2: 4.967 (SE +/- 0.018), Run 3: 4.968 (SE +/- 0.023); N = 3
  Model: mobilenet-v1-1.0 - Run 1: 6.928 (SE +/- 0.009), Run 2: 6.949 (SE +/- 0.018), Run 3: 6.935 (SE +/- 0.009); N = 3
  Model: inception-v3 - Run 1: 54.38 (SE +/- 0.40), Run 2: 54.89 (SE +/- 0.07), Run 3: 54.71 (SE +/- 0.07); N = 3
  Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0 (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Mosaic of M17, K band, 1.5 deg x 1.5 deg - Run 1: 92.33 (SE +/- 0.24), Run 2: 92.24 (SE +/- 0.04), Run 3: 92.09 (SE +/- 0.01); N = 3
  Compiler notes: (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24 (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Input: Dust 2D tau100.0 - Run 1: 296 (SE +/- 1.53), Run 2: 295 (SE +/- 0.88), Run 3: 294 (SE +/- 0.33); N = 3
  Compiler notes: (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

MPV

MPV is an open-source, cross-platform media player. This test profile measures the frame rate that can be achieved when decoding in a desynchronized (unsynchronized) playback mode. Learn more via the OpenBenchmarking.org test page.

MPV 0.29.0 (OpenBenchmarking.org; FPS, More Is Better)
  Video Input: Big Buck Bunny Sunflower 4K - Decode: Software Only - Run 1: 353.18 (SE +/- 1.84), Run 2: 349.48 (SE +/- 6.84), Run 3: 344.54 (SE +/- 2.64); N = 3
  Video Input: Big Buck Bunny Sunflower 1080p - Decode: Software Only - Run 1: 902.29 (SE +/- 6.74), Run 2: 911.23 (SE +/- 1.33), Run 3: 911.49 (SE +/- 3.09); N = 3

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 (OpenBenchmarking.org; days/ns, Fewer Is Better)
  ATPase Simulation - 327,506 Atoms - Run 1: 4.10818 (SE +/- 0.01035), Run 2: 4.09223 (SE +/- 0.00571), Run 3: 4.09524 (SE +/- 0.00576); N = 3

NCNN

NCNN 20200916 (OpenBenchmarking.org; ms, Fewer Is Better)
  Target: CPU - Model: squeezenet_int8 - Run 1: 25.32 (SE +/- 0.03), Run 2: 25.53 (SE +/- 0.16), Run 3: 25.38 (SE +/- 0.07); N = 3
  Target: CPU - Model: mobilenet_v3 - Run 1: 6.00 (SE +/- 0.01), Run 2: 5.99 (SE +/- 0.02), Run 3: 6.02 (SE +/- 0.03); N = 3
  Target: CPU - Model: squeezenet - Run 1: 4.92 (SE +/- 0.03), Run 2: 4.95 (SE +/- 0.03), Run 3: 4.93 (SE +/- 0.03); N = 3
  Target: CPU - Model: mnasnet - Run 1: 6.57 (SE +/- 0.02), Run 2: 6.55 (SE +/- 0.02), Run 3: 6.65 (SE +/- 0.02); N = 3
  Target: CPU - Model: blazeface - Run 1: 2.20 (SE +/- 0.01), Run 2: 2.19 (SE +/- 0.03), Run 3: 2.23 (SE +/- 0.03); N = 3
  Target: CPU - Model: googlenet_int8 - Run 1: 71.78 (SE +/- 0.11), Run 2: 71.54 (SE +/- 0.07), Run 3: 71.58 (SE +/- 0.09); N = 3
  Target: CPU - Model: vgg16_int8 - Run 1: 243.01 (SE +/- 0.98), Run 2: 240.49 (SE +/- 0.25), Run 3: 240.88 (SE +/- 1.27); N = 3
  Target: CPU - Model: resnet18_int8 - Run 1: 39.46 (SE +/- 0.04), Run 2: 39.43 (SE +/- 0.03), Run 3: 39.39 (SE +/- 0.01); N = 3
  Target: CPU - Model: alexnet - Run 1: 24.89 (SE +/- 0.16), Run 2: 24.20 (SE +/- 0.22), Run 3: 24.33 (SE +/- 0.31); N = 3
  Target: CPU - Model: resnet50_int8 - Run 1: 133.02 (SE +/- 0.08), Run 2: 132.94 (SE +/- 0.14), Run 3: 132.87 (SE +/- 0.25); N = 3
  Target: CPU - Model: mobilenetv2_yolov3 - Run 1: 26.87 (SE +/- 0.04), Run 2: 26.66 (SE +/- 0.15), Run 3: 26.49 (SE +/- 0.30); N = 3
  Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video software on the CPU and optional GPU (OpenCL / CUDA) support. Learn more via the OpenBenchmarking.org test page.

NeatBench 5 (OpenBenchmarking.org; FPS, More Is Better)
  Acceleration: CPU - Run 1: 8.16 (SE +/- 0.06), Run 2: 8.11 (SE +/- 0.02), Run 3: 7.89 (SE +/- 0.07); N = 3

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 6.2.4 (OpenBenchmarking.org; Seconds, Fewer Is Better)
  Processing 60 Page PDF Document - Run 1: 56.16 (SE +/- 0.19), Run 2: 56.08 (SE +/- 0.51), Run 3: 56.12 (SE +/- 0.07); N = 3

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 (OpenBenchmarking.org; ms, Fewer Is Better; Data Type: f32, Engine: CPU)
  Harness: IP Batch 1D - Run 1: 8.26351 (SE +/- 0.01407), Run 2: 8.30212 (SE +/- 0.05947), Run 3: 8.31908 (SE +/- 0.07930); N = 3
  Harness: IP Batch All - Run 1: 120.15 (SE +/- 0.31), Run 2: 120.30 (SE +/- 1.07), Run 3: 118.53 (SE +/- 0.44); N = 3
  Harness: Convolution Batch Shapes Auto - Run 1: 21.16 (SE +/- 0.06), Run 2: 20.38 (SE +/- 0.02), Run 3: 20.85 (SE +/- 0.17); N = 3
  Harness: Deconvolution Batch deconv_1d - Run 1: 10.07 (SE +/- 0.01), Run 2: 10.06 (SE +/- 0.01), Run 3: 10.12 (SE +/- 0.03); N = 3
  Harness: Deconvolution Batch deconv_3d - Run 1: 15.96 (SE +/- 0.04), Run 2: 15.70 (SE +/- 0.04), Run 3: 16.45 (SE +/- 0.07); N = 3
  Harness: Recurrent Neural Network Training - Run 1: 558.90 (SE +/- 8.18, N = 4), Run 2: 535.21 (SE +/- 4.58, N = 3), Run 3: 577.33 (SE +/- 4.80, N = 3)
  Harness: Recurrent Neural Network Inference - Run 1: 175.57 (SE +/- 1.99, N = 15), Run 2: 169.76 (SE +/- 2.75, N = 4), Run 3: 176.76 (SE +/- 1.78, N = 15)
  Harness: Matrix Multiply Batch Shapes Transformer - Run 1: 3.38848 (SE +/- 0.03639, N = 15), Run 2: 3.66649 (SE +/- 0.03590, N = 3), Run 3: 3.71663 (SE +/- 0.06193, N = 15)
  Compiler notes: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.4 (OpenBenchmarking.org; ms, Fewer Is Better)
  Test: DNN - Deep Neural Network - Run 1: 5441 (SE +/- 336.20, N = 12), Run 2: 4928 (SE +/- 68.30, N = 3), Run 3: 5494 (SE +/- 173.70, N = 15)
  Compiler notes: (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 (OpenBenchmarking.org; ms, Fewer Is Better)
  Test: Scala Dotty - Run 1: 1974.92 (SE +/- 12.40), Run 2: 1964.48 (SE +/- 15.89), Run 3: 1999.39 (SE +/- 15.65); N = 5
  Test: Random Forest - Run 1: 2138.86 (SE +/- 25.11, N = 5), Run 2: 2135.79 (SE +/- 17.20, N = 25), Run 3: 2106.81 (SE +/- 26.67, N = 5)
  Test: Apache Spark ALS - Run 1: 2664.26 (SE +/- 28.08, N = 5), Run 2: 2671.80 (SE +/- 26.47, N = 25), Run 3: 2620.76 (SE +/- 21.08, N = 5)
  Test: Apache Spark Bayes - Run 1: 3779.50 (SE +/- 108.68), Run 2: 3989.70 (SE +/- 92.43), Run 3: 3909.15 (SE +/- 109.38); N = 20
  Test: Savina Reactors.IO - Run 1: 19921.23 (SE +/- 208.42, N = 11), Run 2: 19713.40 (SE +/- 201.83, N = 5), Run 3: 19384.78 (SE +/- 150.18, N = 5)
  Test: Apache Spark PageRank - Run 1: 4865.50 (SE +/- 116.60), Run 2: 4803.63 (SE +/- 125.25), Run 3: 4712.37 (SE +/- 87.30); N = 20
  Test: In-Memory Database Shootout - Run 1: 3735.45 (SE +/- 29.28, N = 25), Run 2: 3700.01 (SE +/- 43.96, N = 5), Run 3: 3787.31 (SE +/- 8.96, N = 5)
  Test: Akka Unbalanced Cobwebbed Tree - Run 1: 9284.60 (SE +/- 119.47, N = 7), Run 2: 9409.74 (SE +/- 124.27, N = 5), Run 3: 9242.49 (SE +/- 86.29, N = 5)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

All Stress-NG results below are in Bogo Ops/s (More Is Better); binaries built with (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: MMAP
  Run 1: 54.22 (SE +/- 0.04, N = 3; Min: 54.16 / Max: 54.3)
  Run 2: 54.32 (SE +/- 0.03, N = 3; Min: 54.26 / Max: 54.36)
  Run 3: 54.19 (SE +/- 0.02, N = 3; Min: 54.16 / Max: 54.23)

Stress-NG 0.11.07 - Test: NUMA
  Run 1: 85.99 (SE +/- 0.41, N = 3; Min: 85.51 / Max: 86.81)
  Run 2: 85.43 (SE +/- 0.41, N = 3; Min: 84.86 / Max: 86.23)
  Run 3: 85.43 (SE +/- 0.22, N = 3; Min: 84.99 / Max: 85.67)

Stress-NG 0.11.07 - Test: MEMFD
  Run 1: 286.58 (SE +/- 0.44, N = 3; Min: 285.81 / Max: 287.34)
  Run 2: 297.06 (SE +/- 0.84, N = 3; Min: 295.39 / Max: 297.95)
  Run 3: 293.65 (SE +/- 2.18, N = 3; Min: 289.35 / Max: 296.42)

Stress-NG 0.11.07 - Test: Atomic
  Run 1: 202671.08 (SE +/- 533.86, N = 3; Min: 201603.5 / Max: 203220.27)
  Run 2: 205022.47 (SE +/- 2587.53, N = 7; Min: 201623.74 / Max: 220507.8)
  Run 3: 207144.00 (SE +/- 2157.56, N = 15; Min: 201590.73 / Max: 220563.08)

Stress-NG 0.11.07 - Test: Crypto
  Run 1: 817.97 (SE +/- 0.22, N = 3; Min: 817.58 / Max: 818.34)
  Run 2: 818.90 (SE +/- 0.45, N = 3; Min: 818.06 / Max: 819.6)
  Run 3: 816.73 (SE +/- 1.75, N = 3; Min: 813.29 / Max: 819.04)

Stress-NG 0.11.07 - Test: Malloc
  Run 1: 31337895.18 (SE +/- 56736.14, N = 3; Min: 31232861.65 / Max: 31427599.82)
  Run 2: 32002407.75 (SE +/- 170156.05, N = 3; Min: 31665845.32 / Max: 32214318.56)
  Run 3: 31529776.24 (SE +/- 17315.95, N = 3; Min: 31497664.3 / Max: 31557063.57)

Stress-NG 0.11.07 - Test: RdRand
  Run 1: 240337.19 (SE +/- 51.51, N = 3; Min: 240284.9 / Max: 240440.2)
  Run 2: 240331.88 (SE +/- 49.46, N = 3; Min: 240265.04 / Max: 240428.44)
  Run 3: 240319.26 (SE +/- 41.56, N = 3; Min: 240256.15 / Max: 240397.67)

Stress-NG 0.11.07 - Test: Forking
  Run 1: 37547.26 (SE +/- 221.11, N = 3; Min: 37247.87 / Max: 37978.8)
  Run 2: 37723.77 (SE +/- 73.58, N = 3; Min: 37636.97 / Max: 37870.08)
  Run 3: 37590.58 (SE +/- 193.79, N = 3; Min: 37213.73 / Max: 37857.42)

Stress-NG 0.11.07 - Test: SENDFILE
  Run 1: 46822.85 (SE +/- 45.03, N = 3; Min: 46752.18 / Max: 46906.54)
  Run 2: 46865.51 (SE +/- 55.35, N = 3; Min: 46793.76 / Max: 46974.4)
  Run 3: 46879.66 (SE +/- 23.83, N = 3; Min: 46837.36 / Max: 46919.83)

Stress-NG 0.11.07 - Test: CPU Cache
  Run 1: 45.27 (SE +/- 0.81, N = 15; Min: 36.63 / Max: 49.5)
  Run 2: 44.21 (SE +/- 0.67, N = 15; Min: 39.87 / Max: 48.1)
  Run 3: 45.91 (SE +/- 0.64, N = 5; Min: 43.93 / Max: 47.93)

Stress-NG 0.11.07 - Test: CPU Stress
  Run 1: 1751.85 (SE +/- 10.47, N = 3; Min: 1741.04 / Max: 1772.78)
  Run 2: 1741.82 (SE +/- 9.37, N = 3; Min: 1725.46 / Max: 1757.92)
  Run 3: 1770.49 (SE +/- 4.44, N = 3; Min: 1761.63 / Max: 1775.27)

Stress-NG 0.11.07 - Test: Semaphores
  Run 1: 743806.03 (SE +/- 103.59, N = 3; Min: 743627.26 / Max: 743986.11)
  Run 2: 744929.81 (SE +/- 226.37, N = 3; Min: 744515.9 / Max: 745295.65)
  Run 3: 744313.54 (SE +/- 504.30, N = 3; Min: 743504.22 / Max: 745239.45)

Stress-NG 0.11.07 - Test: Matrix Math
  Run 1: 19391.27 (SE +/- 45.24, N = 3; Min: 19309.93 / Max: 19466.25)
  Run 2: 19265.77 (SE +/- 68.81, N = 3; Min: 19140.47 / Max: 19377.69)
  Run 3: 19207.24 (SE +/- 102.55, N = 3; Min: 19078.4 / Max: 19409.85)

Stress-NG 0.11.07 - Test: Vector Math
  Run 1: 25159.22 (SE +/- 1.66, N = 3; Min: 25157.03 / Max: 25162.47)
  Run 2: 25153.83 (SE +/- 1.27, N = 3; Min: 25151.77 / Max: 25156.15)
  Run 3: 25103.46 (SE +/- 60.90, N = 3; Min: 24981.67 / Max: 25164.37)

Stress-NG 0.11.07 - Test: Memory Copying
  Run 1: 2591.44 (SE +/- 6.23, N = 3; Min: 2580.72 / Max: 2602.29)
  Run 2: 2617.52 (SE +/- 2.87, N = 3; Min: 2611.81 / Max: 2620.89)
  Run 3: 2396.66 (SE +/- 10.61, N = 3; Min: 2376.96 / Max: 2413.32)

Stress-NG 0.11.07 - Test: Socket Activity
  Run 1: 6030.69 (SE +/- 57.67, N = 3; Min: 5927.86 / Max: 6127.36)
  Run 2: 6030.96 (SE +/- 102.63, N = 3; Min: 5857.82 / Max: 6213.02)
  Run 3: 6125.05 (SE +/- 72.33, N = 3; Min: 5988.78 / Max: 6235.24)

Stress-NG 0.11.07 - Test: Context Switching
  Run 1: 1693803.99 (SE +/- 19078.01, N = 3; Min: 1660581.53 / Max: 1726666.76)
  Run 2: 1698242.99 (SE +/- 14863.93, N = 3; Min: 1676524.32 / Max: 1726681.68)
  Run 3: 1728927.62 (SE +/- 16739.57, N = 3; Min: 1711607.8 / Max: 1762399.97)

Stress-NG 0.11.07 - Test: Glibc C String Functions
  Run 1: 453045.74 (SE +/- 384.03, N = 3; Min: 452278.85 / Max: 453465.96)
  Run 2: 445829.21 (SE +/- 7777.27, N = 3; Min: 430296.05 / Max: 454301.93)
  Run 3: 458759.57 (SE +/- 2883.26, N = 3; Min: 453021.44 / Max: 462123.59)

Stress-NG 0.11.07 - Test: Glibc Qsort Data Sorting
  Run 1: 58.69 (SE +/- 0.28, N = 3; Min: 58.33 / Max: 59.23)
  Run 2: 58.60 (SE +/- 0.13, N = 3; Min: 58.37 / Max: 58.83)
  Run 3: 58.43 (SE +/- 0.11, N = 3; Min: 58.27 / Max: 58.63)

Stress-NG 0.11.07 - Test: System V Message Passing
  Run 1: 6263395.17 (SE +/- 110965.05, N = 3; Min: 6042101.86 / Max: 6388591.01)
  Run 2: 6453494.07 (SE +/- 20978.35, N = 3; Min: 6414494.89 / Max: 6486394.07)
  Run 3: 6385777.22 (SE +/- 98775.41, N = 5; Min: 5991954.87 / Max: 6505142.68)

SVT-AV1

This is a test of SVT-AV1, the Intel Open Visual Cloud Scalable Video Technology CPU-based, multi-threaded video encoder for the AV1 video format, run against a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
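
As a point of reference, the encoder is normally exercised through its SvtAv1EncApp command-line front end. The invocation below is only a rough sketch: the input file name is made up, and the -enc-mode flag spelling has changed across SVT-AV1 releases, so treat every option here as an assumption rather than the exact command used by this test profile:

    # Encode a raw 1080p YUV source at encoder mode 8, writing an IVF bitstream.
    # File name and flag spellings are assumed; consult SvtAv1EncApp --help for your build.
    SvtAv1EncApp -i sample_1080p.yuv -w 1920 -h 1080 -enc-mode 8 -b output.ivf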

All SVT-AV1 results below are in Frames Per Second (More Is Better); binaries built with (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p
  Run 1: 1.496 (SE +/- 0.000, N = 3; Min: 1.5 / Max: 1.5)
  Run 2: 1.498 (SE +/- 0.002, N = 3; Min: 1.5 / Max: 1.5)
  Run 3: 1.498 (SE +/- 0.001, N = 3; Min: 1.5 / Max: 1.5)

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p
  Run 1: 12.19 (SE +/- 0.01, N = 3; Min: 12.19 / Max: 12.21)
  Run 2: 12.22 (SE +/- 0.01, N = 3; Min: 12.2 / Max: 12.24)
  Run 3: 12.21 (SE +/- 0.01, N = 3; Min: 12.19 / Max: 12.22)

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.
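
The measurement is easy to approximate by hand. A minimal sketch, assuming the Qt5 source tarball is available locally as qt-everywhere-src.tar.gz (the exact file name used by the test profile is an assumption):

    # Time gzip decompression alone, discarding the output so disk writes are not measured.
    time gzip -dc qt-everywhere-src.tar.gz > /dev/null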

System GZIP Decompression (Seconds, Fewer Is Better)
  Run 1: 3.447 (SE +/- 0.068, N = 14; Min: 3.38 / Max: 4.33)
  Run 2: 3.385 (SE +/- 0.010, N = 3; Min: 3.37 / Max: 3.41)
  Run 3: 3.381 (SE +/- 0.004, N = 3; Min: 3.38 / Max: 3.39)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
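
TensorFlow ships a benchmark_model utility that produces comparable average-inference-time figures for a .tflite model on the CPU. A minimal sketch, assuming the utility has been built from the TensorFlow source tree and that a SqueezeNet model file is on hand (the binary path, model file name, and flag spellings are assumptions, not the exact invocation used by this test profile):

    # Report average inference latency for a .tflite model using 4 CPU threads.
    ./benchmark_model --graph=squeezenet.tflite --num_threads=4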

All TensorFlow Lite results below are in Microseconds (Fewer Is Better).

TensorFlow Lite 2020-08-23 - Model: SqueezeNet
  Run 1: 541870 (SE +/- 37.42, N = 3; Min: 541814 / Max: 541941)
  Run 2: 540887 (SE +/- 15.59, N = 3; Min: 540860 / Max: 540914)
  Run 3: 540902 (SE +/- 19.86, N = 3; Min: 540871 / Max: 540939)

TensorFlow Lite 2020-08-23 - Model: Inception V4
  Run 1: 7825710 (SE +/- 551.94, N = 3; Min: 7824660 / Max: 7826530)
  Run 2: 7809863 (SE +/- 600.45, N = 3; Min: 7808820 / Max: 7810900)
  Run 3: 7810147 (SE +/- 496.06, N = 3; Min: 7809160 / Max: 7810730)

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile
  Run 1: 387372 (SE +/- 122.99, N = 3; Min: 387127 / Max: 387516)
  Run 2: 386514 (SE +/- 40.08, N = 3; Min: 386439 / Max: 386576)
  Run 3: 386168 (SE +/- 37.63, N = 3; Min: 386113 / Max: 386240)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float
  Run 1: 366589 (SE +/- 67.22, N = 3; Min: 366502 / Max: 366721)
  Run 2: 365619 (SE +/- 25.78, N = 3; Min: 365570 / Max: 365657)
  Run 3: 365631 (SE +/- 23.50, N = 3; Min: 365607 / Max: 365678)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant
  Run 1: 354486 (SE +/- 49.05, N = 3; Min: 354418 / Max: 354581)
  Run 2: 353741 (SE +/- 22.82, N = 3; Min: 353708 / Max: 353785)
  Run 3: 353786 (SE +/- 65.69, N = 3; Min: 353683 / Max: 353908)

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2
  Run 1: 7077587 (SE +/- 488.48, N = 3; Min: 7076870 / Max: 7078520)
  Run 2: 7063550 (SE +/- 384.23, N = 3; Min: 7062800 / Max: 7064070)
  Run 3: 7064830 (SE +/- 160.93, N = 3; Min: 7064510 / Max: 7065020)

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.
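
In essence the test unpacks the httpd source tree and times a parallel build. A minimal sketch, assuming the 2.4.41 source tarball and a default configure (the test profile's exact configure flags are not recorded in this result file, and the APR/APR-util development packages are assumed to be installed):

    # Configure and time a parallel build of Apache HTTPD 2.4.41.
    tar xf httpd-2.4.41.tar.gz && cd httpd-2.4.41
    ./configure
    time make -j$(nproc)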

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better)
  Run 1: 30.58 (SE +/- 0.09, N = 3; Min: 30.45 / Max: 30.76)
  Run 2: 30.60 (SE +/- 0.03, N = 3; Min: 30.53 / Max: 30.64)
  Run 3: 30.52 (SE +/- 0.07, N = 3; Min: 30.45 / Max: 30.66)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
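
The equivalent manual measurement is a timed parallel build of the kernel's default configuration. A minimal sketch, assuming the Linux 5.4 source tree is already extracted (defconfig is used here as an approximation of the test profile's default configuration):

    # Time a parallel defconfig build of Linux 5.4.
    cd linux-5.4
    make defconfig
    time make -j$(nproc)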

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
  Run 1: 177.85 (SE +/- 0.67, N = 3; Min: 176.92 / Max: 179.14)
  Run 2: 177.88 (SE +/- 0.85, N = 3; Min: 177.01 / Max: 179.59)
  Run 3: 180.37 (SE +/- 2.90, N = 3; Min: 177.25 / Max: 186.17)

Zstd Compression

This test measures how quickly a sample file (an Ubuntu ISO) can be compressed with Zstd, with throughput reported in MB/s. Learn more via the OpenBenchmarking.org test page.
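
Zstd's built-in benchmark mode reports throughput of the same kind. A minimal sketch, assuming an Ubuntu ISO named ubuntu.iso in the current directory (the file name is an assumption, and whether the test profile uses zstd's internal -b benchmark mode or times an external compression is not recorded here):

    # Benchmark compression of the ISO at levels 3 and 19 using zstd's internal benchmark mode.
    zstd -b3 ubuntu.iso
    zstd -b19 ubuntu.iso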

All Zstd results below are in MB/s (More Is Better); binaries built with (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.4.5 - Compression Level: 3
  Run 1: 2494.7 (SE +/- 0.95, N = 3; Min: 2493.3 / Max: 2496.5)
  Run 2: 2498.2 (SE +/- 2.17, N = 3; Min: 2493.9 / Max: 2501)
  Run 3: 2502.0 (SE +/- 1.88, N = 3; Min: 2498.9 / Max: 2505.4)

Zstd Compression 1.4.5 - Compression Level: 19
  Run 1: 23.1 (SE +/- 0.22, N = 3; Min: 22.8 / Max: 23.5)
  Run 2: 23.2 (SE +/- 0.32, N = 3; Min: 22.8 / Max: 23.8)
  Run 3: 23.3 (SE +/- 0.29, N = 3; Min: 22.8 / Max: 23.8)

107 Results Shown

AOM AV1:
  Speed 0 Two-Pass
  Speed 4 Two-Pass
  Speed 6 Realtime
  Speed 6 Two-Pass
  Speed 8 Realtime
ASTC Encoder:
  Fast
  Medium
  Thorough
  Exhaustive
BRL-CAD
Build2
C-Blosc
DaCapo Benchmark:
  H2
  Jython
  Tradesoap
  Tradebeans
eSpeak-NG Speech Engine
GLmark2
GNU Octave Benchmark
GPAW
GROMACS
Hugin
Incompact3D
InfluxDB:
  4 - 10000 - 2,5000,1 - 10000
  64 - 10000 - 2,5000,1 - 10000
  1024 - 10000 - 2,5000,1 - 10000
Java Gradle Build
LAMMPS Molecular Dynamics Simulator
libavif avifenc:
  0
  2
  8
  10
LibRaw
LuxCoreRender
Mobile Neural Network:
  SqueezeNetV1.0
  resnet-v2-50
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3
Montage Astronomical Image Mosaic Engine
Monte Carlo Simulations of Ionised Nebulae
MPV:
  Big Buck Bunny Sunflower 4K - Software Only
  Big Buck Bunny Sunflower 1080p - Software Only
NAMD
NCNN:
  CPU - squeezenet_int8
  CPU - mobilenet_v3
  CPU - squeezenet
  CPU - mnasnet
  CPU - blazeface
  CPU - googlenet_int8
  CPU - vgg16_int8
  CPU - resnet18_int8
  CPU - alexnet
  CPU - resnet50_int8
  CPU - mobilenetv2_yolov3
NeatBench
OCRMyPDF
oneDNN:
  IP Batch 1D - f32 - CPU
  IP Batch All - f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch deconv_1d - f32 - CPU
  Deconvolution Batch deconv_3d - f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
OpenCV
Renaissance:
  Scala Dotty
  Rand Forest
  Apache Spark ALS
  Apache Spark Bayes
  Savina Reactors.IO
  Apache Spark PageRank
  In-Memory Database Shootout
  Akka Unbalanced Cobwebbed Tree
Stress-NG:
  MMAP
  NUMA
  MEMFD
  Atomic
  Crypto
  Malloc
  RdRand
  Forking
  SENDFILE
  CPU Cache
  CPU Stress
  Semaphores
  Matrix Math
  Vector Math
  Memory Copying
  Socket Activity
  Context Switching
  Glibc C String Functions
  Glibc Qsort Data Sorting
  System V Message Passing
SVT-AV1:
  Enc Mode 4 - 1080p
  Enc Mode 8 - 1080p
System GZIP Decompression
TensorFlow Lite:
  SqueezeNet
  Inception V4
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
  Inception ResNet V2
Timed Apache Compilation
Timed Linux Kernel Compilation
Zstd Compression:
  3
  19