3100 compare

AMD Ryzen 3 3100 4-Core testing with an ASUS ROG CROSSHAIR VIII HERO (2702 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2011296-HA-3100COMPA40
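
Reproducing this comparison locally comes down to the single command quoted above; a minimal sketch of the terminal session (assuming the Phoronix Test Suite is already installed and can reach OpenBenchmarking.org):

  # Fetch result file 2011296-HA-3100COMPA40 and run the same test selection on the local system
  phoronix-test-suite benchmark 2011296-HA-3100COMPA40
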
The tests in this comparison fall into the following categories:

AV1: 3 tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 3 tests
Chess Test Suite: 4 tests
Timed Code Compilation: 2 tests
C/C++ Compiler Tests: 11 tests
Compression Tests: 2 tests
CPU Massive: 24 tests
Creator Workloads: 20 tests
Encoding: 5 tests
Fortran Tests: 5 tests
Game Development: 4 tests
HPC - High Performance Computing: 20 tests
Imaging: 4 tests
Machine Learning: 11 tests
Molecular Dynamics: 4 tests
MPI Benchmarks: 4 tests
Multi-Core: 17 tests
NVIDIA GPU Compute: 9 tests
OCR: 2 tests
Intel oneAPI: 2 tests
OpenMPI Tests: 4 tests
Programmer / Developer System Benchmarks: 6 tests
Python: 4 tests
Renderers: 2 tests
Scientific Computing: 9 tests
Server: 3 tests
Server CPU Tests: 16 tests
Single-Threaded: 7 tests
Speech: 2 tests
Telephony: 2 tests
Texture Compression: 3 tests
Video Encoding: 5 tests
Vulkan Compute: 4 tests
Common Workstation Benchmarks: 3 tests


Test Runs

Result Identifier   Date                Test Duration
1                   November 27 2020    13 Hours, 56 Minutes
2                   November 28 2020    13 Hours, 20 Minutes
3                   November 28 2020    13 Hours, 35 Minutes
Average                                 13 Hours, 37 Minutes

System Details (identical for runs 1, 2, and 3)

Processor: AMD Ryzen 3 3100 4-Core @ 3.60GHz (4 Cores / 8 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (2702 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: 1000GB Sabrent Rocket 4.0 1TB
Graphics: AMD Radeon RX 56/64 8GB (1590/800MHz)
Audio: AMD Vega 10 HDMI Audio
Monitor: LG Ultra HD
Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 20.10
Kernel: 5.8.0-29-generic (x86_64)
Desktop: GNOME Shell 3.38.1
Display Server: X Server 1.20.9
Display Driver: amdgpu 19.1.0
OpenGL: 4.6 Mesa 20.2.1 (LLVM 11.0.0)
Vulkan: 1.2.131
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x8701021
Graphics Details: GLAMOR
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Java Details (runs 2 and 3): OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.10)
Python Details (runs 2 and 3): Python 3.8.6

[Result overview chart omitted: normalized relative performance of runs 1-3 across all tests, plotted from 100% to roughly 113%, listing LeelaChessZero, Redis, DDraceNetwork, Crafty, Betsy GPU Compressor, oneDNN, LAMMPS, Sunflow, Stockfish, GROMACS, AOM AV1, and the remaining test suites covered by this comparison.]

[Combined results table omitted: the raw per-run values for every test in this comparison, including tests not broken out in the sections below (Redis, Crafty, PHPBench, OpenSSL, LAMMPS, and others), are preserved in the OpenBenchmarking.org result file 2011296-HA-3100COMPA40 referenced above.]

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4, Throughput FoM (more is better): Run 1: 6926419 (average of 2 samples), Run 2: 6434795; no result was recorded for Run 3. Built with: (CXX) g++ options: -O3 -fopenmp

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 10.0, Time To Compile (seconds, fewer is better): Run 1: 1215.50, Run 2: 1214.94, Run 3: 1216.09.

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: UASTC Level 2 + RDO Post-Processing (seconds, fewer is better): Run 1: 702.91, Run 2: 701.85, Run 3: 704.13. Built with: (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: Eigen (nodes per second, more is better): Run 1: 730, Run 2: 688, Run 3: 638.

LeelaChessZero 0.26, Backend: BLAS (nodes per second, more is better): Run 1: 775, Run 2: 730, Run 3: 649.

Built with: (CXX) g++ options: -flto -pthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 (score, more is better):

                         Run 1   Run 2   Run 3
Device AI Score          1543    1549    1553
Device Training Score     788     788     790
Device Inference Score    755     761     763

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Exhaustive (seconds, fewer is better): Run 1: 425.23, Run 2: 425.58, Run 3: 425.39. Built with: (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0, Test / Class: G-HPL (GFLOPS, more is better): Run 1: 120.91; no results were recorded for Runs 2 and 3. Built with: (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops; ATLAS + Open MPI 4.0.3

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90, Blend File: BMW27 - Compute: CPU-Only (seconds, fewer is better): Run 1: 323.46, Run 2: 324.80, Run 3: 325.17.

GROMACS

This test profile runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3, Water Benchmark (ns per day, more is better): Run 1: 0.542, Run 2: 0.550, Run 3: 0.552. Built with: (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

DDraceNetwork

This is a test of DDraceNetwork, an open-source cooperative platformer. OpenGL 3.3 is used for rendering, with fallbacks for older OpenGL versions. Learn more via the OpenBenchmarking.org test page.

DDraceNetwork 15.2.3, 1920 x 1080 - Fullscreen - OpenGL 3.3 - Zoom: Default - Demo: RaiNyMore2 (frames per second, more is better): Run 1: 236.14, Run 2: 214.08, Run 3: 203.50. Built with: (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24, Input: Dust 2D tau100.0 (seconds, fewer is better): Run 1: 288, Run 2: 288, Run 3: 288. Built with: (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lrt -lz

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8, VGR Performance Metric (more is better): Run 1: 57419, Run 2: 56730, Run 3: 56960. Built with: (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark, Benchmark: scikit_qda (seconds, fewer is better): Run 1: 74.11, Run 2: 76.91, Run 3: 75.29.

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (score, more is better): Run 1: 321.42, Run 2: 320.60, Run 3: 321.61.

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0, Test: FLOAT (QUIPs, more is better): Run 1: 340004918.54, Run 2: 339696862.30, Run 3: 339643088.46. Built with: (CC) gcc options: -O3 -march=native -lm

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better): Run 1: 4.49500, Run 2: 4.48209, Run 3: 4.48308.

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0, Video Input: Bosphorus 4K - Video Preset: Medium (frames per second, more is better): Run 1: 2.91, Run 2: 2.92, Run 3: 2.92. Built with: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
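
For context, the workload is a straight command-line HEVC encode of a raw 4K clip at the medium preset; a rough sketch of an equivalent Kvazaar invocation (the input file name and exact options are illustrative assumptions, not taken from the test profile):

  # Hypothetical example: encode a raw 4K YUV clip to HEVC with Kvazaar's medium preset
  kvazaar -i Bosphorus_3840x2160.yuv --input-res 3840x2160 --preset medium -o output.hevc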

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (nodes/second, more is better): Run 1: 13797562, Run 2: 13866946, Run 3: 13730556.

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4, Time To Compile (seconds, fewer is better): Run 1: 159.30, Run 2: 160.56, Run 3: 160.15.
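
The measurement is essentially the wall-clock time of a default-configuration kernel build; a minimal sketch of the equivalent manual steps (the exact configuration and make invocation used by the test profile are assumptions here):

  # Hypothetical example: time a default-config Linux kernel build on all CPU threads
  make defconfig
  time make -j$(nproc)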

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 (average inference time in microseconds, fewer is better):

Model                  Run 1     Run 2     Run 3
Inception ResNet V2    5048647   5046933   5048433
Inception V4           5572917   5574923   5571717

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3, Encoder Speed: 0 (seconds, fewer is better): Run 1: 146.26, Run 2: 149.26, Run 3: 145.65. Built with: (CXX) g++ options: -O3 -fPIC
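
The "Encoder Speed: 0" setting corresponds to the speed knob avifenc exposes on its command line, here at its slowest, highest-effort value; a rough sketch of an equivalent command (file names are placeholders and the exact arguments used by the test profile are assumptions):

  # Hypothetical example: JPEG to AVIF conversion at encoder speed 0 (slowest / highest effort)
  avifenc --speed 0 input.jpg output.avif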

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5, Harness: IP Batch All - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 96.74, Run 2: 99.74, Run 3: 99.97. Built with: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Caffe

This is a benchmark of the Caffe deep learning framework, currently supporting the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 100 (milli-seconds, fewer is better): Run 1: 120634, Run 2: 120679, Run 3: 120291. Built with: (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0, Binary: Pathtracer ISPC - Model: Crown (frames per second, more is better): Run 1: 5.1754, Run 2: 5.1563, Run 3: 5.1464.

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5, Compression Level: 19 (MB/s, more is better): Run 1: 21.1, Run 2: 21.2, Run 3: 21.2. Built with: (CC) gcc options: -O3 -pthread -lz -llzma
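
Since the workload is simply level-19 Zstd compression of a large file, a comparable number can be produced from the shell; a minimal sketch (the file name is a placeholder, and the test's exact invocation is an assumption; zstd's built-in benchmark mode is shown because it reports MB/s directly):

  # Hypothetical example: zstd's built-in benchmark at compression level 19, reporting MB/s
  zstd -b19 ubuntu.iso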

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: UASTC Level 3 (seconds, fewer is better): Run 1: 114.27, Run 2: 114.19, Run 3: 114.26. Built with: (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0, Binary: Pathtracer - Model: Crown (frames per second, more is better): Run 1: 5.3673, Run 2: 5.3310, Run 3: 5.3162.

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1, Pfam Database Search (seconds, fewer is better): Run 1: 111.35, Run 2: 111.30, Run 3: 111.49. Built with: (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04, Resolution: 1920 x 1080 (score, more is better): Run 1: 6530, Run 2: 6547, Run 3: 6559.

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 (ms, fewer is better):

Model               Run 1    Run 2    Run 3
inception-v3        42.57    42.44    42.55
mobilenet-v1-1.0     5.819    5.824    5.811
MobileNetV2_224      4.937    4.936    4.910
resnet-v2-50        34.93    35.00    35.17
SqueezeNetV1.0       8.853    8.848    8.874

Built with: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 (frames per second, more is better):

Binary / Model                   Run 1     Run 2     Run 3
Pathtracer ISPC - Asian Dragon   6.1185    6.1354    6.1401
Pathtracer - Asian Dragon        6.2294    6.1430    6.2800

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12, Total Time (nodes per second, more is better): Run 1: 9840561, Run 2: 10050125, Run 3: 9864045. Built with: (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3, Encoder Speed: 2 (seconds, fewer is better): Run 1: 86.46, Run 2: 88.21, Run 3: 86.29. Built with: (CXX) g++ options: -O3 -fPIC

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: raytrace (milliseconds, fewer is better): Run 1: 480, Run 2: 477, Run 3: 476.

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee, Total Benchmark Time (seconds, fewer is better): Run 1: 79.96, Run 2: 79.42, Run 3: 80.35. RawTherapee version 5.8, command line.

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU (ms, fewer is better):

Model                          Run 1    Run 2    Run 3
yolov4-tiny                    32.96    32.84    33.20
resnet50                       37.32    37.05    37.31
alexnet                        18.95    18.57    18.81
resnet18                       17.83    17.75    18.00
vgg16                          72.57    72.10    72.72
googlenet                      19.42    19.30    19.51
blazeface                       1.95     1.92     1.92
efficientnet-b0                 9.21     9.10     9.16
mnasnet                         6.32     6.27     6.30
shufflenet-v2                   4.79     4.82     4.84
mobilenet-v3 (CPU-v3-v3)        6.18     6.11     6.25
mobilenet-v2 (CPU-v2-v2)        6.66     6.64     6.68
mobilenet                      23.82    23.71    23.71
squeezenet                     21.87    21.67    21.83

Built with: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K inputs to measure H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 4K (frames per second, more is better): Run 1: 7.77, Run 2: 7.78, Run 3: 7.75. Built with: (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
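
Functionally this amounts to feeding a 4K source clip to the x265 command-line encoder and reading back the reported frames per second; a rough sketch (the input file name is a placeholder and the options used by the test profile are assumptions):

  # Hypothetical example: encode a 4K Y4M clip with x265 at default settings
  x265 --input Bosphorus_3840x2160.y4m --output out.hevc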

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0, Video Input: Bosphorus 4K - Video Preset: Very Fast (frames per second, more is better): Run 1: 8.03, Run 2: 8.03, Run 3: 8.01. Built with: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: python_startup (milliseconds, fewer is better): Run 1: 8.02, Run 2: 8.08, Run 3: 8.13.

Geekbench

This is a benchmark of Geekbench 5 Pro. The test profile automates the execution of Geekbench 5 under the Phoronix Test Suite and requires a valid Geekbench 5 Pro license key; the test will not work without one. Learn more via the OpenBenchmarking.org test page.

Geekbench 5 (score, more is better):

                 Run 1    Run 2    Run 3
CPU Multi Core    5227     5248     5256
GPU Vulkan       36246    36793    36985

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: ETC1S (seconds, fewer is better): Run 1: 62.80, Run 2: 62.69, Run 3: 62.77. Built with: (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin, Panorama Photo Assistant + Stitching Time (seconds, fewer is better): Run 1: 62.00, Run 2: 61.97, Run 3: 62.71.

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4, Acceleration: CPU (M samples/s, more is better):

Scene      Run 1    Run 2    Run 3
Bedroom    0.930    0.935    0.931
Supercar   1.954    1.958    1.958

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 (average inference time in microseconds, fewer is better):

Model             Run 1    Run 2    Run 3
SqueezeNet        387017   386977   387009
Mobilenet Float   259621   259600   259575
Mobilenet Quant   266426   266319   266346
NASNet Mobile     280959   279167   279158

DDraceNetwork

OpenBenchmarking.orgMilliseconds, Fewer Is BetterDDraceNetwork 15.2.3Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.3 - Zoom: Default - Demo: Multeasymap - Total Frame Time1233691215Min: 2.07 / Avg: 2.98 / Max: 7.54Min: 2.09 / Avg: 3.02 / Max: 7.16Min: 2.02 / Avg: 3 / Max: 10.71. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

OpenBenchmarking.orgFrames Per Second, More Is BetterDDraceNetwork 15.2.3Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.3 - Zoom: Default - Demo: Multeasymap12370140210280350SE +/- 1.03, N = 3SE +/- 1.65, N = 3SE +/- 0.66, N = 3334.97331.87333.51MIN: 110.11 / MAX: 493.34MIN: 104.44 / MAX: 494.32MIN: 109.02 / MAX: 495.291. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0
OpenBenchmarking.orgFrames Per Second, More Is BetterDDraceNetwork 15.2.3Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.3 - Zoom: Default - Demo: Multeasymap12360120180240300Min: 333.56 / Avg: 334.97 / Max: 336.97Min: 328.86 / Avg: 331.87 / Max: 334.55Min: 332.32 / Avg: 333.51 / Max: 334.581. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_linearridgeregression1230.72451.4492.17352.8983.6225SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 33.193.203.22
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_linearridgeregression123246810Min: 3.18 / Avg: 3.19 / Max: 3.2Min: 3.2 / Avg: 3.2 / Max: 3.21Min: 3.21 / Avg: 3.22 / Max: 3.22

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
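
For a sense of how such throughput numbers can be produced outside the test profile, here is a hedged Python sketch that shells out to the lz4 reference CLI; it assumes the lz4 binary is on PATH and uses a hypothetical "ubuntu.iso" sample file.

    # Compress and decompress a large file with the lz4 CLI and derive MB/s.
    # Assumptions: lz4 binary installed; "ubuntu.iso" is a placeholder input.
    import os, subprocess, time

    src = "ubuntu.iso"
    size_mb = os.path.getsize(src) / 1e6

    t0 = time.perf_counter()
    subprocess.run(["lz4", "-9", "-f", src, src + ".lz4"], check=True)
    print("level 9 compression:   %.1f MB/s" % (size_mb / (time.perf_counter() - t0)))

    t0 = time.perf_counter()
    subprocess.run(["lz4", "-d", "-f", src + ".lz4", src + ".out"], check=True)
    print("level 9 decompression: %.1f MB/s" % (size_mb / (time.perf_counter() - t0)))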

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression Speed1232K4K6K8K10KSE +/- 23.62, N = 3SE +/- 8.30, N = 3SE +/- 26.11, N = 310575.610711.510653.01. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression Speed1232K4K6K8K10KMin: 10543.9 / Avg: 10575.63 / Max: 10621.8Min: 10703 / Avg: 10711.5 / Max: 10728.1Min: 10619.9 / Avg: 10652.97 / Max: 10704.51. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression Speed1231224364860SE +/- 0.72, N = 3SE +/- 0.27, N = 3SE +/- 0.03, N = 352.5051.2051.451. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression Speed1231122334455Min: 51.21 / Avg: 52.5 / Max: 53.71Min: 50.66 / Avg: 51.2 / Max: 51.51Min: 51.4 / Avg: 51.45 / Max: 51.491. (CC) gcc options: -O3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
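
A minimal sketch of invoking a single benchmark from this suite outside the Phoronix wrapper, assuming the pyperformance package is installed (the benchmark name and output path are illustrative):

    # Run the 2to3 benchmark and write its results to a JSON file.
    import subprocess

    subprocess.run(
        ["pyperformance", "run", "--benchmarks", "2to3", "-o", "result.json"],
        check=True,
    )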

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to312370140210280350SE +/- 0.33, N = 3320321321
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to312360120180240300Min: 319 / Avg: 319.67 / Max: 320

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
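
As a hedged sketch of the kind of conversion being timed, the following shells out to the basisu command-line tool; the flag names are assumptions based on Basis Universal's CLI and "albedo.png" is a hypothetical input.

    # Convert a PNG into a .basis asset using UASTC at quality level 2.
    # Assumptions: basisu binary installed; flags may differ between releases.
    import subprocess

    subprocess.run(["basisu", "-uastc", "-uastc_level", "2", "albedo.png"], check=True)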

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 21231326395265SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 358.5958.6358.621. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 21231224364860Min: 58.57 / Avg: 58.59 / Max: 58.6Min: 58.59 / Avg: 58.63 / Max: 58.69Min: 58.59 / Avg: 58.62 / Max: 58.681. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression Speed1232K4K6K8K10KSE +/- 12.91, N = 3SE +/- 18.42, N = 3SE +/- 22.07, N = 310496.610635.310588.01. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression Speed1232K4K6K8K10KMin: 10477.7 / Avg: 10496.63 / Max: 10521.3Min: 10611.2 / Avg: 10635.33 / Max: 10671.5Min: 10543.9 / Avg: 10587.97 / Max: 10612.21. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression Speed1231224364860SE +/- 0.84, N = 3SE +/- 0.85, N = 3SE +/- 0.28, N = 353.5653.6052.551. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression Speed1231122334455Min: 52.7 / Avg: 53.56 / Max: 55.24Min: 52.7 / Avg: 53.6 / Max: 55.3Min: 51.99 / Avg: 52.55 / Max: 52.861. (CC) gcc options: -O3

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
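
A rough sketch of the compression/decompression round trip this profile exercises, shelling out to astcenc; the exact flags are an assumption based on astcenc 2.x usage and "texture.png" is a hypothetical input.

    # Compress a PNG to 6x6 ASTC blocks at the medium preset, then decode it back.
    # Assumptions: astcenc 2.x binary installed; flag spelling may vary by version.
    import subprocess

    subprocess.run(["astcenc", "-cl", "texture.png", "texture.astc", "6x6", "-medium"], check=True)
    subprocess.run(["astcenc", "-dl", "texture.astc", "roundtrip.png"], check=True)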

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Thorough1231224364860SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 352.3952.4652.401. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Thorough1231122334455Min: 52.38 / Avg: 52.39 / Max: 52.4Min: 52.4 / Avg: 52.46 / Max: 52.55Min: 52.38 / Avg: 52.4 / Max: 52.421. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.
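
A minimal sketch of an encode at a given speed level with the rav1e CLI, assuming the binary is installed and using a hypothetical "input.y4m" clip:

    # Encode a Y4M clip to AV1 at speed 5 (higher speed = faster, lower quality).
    import subprocess

    subprocess.run(["rav1e", "input.y4m", "--speed", "5", "--output", "output.ivf"], check=True)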

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 51230.23810.47620.71430.95241.1905SE +/- 0.007, N = 3SE +/- 0.004, N = 3SE +/- 0.000, N = 31.0501.0471.058
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 5123246810Min: 1.04 / Avg: 1.05 / Max: 1.06Min: 1.04 / Avg: 1.05 / Max: 1.05Min: 1.06 / Avg: 1.06 / Max: 1.06

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 11230.08370.16740.25110.33480.4185SE +/- 0.004, N = 3SE +/- 0.005, N = 3SE +/- 0.006, N = 30.3720.3650.361
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 112312345Min: 0.36 / Avg: 0.37 / Max: 0.38Min: 0.35 / Avg: 0.36 / Max: 0.37Min: 0.35 / Avg: 0.36 / Max: 0.37

Geekbench

This is a benchmark of Geekbench 5 Pro. The test profile automates the execution of Geekbench 5 under the Phoronix Test Suite and will not work without a valid license key for Geekbench 5 Pro. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterGeekbench 5Test: CPU Single Core12330060090012001500SE +/- 0.58, N = 3SE +/- 1.33, N = 3SE +/- 0.88, N = 3122312281228
OpenBenchmarking.orgScore, More Is BetterGeekbench 5Test: CPU Single Core1232004006008001000Min: 1222 / Avg: 1223 / Max: 1224Min: 1225 / Avg: 1227.67 / Max: 1229Min: 1226 / Avg: 1227.67 / Max: 1229

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
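
For context, a much simpler single-connection sketch of measuring GET throughput with the redis-py client; it assumes a Redis server on localhost:6379 and is not directly comparable to the multi-client redis-benchmark numbers below.

    # Issue repeated GETs against a local Redis instance and report requests/sec.
    # Assumptions: redis-py installed; a Redis server is listening on localhost:6379.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)
    r.set("benchmark:key", "value")

    n = 10_000
    start = time.perf_counter()
    for _ in range(n):
        r.get("benchmark:key")
    print("GET: %.0f requests per second" % (n / (time.perf_counter() - start)))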

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GET123500K1000K1500K2000K2500KSE +/- 40313.16, N = 15SE +/- 26352.66, N = 15SE +/- 21009.40, N = 152310604.622095708.192135888.901. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GET123400K800K1200K1600K2000KMin: 2100907.5 / Avg: 2310604.62 / Max: 2551918.5Min: 1905371.5 / Avg: 2095708.19 / Max: 2336673Min: 2020589.88 / Avg: 2135888.9 / Max: 2294091.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_ica1231122334455SE +/- 0.07, N = 3SE +/- 0.05, N = 3SE +/- 0.04, N = 349.1249.1249.14
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_ica1231020304050Min: 48.98 / Avg: 49.12 / Max: 49.2Min: 49.05 / Avg: 49.12 / Max: 49.22Min: 49.07 / Avg: 49.14 / Max: 49.2

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogLeNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 10012311K22K33K44K55KSE +/- 14.01, N = 3SE +/- 8.08, N = 3SE +/- 27.06, N = 34908249047489621. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 1001239K18K27K36K45KMin: 49055 / Avg: 49082 / Max: 49102Min: 49037 / Avg: 49047 / Max: 49063Min: 48909 / Avg: 48962 / Max: 489981. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU1233691215SE +/- 0.01249, N = 3SE +/- 0.12846, N = 3SE +/- 0.17039, N = 158.052219.747499.82517MIN: 7.86MIN: 9.46MIN: 8.651. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU1233691215Min: 8.03 / Avg: 8.05 / Max: 8.07Min: 9.62 / Avg: 9.75 / Max: 10Min: 8.76 / Avg: 9.83 / Max: 10.751. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
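
A hedged sketch of an encode roughly matching the 1080p medium-preset run below, assuming the kvazaar binary is installed and using a hypothetical raw "Bosphorus_1080p.yuv" input:

    # Encode raw 1080p YUV to HEVC with Kvazaar's medium preset.
    import subprocess

    subprocess.run(
        ["kvazaar", "--input", "Bosphorus_1080p.yuv", "--input-res", "1920x1080",
         "--preset", "medium", "--output", "out.hevc"],
        check=True,
    )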

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Medium1233691215SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 312.7212.7512.761. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Medium12348121620Min: 12.71 / Avg: 12.72 / Max: 12.73Min: 12.74 / Avg: 12.75 / Max: 12.76Min: 12.73 / Avg: 12.76 / Max: 12.781. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: go12350100150200250SE +/- 0.33, N = 3249248248
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: go12350100150200250Min: 248 / Avg: 248.33 / Max: 249

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 61230.31590.63180.94771.26361.5795SE +/- 0.007, N = 3SE +/- 0.006, N = 3SE +/- 0.011, N = 31.4041.3701.393
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 6123246810Min: 1.4 / Avg: 1.4 / Max: 1.42Min: 1.36 / Avg: 1.37 / Max: 1.38Min: 1.38 / Avg: 1.39 / Max: 1.42

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
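
A minimal sketch of the same kind of synthesis run, assuming the espeak-ng binary is installed and using a hypothetical "outline_of_science.txt" input:

    # Read a text file with eSpeak-NG, write the synthesized speech to a WAV file,
    # and time the whole run.
    import subprocess, time

    start = time.perf_counter()
    subprocess.run(["espeak-ng", "-f", "outline_of_science.txt", "-w", "speech.wav"], check=True)
    print("synthesis time: %.2f s" % (time.perf_counter() - start))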

OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech Synthesis123714212835SE +/- 0.15, N = 4SE +/- 0.01, N = 4SE +/- 0.02, N = 430.9930.9430.961. (CC) gcc options: -O2 -std=c99
OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech Synthesis123714212835Min: 30.61 / Avg: 30.99 / Max: 31.36Min: 30.92 / Avg: 30.94 / Max: 30.96Min: 30.93 / Avg: 30.96 / Max: 31.021. (CC) gcc options: -O2 -std=c99

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra Fast12348121620SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 314.5514.6314.601. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra Fast12348121620Min: 14.51 / Avg: 14.55 / Max: 14.58Min: 14.61 / Avg: 14.63 / Max: 14.65Min: 14.58 / Avg: 14.6 / Max: 14.631. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.
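
As a rough illustration, a two-pass aomenc invocation similar in spirit to the "Speed 0 Two-Pass" mode below; the flags are standard aomenc options but the file names are hypothetical.

    # Two-pass AV1 encode at the slowest speed setting (--cpu-used=0).
    # Assumptions: aomenc binary installed; "input.y4m" is a placeholder clip.
    import subprocess

    subprocess.run(
        ["aomenc", "input.y4m", "--passes=2", "--cpu-used=0", "-o", "output.ivf"],
        check=True,
    )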

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 0 Two-Pass1230.05850.1170.17550.2340.2925SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.260.250.251. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 0 Two-Pass12312345Min: 0.26 / Avg: 0.26 / Max: 0.26Min: 0.25 / Avg: 0.25 / Max: 0.25Min: 0.25 / Avg: 0.25 / Max: 0.261. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

MPV

MPV is an open-source, cross-platform media player. This test profile measures the frame-rate that can be achieved when playback runs unsynchronized in a desynchronized mode. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterMPVVideo Input: Big Buck Bunny Sunflower 4K - Decode: Software Only123100200300400500SE +/- 0.40, N = 3SE +/- 0.40, N = 3SE +/- 0.38, N = 3454.91457.51456.91MIN: 299.99 / MAX: 631.56MIN: 299.99 / MAX: 666.65MIN: 292.67 / MAX: 631.561. mpv 0.32.0
OpenBenchmarking.orgFPS, More Is BetterMPVVideo Input: Big Buck Bunny Sunflower 4K - Decode: Software Only12380160240320400Min: 454.16 / Avg: 454.91 / Max: 455.54Min: 456.72 / Avg: 457.51 / Max: 457.97Min: 456.18 / Avg: 456.91 / Max: 457.451. mpv 0.32.0

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADD123400K800K1200K1600K2000KSE +/- 22720.13, N = 15SE +/- 25708.45, N = 15SE +/- 18274.51, N = 31951987.091947660.532004338.251. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADD123300K600K900K1200K1500KMin: 1842092.12 / Avg: 1951987.09 / Max: 2119186.5Min: 1808318.38 / Avg: 1947660.53 / Max: 2118983Min: 1984127 / Avg: 2004338.25 / Max: 2040816.251. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_template1231122334455SE +/- 0.10, N = 3SE +/- 0.03, N = 3SE +/- 0.06, N = 346.946.746.8
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_template1231020304050Min: 46.8 / Avg: 46.9 / Max: 47.1Min: 46.7 / Avg: 46.73 / Max: 46.8Min: 46.7 / Avg: 46.8 / Max: 46.9

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU123110220330440550SE +/- 2.86, N = 3SE +/- 2.10, N = 3SE +/- 3.41, N = 3503.70491.11462.80MIN: 495.04MIN: 474.1MIN: 453.61. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU12390180270360450Min: 499.13 / Avg: 503.7 / Max: 508.95Min: 486.92 / Avg: 491.11 / Max: 493.37Min: 458.96 / Avg: 462.8 / Max: 469.61. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU12360120180240300SE +/- 0.60, N = 3SE +/- 3.42, N = 3SE +/- 1.11, N = 3249.37262.37247.16MIN: 247.16MIN: 254.47MIN: 243.721. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU12350100150200250Min: 248.65 / Avg: 249.37 / Max: 250.57Min: 255.94 / Avg: 262.37 / Max: 267.61Min: 245.12 / Avg: 247.16 / Max: 248.941. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 6 Realtime12348121620SE +/- 0.02, N = 3SE +/- 0.07, N = 3SE +/- 0.03, N = 316.9316.6616.911. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 6 Realtime12348121620Min: 16.9 / Avg: 16.93 / Max: 16.95Min: 16.58 / Avg: 16.66 / Max: 16.8Min: 16.85 / Avg: 16.91 / Max: 16.941. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: IP Batch 1D - Data Type: f32 - Engine: CPU123246810SE +/- 0.03162, N = 3SE +/- 0.12292, N = 3SE +/- 0.12077, N = 157.432678.420728.81065MIN: 7.23MIN: 8.01MIN: 8.211. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: IP Batch 1D - Data Type: f32 - Engine: CPU1233691215Min: 7.39 / Avg: 7.43 / Max: 7.49Min: 8.18 / Avg: 8.42 / Max: 8.56Min: 8.37 / Avg: 8.81 / Max: 10.361. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPOP123600K1200K1800K2400K3000KSE +/- 39136.83, N = 15SE +/- 123032.71, N = 12SE +/- 13136.89, N = 32624535.482159676.181559670.961. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPOP123500K1000K1500K2000K2500KMin: 2404076.75 / Avg: 2624535.48 / Max: 2849914.5Min: 1366295.12 / Avg: 2159676.18 / Max: 2611133.25Min: 1540930.75 / Avg: 1559670.96 / Max: 15849891. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compile1234080120160200169170168

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to benchmark various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark Suite123130K260K390K520K650KSE +/- 5320.80, N = 3SE +/- 3538.13, N = 3SE +/- 3258.38, N = 3604079604512604322
OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark Suite123100K200K300K400K500KMin: 595535 / Avg: 604079.33 / Max: 613845Min: 597580 / Avg: 604511.67 / Max: 609210Min: 599074 / Avg: 604322.33 / Max: 610292

VkFFT

VkFFT is a Fast Fourier Transform (FFT) library that is GPU accelerated by means of the Vulkan API. The VkFFT benchmark runs FFT performance tests across many different sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBenchmark Score, More Is BetterVkFFT 2020-09-291234K8K12K16K20KSE +/- 80.13, N = 3SE +/- 18.75, N = 3SE +/- 15.18, N = 3184321847918478
OpenBenchmarking.orgBenchmark Score, More Is BetterVkFFT 2020-09-291233K6K9K12K15KMin: 18272 / Avg: 18432 / Max: 18520Min: 18457 / Avg: 18478.67 / Max: 18516Min: 18448 / Avg: 18478 / Max: 18497

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable, searchable, and copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.
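
A minimal sketch of the underlying operation using OCRMyPDF's Python API, assuming the ocrmypdf package is installed and using a hypothetical "scan.pdf" input:

    # Add a searchable OCR text layer to a scanned PDF.
    import ocrmypdf

    ocrmypdf.ocr("scan.pdf", "scan_ocr.pdf")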

OpenBenchmarking.orgSeconds, Fewer Is BetterOCRMyPDF 10.3.1+dfsgProcessing 60 Page PDF Document123714212835SE +/- 0.12, N = 3SE +/- 0.07, N = 3SE +/- 0.03, N = 332.1632.2332.26
OpenBenchmarking.orgSeconds, Fewer Is BetterOCRMyPDF 10.3.1+dfsgProcessing 60 Page PDF Document123714212835Min: 31.96 / Avg: 32.16 / Max: 32.36Min: 32.09 / Avg: 32.23 / Max: 32.31Min: 32.22 / Avg: 32.26 / Max: 32.32

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression Speed1232K4K6K8K10KSE +/- 46.13, N = 3SE +/- 12.36, N = 3SE +/- 11.80, N = 311116.311146.311202.61. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression Speed1232K4K6K8K10KMin: 11060.9 / Avg: 11116.3 / Max: 11207.9Min: 11121.8 / Avg: 11146.33 / Max: 11161.2Min: 11179.8 / Avg: 11202.6 / Max: 11219.31. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression Speed1232K4K6K8K10KSE +/- 45.71, N = 3SE +/- 99.34, N = 3SE +/- 105.59, N = 39763.1010001.819959.061. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression Speed1232K4K6K8K10KMin: 9681.1 / Avg: 9763.1 / Max: 9839.09Min: 9806.75 / Avg: 10001.81 / Max: 10132.05Min: 9811.02 / Avg: 9959.06 / Max: 10163.521. (CC) gcc options: -O3

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 6 Two-Pass1230.77181.54362.31543.08723.859SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 33.433.423.431. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 6 Two-Pass123246810Min: 3.43 / Avg: 3.43 / Max: 3.44Min: 3.41 / Avg: 3.42 / Max: 3.42Min: 3.43 / Avg: 3.43 / Max: 3.431. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlib12348121620SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 317.717.717.8
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlib123510152025Min: 17.7 / Avg: 17.7 / Max: 17.7Min: 17.7 / Avg: 17.7 / Max: 17.7Min: 17.8 / Avg: 17.8 / Max: 17.8

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 101230.69461.38922.08382.77843.473SE +/- 0.003, N = 3SE +/- 0.030, N = 3SE +/- 0.026, N = 33.0413.0683.087
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 10123246810Min: 3.04 / Avg: 3.04 / Max: 3.04Min: 3.02 / Avg: 3.07 / Max: 3.13Min: 3.04 / Avg: 3.09 / Max: 3.13

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
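
A hedged sketch of a level-3 compression speed measurement with the zstd CLI, assuming the binary is on PATH and using a hypothetical "ubuntu.iso" sample file:

    # Compress a large file at level 3 and derive MB/s from the wall-clock time.
    import os, subprocess, time

    src = "ubuntu.iso"
    size_mb = os.path.getsize(src) / 1e6
    t0 = time.perf_counter()
    subprocess.run(["zstd", "-3", "-f", src, "-o", src + ".zst"], check=True)
    print("zstd -3: %.1f MB/s" % (size_mb / (time.perf_counter() - t0)))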

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 31238001600240032004000SE +/- 19.56, N = 3SE +/- 37.93, N = 3SE +/- 19.08, N = 33687.73632.03687.71. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 31236001200180024003000Min: 3657.3 / Avg: 3687.67 / Max: 3724.2Min: 3557.1 / Avg: 3632.03 / Max: 3679.7Min: 3649.5 / Avg: 3687.67 / Max: 3707.11. (CC) gcc options: -O3 -pthread -lz -llzma

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pickle_pure_python123100200300400500440439438

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed Time1231.7M3.4M5.1M6.8M8.5MSE +/- 11851.34, N = 3SE +/- 103135.62, N = 3SE +/- 94283.61, N = 37738146757349773946991. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm
OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed Time1231.3M2.6M3.9M5.2M6.5MMin: 7718515 / Avg: 7738146 / Max: 7759465Min: 7374117 / Avg: 7573496.67 / Max: 7718976Min: 7231919 / Avg: 7394699.33 / Max: 75585221. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: json_loads123612182430SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 327.227.327.3
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: json_loads123612182430Min: 27.2 / Avg: 27.23 / Max: 27.3Min: 27.3 / Avg: 27.3 / Max: 27.3Min: 27.3 / Avg: 27.33 / Max: 27.4

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_svm123510152025SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 321.4321.4421.52
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_svm123510152025Min: 21.41 / Avg: 21.43 / Max: 21.44Min: 21.42 / Avg: 21.44 / Max: 21.45Min: 21.48 / Avg: 21.52 / Max: 21.57

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSH123300K600K900K1200K1500KSE +/- 14444.21, N = 3SE +/- 12228.65, N = 3SE +/- 13914.34, N = 151437805.001413366.451459398.531. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSH123300K600K900K1200K1500KMin: 1414472.38 / Avg: 1437805 / Max: 1464222.5Min: 1390820.62 / Avg: 1413366.45 / Max: 1432848.12Min: 1373626.38 / Avg: 1459398.53 / Max: 1536442.381. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SET123400K800K1200K1600K2000KSE +/- 18920.71, N = 3SE +/- 29276.53, N = 3SE +/- 19775.05, N = 151725875.751754885.381695298.031. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SET123300K600K900K1200K1500KMin: 1692372.25 / Avg: 1725875.75 / Max: 1757862.88Min: 1715759.88 / Avg: 1754885.38 / Max: 1812174Min: 1579879.88 / Avg: 1695298.03 / Max: 1799309.381. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.
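
A minimal sketch of timing OCR over a batch of images with a system Tesseract install, as this profile does; the image file names are hypothetical.

    # OCR a list of images with the tesseract CLI and report the total time.
    # Assumption: the tesseract binary is on PATH.
    import subprocess, time

    images = ["page1.png", "page2.png", "page3.png"]
    start = time.perf_counter()
    for i, img in enumerate(images):
        subprocess.run(["tesseract", img, "out_%d" % i], check=True)
    print("time to OCR %d images: %.2f s" % (len(images), time.perf_counter() - start))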

OpenBenchmarking.orgSeconds, Fewer Is BetterTesseract OCR 4.1.1Time To OCR 7 Images123612182430SE +/- 0.03, N = 3SE +/- 0.06, N = 3SE +/- 0.02, N = 324.5724.5124.51
OpenBenchmarking.orgSeconds, Fewer Is BetterTesseract OCR 4.1.1Time To OCR 7 Images123612182430Min: 24.52 / Avg: 24.57 / Max: 24.62Min: 24.38 / Avg: 24.5 / Max: 24.6Min: 24.48 / Avg: 24.51 / Max: 24.54

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test Times1232004006008001000SE +/- 2.31, N = 3SE +/- 1.76, N = 3102010171021
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test Times1232004006008001000Min: 1016 / Avg: 1020 / Max: 1024Min: 1018 / Avg: 1020.67 / Max: 1024

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: yolov4-tiny1233691215SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 311.2611.2911.30MIN: 11.09 / MAX: 11.79MIN: 11.1 / MAX: 11.73MIN: 11.15 / MAX: 11.621. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: yolov4-tiny1233691215Min: 11.22 / Avg: 11.26 / Max: 11.29Min: 11.26 / Avg: 11.29 / Max: 11.32Min: 11.29 / Avg: 11.3 / Max: 11.331. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: resnet50123246810SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 36.106.136.09MIN: 6.06 / MAX: 10.11MIN: 6.04 / MAX: 11.31MIN: 6.06 / MAX: 6.391. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: resnet50123246810Min: 6.09 / Avg: 6.1 / Max: 6.12Min: 6.1 / Avg: 6.13 / Max: 6.18Min: 6.09 / Avg: 6.09 / Max: 6.11. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: alexnet1230.92251.8452.76753.694.6125SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 34.094.104.09MIN: 3.96 / MAX: 4.79MIN: 3.99 / MAX: 4.76MIN: 3.97 / MAX: 4.661. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: alexnet123246810Min: 4.08 / Avg: 4.09 / Max: 4.1Min: 4.09 / Avg: 4.1 / Max: 4.11Min: 4.08 / Avg: 4.09 / Max: 4.11. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: resnet181230.48830.97661.46491.95322.4415SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 32.162.172.17MIN: 2.09 / MAX: 2.76MIN: 2.1 / MAX: 2.77MIN: 2.1 / MAX: 2.761. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: resnet18123246810Min: 2.15 / Avg: 2.16 / Max: 2.17Min: 2.16 / Avg: 2.17 / Max: 2.18Min: 2.16 / Avg: 2.17 / Max: 2.181. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: vgg161233691215SE +/- 0.16, N = 3SE +/- 0.05, N = 3SE +/- 0.07, N = 310.8210.7110.61MIN: 10.18 / MAX: 24.02MIN: 10.19 / MAX: 20.01MIN: 10.2 / MAX: 241. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: vgg161233691215Min: 10.54 / Avg: 10.82 / Max: 11.09Min: 10.61 / Avg: 10.71 / Max: 10.8Min: 10.5 / Avg: 10.61 / Max: 10.751. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: googlenet1231.29152.5833.87455.1666.4575SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 35.745.705.69MIN: 5.64 / MAX: 15.26MIN: 5.65 / MAX: 8.52MIN: 5.63 / MAX: 10.181. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: googlenet123246810Min: 5.66 / Avg: 5.74 / Max: 5.83Min: 5.68 / Avg: 5.7 / Max: 5.71Min: 5.66 / Avg: 5.69 / Max: 5.721. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: blazeface1230.20480.40960.61440.81921.024SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 30.910.880.87MIN: 0.85 / MAX: 1.18MIN: 0.86 / MAX: 1.45MIN: 0.85 / MAX: 1.151. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: blazeface123246810Min: 0.87 / Avg: 0.91 / Max: 0.97Min: 0.87 / Avg: 0.88 / Max: 0.91Min: 0.86 / Avg: 0.87 / Max: 0.871. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: efficientnet-b01233691215SE +/- 0.12, N = 3SE +/- 0.17, N = 3SE +/- 0.10, N = 39.339.349.36MIN: 8.97 / MAX: 19.5MIN: 8.93 / MAX: 17.56MIN: 8.96 / MAX: 201. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: efficientnet-b01233691215Min: 9.13 / Avg: 9.33 / Max: 9.55Min: 9.08 / Avg: 9.34 / Max: 9.66Min: 9.23 / Avg: 9.36 / Max: 9.561. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: mnasnet1230.62551.2511.87652.5023.1275SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 32.782.762.76MIN: 2.71 / MAX: 15.58MIN: 2.72 / MAX: 3.3MIN: 2.71 / MAX: 3.281. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: mnasnet123246810Min: 2.75 / Avg: 2.78 / Max: 2.82Min: 2.76 / Avg: 2.76 / Max: 2.77Min: 2.75 / Avg: 2.76 / Max: 2.761. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: shufflenet-v21230.52431.04861.57292.09722.6215SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 32.322.332.32MIN: 2.28 / MAX: 3.46MIN: 2.3 / MAX: 2.98MIN: 2.29 / MAX: 2.741. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: shufflenet-v2123246810Min: 2.3 / Avg: 2.32 / Max: 2.33Min: 2.32 / Avg: 2.33 / Max: 2.35Min: 2.31 / Avg: 2.32 / Max: 2.331. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU-v3-v3 - Model: mobilenet-v31230.80551.6112.41653.2224.0275SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 33.573.583.58MIN: 3.54 / MAX: 4.28MIN: 3.53 / MAX: 4.28MIN: 3.54 / MAX: 4.31. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3123246810Min: 3.56 / Avg: 3.57 / Max: 3.58Min: 3.57 / Avg: 3.58 / Max: 3.59Min: 3.57 / Avg: 3.58 / Max: 3.581. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU-v2-v2 - Model: mobilenet-v21230.58051.1611.74152.3222.9025SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 32.572.582.58MIN: 2.53 / MAX: 3.17MIN: 2.53 / MAX: 5.2MIN: 2.53 / MAX: 3.721. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2123246810Min: 2.56 / Avg: 2.57 / Max: 2.58Min: 2.57 / Avg: 2.58 / Max: 2.59Min: 2.57 / Avg: 2.58 / Max: 2.591. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: mobilenet123246810SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 38.288.258.27MIN: 7.59 / MAX: 11.65MIN: 7.57 / MAX: 11.24MIN: 7.6 / MAX: 11.31. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: mobilenet1233691215Min: 8.26 / Avg: 8.28 / Max: 8.3Min: 8.23 / Avg: 8.25 / Max: 8.27Min: 8.25 / Avg: 8.27 / Max: 8.291. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: squeezenet1231.09132.18263.27394.36525.4565SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 34.844.854.85MIN: 4.71 / MAX: 5.84MIN: 4.69 / MAX: 5.86MIN: 4.71 / MAX: 5.841. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: Vulkan GPU - Model: squeezenet123246810Min: 4.82 / Avg: 4.84 / Max: 4.86Min: 4.84 / Avg: 4.85 / Max: 4.87Min: 4.82 / Avg: 4.85 / Max: 4.881. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 4 Two-Pass1230.49050.9811.47151.9622.4525SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 32.182.172.181. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 4 Two-Pass123246810Min: 2.18 / Avg: 2.18 / Max: 2.18Min: 2.16 / Avg: 2.17 / Max: 2.17Min: 2.18 / Avg: 2.18 / Max: 2.191. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System. The Sunflow Rendering System is an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterSunflow Rendering System 0.07.2Global Illumination + Image Synthesis1230.44910.89821.34731.79642.2455SE +/- 0.029, N = 4SE +/- 0.024, N = 3SE +/- 0.023, N = 31.9481.9651.996MIN: 1.73 / MAX: 2.71MIN: 1.78 / MAX: 2.54MIN: 1.81 / MAX: 2.69
OpenBenchmarking.orgSeconds, Fewer Is BetterSunflow Rendering System 0.07.2Global Illumination + Image Synthesis123246810Min: 1.88 / Avg: 1.95 / Max: 2.01Min: 1.93 / Avg: 1.96 / Max: 2.01Min: 1.97 / Avg: 2 / Max: 2.04

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: chaos12320406080100111111111

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: float123306090120150SE +/- 0.33, N = 3117116117
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: float12320406080100Min: 117 / Avg: 117.33 / Max: 118

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbody123306090120150117117117

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaes123306090120150116116116

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test measures the RSA 4096-bit performance of OpenSSL. Learn more via the OpenBenchmarking.org test page.
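
A minimal sketch of collecting the same metric directly with the openssl CLI; "openssl speed rsa4096" prints a summary table whose final lines include the RSA 4096-bit sign/verify rates.

    # Run OpenSSL's built-in RSA 4096 speed test and show the summary lines.
    # Assumption: the openssl binary is on PATH.
    import subprocess

    out = subprocess.run(["openssl", "speed", "rsa4096"],
                         capture_output=True, text=True, check=True)
    print("\n".join(out.stdout.splitlines()[-2:]))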

OpenBenchmarking.orgSigns Per Second, More Is BetterOpenSSL 1.1.1RSA 4096-bit Performance1232004006008001000SE +/- 0.15, N = 3SE +/- 0.27, N = 3SE +/- 0.07, N = 31145.01144.31144.71. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.orgSigns Per Second, More Is BetterOpenSSL 1.1.1RSA 4096-bit Performance1232004006008001000Min: 1144.8 / Avg: 1145 / Max: 1145.3Min: 1143.9 / Avg: 1144.27 / Max: 1144.8Min: 1144.6 / Avg: 1144.67 / Max: 1144.81. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRNNoise 2020-06-28123510152025SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 320.0220.0220.031. (CC) gcc options: -O2 -pedantic -fvisibility=hidden
OpenBenchmarking.orgSeconds, Fewer Is BetterRNNoise 2020-06-28123510152025Min: 19.98 / Avg: 20.02 / Max: 20.05Min: 20 / Avg: 20.02 / Max: 20.03Min: 20.02 / Avg: 20.03 / Max: 20.041. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Very Fast123816243240SE +/- 0.05, N = 3SE +/- 0.02, N = 3SE +/- 0.04, N = 332.3432.3832.381. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Very Fast123714212835Min: 32.27 / Avg: 32.34 / Max: 32.45Min: 32.35 / Avg: 32.38 / Max: 32.41Min: 32.31 / Avg: 32.38 / Max: 32.441. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: MobileNet v212360120180240300SE +/- 0.00, N = 3SE +/- 0.19, N = 3SE +/- 0.03, N = 3263.82264.30264.12MIN: 263.16 / MAX: 270.27MIN: 263.28 / MAX: 314.67MIN: 263.32 / MAX: 271.641. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: MobileNet v212350100150200250Min: 263.81 / Avg: 263.81 / Max: 263.82Min: 263.93 / Avg: 264.3 / Max: 264.55Min: 264.07 / Avg: 264.12 / Max: 264.151. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.112360120180240300SE +/- 0.22, N = 3SE +/- 0.15, N = 3SE +/- 0.05, N = 3260.26260.45260.44MIN: 259.28 / MAX: 261.34MIN: 259.62 / MAX: 261.24MIN: 259.68 / MAX: 261.291. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.112350100150200250Min: 259.82 / Avg: 260.26 / Max: 260.54Min: 260.28 / Avg: 260.45 / Max: 260.76Min: 260.34 / Avg: 260.44 / Max: 260.521. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code of modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos that are bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDolfyn 0.527Computational Fluid Dynamics12348121620SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.02, N = 317.7617.7017.68
OpenBenchmarking.orgSeconds, Fewer Is BetterDolfyn 0.527Computational Fluid Dynamics12348121620Min: 17.74 / Avg: 17.76 / Max: 17.77Min: 17.64 / Avg: 17.7 / Max: 17.79Min: 17.64 / Avg: 17.68 / Max: 17.71

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080p123816243240SE +/- 0.05, N = 3SE +/- 0.20, N = 3SE +/- 0.22, N = 334.7135.1234.661. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080p123816243240Min: 34.64 / Avg: 34.71 / Max: 34.81Min: 34.72 / Avg: 35.12 / Max: 35.38Min: 34.36 / Avg: 34.66 / Max: 35.081. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 8 Realtime123918273645SE +/- 0.08, N = 3SE +/- 0.53, N = 3SE +/- 0.08, N = 337.0936.1837.071. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 8 Realtime123816243240Min: 36.94 / Avg: 37.09 / Max: 37.17Min: 35.17 / Avg: 36.18 / Max: 36.96Min: 36.92 / Avg: 37.07 / Max: 37.211. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor supporting various GPU compression techniques. Betsy is written in GLSL with Vulkan/OpenGL (compute shader) support for GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC2 RGB - Quality: Highest123246810SE +/- 0.213, N = 15SE +/- 0.003, N = 3SE +/- 0.003, N = 36.6886.4766.4701. (CXX) g++ options: -O3 -O2 -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC2 RGB - Quality: Highest1233691215Min: 6.46 / Avg: 6.69 / Max: 9.67Min: 6.47 / Avg: 6.48 / Max: 6.48Min: 6.47 / Avg: 6.47 / Max: 6.471. (CXX) g++ options: -O3 -O2 -lpthread -ldl

MPV

MPV is an open-source, cross-platform media player. This test profile measures the frame-rate that can be achieved when playback runs unsynchronized in a desynchronized mode. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterMPVVideo Input: Big Buck Bunny Sunflower 1080p - Decode: Software Only12330060090012001500SE +/- 2.29, N = 3SE +/- 2.23, N = 3SE +/- 7.87, N = 31300.121305.291304.01MIN: 749.97 / MAX: 2399.95MIN: 799.97 / MAX: 2399.92MIN: 799.97 / MAX: 2399.921. mpv 0.32.0
OpenBenchmarking.orgFPS, More Is BetterMPVVideo Input: Big Buck Bunny Sunflower 1080p - Decode: Software Only1232004006008001000Min: 1295.58 / Avg: 1300.12 / Max: 1302.92Min: 1301.03 / Avg: 1305.29 / Max: 1308.57Min: 1289.26 / Avg: 1304.01 / Max: 1316.151. mpv 0.32.0

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin Protein1230.75151.5032.25453.0063.7575SE +/- 0.052, N = 15SE +/- 0.029, N = 3SE +/- 0.021, N = 33.2493.3293.3401. (CXX) g++ options: -O3 -pthread -lm
OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin Protein123246810Min: 2.68 / Avg: 3.25 / Max: 3.37Min: 3.27 / Avg: 3.33 / Max: 3.37Min: 3.3 / Avg: 3.34 / Max: 3.371. (CXX) g++ options: -O3 -pthread -lm

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor supporting various GPU compression techniques. Betsy is written in GLSL with Vulkan/OpenGL (compute shader) support for GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC1 - Quality: Highest1231.30482.60963.91445.21926.524SE +/- 0.224, N = 15SE +/- 0.062, N = 3SE +/- 0.064, N = 35.7995.6085.6021. (CXX) g++ options: -O3 -O2 -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC1 - Quality: Highest123246810Min: 5.55 / Avg: 5.8 / Max: 8.94Min: 5.53 / Avg: 5.61 / Max: 5.73Min: 5.52 / Avg: 5.6 / Max: 5.731. (CXX) g++ options: -O3 -O2 -lpthread -ldl

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Boat - Acceleration: CPU-only1233691215SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 312.6212.4812.51
OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Boat - Acceleration: CPU-only12348121620Min: 12.61 / Avg: 12.62 / Max: 12.63Min: 12.47 / Avg: 12.48 / Max: 12.49Min: 12.5 / Avg: 12.51 / Max: 12.52

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Medium123246810SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 38.308.308.301. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Medium1233691215Min: 8.3 / Avg: 8.3 / Max: 8.31Min: 8.3 / Avg: 8.3 / Max: 8.3Min: 8.29 / Avg: 8.3 / Max: 8.311. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU1231.06442.12883.19324.25765.322SE +/- 0.00490, N = 3SE +/- 0.01879, N = 3SE +/- 0.00436, N = 34.730574.039104.06739MIN: 4.66MIN: 3.96MIN: 3.991. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU123246810Min: 4.72 / Avg: 4.73 / Max: 4.74Min: 4 / Avg: 4.04 / Max: 4.07Min: 4.06 / Avg: 4.07 / Max: 4.081. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Ultra Fast1231326395265SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.11, N = 357.3757.5757.471. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Ultra Fast1231122334455Min: 57.34 / Avg: 57.37 / Max: 57.43Min: 57.51 / Avg: 57.57 / Max: 57.61Min: 57.36 / Avg: 57.47 / Max: 57.681. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 01233691215SE +/- 0.015, N = 3SE +/- 0.018, N = 3SE +/- 0.014, N = 39.2979.2679.2671. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 01233691215Min: 9.28 / Avg: 9.3 / Max: 9.33Min: 9.25 / Avg: 9.27 / Max: 9.3Min: 9.25 / Avg: 9.27 / Max: 9.291. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Darktable

Darktable is an open-source photography / workflow application this will use any system-installed Darktable program or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Masskrug - Acceleration: CPU-only123246810SE +/- 0.016, N = 3SE +/- 0.006, N = 3SE +/- 0.018, N = 38.0197.9638.026
OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Masskrug - Acceleration: CPU-only1233691215Min: 7.99 / Avg: 8.02 / Max: 8.05Min: 7.95 / Avg: 7.96 / Max: 7.97Min: 8.01 / Avg: 8.03 / Max: 8.06

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Server Room - Acceleration: CPU-only123246810SE +/- 0.003, N = 3SE +/- 0.010, N = 3SE +/- 0.002, N = 36.3186.2746.292
OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Server Room - Acceleration: CPU-only1233691215Min: 6.31 / Avg: 6.32 / Max: 6.32Min: 6.25 / Avg: 6.27 / Max: 6.29Min: 6.29 / Avg: 6.29 / Max: 6.3

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Fast (Seconds, fewer is better)
  1: 7.39 (SE +/- 0.01, N = 3; Min: 7.38 / Max: 7.41)
  2: 7.39 (SE +/- 0.00, N = 3; Min: 7.39 / Max: 7.40)
  3: 7.37 (SE +/- 0.00, N = 3; Min: 7.37 / Max: 7.38)
  Compiler options: (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818 - Scale: 2x - Denoise: 3 - TAA: Yes (Seconds, fewer is better)
  1: 7.158 (SE +/- 0.002, N = 3; Min: 7.16 / Max: 7.16)
  2: 7.184 (SE +/- 0.007, N = 3; Min: 7.18 / Max: 7.20)
  3: 7.183 (SE +/- 0.008, N = 3; Min: 7.17 / Max: 7.20)

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 8 (Seconds, fewer is better)
  1: 6.631 (SE +/- 0.004, N = 3; Min: 6.62 / Max: 6.64)
  2: 6.575 (SE +/- 0.005, N = 3; Min: 6.57 / Max: 6.58)
  3: 6.584 (SE +/- 0.001, N = 3; Min: 6.58 / Max: 6.59)
  Compiler options: (CXX) g++ options: -O3 -fPIC
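For readers wanting to see what the "Encoder Speed" setting corresponds to in practice, the following is a minimal sketch (not part of this result file) that drives avifenc from Python. It assumes the avifenc binary from libavif is installed on the PATH, that its --speed option is the encoder speed level exercised here, and that the file names are placeholders.

    import subprocess

    # Placeholder paths; point these at a real JPEG on your system.
    input_jpeg = "sample.jpg"
    output_avif = "sample.avif"

    # Encode the JPEG to AVIF at encoder speed 8 (0 = slowest, 10 = fastest),
    # matching the "Encoder Speed: 8" configuration in the results above.
    # Assumes avifenc is on PATH and accepts the --speed option.
    subprocess.run(["avifenc", "--speed", "8", input_jpeg, output_avif], check=True)

Speed 10, used in a later result in this comparison, simply trades some compression efficiency for a shorter encode time.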

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 20.65 (SE +/- 0.01, N = 3; Min: 20.63 / Max: 20.67; MIN: 20.28)
  2: 19.20 (SE +/- 0.01, N = 3; Min: 19.19 / Max: 19.22; MIN: 19.01)
  3: 19.41 (SE +/- 0.02, N = 3; Min: 19.39 / Max: 19.45; MIN: 19.18)
  Compiler options: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

yquake2

This is a test of Yamagi Quake II, an enhanced client for id Software's Quake II with a focus on offline and co-op gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: Software CPU - Resolution: 1920 x 1080 (Frames Per Second, more is better)
  1: 110.9 (SE +/- 0.48, N = 3; Min: 110.2 / Max: 111.8)
  2: 109.5 (SE +/- 0.30, N = 3; Min: 108.9 / Max: 109.9)
  3: 110.4 (SE +/- 0.31, N = 3; Min: 109.8 / Max: 110.8)
  Compiler options: (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 10 (Seconds, fewer is better)
  1: 6.100 (SE +/- 0.015, N = 3; Min: 6.07 / Max: 6.12)
  2: 6.037 (SE +/- 0.019, N = 3; Min: 6.00 / Max: 6.06)
  3: 6.064 (SE +/- 0.017, N = 3; Min: 6.03 / Max: 6.09)
  Compiler options: (CXX) g++ options: -O3 -fPIC

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 12.03 (SE +/- 0.02, N = 3; Min: 12.01 / Max: 12.08; MIN: 11.78)
  2: 13.18 (SE +/- 0.17, N = 3; Min: 12.90 / Max: 13.49; MIN: 12.66)
  3: 12.89 (SE +/- 0.02, N = 3; Min: 12.86 / Max: 12.91; MIN: 12.66)
  Compiler options: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

FFTE

FFTE is a package by Daisuke Takahashi for computing Discrete Fourier Transforms of 1-, 2-, and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, more is better)
  1: 25405.99 (SE +/- 31.78, N = 3; Min: 25371.95 / Max: 25469.48)
  2: 25808.66 (SE +/- 19.99, N = 3; Min: 25782.41 / Max: 25847.90)
  3: 25835.65 (SE +/- 17.94, N = 3; Min: 25802.36 / Max: 25863.89)
  Compiler options: (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp
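As a rough illustration of the workload class measured here (and not the FFTE Fortran code itself), note that N=256 satisfies the (2^p)*(3^q)*(5^r) length rule with p=8 and q=r=0. A minimal NumPy sketch of a 3D complex FFT of that size follows; the 5*N*log2(N) operation count used for the MFLOPS estimate is the conventional FFT flop model and is an assumption, not necessarily the exact formula FFTE reports.

    import time
    import numpy as np

    n = 256  # 256 = 2**8, a valid (2^p)*(3^q)*(5^r) transform length (p=8, q=r=0)

    # Random 3D complex-valued input; needs on the order of 1 GB of RAM for input plus output.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((n, n, n)) + 1j * rng.standard_normal((n, n, n))

    start = time.perf_counter()
    y = np.fft.fftn(x)  # 3D complex-to-complex FFT
    elapsed = time.perf_counter() - start

    # Conventional complex-FFT flop model (assumption): 5 * N_total * log2(N_total).
    n_total = n ** 3
    mflops = 5.0 * n_total * np.log2(n_total) / elapsed / 1e6
    print(f"{n}^3 complex FFT: {elapsed:.3f} s (~{mflops:,.0f} MFLOPS by the 5*N*log2(N) model)")

This NumPy run is only meant to show the shape of the computation; FFTE itself is OpenMP-parallel Fortran (note the -fopenmp flag above) and uses its own tuned kernels.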

yquake2

This is a test of Yamagi Quake II, an enhanced client for id Software's Quake II with a focus on offline and co-op gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: OpenGL 1.x - Resolution: 1920 x 1080 (Frames Per Second, more is better)
  1: 737.4 (SE +/- 8.88, N = 3; Min: 727.3 / Max: 755.1)
  2: 747.0 (SE +/- 9.93, N = 3; Min: 729.8 / Max: 764.2)
  3: 730.4 (SE +/- 1.13, N = 3; Min: 728.1 / Max: 731.5)
  Compiler options: (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

yquake2 7.45 - Renderer: OpenGL 3.x - Resolution: 1920 x 1080 (Frames Per Second, more is better)
  1: 977.8 (SE +/- 1.84, N = 3; Min: 975.3 / Max: 981.4)
  2: 977.8 (SE +/- 0.50, N = 3; Min: 976.8 / Max: 978.3)
  3: 975.8 (SE +/- 2.82, N = 3; Min: 972.3 / Max: 981.4)
  Compiler options: (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Server Rack - Acceleration: CPU-only (Seconds, fewer is better)
  1: 0.230 (SE +/- 0.000, N = 3; Min: 0.23 / Max: 0.23)
  2: 0.229 (SE +/- 0.000, N = 3; Min: 0.23 / Max: 0.23)
  3: 0.234 (SE +/- 0.004, N = 3; Min: 0.23 / Max: 0.24)

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency tests. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files, though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0 - Test / Class results (recorded for run 1 only)
  Max Ping Pong Bandwidth: 14400.77 MB/s (more is better)
  Random Ring Bandwidth: 4.96131 GB/s (more is better)
  Random Ring Latency: 0.39400 usecs (fewer is better)
  G-Random Access: 0.02004 GUP/s (more is better)
  EP-STREAM Triad: 6.74179 GB/s (more is better)
  G-Ptrans: 0.65366 GB/s (more is better)
  EP-DGEMM: 55.51 GFLOPS (more is better)
  G-Ffte: 3.30686 GFLOPS (more is better)
  Compiler options: (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops; ATLAS + Open MPI 4.0.3
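To make the EP-STREAM Triad figure above more concrete, here is a minimal NumPy sketch (an illustration, not HPCC's C/MPI implementation) of the Triad kernel a = b + scalar*c together with the standard STREAM accounting of 24 bytes moved per element (two 8-byte reads and one 8-byte write). HPCC's "embarrassingly parallel" EP-STREAM variant runs this independently on every MPI rank and reports a per-process bandwidth, which this single-process sketch does not attempt to reproduce.

    import time
    import numpy as np

    n = 20_000_000        # array length, chosen to be far larger than the CPU caches
    scalar = 3.0

    b = np.full(n, 1.0)
    c = np.full(n, 2.0)

    start = time.perf_counter()
    a = b + scalar * c    # STREAM Triad: a(i) = b(i) + q * c(i)
    elapsed = time.perf_counter() - start

    # Standard STREAM accounting: three 8-byte arrays touched once per element.
    gbytes = 3 * 8 * n / 1e9
    print(f"Triad: {gbytes / elapsed:.2f} GB/s over {n} elements ({elapsed * 1e3:.1f} ms)")

Because NumPy materialises a temporary for scalar * c, this sketch moves more data than the hand-written C kernel and will typically report a somewhat lower figure than a true STREAM run on the same machine.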

178 Results Shown

Kripke
Timed LLVM Compilation
Basis Universal
LeelaChessZero:
  Eigen
  BLAS
AI Benchmark Alpha:
  Device AI Score
  Device Training Score
  Device Inference Score
ASTC Encoder
HPC Challenge
Blender
GROMACS
DDraceNetwork
Monte Carlo Simulations of Ionised Nebulae
BRL-CAD
Mlpack Benchmark
Numpy Benchmark
Hierarchical INTegration
NAMD
Kvazaar
asmFish
Timed Linux Kernel Compilation
TensorFlow Lite:
  Inception ResNet V2
  Inception V4
libavif avifenc
oneDNN
Caffe
Embree
Zstd Compression
Basis Universal
Embree
Timed HMMer Search
GLmark2
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  resnet-v2-50
  SqueezeNetV1.0
Embree:
  Pathtracer ISPC - Asian Dragon
  Pathtracer - Asian Dragon
Stockfish
libavif avifenc
PyPerformance
RawTherapee
NCNN:
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
  CPU - squeezenet
x265
Kvazaar
PyPerformance
Geekbench:
  CPU Multi Core
  GPU Vulkan
Basis Universal
Hugin
IndigoBench:
  CPU - Bedroom
  CPU - Supercar
TensorFlow Lite:
  SqueezeNet
  Mobilenet Float
  Mobilenet Quant
  NASNet Mobile
DDraceNetwork
DDraceNetwork
Mlpack Benchmark
LZ4 Compression:
  9 - Decompression Speed
  9 - Compression Speed
PyPerformance
Basis Universal
LZ4 Compression:
  3 - Decompression Speed
  3 - Compression Speed
ASTC Encoder
rav1e:
  5
  1
Geekbench
Redis
Mlpack Benchmark
Caffe
oneDNN
Kvazaar
PyPerformance
rav1e
eSpeak-NG Speech Engine
Kvazaar
AOM AV1
MPV
Redis
PyPerformance
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
AOM AV1
oneDNN
Redis
PyPerformance
PHPBench
VkFFT
OCRMyPDF
LZ4 Compression:
  1 - Decompression Speed
  1 - Compression Speed
AOM AV1
PyPerformance
rav1e
Zstd Compression
PyPerformance
Crafty
PyPerformance
Mlpack Benchmark
Redis:
  LPUSH
  SET
Tesseract OCR
PyBench
NCNN:
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - resnet50
  Vulkan GPU - alexnet
  Vulkan GPU - resnet18
  Vulkan GPU - vgg16
  Vulkan GPU - googlenet
  Vulkan GPU - blazeface
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - mnasnet
  Vulkan GPU - shufflenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - mobilenet
  Vulkan GPU - squeezenet
AOM AV1
Sunflow Rendering System
PyPerformance:
  chaos
  float
  nbody
  crypto_pyaes
OpenSSL
RNNoise
Kvazaar
TNN:
  CPU - MobileNet v2
  CPU - SqueezeNet v1.1
Dolfyn
x265
AOM AV1
Betsy GPU Compressor
MPV
LAMMPS Molecular Dynamics Simulator
Betsy GPU Compressor
Darktable
ASTC Encoder
oneDNN
Kvazaar
Basis Universal
Darktable:
  Masskrug - CPU-only
  Server Room - CPU-only
ASTC Encoder
Waifu2x-NCNN Vulkan
libavif avifenc
oneDNN
yquake2
libavif avifenc
oneDNN
FFTE
yquake2:
  OpenGL 1.x - 1920 x 1080
  OpenGL 3.x - 1920 x 1080
Darktable
HPC Challenge:
  Max Ping Pong Bandwidth
  Rand Ring Bandwidth
  Rand Ring Latency
  G-Rand Access
  EP-STREAM Triad
  G-Ptrans
  EP-DGEMM
  G-Ffte