Ryzen 3 2200G 2021

AMD Ryzen 3 2200G testing with an ASUS PRIME B350M-E (5220 BIOS) and ASUS AMD Radeon Vega / Mobile 2GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101191-HA-RYZEN322022

This result file contains tests from the following categories:

Audio Encoding: 3 Tests
AV1: 3 Tests
Bioinformatics: 2 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 2 Tests
C++ Boost Tests: 2 Tests
Chess Test Suite: 4 Tests
Timed Code Compilation: 4 Tests
C/C++ Compiler Tests: 15 Tests
Compression Tests: 2 Tests
CPU Massive: 21 Tests
Creator Workloads: 24 Tests
Database Test Suite: 4 Tests
Encoding: 8 Tests
Fortran Tests: 6 Tests
Game Development: 3 Tests
HPC - High Performance Computing: 24 Tests
Imaging: 6 Tests
Common Kernel Benchmarks: 2 Tests
Machine Learning: 9 Tests
Molecular Dynamics: 9 Tests
MPI Benchmarks: 4 Tests
Multi-Core: 19 Tests
NVIDIA GPU Compute: 7 Tests
Intel oneAPI: 2 Tests
OpenMPI Tests: 9 Tests
Programmer / Developer System Benchmarks: 9 Tests
Python Tests: 5 Tests
Scientific Computing: 15 Tests
Server: 7 Tests
Server CPU Tests: 12 Tests
Single-Threaded: 6 Tests
Speech: 3 Tests
Telephony: 3 Tests
Texture Compression: 2 Tests
Video Encoding: 5 Tests
Vulkan Compute: 3 Tests


Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
1
January 16 2021
  18 Hours, 35 Minutes
2
January 17 2021
  20 Hours, 52 Minutes
3
January 18 2021
  19 Hours, 6 Minutes
Invert Hiding All Results Option
  19 Hours, 31 Minutes



Ryzen 3 2200G 2021 - System Details (identical across runs 1, 2, and 3):

Processor: AMD Ryzen 3 2200G @ 3.50GHz (4 Cores)
Motherboard: ASUS PRIME B350M-E (5220 BIOS)
Chipset: AMD Raven/Raven2
Memory: 6GB
Disk: Samsung SSD 970 EVO 250GB
Graphics: ASUS AMD Radeon Vega / Mobile 2GB (1100/1600MHz)
Audio: AMD Raven/Raven2/Fenghuang
Monitor: G237HL
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 20.10
Kernel: 5.8.0-38-generic (x86_64)
Desktop: GNOME Shell 3.38.1
Display Server: X Server 1.20.9
Display Driver: modesetting 1.20.9
OpenGL: 4.6 Mesa 20.2.6 (LLVM 11.0.0)
Vulkan: 1.2.131
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x8101016
Graphics Details: GLAMOR
Java Details: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.10)
Python Details: Python 3.8.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Result overview chart: relative performance of runs 1, 2, and 3 across all tests in this file, normalized to a roughly 100% to 121% spread.]

[Detailed per-test result table for runs 1, 2, and 3 across all benchmarks omitted from this text export. The individual test results below are given as the average value recorded for each run.]

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
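
As a rough illustration of the operations the Redis test cases in this file exercise (LPOP, GET, SET, SADD, LPUSH), here is a minimal Python sketch. It assumes the third-party redis client package and a Redis server on localhost:6379, neither of which is part of this test profile, and a single unpipelined client will not approach the requests-per-second figures reported below.

    import time
    import redis  # third-party client, assumed installed: pip install redis

    r = redis.Redis(host="localhost", port=6379)

    # Seed a list, then time a burst of LPOP requests; the benchmark reports requests/sec.
    r.rpush("bench:list", *range(100_000))
    start = time.perf_counter()
    ops = 0
    while r.lpop("bench:list") is not None:
        ops += 1
    elapsed = time.perf_counter() - start
    print(f"LPOP: {ops / elapsed:,.0f} requests/sec from one unpipelined client")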

Redis 6.0.9, Test: LPOP (Requests Per Second, more is better): Run 1: 2261210.92, Run 2: 1258380.50, Run 3: 1275489.17

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, more is better): Run 1: 4811563, Run 2: 3117717 (no result recorded for Run 3)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: BLAS (Nodes Per Second, more is better): Run 1: 432, Run 2: 374, Run 3: 353

LeelaChessZero 0.26, Backend: Eigen (Nodes Per Second, more is better): Run 1: 448, Run 2: 380, Run 3: 377

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: GET (Requests Per Second, more is better): Run 1: 2064794.83, Run 2: 1931045.20, Run 3: 1930168.38

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): Run 1: 14.82, Run 2: 15.08, Run 3: 15.56

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System. The Sunflow Rendering System is an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

Sunflow Rendering System 0.07.2, Global Illumination + Image Synthesis (Seconds, fewer is better): Run 1: 3.206, Run 2: 3.148, Run 3: 3.302

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better): Run 1: 7.38, Run 2: 7.74, Run 3: 7.38

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.
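
To make the "Per Event" units below concrete, here is a minimal stand-in for one such micro-benchmark (process creation), written as an illustration only and not taken from OSBench; it needs nothing beyond the Python standard library on a POSIX system.

    import os
    import time

    N = 2000  # number of fork/exit cycles to time

    start = time.perf_counter()
    for _ in range(N):
        pid = os.fork()
        if pid == 0:
            os._exit(0)        # child exits immediately; only creation/teardown is measured
        os.waitpid(pid, 0)     # parent reaps the child before starting the next cycle
    elapsed = time.perf_counter() - start

    print(f"create process: {elapsed / N * 1e6:.2f} us per event")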

OSBench, Test: Memory Allocations (Ns Per Event, fewer is better): Run 1: 81.74, Run 2: 82.00, Run 3: 85.63

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): Run 1: 14.63, Run 2: 14.87, Run 3: 15.20

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: resnet50 (ms, fewer is better): Run 1: 71.68, Run 2: 74.10, Run 3: 71.69

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): Run 1: 7913.61, Run 2: 7667.94, Run 3: 7750.49

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Mobilenet Quant (Microseconds, fewer is better): Run 1: 328733, Run 2: 318685, Run 3: 327187

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): Run 1: 8195.20, Run 2: 8438.64, Run 3: 8426.84

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: mobilenet-v1-1.0 (ms, fewer is better): Run 1: 7.395, Run 2: 7.526, Run 3: 7.313

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 23.02, Run 2: 23.68, Run 3: 23.67

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4, Speed: 10 (Frames Per Second, more is better): Run 1: 2.573, Run 2: 2.566, Run 3: 2.639

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better): Run 1: 12.63, Run 2: 12.98, Run 3: 12.75

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): Run 1: 8193.61, Run 2: 8356.19, Run 3: 8419.82

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1, Test: Boat - Acceleration: CPU-only (Seconds, fewer is better): Run 1: 25.21, Run 2: 25.90, Run 3: 25.45

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): Run 1: 11.20, Run 2: 10.91, Run 3: 11.04

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: SqueezeNetV1.0 (ms, fewer is better): Run 1: 9.732, Run 2: 9.613, Run 3: 9.867

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): Run 1: 29.71, Run 2: 29.89, Run 3: 29.12

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.
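
A ping-pong latency test bounces a small message between a client and a server and derives latency from the measured round trips. The standard-library Python sketch below shows the general idea over TCP on localhost; it is an illustration only, not sockperf's implementation or its level of precision.

    import socket
    import threading
    import time

    HOST, PORT, ROUNDS = "127.0.0.1", 45678, 5000

    def echo_server():
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                while data := conn.recv(64):
                    conn.sendall(data)  # bounce every message straight back

    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)  # give the listener a moment to come up

    with socket.create_connection((HOST, PORT)) as client:
        client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        start = time.perf_counter()
        for _ in range(ROUNDS):
            client.sendall(b"x")
            client.recv(64)
        mean_rtt = (time.perf_counter() - start) / ROUNDS

    print(f"mean round trip: {mean_rtt * 1e6:.1f} usec")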

Sockperf 3.4, Test: Latency Ping Pong (usec, fewer is better): Run 1: 6.927, Run 2: 6.751, Run 3: 6.790

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench, Test: Create Processes (us Per Event, fewer is better): Run 1: 26.19, Run 2: 26.48, Run 3: 26.86

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: blazeface (ms, fewer is better): Run 1: 3.25, Run 2: 3.31, Run 3: 3.33

NCNN 20201218, Target: CPU - Model: mnasnet (ms, fewer is better): Run 1: 10.50, Run 2: 10.34, Run 3: 10.25

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics benchmark. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, more is better): Run 1: 1180.10, Run 2: 1208.39, Run 3: 1208.02

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
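
FFTE only handles lengths that factor as (2^p)*(3^q)*(5^r); the N=256 case below is simply 2^8. A small Python check of that constraint, written purely as an illustration and not part of FFTE, looks like this:

    def is_ffte_length(n: int) -> bool:
        """Return True if n factors as (2^p)*(3^q)*(5^r), the lengths FFTE supports."""
        if n < 1:
            return False
        for prime in (2, 3, 5):
            while n % prime == 0:
                n //= prime
        return n == 1

    print(is_ffte_length(256))   # True: 2^8
    print(is_ffte_length(1000))  # True: 2^3 * 5^3
    print(is_ffte_length(224))   # False: contains a factor of 7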

FFTE 7.0, N=256, 3D Complex FFT Routine (MFLOPS, more is better): Run 1: 15392.81, Run 2: 15755.59, Run 3: 15437.47

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 3 - Compression Speed (MB/s, more is better): Run 1: 42.77, Run 2: 42.34, Run 3: 41.81

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Mobilenet Float (Microseconds, fewer is better): Run 1: 309946, Run 2: 306270, Run 3: 313216

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 13.07, Run 2: 13.17, Run 3: 13.37

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20, Post-Processing Benchmark (Mpix/sec, more is better): Run 1: 19.36, Run 2: 19.66, Run 3: 19.79

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package, running on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3, Water Benchmark (Ns Per Day, more is better): Run 1: 0.333, Run 2: 0.330, Run 3: 0.326

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1, Video Input: Chimera 1080p 10-bit (FPS, more is better): Run 1: 52.55, Run 2: 52.39, Run 3: 53.51

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, fewer is better): Run 1: 22.24, Run 2: 22.69, Run 3: 22.58

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 30.81, Run 2: 31.41, Run 3: 30.82

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 8437.12, Run 2: 8277.53, Run 3: 8342.73

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin, Panorama Photo Assistant + Stitching Time (Seconds, fewer is better): Run 1: 82.01, Run 2: 83.58, Run 3: 82.19

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: regnety_400m (ms, fewer is better): Run 1: 19.11, Run 2: 18.87, Run 3: 18.75

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, fewer is better): Run 1: 12.65, Run 2: 12.89, Run 3: 12.70

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): Run 1: 9.67, Run 2: 9.49, Run 3: 9.59

NCNN 20201218, Target: CPU - Model: vgg16 (ms, fewer is better): Run 1: 117.19, Run 2: 119.38, Run 3: 117.45

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, fewer is better): Run 1: 17.07, Run 2: 16.76, Run 3: 16.89

NCNN 20201218, Target: CPU - Model: resnet50 (ms, fewer is better): Run 1: 72.55, Run 2: 73.14, Run 3: 71.82

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (Nodes/second, more is better): Run 1: 7748047, Run 2: 7802828, Run 3: 7669043

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: UASTC Level 0 (Seconds, fewer is better): Run 1: 11.90, Run 2: 12.05, Run 3: 11.86

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12, Total Time (Nodes Per Second, more is better): Run 1: 5718169, Run 2: 5628220, Run 3: 5648589

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 22.54, Run 2: 22.31, Run 3: 22.66

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1, Test: Server Room - Acceleration: CPU-only (Seconds, fewer is better): Run 1: 20.70, Run 2: 21.01, Run 3: 20.74

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 7721.13, Run 2: 7794.13, Run 3: 7837.72

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (Ops/sec, more is better): Run 1: 265074.45, Run 2: 267212.95, Run 3: 269044.48

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: mobilenet (ms, fewer is better): Run 1: 46.40, Run 2: 47.01, Run 3: 46.32

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1, Test: Server Rack - Acceleration: CPU-only (Seconds, fewer is better): Run 1: 0.339, Run 2: 0.342, Run 3: 0.344

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
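
For a sense of what the Compression Level: 3 and Compression Level: 19 figures below measure, here is a minimal Python timing sketch. It assumes the third-party zstandard bindings and an arbitrary large local file (the test profile itself compresses an Ubuntu ISO and reports MB/s); it is an illustration of the measurement, not the test profile's code.

    import time
    import zstandard  # third-party bindings, assumed installed: pip install zstandard

    SAMPLE = "sample.bin"  # hypothetical path; the PTS profile uses an Ubuntu ISO

    with open(SAMPLE, "rb") as f:
        data = f.read()

    for level in (3, 19):
        cctx = zstandard.ZstdCompressor(level=level)
        start = time.perf_counter()
        compressed = cctx.compress(data)
        elapsed = time.perf_counter() - start
        print(f"level {level}: {len(data) / 1e6 / elapsed:.1f} MB/s, "
              f"ratio {len(data) / len(compressed):.2f}")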

Zstd Compression 1.4.5, Compression Level: 3 (MB/s, more is better): Run 1: 2346.0, Run 2: 2358.0, Run 3: 2324.2

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 4 Two-Pass (Frames Per Second, more is better): Run 1: 1.38, Run 2: 1.38, Run 3: 1.40

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2, Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): Run 1: 706035.5, Run 2: 696009.5, Run 3: 700554.6

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Masskrug - Acceleration: CPU-only231612182430SE +/- 0.04, N = 3SE +/- 0.08, N = 3SE +/- 0.03, N = 324.5224.1924.17
OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Masskrug - Acceleration: CPU-only231612182430Min: 24.45 / Avg: 24.52 / Max: 24.6Min: 24.06 / Avg: 24.19 / Max: 24.34Min: 24.12 / Avg: 24.17 / Max: 24.23

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 1912348121620SE +/- 0.18, N = 5SE +/- 0.03, N = 3SE +/- 0.06, N = 314.014.214.21. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 1912348121620Min: 13.3 / Avg: 14.02 / Max: 14.2Min: 14.2 / Avg: 14.23 / Max: 14.3Min: 14.1 / Avg: 14.2 / Max: 14.31. (CC) gcc options: -O3 -pthread -lz -llzma

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 30M12370140210280350SE +/- 1.66, N = 3SE +/- 0.27, N = 3SE +/- 2.23, N = 3342.98339.54338.271. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 30M12360120180240300Min: 341.02 / Avg: 342.98 / Max: 346.27Min: 339.01 / Avg: 339.54 / Max: 339.84Min: 333.84 / Avg: 338.27 / Max: 340.931. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
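
The SADD, SET, and LPUSH figures in this result file measure requests per second; as a loose Python illustration of what a SADD-heavy workload does (using the third-party redis-py client, not the benchmark harness):

# Minimal sketch of repeated SADD requests against a local Redis server,
# assuming the third-party "redis" (redis-py) package; not the test's harness.
import redis

r = redis.Redis(host="localhost", port=6379)
for i in range(10_000):
    r.sadd("benchmark:set", f"member-{i}")      # one SADD request per iteration
print(r.scard("benchmark:set"))                 # number of members added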

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADD312400K800K1200K1600K2000KSE +/- 4337.84, N = 3SE +/- 11595.14, N = 3SE +/- 21162.12, N = 31734495.831735687.331758200.501. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADD312300K600K900K1200K1500KMin: 1727502.62 / Avg: 1734495.83 / Max: 1742439Min: 1712548 / Avg: 1735687.33 / Max: 1748587.38Min: 1727115.75 / Avg: 1758200.5 / Max: 1798618.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 6 Realtime2133691215SE +/- 0.08, N = 3SE +/- 0.04, N = 3SE +/- 0.13, N = 310.1210.1310.251. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 6 Realtime2133691215Min: 10.02 / Avg: 10.12 / Max: 10.28Min: 10.09 / Avg: 10.13 / Max: 10.21Min: 10.09 / Avg: 10.25 / Max: 10.51. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
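
For context on what a single inference pass looks like, here is a minimal sketch using the TensorFlow Lite Python interpreter; "model.tflite" is a placeholder rather than the SqueezeNet/Inception/NASNet models the test profile actually measures.

# Minimal single-inference sketch with the TensorFlow Lite interpreter;
# the model path is a placeholder and the timing here is illustrative only.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()                            # the benchmark averages this inference step
print(interpreter.get_tensor(out["index"]).shape)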

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: SqueezeNet132100K200K300K400K500KSE +/- 158.13, N = 3SE +/- 564.75, N = 3SE +/- 1459.42, N = 3467745467404461955
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: SqueezeNet13280K160K240K320K400KMin: 467441 / Avg: 467745.33 / Max: 467972Min: 466822 / Avg: 467403.67 / Max: 468533Min: 460469 / Avg: 461955.33 / Max: 464874

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterIncompact3D 2020-09-17Input: Cylinder2312004006008001000SE +/- 10.03, N = 3SE +/- 2.19, N = 3SE +/- 3.54, N = 3821.06820.32810.951. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
OpenBenchmarking.orgSeconds, Fewer Is BetterIncompact3D 2020-09-17Input: Cylinder231140280420560700Min: 802.01 / Avg: 821.06 / Max: 836.03Min: 816.22 / Avg: 820.32 / Max: 823.68Min: 806.85 / Avg: 810.95 / Max: 8181. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: vgg16231306090120150SE +/- 0.24, N = 4SE +/- 0.16, N = 3SE +/- 0.37, N = 3118.91118.04117.46MIN: 113.22 / MAX: 141.78MIN: 112.22 / MAX: 141.41MIN: 111.97 / MAX: 149.371. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: vgg1623120406080100Min: 118.22 / Avg: 118.91 / Max: 119.25Min: 117.87 / Avg: 118.04 / Max: 118.36Min: 116.8 / Avg: 117.46 / Max: 118.081. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V43121.4M2.8M4.2M5.6M7MSE +/- 4440.22, N = 3SE +/- 24424.39, N = 3SE +/- 7475.10, N = 3646856764410176389943
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V43121.1M2.2M3.3M4.4M5.5MMin: 6460380 / Avg: 6468566.67 / Max: 6475640Min: 6392450 / Avg: 6441016.67 / Max: 6469840Min: 6377600 / Avg: 6389943.33 / Max: 6403420

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: blazeface2130.74931.49862.24792.99723.7465SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 33.333.313.29MIN: 2.62 / MAX: 5.11MIN: 2.64 / MAX: 9.91MIN: 2.73 / MAX: 4.721. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: blazeface213246810Min: 3.27 / Avg: 3.33 / Max: 3.38Min: 3.29 / Avg: 3.31 / Max: 3.33Min: 3.23 / Avg: 3.29 / Max: 3.351. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU213918273645SE +/- 0.23, N = 3SE +/- 0.18, N = 3SE +/- 0.51, N = 338.9738.8438.51MIN: 35.93MIN: 35.67MIN: 35.631. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU213816243240Min: 38.53 / Avg: 38.97 / Max: 39.3Min: 38.63 / Avg: 38.84 / Max: 39.19Min: 37.49 / Avg: 38.51 / Max: 39.071. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SET321300K600K900K1200K1500KSE +/- 15253.86, N = 8SE +/- 19097.96, N = 3SE +/- 5888.95, N = 31472539.671486411.631489969.251. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SET321300K600K900K1200K1500KMin: 1383258.62 / Avg: 1472539.67 / Max: 1520291.75Min: 1449321.75 / Avg: 1486411.63 / Max: 1512859.25Min: 1483774.38 / Avg: 1489969.25 / Max: 1501741.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 Atoms321246810SE +/- 0.08887, N = 5SE +/- 0.03865, N = 3SE +/- 0.01425, N = 36.832846.799026.75407
OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 Atoms3213691215Min: 6.7 / Avg: 6.83 / Max: 7.18Min: 6.72 / Avg: 6.8 / Max: 6.85Min: 6.73 / Avg: 6.75 / Max: 6.78

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: inception-v32131428425670SE +/- 0.32, N = 3SE +/- 0.18, N = 3SE +/- 0.19, N = 364.0063.4263.27MIN: 60.33 / MAX: 93.06MIN: 60.02 / MAX: 120.02MIN: 60.45 / MAX: 98.451. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: inception-v32131326395265Min: 63.62 / Avg: 64 / Max: 64.64Min: 63.23 / Avg: 63.42 / Max: 63.77Min: 63.01 / Avg: 63.27 / Max: 63.641. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080p123510152025SE +/- 0.16, N = 3SE +/- 0.07, N = 3SE +/- 0.11, N = 319.4919.6019.711. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080p123510152025Min: 19.19 / Avg: 19.49 / Max: 19.72Min: 19.45 / Avg: 19.6 / Max: 19.69Min: 19.51 / Avg: 19.71 / Max: 19.91. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenet213816243240SE +/- 0.05, N = 3SE +/- 0.18, N = 3SE +/- 0.18, N = 332.7932.5932.43MIN: 28.77 / MAX: 48.75MIN: 28.72 / MAX: 51.96MIN: 28.44 / MAX: 46.341. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenet213714212835Min: 32.72 / Avg: 32.79 / Max: 32.9Min: 32.36 / Avg: 32.59 / Max: 32.94Min: 32.19 / Avg: 32.43 / Max: 32.771. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mobilenet1231122334455SE +/- 0.46, N = 3SE +/- 0.07, N = 3SE +/- 0.05, N = 346.8346.4946.32MIN: 42.34 / MAX: 64.41MIN: 42.72 / MAX: 62.2MIN: 43.57 / MAX: 62.161. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mobilenet1231020304050Min: 46.24 / Avg: 46.83 / Max: 47.73Min: 46.4 / Avg: 46.49 / Max: 46.62Min: 46.22 / Avg: 46.32 / Max: 46.381. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.
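
As a rough Python analogue of the "Create Files" micro-benchmark (OSBench itself is a small C program), timing file creation per event might look like this sketch.

# Illustrative timing of file creation, reported as microseconds per event
# in the spirit of OSBench's Create Files test; not OSBench's own C code.
import os
import tempfile
import time

with tempfile.TemporaryDirectory() as tmp:
    n = 10_000
    start = time.perf_counter()
    for i in range(n):
        with open(os.path.join(tmp, f"file-{i}"), "w"):
            pass                                # create an empty file and close it
    elapsed = time.perf_counter() - start
    print(f"{elapsed / n * 1e6:.2f} us per event")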

OpenBenchmarking.orgus Per Event, Fewer Is BetterOSBenchTest: Create Files321510152025SE +/- 0.12, N = 3SE +/- 0.21, N = 3SE +/- 0.24, N = 318.4418.3218.251. (CC) gcc options: -lm
OpenBenchmarking.orgus Per Event, Fewer Is BetterOSBenchTest: Create Files321510152025Min: 18.22 / Avg: 18.44 / Max: 18.64Min: 18.08 / Avg: 18.32 / Max: 18.73Min: 17.94 / Avg: 18.25 / Max: 18.711. (CC) gcc options: -lm

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4Speed: 53210.19080.38160.57240.76320.954SE +/- 0.000, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 30.8390.8440.848
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4Speed: 5321246810Min: 0.84 / Avg: 0.84 / Max: 0.84Min: 0.84 / Avg: 0.84 / Max: 0.85Min: 0.85 / Avg: 0.85 / Max: 0.85

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed Time2131.4M2.8M4.2M5.6M7MSE +/- 20483.82, N = 3SE +/- 2855.69, N = 3SE +/- 23149.53, N = 36255015627427563220691. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm
OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed Time2131.1M2.2M3.3M4.4M5.5MMin: 6218031 / Avg: 6255015 / Max: 6288768Min: 6268571 / Avg: 6274275.33 / Max: 6277373Min: 6298694 / Avg: 6322068.67 / Max: 63683671. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 1080p1234080120160200SE +/- 0.82, N = 3SE +/- 0.30, N = 3SE +/- 0.36, N = 3182.89183.50184.81MIN: 167.96 / MAX: 203.4MIN: 169.65 / MAX: 201.98MIN: 171.93 / MAX: 203.271. (CC) gcc options: -pthread -ldl -lm
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 1080p123306090120150Min: 181.85 / Avg: 182.89 / Max: 184.51Min: 183.08 / Avg: 183.5 / Max: 184.07Min: 184.11 / Avg: 184.81 / Max: 185.271. (CC) gcc options: -pthread -ldl -lm

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin Protein2130.58791.17581.76372.35162.9395SE +/- 0.030, N = 3SE +/- 0.015, N = 3SE +/- 0.031, N = 32.5862.6032.6131. (CXX) g++ options: -O3 -pthread -lm
OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin Protein213246810Min: 2.55 / Avg: 2.59 / Max: 2.65Min: 2.59 / Avg: 2.6 / Max: 2.63Min: 2.58 / Avg: 2.61 / Max: 2.681. (CXX) g++ options: -O3 -pthread -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU-v3-v3 - Model: mobilenet-v32313691215SE +/- 0.14, N = 4SE +/- 0.09, N = 3SE +/- 0.13, N = 39.699.609.59MIN: 7.81 / MAX: 18.31MIN: 7.73 / MAX: 19.55MIN: 7.78 / MAX: 22.631. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU-v3-v3 - Model: mobilenet-v32313691215Min: 9.46 / Avg: 9.69 / Max: 10.08Min: 9.47 / Avg: 9.6 / Max: 9.78Min: 9.4 / Avg: 9.59 / Max: 9.831. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: alexnet312612182430SE +/- 0.02, N = 3SE +/- 0.08, N = 3SE +/- 0.03, N = 423.4823.4623.24MIN: 21.25 / MAX: 36.41MIN: 21.29 / MAX: 37.36MIN: 21.11 / MAX: 37.351. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: alexnet312510152025Min: 23.44 / Avg: 23.48 / Max: 23.51Min: 23.36 / Avg: 23.46 / Max: 23.61Min: 23.18 / Avg: 23.24 / Max: 23.321. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark, here running its Dhrystone 2 computational test. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgLPS, More Is BetterBYTE Unix Benchmark 3.6Computational Test: Dhrystone 23128M16M24M32M40MSE +/- 503813.27, N = 3SE +/- 289585.60, N = 3SE +/- 208729.08, N = 335427649.235499453.035791748.5
OpenBenchmarking.orgLPS, More Is BetterBYTE Unix Benchmark 3.6Computational Test: Dhrystone 23126M12M18M24M30MMin: 34420043.2 / Avg: 35427649.23 / Max: 35937019.7Min: 35091593.5 / Avg: 35499453 / Max: 36059497.1Min: 35378389.7 / Avg: 35791748.47 / Max: 36048968.8

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: regnety_400m321510152025SE +/- 0.01, N = 3SE +/- 0.15, N = 4SE +/- 0.09, N = 319.0719.0618.88MIN: 16.77 / MAX: 34.77MIN: 16.61 / MAX: 34.07MIN: 16.76 / MAX: 26.521. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: regnety_400m321510152025Min: 19.05 / Avg: 19.07 / Max: 19.08Min: 18.82 / Avg: 19.06 / Max: 19.49Min: 18.7 / Avg: 18.88 / Max: 191. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer - Model: Asian Dragon2130.75221.50442.25663.00883.761SE +/- 0.0143, N = 3SE +/- 0.0186, N = 3SE +/- 0.0299, N = 33.31133.31403.3432MIN: 3.26 / MAX: 3.4MIN: 3.25 / MAX: 3.4MIN: 3.25 / MAX: 3.45
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer - Model: Asian Dragon213246810Min: 3.29 / Avg: 3.31 / Max: 3.34Min: 3.28 / Avg: 3.31 / Max: 3.34Min: 3.29 / Avg: 3.34 / Max: 3.39

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterCP2K Molecular Dynamics 8.1Fayalite-FIST Data231300600900120015001461.851452.471448.59

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnet321612182430SE +/- 0.07, N = 3SE +/- 0.07, N = 3SE +/- 0.11, N = 323.5123.4223.30MIN: 21.21 / MAX: 37.48MIN: 21.27 / MAX: 37.51MIN: 21.24 / MAX: 38.111. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnet321510152025Min: 23.4 / Avg: 23.51 / Max: 23.64Min: 23.28 / Avg: 23.42 / Max: 23.51Min: 23.15 / Avg: 23.3 / Max: 23.511. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 6 Two-Pass1230.5041.0081.5122.0162.52SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 32.222.222.241. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 6 Two-Pass123246810Min: 2.21 / Avg: 2.22 / Max: 2.22Min: 2.22 / Avg: 2.22 / Max: 2.22Min: 2.22 / Avg: 2.24 / Max: 2.251. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
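
The figure below is decompression throughput at compression level 1; as a loose illustration (using the third-party lz4 Python bindings and a placeholder input, not the benchmark's harness):

# Minimal LZ4 round-trip sketch with the third-party "lz4" package; the
# decompression loop mirrors the decompression-speed figure reported below.
import time
import lz4.frame

data = open("sample.iso", "rb").read()          # placeholder input file
compressed = lz4.frame.compress(data, compression_level=1)

start = time.perf_counter()
for _ in range(10):
    lz4.frame.decompress(compressed)
elapsed = time.perf_counter() - start
print(f"{len(data) * 10 / elapsed / 1e6:.0f} MB/s decompression")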

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression Speed2312K4K6K8K10KSE +/- 8.66, N = 3SE +/- 57.65, N = 3SE +/- 6.35, N = 38646.38690.18722.41. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression Speed23115003000450060007500Min: 8636 / Avg: 8646.3 / Max: 8663.5Min: 8605 / Avg: 8690.07 / Max: 8800Min: 8712.3 / Avg: 8722.37 / Max: 8734.11. (CC) gcc options: -O3

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
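
Speedtest1 generates its own mixed workload; as a much-simplified sketch of the kind of insert/query activity a SQLite benchmark spends its time in (standard-library sqlite3, in-memory database):

# Simplified insert/query timing with the standard-library sqlite3 module;
# this is not the speedtest1 workload the benchmark actually runs.
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")

start = time.perf_counter()
with con:                                       # single transaction for the batch
    con.executemany("INSERT INTO t (v) VALUES (?)",
                    ((f"value-{i}",) for i in range(100_000)))
rows = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(rows, f"{time.perf_counter() - start:.2f} s")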

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite Speedtest 3.30Timed Time - Size 1,00032120406080100SE +/- 0.70, N = 3SE +/- 0.76, N = 3SE +/- 0.14, N = 381.9381.4281.221. (CC) gcc options: -O2 -ldl -lz -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite Speedtest 3.30Timed Time - Size 1,0003211632486480Min: 81.1 / Avg: 81.93 / Max: 83.33Min: 80.56 / Avg: 81.42 / Max: 82.94Min: 80.97 / Avg: 81.22 / Max: 81.441. (CC) gcc options: -O2 -ldl -lz -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU2131.31432.62863.94295.25726.5715SE +/- 0.01362, N = 3SE +/- 0.02077, N = 3SE +/- 0.01488, N = 35.841295.814175.79119MIN: 5.26MIN: 5.17MIN: 5.241. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU213246810Min: 5.81 / Avg: 5.84 / Max: 5.86Min: 5.79 / Avg: 5.81 / Max: 5.86Min: 5.76 / Avg: 5.79 / Max: 5.821. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betteryquake2 7.45Renderer: OpenGL 3.x - Resolution: 1920 x 10802312004006008001000SE +/- 3.33, N = 3SE +/- 4.89, N = 3SE +/- 4.11, N = 3807.2807.9814.11. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC
OpenBenchmarking.orgFrames Per Second, More Is Betteryquake2 7.45Renderer: OpenGL 3.x - Resolution: 1920 x 1080231140280420560700Min: 802 / Avg: 807.17 / Max: 813.4Min: 802 / Avg: 807.9 / Max: 817.6Min: 806.1 / Avg: 814.1 / Max: 819.71. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Default2310.3740.7481.1221.4961.87SE +/- 0.003, N = 3SE +/- 0.009, N = 3SE +/- 0.002, N = 31.6621.6571.6481. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Default231246810Min: 1.66 / Avg: 1.66 / Max: 1.67Min: 1.65 / Avg: 1.66 / Max: 1.68Min: 1.65 / Avg: 1.65 / Max: 1.651. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSH213300K600K900K1200K1500KSE +/- 2985.46, N = 3SE +/- 16215.78, N = 3SE +/- 4396.42, N = 31213155.041216336.461223284.421. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSH213200K400K600K800K1000KMin: 1207729.5 / Avg: 1213155.04 / Max: 1218026.88Min: 1199462.88 / Avg: 1216336.46 / Max: 1248759Min: 1215105.75 / Avg: 1223284.42 / Max: 1230169.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.
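
Sockperf itself is a C++ client/server tool; as a very loose Python illustration of a message-throughput send loop (placeholder address and port, not sockperf's protocol):

# Loose illustration of a UDP message-throughput loop; sockperf's actual
# client/server protocol and measurement logic are not reproduced here.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"x" * 64                             # small fixed-size message
for _ in range(100_000):
    sock.sendto(payload, ("127.0.0.1", 11111))  # placeholder port; messages/sec is the metric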

OpenBenchmarking.orgMessages Per Second, More Is BetterSockperf 3.4Test: Throughput132120K240K360K480K600KSE +/- 6800.99, N = 5SE +/- 3270.45, N = 5SE +/- 3595.18, N = 55550555576655596631. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread
OpenBenchmarking.orgMessages Per Second, More Is BetterSockperf 3.4Test: Throughput132100K200K300K400K500KMin: 535468 / Avg: 555054.6 / Max: 575646Min: 551013 / Avg: 557664.8 / Max: 569216Min: 551226 / Avg: 559663.2 / Max: 5722281. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterWarsow 2.5 BetaResolution: 1920 x 10801234080120160200SE +/- 1.30, N = 3SE +/- 0.12, N = 3SE +/- 0.10, N = 3158.1159.4159.4
OpenBenchmarking.orgFrames Per Second, More Is BetterWarsow 2.5 BetaResolution: 1920 x 1080123306090120150Min: 155.5 / Avg: 158.1 / Max: 159.4Min: 159.2 / Avg: 159.4 / Max: 159.6Min: 159.3 / Avg: 159.4 / Max: 159.6

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: Supercar2310.24910.49820.74730.99641.2455SE +/- 0.009, N = 3SE +/- 0.002, N = 3SE +/- 0.004, N = 31.0981.1061.107
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: Supercar231246810Min: 1.08 / Avg: 1.1 / Max: 1.11Min: 1.1 / Avg: 1.11 / Max: 1.11Min: 1.1 / Avg: 1.11 / Max: 1.11

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: Bedroom1230.11210.22420.33630.44840.5605SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.002, N = 30.4940.4940.498
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: Bedroom123246810Min: 0.49 / Avg: 0.49 / Max: 0.5Min: 0.49 / Avg: 0.49 / Max: 0.5Min: 0.5 / Avg: 0.5 / Max: 0.5

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
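
The heavier settings in this run (Quality 100, Lossless, Highest Compression) roughly correspond to cwebp's quality, lossless, and method options; a hedged sketch of such an invocation, with a placeholder input image and an assumed flag mapping, follows.

# Hedged cwebp invocation; the exact flags the test profile passes are an
# assumption, and sample.jpg is a placeholder input image.
import subprocess

subprocess.run(
    ["cwebp", "-q", "100", "-lossless", "-m", "6",
     "sample.jpg", "-o", "sample.webp"],
    check=True,
)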

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Lossless, Highest Compression1321326395265SE +/- 0.26, N = 3SE +/- 0.09, N = 3SE +/- 0.02, N = 357.6757.4557.221. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Lossless, Highest Compression1321122334455Min: 57.37 / Avg: 57.67 / Max: 58.18Min: 57.27 / Avg: 57.45 / Max: 57.57Min: 57.18 / Avg: 57.22 / Max: 57.251. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.
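
As a tiny illustration of the kind of vectorised work such a benchmark times (the suite's own kernels and problem sizes differ):

# Illustrative NumPy timing of a dense matrix multiply; not the benchmark's kernels.
import time
import numpy as np

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

start = time.perf_counter()
c = a @ b                                       # dense 2000x2000 matrix multiply
print(c.shape, f"{time.perf_counter() - start:.3f} s")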

OpenBenchmarking.orgScore, More Is BetterNumpy Benchmark21350100150200250SE +/- 0.33, N = 3SE +/- 0.34, N = 3SE +/- 0.50, N = 3241.36242.34243.26
OpenBenchmarking.orgScore, More Is BetterNumpy Benchmark2134080120160200Min: 240.7 / Avg: 241.36 / Max: 241.78Min: 241.66 / Avg: 242.34 / Max: 242.75Min: 242.62 / Avg: 243.26 / Max: 244.24

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark Suite321110K220K330K440K550KSE +/- 2233.09, N = 3SE +/- 1952.23, N = 3SE +/- 423.52, N = 3504159506055508106
OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark Suite32190K180K270K360K450KMin: 499707 / Avg: 504159 / Max: 506693Min: 502231 / Avg: 506055.33 / Max: 508649Min: 507598 / Avg: 508106 / Max: 508947

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer ISPC - Model: Asian Dragon Obj3120.63831.27661.91492.55323.1915SE +/- 0.0152, N = 3SE +/- 0.0119, N = 3SE +/- 0.0157, N = 32.81512.81992.8371MIN: 2.75 / MAX: 2.89MIN: 2.75 / MAX: 2.92MIN: 2.77 / MAX: 2.9
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer ISPC - Model: Asian Dragon Obj312246810Min: 2.79 / Avg: 2.82 / Max: 2.84Min: 2.8 / Avg: 2.82 / Max: 2.83Min: 2.82 / Avg: 2.84 / Max: 2.87

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per Second23120K40K60K80K100KSE +/- 290.85, N = 3SE +/- 373.76, N = 3SE +/- 816.07, N = 3101765.57102339.41102524.441. (CC) gcc options: -O2 -lrt" -lrt
OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per Second23120K40K60K80K100KMin: 101355.63 / Avg: 101765.57 / Max: 102327.96Min: 101878.38 / Avg: 102339.41 / Max: 103079.5Min: 101426.31 / Avg: 102524.44 / Max: 104119.221. (CC) gcc options: -O2 -lrt" -lrt

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer - Model: Asian Dragon Obj2310.67281.34562.01842.69123.364SE +/- 0.0135, N = 3SE +/- 0.0217, N = 3SE +/- 0.0237, N = 32.96822.97822.9903MIN: 2.9 / MAX: 3.07MIN: 2.91 / MAX: 3.08MIN: 2.9 / MAX: 3.08
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer - Model: Asian Dragon Obj231246810Min: 2.94 / Avg: 2.97 / Max: 2.99Min: 2.95 / Avg: 2.98 / Max: 3.02Min: 2.95 / Avg: 2.99 / Max: 3.03

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 1001239K18K27K36K45KSE +/- 193.32, N = 3SE +/- 90.86, N = 3SE +/- 137.35, N = 34187741672415731. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 1001237K14K21K28K35KMin: 41557 / Avg: 41877.33 / Max: 42225Min: 41491 / Avg: 41671.67 / Max: 41779Min: 41305 / Avg: 41572.67 / Max: 417601. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code based on modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDolfyn 0.527Computational Fluid Dynamics213510152025SE +/- 0.05, N = 3SE +/- 0.07, N = 3SE +/- 0.04, N = 321.1421.0720.99
OpenBenchmarking.orgSeconds, Fewer Is BetterDolfyn 0.527Computational Fluid Dynamics213510152025Min: 21.08 / Avg: 21.14 / Max: 21.24Min: 20.98 / Avg: 21.07 / Max: 21.21Min: 20.91 / Avg: 20.99 / Max: 21.03

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: resnet18312714212835SE +/- 0.02, N = 3SE +/- 0.26, N = 3SE +/- 0.14, N = 429.2829.1729.08MIN: 25.57 / MAX: 39.41MIN: 26 / MAX: 40.97MIN: 25.74 / MAX: 44.351. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: resnet18312612182430Min: 29.24 / Avg: 29.28 / Max: 29.32Min: 28.72 / Avg: 29.17 / Max: 29.61Min: 28.92 / Avg: 29.08 / Max: 29.491. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: yolov4-tiny2311326395265SE +/- 0.06, N = 4SE +/- 0.16, N = 3SE +/- 0.04, N = 359.4059.2859.00MIN: 55.12 / MAX: 75.24MIN: 54.59 / MAX: 74.87MIN: 55.05 / MAX: 74.181. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: yolov4-tiny2311224364860Min: 59.26 / Avg: 59.4 / Max: 59.57Min: 58.98 / Avg: 59.28 / Max: 59.5Min: 58.93 / Avg: 59 / Max: 59.061. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgus Per Event, Fewer Is BetterOSBenchTest: Launch Programs32120406080100SE +/- 0.09, N = 3SE +/- 0.05, N = 3SE +/- 0.27, N = 382.0781.9781.521. (CC) gcc options: -lm
OpenBenchmarking.orgus Per Event, Fewer Is BetterOSBenchTest: Launch Programs3211632486480Min: 81.89 / Avg: 82.07 / Max: 82.2Min: 81.9 / Avg: 81.97 / Max: 82.07Min: 81.02 / Avg: 81.52 / Max: 81.961. (CC) gcc options: -lm

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Medium1230.33750.6751.01251.351.6875SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 31.491.491.501. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Medium123246810Min: 1.49 / Avg: 1.49 / Max: 1.49Min: 1.49 / Avg: 1.49 / Max: 1.49Min: 1.49 / Avg: 1.5 / Max: 1.51. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgus Per Event, Fewer Is BetterOSBenchTest: Create Threads13248121620SE +/- 0.03, N = 3SE +/- 0.14, N = 3SE +/- 0.09, N = 314.9214.9114.821. (CC) gcc options: -lm
OpenBenchmarking.orgus Per Event, Fewer Is BetterOSBenchTest: Create Threads13248121620Min: 14.89 / Avg: 14.92 / Max: 14.97Min: 14.67 / Avg: 14.91 / Max: 15.17Min: 14.7 / Avg: 14.82 / Max: 151. (CC) gcc options: -lm

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4Speed: 61230.2450.490.7350.981.225SE +/- 0.005, N = 3SE +/- 0.001, N = 3SE +/- 0.003, N = 31.0821.0831.089
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4Speed: 6123246810Min: 1.08 / Avg: 1.08 / Max: 1.09Min: 1.08 / Avg: 1.08 / Max: 1.09Min: 1.09 / Avg: 1.09 / Max: 1.09

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer - Model: Crown1230.6251.251.8752.53.125SE +/- 0.0145, N = 3SE +/- 0.0043, N = 3SE +/- 0.0087, N = 32.76012.76592.7779MIN: 2.71 / MAX: 2.86MIN: 2.73 / MAX: 2.83MIN: 2.75 / MAX: 2.87
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer - Model: Crown123246810Min: 2.74 / Avg: 2.76 / Max: 2.79Min: 2.76 / Avg: 2.77 / Max: 2.77Min: 2.77 / Avg: 2.78 / Max: 2.8

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Very Fast21348121620SE +/- 0.06, N = 3SE +/- 0.06, N = 3SE +/- 0.03, N = 315.5215.6015.621. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Very Fast21348121620Min: 15.43 / Avg: 15.52 / Max: 15.64Min: 15.52 / Avg: 15.6 / Max: 15.71Min: 15.56 / Avg: 15.62 / Max: 15.681. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU213246810SE +/- 0.00711, N = 3SE +/- 0.02489, N = 3SE +/- 0.00751, N = 37.360617.357447.31351MIN: 6.34MIN: 6.35MIN: 6.351. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU2133691215Min: 7.35 / Avg: 7.36 / Max: 7.37Min: 7.32 / Avg: 7.36 / Max: 7.4Min: 7.3 / Avg: 7.31 / Max: 7.331. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWavPack Audio Encoding 5.3WAV To WavPack31248121620SE +/- 0.09, N = 21SE +/- 0.01, N = 5SE +/- 0.01, N = 515.1715.0815.081. (CXX) g++ options: -rdynamic
OpenBenchmarking.orgSeconds, Fewer Is BetterWavPack Audio Encoding 5.3WAV To WavPack31248121620Min: 15.05 / Avg: 15.17 / Max: 16.91Min: 15.06 / Avg: 15.08 / Max: 15.1Min: 15.05 / Avg: 15.08 / Max: 15.111. (CXX) g++ options: -rdynamic

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Medium2133691215SE +/- 0.04, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 312.8312.7712.751. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Medium21348121620Min: 12.76 / Avg: 12.83 / Max: 12.9Min: 12.71 / Avg: 12.77 / Max: 12.8Min: 12.7 / Avg: 12.75 / Max: 12.811. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: squeezenet_ssd2131326395265SE +/- 0.20, N = 3SE +/- 0.05, N = 3SE +/- 0.27, N = 359.7159.3959.34MIN: 52.9 / MAX: 78.52MIN: 53.07 / MAX: 79.98MIN: 52.64 / MAX: 71.171. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: squeezenet_ssd2131224364860Min: 59.34 / Avg: 59.71 / Max: 60.01Min: 59.31 / Avg: 59.39 / Max: 59.47Min: 58.87 / Avg: 59.34 / Max: 59.821. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Fast3123691215SE +/- 0.06, N = 3SE +/- 0.07, N = 3SE +/- 0.04, N = 39.809.759.741. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Fast3123691215Min: 9.7 / Avg: 9.8 / Max: 9.92Min: 9.68 / Avg: 9.75 / Max: 9.88Min: 9.66 / Avg: 9.74 / Max: 9.811. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer ISPC - Model: Crown2130.58111.16221.74332.32442.9055SE +/- 0.0105, N = 3SE +/- 0.0040, N = 3SE +/- 0.0165, N = 32.56702.58192.5828MIN: 2.52 / MAX: 2.63MIN: 2.55 / MAX: 2.62MIN: 2.51 / MAX: 2.65
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer ISPC - Model: Crown213246810Min: 2.55 / Avg: 2.57 / Max: 2.59Min: 2.57 / Avg: 2.58 / Max: 2.59Min: 2.55 / Avg: 2.58 / Max: 2.6

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Eigen Compilation 3.3.9Time To Compile213306090120150SE +/- 0.19, N = 3SE +/- 0.54, N = 3SE +/- 0.21, N = 3113.65113.52112.96
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Eigen Compilation 3.3.9Time To Compile21320406080100Min: 113.29 / Avg: 113.65 / Max: 113.92Min: 112.85 / Avg: 113.52 / Max: 114.58Min: 112.66 / Avg: 112.96 / Max: 113.36

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.
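
A minimal sketch of driving OCRMyPDF from Python is shown below; the test profile processes a bundled 60-page document, so these paths are placeholders.

# Minimal OCRMyPDF invocation via its Python API; input/output paths are placeholders.
import ocrmypdf

ocrmypdf.ocr("scanned-input.pdf", "searchable-output.pdf")  # runs Tesseract and writes a searchable PDF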

OpenBenchmarking.orgSeconds, Fewer Is BetterOCRMyPDF 10.3.1+dfsgProcessing 60 Page PDF Document3211224364860SE +/- 0.10, N = 3SE +/- 0.08, N = 3SE +/- 0.13, N = 352.9952.7452.68
OpenBenchmarking.orgSeconds, Fewer Is BetterOCRMyPDF 10.3.1+dfsgProcessing 60 Page PDF Document3211122334455Min: 52.79 / Avg: 52.99 / Max: 53.13Min: 52.6 / Avg: 52.74 / Max: 52.89Min: 52.43 / Avg: 52.67 / Max: 52.89

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMonte Carlo Simulations of Ionised Nebulae 2019-03-24Input: Dust 2D tau100.013270140210280350SE +/- 0.67, N = 3SE +/- 1.76, N = 33423413401. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lrt -lz
OpenBenchmarking.orgSeconds, Fewer Is BetterMonte Carlo Simulations of Ionised Nebulae 2019-03-24Input: Dust 2D tau100.013260120180240300Min: 340 / Avg: 341.33 / Max: 342Min: 337 / Avg: 339.67 / Max: 3431. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lrt -lz

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU12317003400510068008500SE +/- 38.84, N = 3SE +/- 27.04, N = 3SE +/- 16.23, N = 37746.877725.637701.75MIN: 7556.29MIN: 7520.66MIN: 7494.481. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU12313002600390052006500Min: 7698.52 / Avg: 7746.87 / Max: 7823.7Min: 7697.2 / Avg: 7725.63 / Max: 7779.68Min: 7674.81 / Avg: 7701.75 / Max: 7730.891. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: mnasnet2313691215SE +/- 0.20, N = 4SE +/- 0.08, N = 3SE +/- 0.08, N = 310.4410.4310.38MIN: 8.36 / MAX: 26.87MIN: 8.39 / MAX: 21.7MIN: 8.43 / MAX: 17.051. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: mnasnet2313691215Min: 10.05 / Avg: 10.44 / Max: 10.98Min: 10.3 / Avg: 10.43 / Max: 10.56Min: 10.24 / Avg: 10.38 / Max: 10.531. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: squeezenet_ssd3121326395265SE +/- 0.16, N = 3SE +/- 0.08, N = 3SE +/- 0.14, N = 459.5059.2859.16MIN: 52.65 / MAX: 71.96MIN: 53.06 / MAX: 77.54MIN: 51.95 / MAX: 72.821. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: squeezenet_ssd3121224364860Min: 59.19 / Avg: 59.5 / Max: 59.73Min: 59.18 / Avg: 59.28 / Max: 59.43Min: 58.89 / Avg: 59.16 / Max: 59.521. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine built using the SCons build system, here targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To Compile231110220330440550SE +/- 0.30, N = 3SE +/- 0.32, N = 3SE +/- 0.16, N = 3503.98502.61501.20
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To Compile23190180270360450Min: 503.51 / Avg: 503.98 / Max: 504.53Min: 502.04 / Avg: 502.61 / Max: 503.15Min: 500.99 / Avg: 501.2 / Max: 501.52

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: resnet-v2-503211122334455SE +/- 0.27, N = 3SE +/- 0.33, N = 3SE +/- 0.51, N = 350.4950.2750.22MIN: 47.85 / MAX: 147.84MIN: 47.5 / MAX: 72.85MIN: 47.21 / MAX: 83.291. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: resnet-v2-503211020304050Min: 50.03 / Avg: 50.49 / Max: 50.97Min: 49.61 / Avg: 50.27 / Max: 50.65Min: 49.54 / Avg: 50.22 / Max: 51.221. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech Synthesis231816243240SE +/- 0.12, N = 4SE +/- 0.11, N = 4SE +/- 0.11, N = 435.3235.3135.131. (CC) gcc options: -O2 -std=c99
OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech Synthesis231816243240Min: 34.98 / Avg: 35.32 / Max: 35.52Min: 35.1 / Avg: 35.31 / Max: 35.62Min: 34.93 / Avg: 35.13 / Max: 35.421. (CC) gcc options: -O2 -std=c99

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000132160K320K480K640K800KSE +/- 1902.33, N = 3SE +/- 3366.02, N = 3SE +/- 1666.30, N = 3721428.3723222.1725224.2
OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000132130K260K390K520K650KMin: 718969.3 / Avg: 721428.33 / Max: 725172.1Min: 717363.4 / Avg: 723222.07 / Max: 729023.2Min: 722536.1 / Avg: 725224.2 / Max: 728274.2

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet18321714212835SE +/- 0.33, N = 3SE +/- 0.12, N = 3SE +/- 0.16, N = 329.1829.0429.04MIN: 25.64 / MAX: 44.22MIN: 25.69 / MAX: 36.07MIN: 25.89 / MAX: 42.021. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet18321612182430Min: 28.6 / Avg: 29.18 / Max: 29.76Min: 28.84 / Avg: 29.04 / Max: 29.24Min: 28.76 / Avg: 29.04 / Max: 29.321. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: MobileNetV2_2241231.22042.44083.66124.88166.102SE +/- 0.019, N = 3SE +/- 0.038, N = 3SE +/- 0.037, N = 35.4245.4195.398MIN: 4.8 / MAX: 14.75MIN: 4.88 / MAX: 15.69MIN: 4.83 / MAX: 15.641. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: MobileNetV2_224123246810Min: 5.4 / Avg: 5.42 / Max: 5.46Min: 5.36 / Avg: 5.42 / Max: 5.49Min: 5.33 / Avg: 5.4 / Max: 5.441. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: efficientnet-b012348121620SE +/- 0.07, N = 3SE +/- 0.26, N = 4SE +/- 0.14, N = 316.9816.9516.90MIN: 14.01 / MAX: 31.1MIN: 14.04 / MAX: 27.15MIN: 13.97 / MAX: 30.891. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: efficientnet-b012348121620Min: 16.91 / Avg: 16.98 / Max: 17.11Min: 16.59 / Avg: 16.95 / Max: 17.7Min: 16.75 / Avg: 16.9 / Max: 17.181. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To Compile2314080120160200SE +/- 0.22, N = 3SE +/- 0.72, N = 3SE +/- 0.34, N = 3183.02182.60182.19
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To Compile231306090120150Min: 182.6 / Avg: 183.02 / Max: 183.3Min: 181.36 / Avg: 182.6 / Max: 183.86Min: 181.83 / Avg: 182.19 / Max: 182.88

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgVoices, More Is BetterGoogle SynthMark 20201109Test: VoiceMark_100312130260390520650SE +/- 1.20, N = 3SE +/- 1.56, N = 3SE +/- 0.89, N = 3593.91596.25596.621. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast
OpenBenchmarking.orgVoices, More Is BetterGoogle SynthMark 20201109Test: VoiceMark_100312110220330440550Min: 591.64 / Avg: 593.91 / Max: 595.71Min: 593.23 / Avg: 596.25 / Max: 598.45Min: 594.86 / Avg: 596.62 / Max: 597.781. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet Mobile21370K140K210K280K350KSE +/- 384.90, N = 3SE +/- 995.45, N = 3SE +/- 504.11, N = 3317213316112315790
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet Mobile21350K100K150K200K250KMin: 316460 / Avg: 317213 / Max: 317728Min: 314718 / Avg: 316112 / Max: 318040Min: 314867 / Avg: 315789.67 / Max: 316603

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.18Test: auto-levels13248121620SE +/- 0.01, N = 3SE +/- 0.06, N = 3SE +/- 0.11, N = 315.9215.8915.85
OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.18Test: auto-levels13248121620Min: 15.9 / Avg: 15.92 / Max: 15.94Min: 15.79 / Avg: 15.89 / Max: 15.99Min: 15.65 / Avg: 15.85 / Max: 16.01

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818, Scale: 2x - Denoise: 3 - TAA: No (Seconds, Fewer Is Better): Run 1: 4.115, Run 2: 4.110, Run 3: 4.097

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18, Test: rotate (Seconds, Fewer Is Better): Run 1: 14.49, Run 2: 14.47, Run 3: 14.42

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45, Renderer: Software CPU - Resolution: 1920 x 1080 (Frames Per Second, More Is Better): Run 1: 92.9, Run 2: 93.3, Run 3: 93.3

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better): Run 2: 2.601, Run 1: 2.595, Run 3: 2.590
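
The benchmark drives the cwebp command-line utility, but the work it times ultimately goes through libwebp's encoding API; the sketch below is a simplified illustration of a quality-100 RGB encode (the pixel source and output handling are assumptions for the example).

// Simplified libwebp encode sketch; assumes `rgb` points to tightly packed RGB rows.
#include <cstdint>
#include <webp/encode.h>

size_t encode_quality_100(const uint8_t* rgb, int width, int height)
{
    uint8_t* output = nullptr;
    // stride = width * 3 bytes per row; quality factor 100 mirrors this test's setting.
    size_t size = WebPEncodeRGB(rgb, width, height, width * 3, 100.0f, &output);
    if (size == 0) return 0;          // encode failed
    // ... write `output` (size bytes) to a .webp file here ...
    WebPFree(output);
    return size;
}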

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K options for measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 4K (Frames Per Second, More Is Better): Run 1: 4.81, Run 2: 4.83, Run 3: 4.83

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): Run 3: 16.61, Run 2: 16.56, Run 1: 16.54

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 8 Realtime (Frames Per Second, More Is Better): Run 2: 27.18, Run 1: 27.20, Run 3: 27.29

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13, Time To Compile (Seconds, Fewer Is Better): Run 2: 516.52, Run 3: 514.80, Run 1: 514.48

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1, Pfam Database Search (Seconds, Fewer Is Better): Run 2: 127.65, Run 3: 127.39, Run 1: 127.15

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better): Run 1: 287.26, Run 2: 287.06, Run 3: 286.14

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 1 - Compression Speed (MB/s, More Is Better): Run 1: 7994.30, Run 2: 8015.39, Run 3: 8022.48
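
For orientation, the LZ4 block API exercised by this kind of compress/decompress pass looks roughly like the sketch below; it uses the default fast compressor, whereas the level 3 and level 9 results elsewhere in this report typically go through LZ4's slower high-compression (HC) path.

// Minimal LZ4 block-API round trip using the default fast compressor.
#include <lz4.h>
#include <string>
#include <vector>

bool lz4_round_trip(const std::string& src)
{
    const int bound = LZ4_compressBound(static_cast<int>(src.size()));
    std::vector<char> compressed(bound);
    const int csize = LZ4_compress_default(src.data(), compressed.data(),
                                           static_cast<int>(src.size()), bound);
    if (csize <= 0) return false;     // compression failed

    std::vector<char> restored(src.size());
    const int dsize = LZ4_decompress_safe(compressed.data(), restored.data(),
                                          csize, static_cast<int>(restored.size()));
    return dsize == static_cast<int>(src.size());
}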

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better): Run 1: 213491533, Run 3: 214072633, Run 2: 214232233

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Thorough (Seconds, Fewer Is Better): Run 2: 84.61, Run 3: 84.49, Run 1: 84.33

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1, Video Input: Summer Nature 4K (FPS, More Is Better): Run 1: 51.92, Run 2: 52.00, Run 3: 52.09

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Exhaustive (Seconds, Fewer Is Better): Run 2: 697.50, Run 1: 696.05, Run 3: 695.24

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18, Test: resize (Seconds, Fewer Is Better): Run 2: 12.87, Run 1: 12.86, Run 3: 12.83

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0, Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better): Run 3: 6.83, Run 1: 6.84, Run 2: 6.85

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471, Multiple Sequence Alignment - LSU RNA (Seconds, Fewer Is Better): Run 1: 15.04, Run 3: 15.04, Run 2: 15.00

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee (version 5.8, command line), Total Benchmark Time (Seconds, Fewer Is Better): Run 1: 123.68, Run 2: 123.41, Run 3: 123.34

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0, Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better): Run 1: 3.94, Run 2: 3.94, Run 3: 3.95

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better): Run 2: 24.96, Run 3: 24.90, Run 1: 24.89

VKMark

VKMark is a collection of Vulkan tests/benchmarks. Learn more via the OpenBenchmarking.org test page.

VKMark 2020-05-21, Resolution: 1920 x 1080 (VKMark Score, More Is Better): Run 3: 1196, Run 1: 1199, Run 2: 1199

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6, WAV To APE (Seconds, Fewer Is Better): Run 3: 16.00, Run 2: 15.99, Run 1: 15.96

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better): Run 2: 32.64, Run 3: 32.59, Run 1: 32.56

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better): Run 2: 59.42, Run 3: 59.28, Run 1: 59.28

Opus Codec Encoding

Opus is an open audio codec: a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1, WAV To Opus Encode (Seconds, Fewer Is Better): Run 1: 8.936, Run 2: 8.923, Run 3: 8.915
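
The test itself times the opusenc tool from Opus-Tools, but the underlying work is a loop of libopus encode calls; below is a minimal, illustrative sketch of encoding one 20 ms frame of 48 kHz stereo audio (the bitrate and frame size are arbitrary choices for the example, not the tool's actual settings).

// Illustrative libopus encode of a single 20 ms stereo frame at 48 kHz.
#include <opus/opus.h>

int encode_one_frame(const opus_int16* pcm, unsigned char* packet, opus_int32 max_bytes)
{
    int err = 0;
    OpusEncoder* enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
    if (err != OPUS_OK) return -1;

    opus_encoder_ctl(enc, OPUS_SET_BITRATE(96000));   // arbitrary bitrate for the example

    // 960 samples per channel = 20 ms at 48 kHz; returns the packet size or a negative error.
    int nbytes = opus_encode(enc, pcm, 960, packet, max_bytes);

    opus_encoder_destroy(enc);
    return nbytes;
}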

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: UASTC Level 2 (Seconds, Fewer Is Better): Run 2: 86.54, Run 1: 86.48, Run 3: 86.34

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

CloverLeaf, Lagrangian-Eulerian Hydrodynamics (Seconds, Fewer Is Better): Run 2: 191.45, Run 1: 191.41, Run 3: 191.01

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0, Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better): Run 1: 3.2431, Run 3: 3.2482, Run 2: 3.2504

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0, Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, More Is Better): Run 1: 27.01, Run 3: 27.05, Run 2: 27.07

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better): Run 2: 110320, Run 3: 110157, Run 1: 110084

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.

Unpacking Firefox 84.0, Extracting: firefox-84.0.source.tar.xz (Seconds, Fewer Is Better): Run 1: 23.85, Run 3: 23.81, Run 2: 23.80

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1, Video Input: Chimera 1080p (FPS, More Is Better): Run 3: 183.88, Run 2: 184.17, Run 1: 184.25

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0, Test: FLOAT (QUIPs, More Is Better): Run 3: 301185316.59, Run 1: 301333349.99, Run 2: 301687480.19

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04, Resolution: 1920 x 1080 (Score, More Is Better): Run 1: 1849, Run 2: 1851, Run 3: 1852

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0, Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better): Run 3: 6.50, Run 1: 6.51, Run 2: 6.51

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 9 - Decompression Speed (MB/s, More Is Better): Run 2: 8552.1, Run 3: 8562.8, Run 1: 8565.2

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: ETC1S (Seconds, Fewer Is Better): Run 2: 82.18, Run 3: 82.08, Run 1: 82.06

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, Fewer Is Better): Run 2: 5697083, Run 1: 5691310, Run 3: 5689070

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better): Run 2: 279.63, Run 1: 279.34, Run 3: 279.28

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18, Test: unsharp-mask (Seconds, Fewer Is Better): Run 1: 17.33, Run 3: 17.33, Run 2: 17.32

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 3 - Decompression Speed (MB/s, More Is Better): Run 3: 8547.2, Run 2: 8547.8, Run 1: 8554.3

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project (Real-World Super-Resolution via Kernel Estimation and Noise Injection), accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818, Scale: 4x - TAA: Yes (Seconds, Fewer Is Better): Run 2: 482.84, Run 1: 482.61, Run 3: 482.55

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818, Scale: 2x - Denoise: 3 - TAA: Yes (Seconds, Fewer Is Better): Run 2: 26.69, Run 1: 26.68, Run 3: 26.67

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project (Real-World Super-Resolution via Kernel Estimation and Noise Injection), accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818, Scale: 4x - TAA: No (Seconds, Fewer Is Better): Run 1: 63.03, Run 2: 63.02, Run 3: 63.00

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better): Run 1: 8.872, Run 2: 8.871, Run 3: 8.869

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1, Throughput Test: DistinctUserID (GB/s, More Is Better): Run 1: 0.46, Run 2: 0.46, Run 3: 0.46

simdjson 0.7.1, Throughput Test: PartialTweets (GB/s, More Is Better): Run 1: 0.45, Run 2: 0.45, Run 3: 0.45

simdjson 0.7.1, Throughput Test: LargeRandom (GB/s, More Is Better): Run 1: 0.35, Run 2: 0.35, Run 3: 0.35

simdjson 0.7.1, Throughput Test: Kostya (GB/s, More Is Better): Run 1: 0.38, Run 2: 0.38, Run 3: 0.38
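
The four figures above parse different sample JSON documents through simdjson's DOM API; the sketch below roughly follows the library's documented 0.x usage, with the file name and the fields accessed being placeholders for illustration.

// Illustrative simdjson DOM parse; file name and fields are placeholders.
#include <cstdint>
#include <cstdio>
#include "simdjson.h"

int main()
{
    simdjson::dom::parser parser;
    simdjson::dom::element doc = parser.load("twitter.json");   // throws simdjson_error on failure

    size_t count = 0;
    for (simdjson::dom::object tweet : doc["statuses"]) {
        uint64_t id = tweet["id"];    // field lookup; throws if missing or mistyped
        (void)id;
        ++count;
    }
    std::printf("parsed %zu records\n", count);
    return 0;
}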

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2, Static OMP Speedup (Speedup, More Is Better): Run 1: 2.0, Run 2: 2.0, Run 3: 2.0
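
Since the figure of merit here is the speed-up of an OpenMP static schedule over serial execution, a tiny stand-alone illustration of that scheduling mode is sketched below; it is not CLOMP's actual kernel, just a reminder of what schedule(static) means.

// Toy OpenMP static-schedule loop (illustrative only, not the CLOMP kernel).
#include <omp.h>
#include <cstdio>
#include <vector>

int main()
{
    const int n = 1 << 22;
    std::vector<double> a(n, 1.0), b(n, 2.0);

    const double t0 = omp_get_wtime();
    // schedule(static) gives each thread one contiguous chunk of iterations up front,
    // the low-overhead case whose speed-up over serial execution CLOMP reports.
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < n; ++i)
        a[i] = 0.5 * a[i] + b[i];
    const double t1 = omp_get_wtime();

    std::printf("%d threads, %.3f ms\n", omp_get_max_threads(), (t1 - t0) * 1e3);
    return 0;
}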

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): Run 2: 11.30, Run 1: 10.88, Run 3: 10.68

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): Run 1: 36.56, Run 2: 35.85, Run 3: 33.06

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 9 - Compression Speed (MB/s, More Is Better): Run 2: 41.02, Run 3: 41.23, Run 1: 42.22

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4, Test: Latency Under Load (usec, Fewer Is Better): Run 1: 53.78, Run 2: 52.76, Run 3: 50.70

191 Results Shown

Redis
Kripke
LeelaChessZero:
  BLAS
  Eigen
Redis
oneDNN
Sunflow Rendering System
Node.js V8 Web Tooling Benchmark
OSBench
oneDNN
NCNN
oneDNN
TensorFlow Lite
oneDNN
Mobile Neural Network
oneDNN
rav1e
NCNN
oneDNN
Darktable
NCNN
Mobile Neural Network
oneDNN
Sockperf
OSBench
NCNN:
  Vulkan GPU - blazeface
  CPU - mnasnet
LULESH
FFTE
LZ4 Compression
TensorFlow Lite
oneDNN
LibRaw
GROMACS
dav1d
RNNoise
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
Hugin
NCNN:
  CPU - regnety_400m
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - vgg16
  CPU - efficientnet-b0
  CPU - resnet50
asmFish
Basis Universal
Stockfish
oneDNN
Darktable
oneDNN
KeyDB
NCNN
Darktable
Zstd Compression
AOM AV1
InfluxDB
Darktable
Zstd Compression
OpenFOAM
Redis
AOM AV1
TensorFlow Lite
Incompact3D
NCNN
TensorFlow Lite
NCNN
oneDNN
Redis
NAMD
Mobile Neural Network
x265
NCNN:
  CPU - googlenet
  CPU - mobilenet
OSBench
rav1e
Crafty
dav1d
LAMMPS Molecular Dynamics Simulator
NCNN:
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU - alexnet
BYTE Unix Benchmark
NCNN
Embree
CP2K Molecular Dynamics
NCNN
AOM AV1
LZ4 Compression
SQLite Speedtest
oneDNN
yquake2
WebP Image Encode
Redis
Sockperf
Warsow
IndigoBench:
  CPU - Supercar
  CPU - Bedroom
WebP Image Encode
Numpy Benchmark
PHPBench
Embree
Coremark
Embree
Caffe
Dolfyn
NCNN:
  Vulkan GPU - resnet18
  Vulkan GPU - yolov4-tiny
OSBench
Kvazaar
OSBench
rav1e
Embree
Kvazaar
oneDNN
WavPack Audio Encoding
ASTC Encoder
NCNN
ASTC Encoder
Embree
Timed Eigen Compilation
OCRMyPDF
Monte Carlo Simulations of Ionised Nebulae
oneDNN
NCNN:
  Vulkan GPU - mnasnet
  Vulkan GPU - squeezenet_ssd
Timed Godot Game Engine Compilation
Mobile Neural Network
eSpeak-NG Speech Engine
InfluxDB
NCNN
Mobile Neural Network
NCNN
Timed FFmpeg Compilation
Google SynthMark
TensorFlow Lite
GIMP
Waifu2x-NCNN Vulkan
GIMP
yquake2
WebP Image Encode
x265
oneDNN
AOM AV1
Build2
Timed HMMer Search
TNN
LZ4 Compression
Algebraic Multi-Grid Benchmark
ASTC Encoder
dav1d
ASTC Encoder
GIMP
Kvazaar
Timed MAFFT Alignment
RawTherapee
Kvazaar
WebP Image Encode
VKMark
Monkey Audio Encoding
NCNN:
  Vulkan GPU - googlenet
  CPU - yolov4-tiny
Opus Codec Encoding
Basis Universal
CloverLeaf
Embree
Kvazaar
Caffe
Unpacking Firefox
dav1d
Hierarchical INTegration
GLmark2
Kvazaar
LZ4 Compression
Basis Universal
TensorFlow Lite
TNN
GIMP
LZ4 Compression
RealSR-NCNN
Waifu2x-NCNN Vulkan
RealSR-NCNN
WebP Image Encode
simdjson:
  DistinctUserID
  PartialTweets
  LargeRandom
  Kostya
CLOMP
NCNN
oneDNN
LZ4 Compression
Sockperf