Ryzen 3 2200G 2021

AMD Ryzen 3 2200G testing with an ASUS PRIME B350M-E (5220 BIOS) and ASUS AMD Radeon Vega / Mobile 2GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101191-HA-RYZEN322022
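The Phoronix Test Suite is packaged for most Linux distributions, so as a rough sketch, reproducing this comparison on Ubuntu would look like the following (assuming the distribution's phoronix-test-suite package and the result ID shown above):

    sudo apt install phoronix-test-suite
    phoronix-test-suite benchmark 2101191-HA-RYZEN322022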

This result file contains tests spanning the following categories:

Audio Encoding: 3 tests
AV1: 3 tests
Bioinformatics: 2 tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 2 tests
C++ Boost Tests: 2 tests
Chess Test Suite: 4 tests
Timed Code Compilation: 4 tests
C/C++ Compiler Tests: 15 tests
Compression Tests: 2 tests
CPU Massive: 21 tests
Creator Workloads: 24 tests
Database Test Suite: 4 tests
Encoding: 8 tests
Fortran Tests: 6 tests
Game Development: 3 tests
HPC - High Performance Computing: 24 tests
Imaging: 6 tests
Common Kernel Benchmarks: 2 tests
Machine Learning: 9 tests
Molecular Dynamics: 9 tests
MPI Benchmarks: 4 tests
Multi-Core: 19 tests
NVIDIA GPU Compute: 7 tests
Intel oneAPI: 2 tests
OpenMPI Tests: 9 tests
Programmer / Developer System Benchmarks: 9 tests
Python Tests: 5 tests
Scientific Computing: 15 tests
Server: 7 tests
Server CPU Tests: 12 tests
Single-Threaded: 6 tests
Speech: 3 tests
Telephony: 3 tests
Texture Compression: 2 tests
Video Encoding: 5 tests
Vulkan Compute: 3 tests

Result identifiers, run dates, and test durations:

Run 1: January 16 2021 (18 Hours, 35 Minutes)
Run 2: January 17 2021 (20 Hours, 52 Minutes)
Run 3: January 18 2021 (19 Hours, 6 Minutes)
Average test duration: 19 Hours, 31 Minutes


Ryzen 3 2200G 2021: system details (identical for runs 1, 2, and 3)

Processor: AMD Ryzen 3 2200G @ 3.50GHz (4 Cores)
Motherboard: ASUS PRIME B350M-E (5220 BIOS)
Chipset: AMD Raven/Raven2
Memory: 6GB
Disk: Samsung SSD 970 EVO 250GB
Graphics: ASUS AMD Radeon Vega / Mobile 2GB (1100/1600MHz)
Audio: AMD Raven/Raven2/Fenghuang
Monitor: G237HL
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 20.10
Kernel: 5.8.0-38-generic (x86_64)
Desktop: GNOME Shell 3.38.1
Display Server: X Server 1.20.9
Display Driver: modesetting 1.20.9
OpenGL: 4.6 Mesa 20.2.6 (LLVM 11.0.0)
Vulkan: 1.2.131
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x8101016
Graphics Details: GLAMOR
Java Details: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.10)
Python Details: Python 3.8.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite 10.8.4): a relative-performance chart comparing runs 1, 2, and 3 across every test in this file, with per-test spreads ranging from 100% up to roughly 121%. The tests with the widest spread between runs include LeelaChessZero, Redis, Sunflow Rendering System, Node.js V8 Web Tooling Benchmark, Sockperf, LULESH, and FFTE; those with the least variation include WebP Image Encode, Caffe, Build2, simdjson, and CLOMP.

Condensed results table: raw side-by-side values of all three runs for every test in this result file. The individual results below break these out per test with error statistics.

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: LPOP (Requests Per Second; more is better)
Run 1: 2261210.92 (SE +/- 16948.58, N = 3; Min: 2227314 / Max: 2278268.75)
Run 2: 1258380.50 (SE +/- 11741.76, N = 3; Min: 1236766.38 / Max: 1277139.25)
Run 3: 1275489.17 (SE +/- 8952.66, N = 3; Min: 1261558.62 / Max: 1292196.38)
(CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
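For a rough manual approximation of this workload outside the test profile, Redis ships a redis-benchmark client; the flags below are standard redis-benchmark options, though the exact parameters the test profile passes may differ:

    redis-benchmark -t lpop -n 1000000 -q

Here -t picks the command under test (lpop for this graph; get, set, sadd, and lpush match the other Redis results in this file), -n sets the total request count, and -q prints only the final requests-per-second figures.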

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM; more is better)
Run 1: 4811563 (SE +/- 36406.50, N = 2; Min: 4775156 / Max: 4847969)
Run 2: 3117717 (SE +/- 35494.54, N = 3; Min: 3047023 / Max: 3158661)
(CXX) g++ options: -O3 -fopenmp

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: BLAS (Nodes Per Second; more is better)
Run 1: 432 (SE +/- 4.54, N = 8; Min: 400 / Max: 438)
Run 2: 374 (SE +/- 6.01, N = 9; Min: 355 / Max: 400)
Run 3: 353 (SE +/- 2.52, N = 3; Min: 348 / Max: 356)
(CXX) g++ options: -flto -pthread

LeelaChessZero 0.26, Backend: Eigen (Nodes Per Second; more is better)
Run 1: 448 (SE +/- 4.81, N = 3; Min: 439 / Max: 455)
Run 2: 380 (no error statistics reported)
Run 3: 377 (SE +/- 5.13, N = 9; Min: 354 / Max: 403)
(CXX) g++ options: -flto -pthread

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: GET (Requests Per Second; more is better)
Run 1: 2064794.83 (SE +/- 35016.53, N = 3; Min: 2016129.12 / Max: 2132742)
Run 2: 1931045.20 (SE +/- 23617.43, N = 5; Min: 1872958.88 / Max: 2012265.5)
Run 3: 1930168.38 (SE +/- 22292.08, N = 3; Min: 1886792.5 / Max: 1960784.38)
(CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Run 1: 14.82 (SE +/- 0.22, N = 15; Min: 13.41 / Max: 15.75; reported MIN: 11.81)
Run 2: 15.08 (SE +/- 0.18, N = 15; Min: 13.61 / Max: 15.69; reported MIN: 12.35)
Run 3: 15.56 (SE +/- 0.09, N = 3; Min: 15.38 / Max: 15.67; reported MIN: 13.12)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System. The Sunflow Rendering System is an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

Sunflow Rendering System 0.07.2, Global Illumination + Image Synthesis (Seconds; fewer is better)
Run 1: 3.206 (SE +/- 0.041, N = 3; Min: 3.15 / Max: 3.29; reported MIN/MAX: 2.88 / 3.79)
Run 2: 3.148 (SE +/- 0.032, N = 3; Min: 3.1 / Max: 3.21; reported MIN/MAX: 2.89 / 3.84)
Run 3: 3.302 (SE +/- 0.028, N = 15; Min: 3.09 / Max: 3.47; reported MIN/MAX: 2.87 / 4.18)

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, like Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s; more is better)
Run 1: 7.38 (SE +/- 0.08, N = 3; Min: 7.22 / Max: 7.46)
Run 2: 7.74 (SE +/- 0.03, N = 3; Min: 7.69 / Max: 7.8)
Run 3: 7.38 (SE +/- 0.09, N = 4; Min: 7.12 / Max: 7.56)
Nodejs v12.18.2

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench, Test: Memory Allocations (Ns Per Event; fewer is better)
Run 1: 81.74 (SE +/- 0.02, N = 3; Min: 81.7 / Max: 81.77)
Run 2: 82.00 (SE +/- 0.10, N = 3; Min: 81.87 / Max: 82.19)
Run 3: 85.63 (SE +/- 1.45, N = 3; Min: 82.72 / Max: 87.09)
(CC) gcc options: -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Run 1: 14.63 (SE +/- 0.11, N = 3; Min: 14.49 / Max: 14.84; reported MIN: 13.38)
Run 2: 14.87 (SE +/- 0.12, N = 3; Min: 14.62 / Max: 15; reported MIN: 13.38)
Run 3: 15.20 (SE +/- 0.12, N = 3; Min: 14.96 / Max: 15.34; reported MIN: 13.47)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: resnet50 (ms; fewer is better)
Run 1: 71.68 (SE +/- 0.20, N = 3; Min: 71.39 / Max: 72.06; reported MIN/MAX: 65.87 / 90.56)
Run 2: 74.10 (SE +/- 0.63, N = 4; Min: 73.21 / Max: 75.96; reported MIN/MAX: 66.26 / 110.26)
Run 3: 71.69 (SE +/- 0.29, N = 3; Min: 71.39 / Max: 72.28; reported MIN/MAX: 66.47 / 91.18)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
Run 1: 7913.61 (SE +/- 101.76, N = 3; Min: 7752.25 / Max: 8101.69; reported MIN: 7617.25)
Run 2: 7667.94 (SE +/- 22.70, N = 3; Min: 7640.74 / Max: 7713.03; reported MIN: 7509.56)
Run 3: 7750.49 (SE +/- 13.61, N = 3; Min: 7723.31 / Max: 7765.32; reported MIN: 7562.51)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Mobilenet Quant (Microseconds; fewer is better)
Run 1: 328733 (SE +/- 1183.02, N = 3; Min: 326554 / Max: 330621)
Run 2: 318685 (SE +/- 1025.34, N = 3; Min: 316646 / Max: 319896)
Run 3: 327187 (SE +/- 1866.22, N = 3; Min: 325104 / Max: 330911)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
Run 1: 8195.20 (SE +/- 99.84, N = 5; Min: 7835.76 / Max: 8382.75; reported MIN: 7505)
Run 2: 8438.64 (SE +/- 66.22, N = 15; Min: 8073.51 / Max: 9047.54; reported MIN: 7752.96)
Run 3: 8426.84 (SE +/- 46.50, N = 3; Min: 8372.71 / Max: 8519.39; reported MIN: 8003.45)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: mobilenet-v1-1.0 (ms; fewer is better)
Run 1: 7.395 (SE +/- 0.021, N = 3; Min: 7.35 / Max: 7.42; reported MIN/MAX: 6.57 / 16.47)
Run 2: 7.526 (SE +/- 0.065, N = 3; Min: 7.4 / Max: 7.62; reported MIN/MAX: 6.6 / 20.05)
Run 3: 7.313 (SE +/- 0.030, N = 3; Min: 7.28 / Max: 7.37; reported MIN/MAX: 6.61 / 17.32)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 23.02 (SE +/- 0.34, N = 3; Min: 22.35 / Max: 23.45; reported MIN: 19.03)
Run 2: 23.68 (SE +/- 0.20, N = 3; Min: 23.27 / Max: 23.89; reported MIN: 20.06)
Run 3: 23.67 (SE +/- 0.14, N = 3; Min: 23.43 / Max: 23.92; reported MIN: 20.01)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4, Speed: 10 (Frames Per Second; more is better)
Run 1: 2.573 (SE +/- 0.015, N = 3; Min: 2.54 / Max: 2.59)
Run 2: 2.566 (SE +/- 0.008, N = 3; Min: 2.56 / Max: 2.58)
Run 3: 2.639 (SE +/- 0.007, N = 3; Min: 2.63 / Max: 2.65)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: shufflenet-v2 (ms; fewer is better)
Run 1: 12.63 (SE +/- 0.09, N = 3; Min: 12.48 / Max: 12.8; reported MIN/MAX: 10.39 / 25.47)
Run 2: 12.98 (SE +/- 0.16, N = 4; Min: 12.66 / Max: 13.43; reported MIN/MAX: 10.26 / 24.78)
Run 3: 12.75 (SE +/- 0.20, N = 3; Min: 12.42 / Max: 13.12; reported MIN/MAX: 10.36 / 21.98)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Run 1: 8193.61 (SE +/- 128.90, N = 3; Min: 8013.08 / Max: 8443.25; reported MIN: 7671.11)
Run 2: 8356.19 (SE +/- 144.66, N = 3; Min: 8101.02 / Max: 8601.87; reported MIN: 7776.27)
Run 3: 8419.82 (SE +/- 28.77, N = 3; Min: 8373.46 / Max: 8472.51; reported MIN: 8053.44)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1, Test: Boat - Acceleration: CPU-only (Seconds; fewer is better)
Run 1: 25.21 (SE +/- 0.09, N = 3; Min: 25.07 / Max: 25.38)
Run 2: 25.90 (SE +/- 0.25, N = 13; Min: 25.34 / Max: 28.69)
Run 3: 25.45 (SE +/- 0.09, N = 3; Min: 25.27 / Max: 25.55)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better)
Run 1: 11.20 (SE +/- 0.13, N = 3; Min: 10.99 / Max: 11.45; reported MIN/MAX: 8.97 / 20.5)
Run 2: 10.91 (SE +/- 0.08, N = 3; Min: 10.77 / Max: 11.05; reported MIN/MAX: 8.91 / 18.07)
Run 3: 11.04 (SE +/- 0.05, N = 3; Min: 10.94 / Max: 11.13; reported MIN/MAX: 8.96 / 21.86)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: SqueezeNetV1.0 (ms; fewer is better)
Run 1: 9.732 (SE +/- 0.127, N = 3; Min: 9.59 / Max: 9.99; reported MIN/MAX: 8.67 / 18.76)
Run 2: 9.613 (SE +/- 0.039, N = 3; Min: 9.56 / Max: 9.69; reported MIN/MAX: 8.69 / 20.7)
Run 3: 9.867 (SE +/- 0.066, N = 3; Min: 9.76 / Max: 9.98; reported MIN/MAX: 8.73 / 39.42)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Run 1: 29.71 (SE +/- 0.09, N = 3; Min: 29.56 / Max: 29.86; reported MIN: 26.4)
Run 2: 29.89 (SE +/- 0.14, N = 3; Min: 29.62 / Max: 30.07; reported MIN: 26.42)
Run 3: 29.12 (SE +/- 0.29, N = 3; Min: 28.66 / Max: 29.65; reported MIN: 26.35)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4, Test: Latency Ping Pong (usec; fewer is better)
Run 1: 6.927 (SE +/- 0.065, N = 5; Min: 6.77 / Max: 7.08)
Run 2: 6.751 (SE +/- 0.049, N = 5; Min: 6.64 / Max: 6.9)
Run 3: 6.790 (SE +/- 0.074, N = 5; Min: 6.54 / Max: 6.97)
(CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread
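Sockperf's ping-pong mode can be reproduced by hand against a sockperf server instance; the flags below are standard sockperf options, though the test profile's exact arguments may differ:

    sockperf server -i 127.0.0.1 -p 11111 &
    sockperf ping-pong -i 127.0.0.1 -p 11111 -t 10

The throughput subcommand, run the same way, corresponds to the separate Sockperf: Throughput result in this file.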

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench, Test: Create Processes (us Per Event; fewer is better)
Run 1: 26.19 (SE +/- 0.10, N = 3; Min: 26.03 / Max: 26.36)
Run 2: 26.48 (SE +/- 0.03, N = 3; Min: 26.44 / Max: 26.54)
Run 3: 26.86 (SE +/- 0.21, N = 3; Min: 26.45 / Max: 27.13)
(CC) gcc options: -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: blazeface (ms; fewer is better)
Run 1: 3.25 (SE +/- 0.01, N = 3; Min: 3.23 / Max: 3.28; reported MIN/MAX: 2.61 / 14.28)
Run 2: 3.31 (SE +/- 0.02, N = 4; Min: 3.26 / Max: 3.35; reported MIN/MAX: 2.6 / 4.93)
Run 3: 3.33 (SE +/- 0.03, N = 3; Min: 3.29 / Max: 3.39; reported MIN/MAX: 2.62 / 5.7)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: mnasnet (ms; fewer is better)
Run 1: 10.50 (SE +/- 0.05, N = 3; Min: 10.4 / Max: 10.57; reported MIN/MAX: 8.45 / 18.4)
Run 2: 10.34 (SE +/- 0.10, N = 3; Min: 10.24 / Max: 10.54; reported MIN/MAX: 8.42 / 16.24)
Run 3: 10.25 (SE +/- 0.06, N = 3; Min: 10.15 / Max: 10.35; reported MIN/MAX: 8.43 / 24.69)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s; more is better)
Run 1: 1180.10 (SE +/- 0.53, N = 3; Min: 1179.34 / Max: 1181.12)
Run 2: 1208.39 (SE +/- 0.69, N = 3; Min: 1207.56 / Max: 1209.75)
Run 3: 1208.02 (SE +/- 2.03, N = 3; Min: 1203.96 / Max: 1210.15)
(CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.

FFTE 7.0, N=256, 3D Complex FFT Routine (MFLOPS; more is better)
Run 1: 15392.81 (SE +/- 120.97, N = 3; Min: 15152.03 / Max: 15533.71)
Run 2: 15755.59 (SE +/- 110.35, N = 3; Min: 15541.61 / Max: 15909.41)
Run 3: 15437.47 (SE +/- 161.49, N = 3; Min: 15152.55 / Max: 15711.64)
(F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 3 - Compression Speed (MB/s; more is better)
Run 1: 42.77 (SE +/- 0.58, N = 3; Min: 42.02 / Max: 43.9)
Run 2: 42.34 (SE +/- 0.43, N = 15; Min: 38.94 / Max: 44.38)
Run 3: 41.81 (SE +/- 0.65, N = 15; Min: 36.12 / Max: 44.43)
(CC) gcc options: -O3
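The lz4 command-line tool includes a built-in benchmark mode that approximates this test, where -b selects the compression level; the filename below is a placeholder for the Ubuntu ISO the test profile uses:

    lz4 -b3 some-large-file.iso

Benchmarking with -b1 and -b9 corresponds to the level 1 and level 9 LZ4 results elsewhere in this file.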

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Mobilenet Float (Microseconds; fewer is better)
Run 1: 309946 (SE +/- 172.14, N = 3; Min: 309629 / Max: 310221)
Run 2: 306270 (SE +/- 1840.34, N = 3; Min: 302657 / Max: 308685)
Run 3: 313216 (SE +/- 2441.23, N = 3; Min: 308497 / Max: 316661)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 13.07 (SE +/- 0.18, N = 15; Min: 11.68 / Max: 13.48; reported MIN: 10.63)
Run 2: 13.17 (SE +/- 0.17, N = 15; Min: 11.89 / Max: 13.59; reported MIN: 10.78)
Run 3: 13.37 (SE +/- 0.06, N = 3; Min: 13.26 / Max: 13.46; reported MIN: 12.28)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20, Post-Processing Benchmark (Mpix/sec; more is better)
Run 1: 19.36 (SE +/- 0.05, N = 3; Min: 19.29 / Max: 19.46)
Run 2: 19.66 (SE +/- 0.11, N = 3; Min: 19.51 / Max: 19.87)
Run 3: 19.79 (SE +/- 0.12, N = 3; Min: 19.64 / Max: 20.03)
(CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3, Water Benchmark (Ns Per Day; more is better)
Run 1: 0.333 (SE +/- 0.002, N = 3; Min: 0.33 / Max: 0.34)
Run 2: 0.330 (SE +/- 0.002, N = 3; Min: 0.33 / Max: 0.33)
Run 3: 0.326 (SE +/- 0.005, N = 3; Min: 0.32 / Max: 0.33)
(CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1, Video Input: Chimera 1080p 10-bit (FPS; more is better)
Run 1: 52.55 (SE +/- 0.17, N = 3; Min: 52.28 / Max: 52.85; reported MIN/MAX: 35.45 / 124.71)
Run 2: 52.39 (SE +/- 0.21, N = 3; Min: 51.98 / Max: 52.64; reported MIN/MAX: 35.47 / 120.48)
Run 3: 53.51 (SE +/- 0.31, N = 3; Min: 52.9 / Max: 53.91; reported MIN/MAX: 35.6 / 125.13)
(CC) gcc options: -pthread -ldl -lm
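A comparable decode can be timed by hand with the dav1d CLI, using the null muxer to discard decoded frames (the input filename is a placeholder; the test profile supplies its own sample clips):

    dav1d -i chimera_1080p_10bit.ivf --muxer null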

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds; fewer is better)
Run 1: 22.24 (SE +/- 0.03, N = 3; Min: 22.2 / Max: 22.29)
Run 2: 22.69 (SE +/- 0.23, N = 8; Min: 22.2 / Max: 24.18)
Run 3: 22.58 (SE +/- 0.35, N = 3; Min: 22.21 / Max: 23.28)
(CC) gcc options: -O2 -pedantic -fvisibility=hidden

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 30.81 (SE +/- 0.15, N = 3; Min: 30.64 / Max: 31.1; reported MIN: 22.57)
Run 2: 31.41 (SE +/- 0.28, N = 3; Min: 30.98 / Max: 31.95; reported MIN: 22.58)
Run 3: 30.82 (SE +/- 0.14, N = 3; Min: 30.62 / Max: 31.09; reported MIN: 22.62)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 8437.12 (SE +/- 133.51, N = 3; Min: 8172.26 / Max: 8598.98; reported MIN: 7874.47)
Run 2: 8277.53 (SE +/- 86.65, N = 3; Min: 8112.62 / Max: 8406.14; reported MIN: 7767.68)
Run 3: 8342.73 (SE +/- 61.07, N = 3; Min: 8220.67 / Max: 8407.38; reported MIN: 7929.12)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin, Panorama Photo Assistant + Stitching Time (Seconds; fewer is better)
Run 1: 82.01 (SE +/- 0.57, N = 3; Min: 81.07 / Max: 83.05)
Run 2: 83.58 (SE +/- 0.26, N = 3; Min: 83.18 / Max: 84.08)
Run 3: 82.19 (SE +/- 0.13, N = 3; Min: 81.94 / Max: 82.32)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: regnety_400m (ms; fewer is better)
Run 1: 19.11 (SE +/- 0.07, N = 3; Min: 19 / Max: 19.25; reported MIN/MAX: 16.69 / 35.81)
Run 2: 18.87 (SE +/- 0.10, N = 3; Min: 18.76 / Max: 19.06; reported MIN/MAX: 16.81 / 33.23)
Run 3: 18.75 (SE +/- 0.18, N = 3; Min: 18.41 / Max: 19.02; reported MIN/MAX: 16.84 / 32.85)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms; fewer is better)
Run 1: 12.65 (SE +/- 0.12, N = 3; Min: 12.45 / Max: 12.85; reported MIN/MAX: 10.41 / 23.3)
Run 2: 12.89 (SE +/- 0.14, N = 3; Min: 12.64 / Max: 13.12; reported MIN/MAX: 10.42 / 26.77)
Run 3: 12.70 (SE +/- 0.14, N = 3; Min: 12.48 / Max: 12.96; reported MIN/MAX: 10.47 / 19.71)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better)
Run 1: 9.67 (SE +/- 0.07, N = 3; Min: 9.54 / Max: 9.75; reported MIN/MAX: 7.8 / 15.65)
Run 2: 9.49 (SE +/- 0.05, N = 3; Min: 9.44 / Max: 9.59; reported MIN/MAX: 7.78 / 14.76)
Run 3: 9.59 (SE +/- 0.03, N = 3; Min: 9.53 / Max: 9.64; reported MIN/MAX: 7.81 / 16.29)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: vgg16 (ms; fewer is better)
Run 1: 117.19 (SE +/- 0.18, N = 3; Min: 116.85 / Max: 117.45; reported MIN/MAX: 112.25 / 143.19)
Run 2: 119.38 (SE +/- 0.23, N = 3; Min: 118.95 / Max: 119.72; reported MIN/MAX: 113.9 / 142.52)
Run 3: 117.45 (SE +/- 0.17, N = 3; Min: 117.14 / Max: 117.71; reported MIN/MAX: 112.54 / 135.06)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms; fewer is better)
Run 1: 17.07 (SE +/- 0.15, N = 3; Min: 16.87 / Max: 17.36; reported MIN/MAX: 14.1 / 30.1)
Run 2: 16.76 (SE +/- 0.03, N = 3; Min: 16.71 / Max: 16.81; reported MIN/MAX: 13.96 / 31.87)
Run 3: 16.89 (SE +/- 0.12, N = 3; Min: 16.65 / Max: 17.05; reported MIN/MAX: 14.11 / 30.24)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: resnet50 (ms; fewer is better)
Run 1: 72.55 (SE +/- 0.82, N = 3; Min: 71.55 / Max: 74.18; reported MIN/MAX: 66.75 / 91.97)
Run 2: 73.14 (SE +/- 0.39, N = 3; Min: 72.36 / Max: 73.58; reported MIN/MAX: 66.24 / 103.74)
Run 3: 71.82 (SE +/- 0.19, N = 3; Min: 71.45 / Max: 72.1; reported MIN/MAX: 65.77 / 87.35)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (Nodes/second; more is better)
Run 1: 7748047 (SE +/- 28500.12, N = 3; Min: 7696997 / Max: 7795531)
Run 2: 7802828 (SE +/- 50254.79, N = 3; Min: 7713402 / Max: 7887276)
Run 3: 7669043 (SE +/- 29445.58, N = 3; Min: 7611696 / Max: 7709319)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: UASTC Level 0 (Seconds; fewer is better)
Run 1: 11.90 (SE +/- 0.02, N = 3; Min: 11.87 / Max: 11.94)
Run 2: 12.05 (SE +/- 0.09, N = 3; Min: 11.93 / Max: 12.24)
Run 3: 11.86 (SE +/- 0.00, N = 3; Min: 11.86 / Max: 11.87)
(CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12, Total Time (Nodes Per Second; more is better)
Run 1: 5718169 (SE +/- 48644.36, N = 3; Min: 5629479 / Max: 5797146)
Run 2: 5628220 (SE +/- 74806.22, N = 3; Min: 5499526 / Max: 5758645)
Run 3: 5648589 (SE +/- 39149.01, N = 3; Min: 5574905 / Max: 5708364)
(CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver
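Stockfish also exposes a built-in bench command that takes hash size (MB), thread count, and search depth as positional arguments; a hand-run sketch on this 4-core CPU might look like the following, though the test profile's exact settings may differ:

    stockfish bench 1024 4 13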

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 22.54 (SE +/- 0.26, N = 3; Min: 22.09 / Max: 23; reported MIN: 17.75)
Run 2: 22.31 (SE +/- 0.18, N = 15; Min: 21.09 / Max: 23.32; reported MIN: 17.69)
Run 3: 22.66 (SE +/- 0.14, N = 3; Min: 22.39 / Max: 22.87; reported MIN: 17.69)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1, Test: Server Room - Acceleration: CPU-only (Seconds; fewer is better)
Run 1: 20.70 (SE +/- 0.21, N = 3; Min: 20.28 / Max: 20.94)
Run 2: 21.01 (SE +/- 0.12, N = 3; Min: 20.77 / Max: 21.18)
Run 3: 20.74 (SE +/- 0.19, N = 3; Min: 20.36 / Max: 20.96)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms; fewer is better)
Run 1: 7721.13 (SE +/- 11.25, N = 3; Min: 7702.17 / Max: 7741.11; reported MIN: 7547.72)
Run 2: 7794.13 (SE +/- 85.05, N = 3; Min: 7708.43 / Max: 7964.23; reported MIN: 7534.49)
Run 3: 7837.72 (SE +/- 30.28, N = 3; Min: 7779.6 / Max: 7881.52; reported MIN: 7613.07)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

KeyDB

A benchmark of KeyDB, a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (Ops/sec; more is better)
Run 1: 265074.45 (SE +/- 3138.20, N = 3; Min: 259875.13 / Max: 270718.81)
Run 2: 267212.95 (SE +/- 2051.97, N = 3; Min: 264632.8 / Max: 271266.88)
Run 3: 269044.48 (SE +/- 1852.38, N = 3; Min: 265340.35 / Max: 270955.21)
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: mobilenet (ms; fewer is better)
Run 1: 46.40 (SE +/- 0.03, N = 3; Min: 46.35 / Max: 46.45; reported MIN/MAX: 42.8 / 59.97)
Run 2: 47.01 (SE +/- 0.66, N = 4; Min: 46.22 / Max: 48.98; reported MIN/MAX: 42.66 / 60.93)
Run 3: 46.32 (SE +/- 0.07, N = 3; Min: 46.21 / Max: 46.44; reported MIN/MAX: 42.77 / 61.8)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1, Test: Server Rack - Acceleration: CPU-only (Seconds; fewer is better)
Run 1: 0.339 (SE +/- 0.001, N = 3; Min: 0.34 / Max: 0.34)
Run 2: 0.342 (SE +/- 0.001, N = 3; Min: 0.34 / Max: 0.34)
Run 3: 0.344 (SE +/- 0.004, N = 3; Min: 0.34 / Max: 0.35)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5, Compression Level: 3 (MB/s; more is better)
Run 1: 2346.0 (SE +/- 27.78, N = 3; Min: 2297.1 / Max: 2393.3)
Run 2: 2358.0 (SE +/- 16.00, N = 3; Min: 2329.7 / Max: 2385.1)
Run 3: 2324.2 (SE +/- 7.82, N = 3; Min: 2308.6 / Max: 2332.6)
(CC) gcc options: -O3 -pthread -lz -llzma
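zstd's built-in benchmark mode gives a quick approximation of this test, where -b selects the compression level; the filename below is a placeholder for the Ubuntu ISO the test profile uses:

    zstd -b3 some-large-file.iso
    zstd -b19 some-large-file.iso

Level 19 corresponds to the Compression Level: 19 result later in this file.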

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0, Encoder Mode: Speed 4 Two-Pass (Frames Per Second; more is better)
Run 1: 1.38 (SE +/- 0.01, N = 3; Min: 1.37 / Max: 1.39)
Run 2: 1.38 (SE +/- 0.00, N = 3; Min: 1.38 / Max: 1.38)
Run 3: 1.40 (SE +/- 0.01, N = 3; Min: 1.39 / Max: 1.41)
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
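A two-pass aomenc encode along these lines can be launched manually; --cpu-used maps to the speed number in the result title, and the input file is a placeholder for the profile's sample video:

    aomenc --passes=2 --cpu-used=4 -o output.ivf input.y4m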

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2, Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec; more is better)
Run 1: 706035.5 (SE +/- 8641.46, N = 3; Min: 695838.4 / Max: 723218.7)
Run 2: 696009.5 (SE +/- 6558.55, N = 3; Min: 689123.7 / Max: 709121.1)
Run 3: 700554.6 (SE +/- 5594.95, N = 3; Min: 694830.4 / Max: 711743.5)

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Masskrug - Acceleration: CPU-only (Seconds, Fewer Is Better)
Run 1: 24.17 (SE +/- 0.03, N = 3) | Min: 24.12 / Avg: 24.17 / Max: 24.23
Run 2: 24.52 (SE +/- 0.04, N = 3) | Min: 24.45 / Avg: 24.52 / Max: 24.6
Run 3: 24.19 (SE +/- 0.08, N = 3) | Min: 24.06 / Avg: 24.19 / Max: 24.34

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better)
Run 1: 14.0 (SE +/- 0.18, N = 5) | Min: 13.3 / Avg: 14.02 / Max: 14.2
Run 2: 14.2 (SE +/- 0.03, N = 3) | Min: 14.2 / Avg: 14.23 / Max: 14.3
Run 3: 14.2 (SE +/- 0.06, N = 3) | Min: 14.1 / Avg: 14.2 / Max: 14.3
Compiler notes: (CC) gcc options: -O3 -pthread -lz -llzma

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 30M (Seconds, Fewer Is Better)
Run 1: 342.98 (SE +/- 1.66, N = 3) | Min: 341.02 / Avg: 342.98 / Max: 346.27
Run 2: 339.54 (SE +/- 0.27, N = 3) | Min: 339.01 / Avg: 339.54 / Max: 339.84
Run 3: 338.27 (SE +/- 2.23, N = 3) | Min: 333.84 / Avg: 338.27 / Max: 340.93
Compiler notes: (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
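The first Redis result below exercises the SADD command. As a loose illustration only, here is a sketch of a SADD round-trip loop with the "redis" Python client; the test profile itself uses Redis' bundled benchmarking tool, which pipelines requests and reports far higher requests-per-second figures than a synchronous client loop:

    # Sketch: synchronous SADD throughput against a local Redis server.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)
    n = 100_000
    start = time.perf_counter()
    for i in range(n):
        r.sadd("myset", i)
    print("%.0f requests/sec" % (n / (time.perf_counter() - start)))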

Redis 6.0.9 - Test: SADD (Requests Per Second, More Is Better)
Run 1: 1735687.33 (SE +/- 11595.14, N = 3) | Min: 1712548 / Avg: 1735687.33 / Max: 1748587.38
Run 2: 1758200.50 (SE +/- 21162.12, N = 3) | Min: 1727115.75 / Avg: 1758200.5 / Max: 1798618.75
Run 3: 1734495.83 (SE +/- 4337.84, N = 3) | Min: 1727502.62 / Avg: 1734495.83 / Max: 1742439
Compiler notes: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, More Is Better)
Run 1: 10.13 (SE +/- 0.04, N = 3) | Min: 10.09 / Avg: 10.13 / Max: 10.21
Run 2: 10.12 (SE +/- 0.08, N = 3) | Min: 10.02 / Avg: 10.12 / Max: 10.28
Run 3: 10.25 (SE +/- 0.13, N = 3) | Min: 10.09 / Avg: 10.25 / Max: 10.5
Compiler notes: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
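A minimal sketch of how average inference time in microseconds can be measured with the TensorFlow Lite interpreter API; the model path is a placeholder, not the exact file the test profile downloads:

    # Sketch: average TFLite inference time over repeated invocations.
    import time
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="squeezenet.tflite")  # placeholder path
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    print("average inference: %.0f us" % ((time.perf_counter() - start) / runs * 1e6))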

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better)
Run 1: 467745 (SE +/- 158.13, N = 3) | Min: 467441 / Avg: 467745.33 / Max: 467972
Run 2: 461955 (SE +/- 1459.42, N = 3) | Min: 460469 / Avg: 461955.33 / Max: 464874
Run 3: 467404 (SE +/- 564.75, N = 3) | Min: 466822 / Avg: 467403.67 / Max: 468533

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17 - Input: Cylinder (Seconds, Fewer Is Better)
Run 1: 810.95 (SE +/- 3.54, N = 3) | Min: 806.85 / Avg: 810.95 / Max: 818
Run 2: 821.06 (SE +/- 10.03, N = 3) | Min: 802.01 / Avg: 821.06 / Max: 836.03
Run 3: 820.32 (SE +/- 2.19, N = 3) | Min: 816.22 / Avg: 820.32 / Max: 823.68
Compiler notes: (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: vgg16 (ms, Fewer Is Better)
Run 1: 117.46 (SE +/- 0.37, N = 3) | Min: 116.8 / Avg: 117.46 / Max: 118.08 | MIN: 111.97 / MAX: 149.37
Run 2: 118.91 (SE +/- 0.24, N = 4) | Min: 118.22 / Avg: 118.91 / Max: 119.25 | MIN: 113.22 / MAX: 141.78
Run 3: 118.04 (SE +/- 0.16, N = 3) | Min: 117.87 / Avg: 118.04 / Max: 118.36 | MIN: 112.22 / MAX: 141.41
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better)
Run 1: 6441017 (SE +/- 24424.39, N = 3) | Min: 6392450 / Avg: 6441016.67 / Max: 6469840
Run 2: 6389943 (SE +/- 7475.10, N = 3) | Min: 6377600 / Avg: 6389943.33 / Max: 6403420
Run 3: 6468567 (SE +/- 4440.22, N = 3) | Min: 6460380 / Avg: 6468566.67 / Max: 6475640

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
Run 1: 3.31 (SE +/- 0.01, N = 3) | Min: 3.29 / Avg: 3.31 / Max: 3.33 | MIN: 2.64 / MAX: 9.91
Run 2: 3.33 (SE +/- 0.03, N = 3) | Min: 3.27 / Avg: 3.33 / Max: 3.38 | MIN: 2.62 / MAX: 5.11
Run 3: 3.29 (SE +/- 0.03, N = 3) | Min: 3.23 / Avg: 3.29 / Max: 3.35 | MIN: 2.73 / MAX: 4.72
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 38.84 (SE +/- 0.18, N = 3) | Min: 38.63 / Avg: 38.84 / Max: 39.19 | MIN: 35.67
Run 2: 38.97 (SE +/- 0.23, N = 3) | Min: 38.53 / Avg: 38.97 / Max: 39.3 | MIN: 35.93
Run 3: 38.51 (SE +/- 0.51, N = 3) | Min: 37.49 / Avg: 38.51 / Max: 39.07 | MIN: 35.63
Compiler notes: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET (Requests Per Second, More Is Better)
Run 1: 1489969.25 (SE +/- 5888.95, N = 3) | Min: 1483774.38 / Avg: 1489969.25 / Max: 1501741.75
Run 2: 1486411.63 (SE +/- 19097.96, N = 3) | Min: 1449321.75 / Avg: 1486411.63 / Max: 1512859.25
Run 3: 1472539.67 (SE +/- 15253.86, N = 8) | Min: 1383258.62 / Avg: 1472539.67 / Max: 1520291.75
Compiler notes: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
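NAMD reports inverse throughput, days/ns: the wall-clock days required per nanosecond of simulated time, which is why fewer is better. By definition this converts to the more familiar ns/day by taking the reciprocal; worked through for run 1's result below:

    \[ \text{ns/day} = \frac{1}{\text{days/ns}} = \frac{1}{6.75407} \approx 0.148 \]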

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
Run 1: 6.75407 (SE +/- 0.01425, N = 3) | Min: 6.73 / Avg: 6.75 / Max: 6.78
Run 2: 6.79902 (SE +/- 0.03865, N = 3) | Min: 6.72 / Avg: 6.8 / Max: 6.85
Run 3: 6.83284 (SE +/- 0.08887, N = 5) | Min: 6.7 / Avg: 6.83 / Max: 7.18

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1 - Model: inception-v3 (ms, Fewer Is Better)
Run 1: 63.42 (SE +/- 0.18, N = 3) | Min: 63.23 / Avg: 63.42 / Max: 63.77 | MIN: 60.02 / MAX: 120.02
Run 2: 64.00 (SE +/- 0.32, N = 3) | Min: 63.62 / Avg: 64 / Max: 64.64 | MIN: 60.33 / MAX: 93.06
Run 3: 63.27 (SE +/- 0.19, N = 3) | Min: 63.01 / Avg: 63.27 / Max: 63.64 | MIN: 60.45 / MAX: 98.45
Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

x265

This is a simple test of the x265 H.265/HEVC encoder run on the CPU with 1080p and 4K video input options. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Run 1: 19.49 (SE +/- 0.16, N = 3) | Min: 19.19 / Avg: 19.49 / Max: 19.72
Run 2: 19.60 (SE +/- 0.07, N = 3) | Min: 19.45 / Avg: 19.6 / Max: 19.69
Run 3: 19.71 (SE +/- 0.11, N = 3) | Min: 19.51 / Avg: 19.71 / Max: 19.9
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
Run 1: 32.59 (SE +/- 0.18, N = 3) | Min: 32.36 / Avg: 32.59 / Max: 32.94 | MIN: 28.72 / MAX: 51.96
Run 2: 32.79 (SE +/- 0.05, N = 3) | Min: 32.72 / Avg: 32.79 / Max: 32.9 | MIN: 28.77 / MAX: 48.75
Run 3: 32.43 (SE +/- 0.18, N = 3) | Min: 32.19 / Avg: 32.43 / Max: 32.77 | MIN: 28.44 / MAX: 46.34
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
Run 1: 46.83 (SE +/- 0.46, N = 3) | Min: 46.24 / Avg: 46.83 / Max: 47.73 | MIN: 42.34 / MAX: 64.41
Run 2: 46.49 (SE +/- 0.07, N = 3) | Min: 46.4 / Avg: 46.49 / Max: 46.62 | MIN: 42.72 / MAX: 62.2
Run 3: 46.32 (SE +/- 0.05, N = 3) | Min: 46.22 / Avg: 46.32 / Max: 46.38 | MIN: 43.57 / MAX: 62.16
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives such as the time to create threads/processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.
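
The first OSBench result below is the Create Files case. A rough Python analogue of that measurement follows: average microseconds per file created. OSBench itself is a C micro-benchmark, so the absolute numbers will not match, but the metric is the same.

    # Sketch: microseconds per file-creation event, OSBench "Create Files" style.
    import os
    import tempfile
    import time

    with tempfile.TemporaryDirectory() as d:
        n = 10_000
        start = time.perf_counter()
        for i in range(n):
            with open(os.path.join(d, "f%d" % i), "wb") as f:
                f.write(b"x")
        elapsed = time.perf_counter() - start
    print("%.2f us per event" % (elapsed / n * 1e6))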

OSBench - Test: Create Files (us Per Event, Fewer Is Better)
Run 1: 18.25 (SE +/- 0.24, N = 3) | Min: 17.94 / Avg: 18.25 / Max: 18.71
Run 2: 18.32 (SE +/- 0.21, N = 3) | Min: 18.08 / Avg: 18.32 / Max: 18.73
Run 3: 18.44 (SE +/- 0.12, N = 3) | Min: 18.22 / Avg: 18.44 / Max: 18.64
Compiler notes: (CC) gcc options: -lm

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 5 (Frames Per Second, More Is Better)
Run 1: 0.848 (SE +/- 0.001, N = 3) | Min: 0.85 / Avg: 0.85 / Max: 0.85
Run 2: 0.844 (SE +/- 0.001, N = 3) | Min: 0.84 / Avg: 0.84 / Max: 0.85
Run 3: 0.839 (SE +/- 0.000, N = 3) | Min: 0.84 / Avg: 0.84 / Max: 0.84

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, More Is Better)
Run 1: 6274275 (SE +/- 2855.69, N = 3) | Min: 6268571 / Avg: 6274275.33 / Max: 6277373
Run 2: 6255015 (SE +/- 20483.82, N = 3) | Min: 6218031 / Avg: 6255015 / Max: 6288768
Run 3: 6322069 (SE +/- 23149.53, N = 3) | Min: 6298694 / Avg: 6322068.67 / Max: 6368367
Compiler notes: (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 1080p (FPS, More Is Better)
Run 1: 182.89 (SE +/- 0.82, N = 3) | Min: 181.85 / Avg: 182.89 / Max: 184.51 | MIN: 167.96 / MAX: 203.4
Run 2: 183.50 (SE +/- 0.30, N = 3) | Min: 183.08 / Avg: 183.5 / Max: 184.07 | MIN: 169.65 / MAX: 201.98
Run 3: 184.81 (SE +/- 0.36, N = 3) | Min: 184.11 / Avg: 184.81 / Max: 185.27 | MIN: 171.93 / MAX: 203.27
Compiler notes: (CC) gcc options: -pthread -ldl -lm

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, More Is Better)
Run 1: 2.603 (SE +/- 0.015, N = 3) | Min: 2.59 / Avg: 2.6 / Max: 2.63
Run 2: 2.586 (SE +/- 0.030, N = 3) | Min: 2.55 / Avg: 2.59 / Max: 2.65
Run 3: 2.613 (SE +/- 0.031, N = 3) | Min: 2.58 / Avg: 2.61 / Max: 2.68
Compiler notes: (CXX) g++ options: -O3 -pthread -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
Run 1: 9.59 (SE +/- 0.13, N = 3) | Min: 9.4 / Avg: 9.59 / Max: 9.83 | MIN: 7.78 / MAX: 22.63
Run 2: 9.69 (SE +/- 0.14, N = 4) | Min: 9.46 / Avg: 9.69 / Max: 10.08 | MIN: 7.81 / MAX: 18.31
Run 3: 9.60 (SE +/- 0.09, N = 3) | Min: 9.47 / Avg: 9.6 / Max: 9.78 | MIN: 7.73 / MAX: 19.55
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: alexnet (ms, Fewer Is Better)
Run 1: 23.46 (SE +/- 0.08, N = 3) | Min: 23.36 / Avg: 23.46 / Max: 23.61 | MIN: 21.29 / MAX: 37.36
Run 2: 23.24 (SE +/- 0.03, N = 4) | Min: 23.18 / Avg: 23.24 / Max: 23.32 | MIN: 21.11 / MAX: 37.35
Run 3: 23.48 (SE +/- 0.02, N = 3) | Min: 23.44 / Avg: 23.48 / Max: 23.51 | MIN: 21.25 / MAX: 36.41
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, More Is Better)
Run 1: 35499453.0 (SE +/- 289585.60, N = 3) | Min: 35091593.5 / Avg: 35499453 / Max: 36059497.1
Run 2: 35791748.5 (SE +/- 208729.08, N = 3) | Min: 35378389.7 / Avg: 35791748.47 / Max: 36048968.8
Run 3: 35427649.2 (SE +/- 503813.27, N = 3) | Min: 34420043.2 / Avg: 35427649.23 / Max: 35937019.7

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: regnety_400m (ms, Fewer Is Better)
Run 1: 18.88 (SE +/- 0.09, N = 3) | Min: 18.7 / Avg: 18.88 / Max: 19 | MIN: 16.76 / MAX: 26.52
Run 2: 19.06 (SE +/- 0.15, N = 4) | Min: 18.82 / Avg: 19.06 / Max: 19.49 | MIN: 16.61 / MAX: 34.07
Run 3: 19.07 (SE +/- 0.01, N = 3) | Min: 19.05 / Avg: 19.07 / Max: 19.08 | MIN: 16.77 / MAX: 34.77
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
Run 1: 3.3140 (SE +/- 0.0186, N = 3) | Min: 3.28 / Avg: 3.31 / Max: 3.34 | MIN: 3.25 / MAX: 3.4
Run 2: 3.3113 (SE +/- 0.0143, N = 3) | Min: 3.29 / Avg: 3.31 / Max: 3.34 | MIN: 3.26 / MAX: 3.4
Run 3: 3.3432 (SE +/- 0.0299, N = 3) | Min: 3.29 / Avg: 3.34 / Max: 3.39 | MIN: 3.25 / MAX: 3.45

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. Learn more via the OpenBenchmarking.org test page.

CP2K Molecular Dynamics 8.1 - Fayalite-FIST Data (Seconds, Fewer Is Better)
Run 1: 1448.59
Run 2: 1461.85
Run 3: 1452.47

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
Run 1: 23.30 (SE +/- 0.11, N = 3) | Min: 23.15 / Avg: 23.3 / Max: 23.51 | MIN: 21.24 / MAX: 38.11
Run 2: 23.42 (SE +/- 0.07, N = 3) | Min: 23.28 / Avg: 23.42 / Max: 23.51 | MIN: 21.27 / MAX: 37.51
Run 3: 23.51 (SE +/- 0.07, N = 3) | Min: 23.4 / Avg: 23.51 / Max: 23.64 | MIN: 21.21 / MAX: 37.48
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better)
Run 1: 2.22 (SE +/- 0.00, N = 3) | Min: 2.21 / Avg: 2.22 / Max: 2.22
Run 2: 2.22 (SE +/- 0.00, N = 3) | Min: 2.22 / Avg: 2.22 / Max: 2.22
Run 3: 2.24 (SE +/- 0.01, N = 3) | Min: 2.22 / Avg: 2.24 / Max: 2.25
Compiler notes: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
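A sketch of a decompression-speed measurement using the third-party "lz4" Python package (an assumption; the test profile links against liblz4 directly, and the input here is a stand-in rather than the ISO sample):

    # Sketch: LZ4 decompression throughput in MB/s.
    import os
    import time
    import lz4.frame

    original = os.urandom(1024 * 1024) * 64   # stand-in input
    compressed = lz4.frame.compress(original, compression_level=1)

    start = time.perf_counter()
    restored = lz4.frame.decompress(compressed)
    elapsed = time.perf_counter() - start
    assert restored == original
    print("%.0f MB/s decompression" % (len(original) / elapsed / 1e6))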

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s, More Is Better)
Run 1: 8722.4 (SE +/- 6.35, N = 3) | Min: 8712.3 / Avg: 8722.37 / Max: 8734.1
Run 2: 8646.3 (SE +/- 8.66, N = 3) | Min: 8636 / Avg: 8646.3 / Max: 8663.5
Run 3: 8690.1 (SE +/- 57.65, N = 3) | Min: 8605 / Avg: 8690.07 / Max: 8800
Compiler notes: (CC) gcc options: -O3

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
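A tiny illustration of the kind of work speedtest1 times, using Python's built-in sqlite3 module: batched inserts inside one transaction followed by an aggregate query. speedtest1 itself is a C program covering many more query shapes, so this is only a sketch of the metric.

    # Sketch: timed insert + query workload against an in-memory SQLite database.
    import sqlite3
    import time

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t(a INTEGER PRIMARY KEY, b TEXT)")

    start = time.perf_counter()
    with con:  # one transaction for all inserts
        con.executemany("INSERT INTO t(b) VALUES (?)",
                        (("row %d" % i,) for i in range(100_000)))
    rows = con.execute("SELECT COUNT(*) FROM t WHERE a % 10 = 0").fetchone()[0]
    print("inserted+queried in %.2f s (%d rows matched)" % (time.perf_counter() - start, rows))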

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
Run 1: 81.22 (SE +/- 0.14, N = 3) | Min: 80.97 / Avg: 81.22 / Max: 81.44
Run 2: 81.42 (SE +/- 0.76, N = 3) | Min: 80.56 / Avg: 81.42 / Max: 82.94
Run 3: 81.93 (SE +/- 0.70, N = 3) | Min: 81.1 / Avg: 81.93 / Max: 83.33
Compiler notes: (CC) gcc options: -O2 -ldl -lz -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 5.81417 (SE +/- 0.02077, N = 3) | Min: 5.79 / Avg: 5.81 / Max: 5.86 | MIN: 5.17
Run 2: 5.84129 (SE +/- 0.01362, N = 3) | Min: 5.81 / Avg: 5.84 / Max: 5.86 | MIN: 5.26
Run 3: 5.79119 (SE +/- 0.01488, N = 3) | Min: 5.76 / Avg: 5.79 / Max: 5.82 | MIN: 5.24
Compiler notes: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: OpenGL 3.x - Resolution: 1920 x 1080 (Frames Per Second, More Is Better)
Run 1: 814.1 (SE +/- 4.11, N = 3) | Min: 806.1 / Avg: 814.1 / Max: 819.7
Run 2: 807.2 (SE +/- 3.33, N = 3) | Min: 802 / Avg: 807.17 / Max: 813.4
Run 3: 807.9 (SE +/- 4.89, N = 3) | Min: 802 / Avg: 807.9 / Max: 817.6
Compiler notes: (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
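A hedged sketch of a cwebp-style encode timing using Pillow's WebP support (an assumption; the test profile invokes Google's cwebp utility directly, and "sample.jpg" is a placeholder for the 6000x4000 input image):

    # Sketch: time a single WebP encode of a large JPEG input.
    import io
    import time
    from PIL import Image

    img = Image.open("sample.jpg")            # placeholder input
    buf = io.BytesIO()
    start = time.perf_counter()
    img.save(buf, format="WEBP")              # default settings
    # For the "Quality 100, Lossless, Highest Compression" variant below, pass:
    # img.save(buf, format="WEBP", quality=100, lossless=True, method=6)
    print("encode time: %.3f s" % (time.perf_counter() - start))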

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, Fewer Is Better)
Run 1: 1.648 (SE +/- 0.002, N = 3) | Min: 1.65 / Avg: 1.65 / Max: 1.65
Run 2: 1.662 (SE +/- 0.003, N = 3) | Min: 1.66 / Avg: 1.66 / Max: 1.67
Run 3: 1.657 (SE +/- 0.009, N = 3) | Min: 1.65 / Avg: 1.66 / Max: 1.68
Compiler notes: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: LPUSH (Requests Per Second, More Is Better)
Run 1: 1216336.46 (SE +/- 16215.78, N = 3) | Min: 1199462.88 / Avg: 1216336.46 / Max: 1248759
Run 2: 1213155.04 (SE +/- 2985.46, N = 3) | Min: 1207729.5 / Avg: 1213155.04 / Max: 1218026.88
Run 3: 1223284.42 (SE +/- 4396.42, N = 3) | Min: 1215105.75 / Avg: 1223284.42 / Max: 1230169.75
Compiler notes: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.
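A loose Python analogue of a throughput measurement: blast fixed-size UDP messages over loopback for a fixed interval and count messages per second. Sockperf is a tuned C++ tool with separate client and server processes, so this sketch only illustrates the metric, not the methodology.

    # Sketch: send-side UDP message rate over loopback for one second.
    import socket
    import time

    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    rx.setblocking(False)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.connect(rx.getsockname())

    payload = b"x" * 64
    sent = 0
    start = time.perf_counter()
    while time.perf_counter() - start < 1.0:
        tx.send(payload)
        try:
            rx.recv(2048)          # drain what we can; drops are fine for a sketch
        except BlockingIOError:
            pass
        sent += 1
    print("%.0f messages/sec (send side)" % (sent / (time.perf_counter() - start)))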

Sockperf 3.4 - Test: Throughput (Messages Per Second, More Is Better)
Run 1: 555055 (SE +/- 6800.99, N = 5) | Min: 535468 / Avg: 555054.6 / Max: 575646
Run 2: 559663 (SE +/- 3595.18, N = 5) | Min: 551226 / Avg: 559663.2 / Max: 572228
Run 3: 557665 (SE +/- 3270.45, N = 5) | Min: 551013 / Avg: 557664.8 / Max: 569216
Compiler notes: (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta - Resolution: 1920 x 1080 (Frames Per Second, More Is Better)
Run 1: 158.1 (SE +/- 1.30, N = 3) | Min: 155.5 / Avg: 158.1 / Max: 159.4
Run 2: 159.4 (SE +/- 0.12, N = 3) | Min: 159.2 / Avg: 159.4 / Max: 159.6
Run 3: 159.4 (SE +/- 0.10, N = 3) | Min: 159.3 / Avg: 159.4 / Max: 159.6

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better)
Run 1: 1.107 (SE +/- 0.004, N = 3) | Min: 1.1 / Avg: 1.11 / Max: 1.11
Run 2: 1.098 (SE +/- 0.009, N = 3) | Min: 1.08 / Avg: 1.1 / Max: 1.11
Run 3: 1.106 (SE +/- 0.002, N = 3) | Min: 1.1 / Avg: 1.11 / Max: 1.11

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better)
Run 1: 0.494 (SE +/- 0.001, N = 3) | Min: 0.49 / Avg: 0.49 / Max: 0.5
Run 2: 0.494 (SE +/- 0.000, N = 3) | Min: 0.49 / Avg: 0.49 / Max: 0.5
Run 3: 0.498 (SE +/- 0.002, N = 3) | Min: 0.5 / Avg: 0.5 / Max: 0.5

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better)
Run 1: 57.67 (SE +/- 0.26, N = 3) | Min: 57.37 / Avg: 57.67 / Max: 58.18
Run 2: 57.22 (SE +/- 0.02, N = 3) | Min: 57.18 / Avg: 57.22 / Max: 57.25
Run 3: 57.45 (SE +/- 0.09, N = 3) | Min: 57.27 / Avg: 57.45 / Max: 57.57
Compiler notes: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.
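
The benchmark aggregates many kernel timings into a single score; as a minimal sketch, here is how one representative kernel (a dense matrix multiply) could be timed:

    # Sketch: time a single NumPy kernel; the real benchmark scores many of these.
    import time
    import numpy as np

    a = np.random.rand(1024, 1024)
    b = np.random.rand(1024, 1024)

    runs = 10
    start = time.perf_counter()
    for _ in range(runs):
        np.dot(a, b)
    print("matmul: %.3f s per run" % ((time.perf_counter() - start) / runs))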

Numpy Benchmark (Score, More Is Better)
Run 1: 242.34 (SE +/- 0.34, N = 3) | Min: 241.66 / Avg: 242.34 / Max: 242.75
Run 2: 241.36 (SE +/- 0.33, N = 3) | Min: 240.7 / Avg: 241.36 / Max: 241.78
Run 3: 243.26 (SE +/- 0.50, N = 3) | Min: 242.62 / Avg: 243.26 / Max: 244.24

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better)
Run 1: 508106 (SE +/- 423.52, N = 3) | Min: 507598 / Avg: 508106 / Max: 508947
Run 2: 506055 (SE +/- 1952.23, N = 3) | Min: 502231 / Avg: 506055.33 / Max: 508649
Run 3: 504159 (SE +/- 2233.09, N = 3) | Min: 499707 / Avg: 504159 / Max: 506693

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Run 1: 2.8199 (SE +/- 0.0119, N = 3) | Min: 2.8 / Avg: 2.82 / Max: 2.83 | MIN: 2.75 / MAX: 2.92
Run 2: 2.8371 (SE +/- 0.0157, N = 3) | Min: 2.82 / Avg: 2.84 / Max: 2.87 | MIN: 2.77 / MAX: 2.9
Run 3: 2.8151 (SE +/- 0.0152, N = 3) | Min: 2.79 / Avg: 2.82 / Max: 2.84 | MIN: 2.75 / MAX: 2.89

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
Run 1: 102524.44 (SE +/- 816.07, N = 3) | Min: 101426.31 / Avg: 102524.44 / Max: 104119.22
Run 2: 101765.57 (SE +/- 290.85, N = 3) | Min: 101355.63 / Avg: 101765.57 / Max: 102327.96
Run 3: 102339.41 (SE +/- 373.76, N = 3) | Min: 101878.38 / Avg: 102339.41 / Max: 103079.5
Compiler notes: (CC) gcc options: -O2 -lrt" -lrt

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Run 1: 2.9903 (SE +/- 0.0237, N = 3) | Min: 2.95 / Avg: 2.99 / Max: 3.03 | MIN: 2.9 / MAX: 3.08
Run 2: 2.9682 (SE +/- 0.0135, N = 3) | Min: 2.94 / Avg: 2.97 / Max: 2.99 | MIN: 2.9 / MAX: 3.07
Run 3: 2.9782 (SE +/- 0.0217, N = 3) | Min: 2.95 / Avg: 2.98 / Max: 3.02 | MIN: 2.91 / MAX: 3.08

Caffe

This is a benchmark of the Caffe deep learning framework, currently supporting the AlexNet and GoogLeNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
Run 1: 41877 (SE +/- 193.32, N = 3) | Min: 41557 / Avg: 41877.33 / Max: 42225
Run 2: 41672 (SE +/- 90.86, N = 3) | Min: 41491 / Avg: 41671.67 / Max: 41779
Run 3: 41573 (SE +/- 137.35, N = 3) | Min: 41305 / Avg: 41572.67 / Max: 41760
Compiler notes: (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code of modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, Fewer Is Better)
Run 1: 21.07 (SE +/- 0.07, N = 3) | Min: 20.98 / Avg: 21.07 / Max: 21.21
Run 2: 21.14 (SE +/- 0.05, N = 3) | Min: 21.08 / Avg: 21.14 / Max: 21.24
Run 3: 20.99 (SE +/- 0.04, N = 3) | Min: 20.91 / Avg: 20.99 / Max: 21.03

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: resnet18 (ms, Fewer Is Better)
Run 1: 29.17 (SE +/- 0.26, N = 3) | Min: 28.72 / Avg: 29.17 / Max: 29.61 | MIN: 26 / MAX: 40.97
Run 2: 29.08 (SE +/- 0.14, N = 4) | Min: 28.92 / Avg: 29.08 / Max: 29.49 | MIN: 25.74 / MAX: 44.35
Run 3: 29.28 (SE +/- 0.02, N = 3) | Min: 29.24 / Avg: 29.28 / Max: 29.32 | MIN: 25.57 / MAX: 39.41
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better)
Run 1: 59.00 (SE +/- 0.04, N = 3) | Min: 58.93 / Avg: 59 / Max: 59.06 | MIN: 55.05 / MAX: 74.18
Run 2: 59.40 (SE +/- 0.06, N = 4) | Min: 59.26 / Avg: 59.4 / Max: 59.57 | MIN: 55.12 / MAX: 75.24
Run 3: 59.28 (SE +/- 0.16, N = 3) | Min: 58.98 / Avg: 59.28 / Max: 59.5 | MIN: 54.59 / MAX: 74.87
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives such as the time to create threads/processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.

OSBench - Test: Launch Programs (us Per Event, Fewer Is Better)
Run 1: 81.52 (SE +/- 0.27, N = 3) | Min: 81.02 / Avg: 81.52 / Max: 81.96
Run 2: 81.97 (SE +/- 0.05, N = 3) | Min: 81.9 / Avg: 81.97 / Max: 82.07
Run 3: 82.07 (SE +/- 0.09, N = 3) | Min: 81.89 / Avg: 82.07 / Max: 82.2
Compiler notes: (CC) gcc options: -lm

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better)
Run 1: 1.49 (SE +/- 0.00, N = 3) | Min: 1.49 / Avg: 1.49 / Max: 1.49
Run 2: 1.49 (SE +/- 0.00, N = 3) | Min: 1.49 / Avg: 1.49 / Max: 1.49
Run 3: 1.50 (SE +/- 0.00, N = 3) | Min: 1.49 / Avg: 1.5 / Max: 1.5
Compiler notes: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives such as the time to create threads/processes, launch programs, create files, and allocate memory. Learn more via the OpenBenchmarking.org test page.
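
The result below is the Create Threads case. A rough Python analogue follows: average microseconds to spawn and join a trivial thread. Python's thread startup overhead dominates here, whereas OSBench measures raw pthread creation in C, so expect different magnitudes but the same unit.

    # Sketch: microseconds per thread-creation event, OSBench "Create Threads" style.
    import threading
    import time

    n = 2_000
    start = time.perf_counter()
    for _ in range(n):
        t = threading.Thread(target=lambda: None)
        t.start()
        t.join()
    print("%.2f us per event" % ((time.perf_counter() - start) / n * 1e6))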

OSBench - Test: Create Threads (us Per Event, Fewer Is Better)
Run 1: 14.92 (SE +/- 0.03, N = 3) | Min: 14.89 / Avg: 14.92 / Max: 14.97
Run 2: 14.82 (SE +/- 0.09, N = 3) | Min: 14.7 / Avg: 14.82 / Max: 15
Run 3: 14.91 (SE +/- 0.14, N = 3) | Min: 14.67 / Avg: 14.91 / Max: 15.17
Compiler notes: (CC) gcc options: -lm

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 6 (Frames Per Second, More Is Better)
Run 1: 1.082 (SE +/- 0.005, N = 3) | Min: 1.08 / Avg: 1.08 / Max: 1.09
Run 2: 1.083 (SE +/- 0.001, N = 3) | Min: 1.08 / Avg: 1.08 / Max: 1.09
Run 3: 1.089 (SE +/- 0.003, N = 3) | Min: 1.09 / Avg: 1.09 / Max: 1.09

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
Run 1: 2.7601 (SE +/- 0.0145, N = 3) | Min: 2.74 / Avg: 2.76 / Max: 2.79 | MIN: 2.71 / MAX: 2.86
Run 2: 2.7659 (SE +/- 0.0043, N = 3) | Min: 2.76 / Avg: 2.77 / Max: 2.77 | MIN: 2.73 / MAX: 2.83
Run 3: 2.7779 (SE +/- 0.0087, N = 3) | Min: 2.77 / Avg: 2.78 / Max: 2.8 | MIN: 2.75 / MAX: 2.87

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better)
Run 1: 15.60 (SE +/- 0.06, N = 3) | Min: 15.52 / Avg: 15.6 / Max: 15.71
Run 2: 15.52 (SE +/- 0.06, N = 3) | Min: 15.43 / Avg: 15.52 / Max: 15.64
Run 3: 15.62 (SE +/- 0.03, N = 3) | Min: 15.56 / Avg: 15.62 / Max: 15.68
Compiler notes: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 7.35744 (SE +/- 0.02489, N = 3) | MIN: 6.35
Run 2: 7.36061 (SE +/- 0.00711, N = 3) | MIN: 6.34
Run 3: 7.31351 (SE +/- 0.00751, N = 3) | MIN: 6.35
Compiler notes: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread