Ryzen 3 2200G 2021

AMD Ryzen 3 2200G testing with an ASUS PRIME B350M-E (5220 BIOS) and ASUS AMD Radeon Vega / Mobile 2GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101191-HA-RYZEN322022
This result file includes tests in the following categories:

Audio Encoding 3 Tests
AV1 3 Tests
Bioinformatics 2 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests 2 Tests
C++ Boost Tests 2 Tests
Chess Test Suite 4 Tests
Timed Code Compilation 4 Tests
C/C++ Compiler Tests 15 Tests
Compression Tests 2 Tests
CPU Massive 21 Tests
Creator Workloads 24 Tests
Database Test Suite 4 Tests
Encoding 8 Tests
Fortran Tests 6 Tests
Game Development 3 Tests
HPC - High Performance Computing 24 Tests
Imaging 6 Tests
Common Kernel Benchmarks 2 Tests
Machine Learning 9 Tests
Molecular Dynamics 9 Tests
MPI Benchmarks 4 Tests
Multi-Core 19 Tests
NVIDIA GPU Compute 7 Tests
Intel oneAPI 2 Tests
OpenMPI Tests 9 Tests
Programmer / Developer System Benchmarks 9 Tests
Python Tests 5 Tests
Scientific Computing 15 Tests
Server 7 Tests
Server CPU Tests 12 Tests
Single-Threaded 6 Tests
Speech 3 Tests
Telephony 3 Tests
Texture Compression 2 Tests
Video Encoding 5 Tests
Vulkan Compute 3 Tests

Run Management

Run 1: January 16 2021, test duration 18 Hours, 35 Minutes
Run 2: January 17 2021, test duration 20 Hours, 52 Minutes
Run 3: January 18 2021, test duration 19 Hours, 6 Minutes
Average test duration: 19 Hours, 31 Minutes



Ryzen 3 2200G 2021: System Configuration (identical for runs 1, 2, and 3)

Processor: AMD Ryzen 3 2200G @ 3.50GHz (4 Cores)
Motherboard: ASUS PRIME B350M-E (5220 BIOS)
Chipset: AMD Raven/Raven2
Memory: 6GB
Disk: Samsung SSD 970 EVO 250GB
Graphics: ASUS AMD Radeon Vega / Mobile 2GB (1100/1600MHz)
Audio: AMD Raven/Raven2/Fenghuang
Monitor: G237HL
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 20.10
Kernel: 5.8.0-38-generic (x86_64)
Desktop: GNOME Shell 3.38.1
Display Server: X Server 1.20.9
Display Driver: modesetting 1.20.9
OpenGL: 4.6 Mesa 20.2.6 (LLVM 11.0.0)
Vulkan: 1.2.131
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled); CPU Microcode: 0x8101016
Graphics Details: GLAMOR
Java Details: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.10)
Python Details: Python 3.8.6
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; relative performance of runs 1, 2, and 3 spans 100% to 121% across tests): LeelaChessZero, Redis, Sunflow Rendering System, Node.js V8 Web Tooling Benchmark, Sockperf, LULESH, FFTE, LibRaw, GROMACS, RNNoise, Hugin, OSBench, asmFish, Darktable, Stockfish, KeyDB, CP2K Molecular Dynamics, OpenFOAM, NAMD, TensorFlow Lite, Crafty, LAMMPS Molecular Dynamics Simulator, BYTE Unix Benchmark, Timed Godot Game Engine Compilation, AOM AV1, Zstd Compression, rav1e, Warsow, Incompact3D, SQLite Speedtest, IndigoBench, LZ4 Compression, Numpy Benchmark, PHPBench, x265, dav1d, Coremark, Dolfyn, Monte Carlo Simulations of Ionised Nebulae, Basis Universal, WavPack Audio Encoding, NCNN, OCRMyPDF, Timed Eigen Compilation, Timed FFmpeg Compilation, eSpeak-NG Speech Engine, Google SynthMark, InfluxDB, oneDNN, CloverLeaf, Timed HMMer Search, Algebraic Multi-Grid Benchmark, Mobile Neural Network, Timed MAFFT Alignment, TNN, Embree, Monkey Audio Encoding, VKMark, RawTherapee, Opus Codec Encoding, GIMP, Waifu2x-NCNN Vulkan, Unpacking Firefox, yquake2, ASTC Encoder, RealSR-NCNN, Kvazaar, Hierarchical INTegration, GLmark2, WebP Image Encode, Build2, CLOMP, Caffe, simdjson.
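
Overview charts of this kind typically summarize many tests by taking a geometric mean of per-test relative performance, so that no single test dominates the composite. A minimal sketch of that aggregation (the ratios here are hypothetical, not taken from this result file):

```python
from statistics import geometric_mean

# Hypothetical per-test relative scores for one run, normalized so the
# slowest run on each test is 1.0 (i.e. 100%).
relative_scores = [1.21, 1.05, 1.00, 1.10]

# The geometric mean keeps one outlier test from dominating the summary,
# unlike an arithmetic mean of ratios.
overall = geometric_mean(relative_scores)
print(f"{overall:.4f}")
```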

[Detailed result table: raw values for every test configuration (roughly 200 metrics, from Incompact3D through WebP and yquake2) across runs 1, 2, and 3. The individual per-test results are presented below.]

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: Incompact3D 2020-09-17, Input: Cylinder (Seconds, Fewer Is Better)
Run 1: 810.95 (SE +/- 3.54, N = 3; Min: 806.85 / Avg: 810.95 / Max: 818)
Run 3: 820.32 (SE +/- 2.19, N = 3; Min: 816.22 / Avg: 820.32 / Max: 823.68)
Run 2: 821.06 (SE +/- 10.03, N = 3; Min: 802.01 / Avg: 821.06 / Max: 836.03)
1. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
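
With N = 3, the three underlying samples can be recovered from the reported Min/Avg/Max (the unreported middle sample is 3*Avg - Min - Max), which lets the quoted standard error be checked. A sketch using run 1's rounded figures above:

```python
from math import sqrt
from statistics import stdev

# Run 1 of Incompact3D (Input: Cylinder), as reported: N = 3 samples.
n, lo, avg, hi = 3, 806.85, 810.95, 818.00

mid = n * avg - lo - hi          # recover the unreported third sample
samples = [lo, mid, hi]
se = stdev(samples) / sqrt(n)    # standard error of the mean
print(round(se, 2))              # matches the reported SE +/- 3.54
```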

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: LeelaChessZero 0.26, Backend: BLAS (Nodes Per Second, More Is Better)
Run 1: 432 (SE +/- 4.54, N = 8; Min: 400 / Avg: 431.63 / Max: 438)
Run 2: 374 (SE +/- 6.01, N = 9; Min: 355 / Avg: 374.33 / Max: 400)
Run 3: 353 (SE +/- 2.52, N = 3; Min: 348 / Avg: 353 / Max: 356)
1. (CXX) g++ options: -flto -pthread

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms, and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: Kripke 1.2.4 (Throughput FoM, More Is Better)
Run 1: 4811563 (SE +/- 36406.50, N = 2; Min: 4775156 / Avg: 4811562.5 / Max: 4847969)
Run 2: 3117717 (SE +/- 35494.54, N = 3; Min: 3047023 / Avg: 3117717.33 / Max: 3158661)
1. (CXX) g++ options: -O3 -fopenmp

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: ASTC Encoder 2.0, Preset: Exhaustive (Seconds, Fewer Is Better)
Run 3: 695.24 (SE +/- 0.20, N = 3; Min: 695.01 / Avg: 695.24 / Max: 695.64)
Run 1: 696.05 (SE +/- 1.18, N = 3; Min: 694.33 / Avg: 696.05 / Max: 698.30)
Run 2: 697.50 (SE +/- 0.48, N = 3; Min: 696.81 / Avg: 697.50 / Max: 698.41)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: LeelaChessZero 0.26, Backend: Eigen (Nodes Per Second, More Is Better)
Run 1: 448 (SE +/- 4.81, N = 3; Min: 439 / Avg: 448.33 / Max: 455)
Run 2: 380
Run 3: 377 (SE +/- 5.13, N = 9; Min: 354 / Avg: 377.22 / Max: 403)
1. (CXX) g++ options: -flto -pthread

GROMACS

This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: GROMACS 2020.3, Water Benchmark (Ns Per Day, More Is Better)
Run 1: 0.333 (SE +/- 0.002, N = 3; Min: 0.33 / Avg: 0.33 / Max: 0.34)
Run 2: 0.330 (SE +/- 0.002, N = 3; Min: 0.33 / Avg: 0.33 / Max: 0.33)
Run 3: 0.326 (SE +/- 0.005, N = 3; Min: 0.32 / Avg: 0.33 / Max: 0.33)
1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code and offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: Build2 0.13, Time To Compile (Seconds, Fewer Is Better)
Run 1: 514.48 (SE +/- 0.43, N = 3; Min: 513.82 / Avg: 514.48 / Max: 515.30)
Run 3: 514.80 (SE +/- 1.15, N = 3; Min: 512.70 / Avg: 514.80 / Max: 516.68)
Run 2: 516.52 (SE +/- 2.15, N = 3; Min: 513.92 / Avg: 516.52 / Max: 520.79)

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: Timed Godot Game Engine Compilation 3.2.3, Time To Compile (Seconds, Fewer Is Better)
Run 1: 501.20 (SE +/- 0.16, N = 3; Min: 500.99 / Avg: 501.20 / Max: 501.52)
Run 3: 502.61 (SE +/- 0.32, N = 3; Min: 502.04 / Avg: 502.61 / Max: 503.15)
Run 2: 503.98 (SE +/- 0.30, N = 3; Min: 503.51 / Avg: 503.98 / Max: 504.53)

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: CP2K Molecular Dynamics 8.1, Fayalite-FIST Data (Seconds, Fewer Is Better)
Run 1: 1448.59
Run 3: 1452.47
Run 2: 1461.85

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR performs real-world super resolution via kernel estimation and noise injection. NCNN is a high-performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: RealSR-NCNN 20200818, Scale: 4x, TAA: Yes (Seconds, Fewer Is Better)
Run 3: 482.55 (SE +/- 0.00, N = 3; Min: 482.55 / Avg: 482.55 / Max: 482.56)
Run 1: 482.61 (SE +/- 0.03, N = 3; Min: 482.56 / Avg: 482.61 / Max: 482.64)
Run 2: 482.84 (SE +/- 0.01, N = 3; Min: 482.82 / Avg: 482.84 / Max: 482.86)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: Kvazaar 2.0, Video Input: Bosphorus 4K, Video Preset: Medium (Frames Per Second, More Is Better)
Run 3: 1.50 (SE +/- 0.00, N = 3; Min: 1.49 / Avg: 1.50 / Max: 1.50)
Run 2: 1.49 (SE +/- 0.00, N = 3; Min: 1.49 / Avg: 1.49 / Max: 1.49)
Run 1: 1.49 (SE +/- 0.00, N = 3; Min: 1.49 / Avg: 1.49 / Max: 1.49)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: NAMD 2.14, ATPase Simulation, 327,506 Atoms (days/ns, Fewer Is Better)
Run 1: 6.75407 (SE +/- 0.01425, N = 3; Min: 6.73 / Avg: 6.75 / Max: 6.78)
Run 2: 6.79902 (SE +/- 0.03865, N = 3; Min: 6.72 / Avg: 6.80 / Max: 6.85)
Run 3: 6.83284 (SE +/- 0.08887, N = 5; Min: 6.70 / Avg: 6.83 / Max: 7.18)
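
NAMD reports days/ns, an inverse metric (lower is better): the wall-clock days needed to simulate one nanosecond. Converting run 1's result to the more familiar ns/day is a simple reciprocal:

```python
# days/ns for run 1 above; ns/day is simply the reciprocal.
days_per_ns = 6.75407
ns_per_day = 1 / days_per_ns
print(round(ns_per_day, 3))  # 0.148 ns simulated per day of compute
```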

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: OpenFOAM 8, Input: Motorbike 30M (Seconds, Fewer Is Better)
Run 3: 338.27 (SE +/- 2.23, N = 3; Min: 333.84 / Avg: 338.27 / Max: 340.93)
Run 2: 339.54 (SE +/- 0.27, N = 3; Min: 339.01 / Avg: 339.54 / Max: 339.84)
Run 1: 342.98 (SE +/- 1.66, N = 3; Min: 341.02 / Avg: 342.98 / Max: 346.27)
1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
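The measurement pattern is straightforward: compress a fixed input, time it, then time decompression of the result and divide size by elapsed time. LZ4 itself is not in the Python standard library, so this sketch substitutes stdlib zlib purely to illustrate the throughput calculation; its numbers are not comparable to the LZ4 figures below.

```python
import time
import zlib

# Stand-in workload; the real test uses an Ubuntu ISO.
data = b"phoronix " * 1_000_000

t0 = time.perf_counter()
compressed = zlib.compress(data, level=9)
t1 = time.perf_counter()
restored = zlib.decompress(compressed)
t2 = time.perf_counter()

assert restored == data  # round-trip must be lossless
mb = len(data) / 1e6
print(f"compress:   {mb / (t1 - t0):8.1f} MB/s")
print(f"decompress: {mb / (t2 - t1):8.1f} MB/s")
```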

OpenBenchmarking.org: LZ4 Compression 1.9.3, Compression Level: 9, Decompression Speed (MB/s, More Is Better)
Run 1: 8565.2 (SE +/- 6.38, N = 13; Min: 8513.2 / Avg: 8565.21 / Max: 8593.6)
Run 3: 8562.8 (SE +/- 3.43, N = 15; Min: 8538.2 / Avg: 8562.85 / Max: 8585.4)
Run 2: 8552.1 (SE +/- 5.73, N = 15; Min: 8514.6 / Avg: 8552.15 / Max: 8612.2)
1. (CC) gcc options: -O3

OpenBenchmarking.org: LZ4 Compression 1.9.3, Compression Level: 9, Compression Speed (MB/s, More Is Better)
Run 1: 42.22 (SE +/- 0.73, N = 13; Min: 35.77 / Avg: 42.22 / Max: 45.93)
Run 3: 41.23 (SE +/- 0.47, N = 15; Min: 38.26 / Avg: 41.23 / Max: 43.68)
Run 2: 41.02 (SE +/- 0.55, N = 15; Min: 36.17 / Avg: 41.02 / Max: 43.43)
1. (CC) gcc options: -O3

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: Monte Carlo Simulations of Ionised Nebulae 2019-03-24, Input: Dust 2D tau100.0 (Seconds, Fewer Is Better)
Run 2: 340 (SE +/- 1.76, N = 3; Min: 337 / Avg: 339.67 / Max: 343)
Run 3: 341 (SE +/- 0.67, N = 3; Min: 340 / Avg: 341.33 / Max: 342)
Run 1: 342
1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lrt -lz

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: Numpy Benchmark (Score, More Is Better)
Run 3: 243.26 (SE +/- 0.50, N = 3; Min: 242.62 / Avg: 243.26 / Max: 244.24)
Run 1: 242.34 (SE +/- 0.34, N = 3; Min: 241.66 / Avg: 242.34 / Max: 242.75)
Run 2: 241.36 (SE +/- 0.33, N = 3; Min: 240.70 / Avg: 241.36 / Max: 241.78)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: LZ4 Compression 1.9.3, Compression Level: 3, Decompression Speed (MB/s, More Is Better)
Run 1: 8554.3 (SE +/- 26.74, N = 3; Min: 8515.3 / Avg: 8554.30 / Max: 8605.5)
Run 2: 8547.8 (SE +/- 8.28, N = 15; Min: 8494.5 / Avg: 8547.83 / Max: 8615.7)
Run 3: 8547.2 (SE +/- 5.83, N = 15; Min: 8499.7 / Avg: 8547.19 / Max: 8592.3)
1. (CC) gcc options: -O3

OpenBenchmarking.org: LZ4 Compression 1.9.3, Compression Level: 3, Compression Speed (MB/s, More Is Better)
Run 1: 42.77 (SE +/- 0.58, N = 3; Min: 42.02 / Avg: 42.77 / Max: 43.90)
Run 2: 42.34 (SE +/- 0.43, N = 15; Min: 38.94 / Avg: 42.34 / Max: 44.38)
Run 3: 41.81 (SE +/- 0.65, N = 15; Min: 36.12 / Avg: 41.81 / Max: 44.43)
1. (CC) gcc options: -O3

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: oneDNN 2.0, Harness: Recurrent Neural Network Training, Data Type: bf16bf16bf16, Engine: CPU (ms, Fewer Is Better)
Run 1: 8195.20 (SE +/- 99.84, N = 5; MIN: 7505; Min: 7835.76 / Avg: 8195.20 / Max: 8382.75)
Run 3: 8426.84 (SE +/- 46.50, N = 3; MIN: 8003.45; Min: 8372.71 / Avg: 8426.84 / Max: 8519.39)
Run 2: 8438.64 (SE +/- 66.22, N = 15; MIN: 7752.96; Min: 8073.51 / Avg: 8438.64 / Max: 9047.54)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: dav1d 0.8.1, Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
Run 3: 53.51 (SE +/- 0.31, N = 3; MIN: 35.6 / MAX: 125.13; Min: 52.90 / Avg: 53.51 / Max: 53.91)
Run 1: 52.55 (SE +/- 0.17, N = 3; MIN: 35.45 / MAX: 124.71; Min: 52.28 / Avg: 52.55 / Max: 52.85)
Run 2: 52.39 (SE +/- 0.21, N = 3; MIN: 35.47 / MAX: 120.48; Min: 51.98 / Avg: 52.39 / Max: 52.64)
1. (CC) gcc options: -pthread -ldl -lm

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: Embree 3.9.0, Binary: Pathtracer ISPC, Model: Crown (Frames Per Second, More Is Better)
Run 3: 2.5828 (SE +/- 0.0165, N = 3; MIN: 2.51 / MAX: 2.65; Min: 2.55 / Avg: 2.58 / Max: 2.60)
Run 1: 2.5819 (SE +/- 0.0040, N = 3; MIN: 2.55 / MAX: 2.62; Min: 2.57 / Avg: 2.58 / Max: 2.59)
Run 2: 2.5670 (SE +/- 0.0105, N = 3; MIN: 2.52 / MAX: 2.63; Min: 2.55 / Avg: 2.57 / Max: 2.59)

OpenBenchmarking.org: Embree 3.9.0, Binary: Pathtracer ISPC, Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Run 2: 2.8371 (SE +/- 0.0157, N = 3; MIN: 2.77 / MAX: 2.90; Min: 2.82 / Avg: 2.84 / Max: 2.87)
Run 1: 2.8199 (SE +/- 0.0119, N = 3; MIN: 2.75 / MAX: 2.92; Min: 2.80 / Avg: 2.82 / Max: 2.83)
Run 3: 2.8151 (SE +/- 0.0152, N = 3; MIN: 2.75 / MAX: 2.89; Min: 2.79 / Avg: 2.82 / Max: 2.84)

OpenBenchmarking.org: Embree 3.9.0, Binary: Pathtracer, Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Run 1: 2.9903 (SE +/- 0.0237, N = 3; MIN: 2.90 / MAX: 3.08; Min: 2.95 / Avg: 2.99 / Max: 3.03)
Run 3: 2.9782 (SE +/- 0.0217, N = 3; MIN: 2.91 / MAX: 3.08; Min: 2.95 / Avg: 2.98 / Max: 3.02)
Run 2: 2.9682 (SE +/- 0.0135, N = 3; MIN: 2.90 / MAX: 3.07; Min: 2.94 / Avg: 2.97 / Max: 2.99)

OpenBenchmarking.org: Embree 3.9.0, Binary: Pathtracer, Model: Crown (Frames Per Second, More Is Better)
Run 3: 2.7779 (SE +/- 0.0087, N = 3; MIN: 2.75 / MAX: 2.87; Min: 2.77 / Avg: 2.78 / Max: 2.80)
Run 2: 2.7659 (SE +/- 0.0043, N = 3; MIN: 2.73 / MAX: 2.83; Min: 2.76 / Avg: 2.77 / Max: 2.77)
Run 1: 2.7601 (SE +/- 0.0145, N = 3; MIN: 2.71 / MAX: 2.86; Min: 2.74 / Avg: 2.76 / Max: 2.79)

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org: asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better)
Run 2: 7802828 (SE +/- 50254.79, N = 3; Min: 7713402 / Avg: 7802828.33 / Max: 7887276)
Run 1: 7748047 (SE +/- 28500.12, N = 3; Min: 7696997 / Avg: 7748046.67 / Max: 7795531)
Run 3: 7669043 (SE +/- 29445.58, N = 3; Min: 7611696 / Avg: 7669043 / Max: 7709319)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5, Compression Level: 19 (MB/s, More Is Better)
  Run 3: 14.2 (SE +/- 0.06, N = 3; Min 14.1 / Max 14.3)
  Run 2: 14.2 (SE +/- 0.03, N = 3; Min 14.2 / Max 14.3)
  Run 1: 14.0 (SE +/- 0.18, N = 5; Min 13.3 / Max 14.2)
  (CC) gcc options: -O3 -pthread -lz -llzma
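Each result line reports a standard error over that run's N samples. As an illustrative sketch (not part of the test profile), the same statistics can also be computed across the three displayed per-run averages:

```python
import math
import statistics

# The three per-run level-19 averages from this result file (MB/s)
runs = [14.2, 14.2, 14.0]

avg = statistics.mean(runs)                         # mean across the runs
se = statistics.stdev(runs) / math.sqrt(len(runs))  # standard error of the mean

print(f"Avg: {avg:.2f} MB/s, SE +/- {se:.3f}")  # -> Avg: 14.13 MB/s, SE +/- 0.067
```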

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

CloverLeaf, Lagrangian-Eulerian Hydrodynamics (Seconds, Fewer Is Better)
  Run 3: 191.01 (SE +/- 0.07, N = 3; Min 190.88 / Max 191.11)
  Run 1: 191.41 (SE +/- 0.19, N = 3; Min 191.11 / Max 191.76)
  Run 2: 191.45 (SE +/- 0.07, N = 3; Min 191.35 / Max 191.58)
  (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0, Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
  Run 2: 3.2504 (SE +/- 0.0130, N = 3; Min 3.23 / Max 3.27)
  Run 3: 3.2482 (SE +/- 0.0025, N = 3; Min 3.24 / Max 3.25)
  Run 1: 3.2431 (SE +/- 0.0133, N = 3; Min 3.22 / Max 3.27)

Embree 3.9.0, Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
  Run 3: 3.3432 (SE +/- 0.0299, N = 3; Min 3.29 / Max 3.39)
  Run 1: 3.3140 (SE +/- 0.0186, N = 3; Min 3.28 / Max 3.34)
  Run 2: 3.3113 (SE +/- 0.0143, N = 3; Min 3.29 / Max 3.34)

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2, Time To Compile (Seconds, Fewer Is Better)
  Run 1: 182.19 (SE +/- 0.34, N = 3; Min 181.83 / Max 182.88)
  Run 3: 182.60 (SE +/- 0.72, N = 3; Min 181.36 / Max 183.86)
  Run 2: 183.02 (SE +/- 0.22, N = 3; Min 182.60 / Max 183.30)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
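The average-inference-time metric is simply the mean wall-clock latency over repeated invocations. A minimal sketch of that measurement loop, using a stand-in function in place of a real TFLite interpreter call (the interpreter itself is not assumed to be installed):

```python
import time

def invoke():
    # Stand-in for a real model inference call; sleeps ~1 ms instead.
    time.sleep(0.001)

latencies_us = []
for _ in range(10):
    start = time.perf_counter()
    invoke()
    latencies_us.append((time.perf_counter() - start) * 1e6)  # microseconds

avg_us = sum(latencies_us) / len(latencies_us)
print(f"average inference time: {avg_us:.0f} microseconds")
```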

TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, Fewer Is Better)
  Run 2: 6389943 (SE +/- 7475.10, N = 3; Min 6377600 / Max 6403420)
  Run 1: 6441017 (SE +/- 24424.39, N = 3; Min 6392450 / Max 6469840)
  Run 3: 6468567 (SE +/- 4440.22, N = 3; Min 6460380 / Max 6475640)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: regnety_400m (ms, Fewer Is Better)
  Run 1: 18.88 (SE +/- 0.09, N = 3; Min 18.70 / Max 19.00)
  Run 2: 19.06 (SE +/- 0.15, N = 4; Min 18.82 / Max 19.49)
  Run 3: 19.07 (SE +/- 0.01, N = 3; Min 19.05 / Max 19.08)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: squeezenet_ssd (ms, Fewer Is Better)
  Run 2: 59.16 (SE +/- 0.14, N = 4; Min 58.89 / Max 59.52)
  Run 1: 59.28 (SE +/- 0.08, N = 3; Min 59.18 / Max 59.43)
  Run 3: 59.50 (SE +/- 0.16, N = 3; Min 59.19 / Max 59.73)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better)
  Run 1: 59.00 (SE +/- 0.04, N = 3; Min 58.93 / Max 59.06)
  Run 3: 59.28 (SE +/- 0.16, N = 3; Min 58.98 / Max 59.50)
  Run 2: 59.40 (SE +/- 0.06, N = 4; Min 59.26 / Max 59.57)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: resnet50 (ms, Fewer Is Better)
  Run 1: 71.68 (SE +/- 0.20, N = 3; Min 71.39 / Max 72.06)
  Run 3: 71.69 (SE +/- 0.29, N = 3; Min 71.39 / Max 72.28)
  Run 2: 74.10 (SE +/- 0.63, N = 4; Min 73.21 / Max 75.96)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: alexnet (ms, Fewer Is Better)
  Run 2: 23.24 (SE +/- 0.03, N = 4; Min 23.18 / Max 23.32)
  Run 1: 23.46 (SE +/- 0.08, N = 3; Min 23.36 / Max 23.61)
  Run 3: 23.48 (SE +/- 0.02, N = 3; Min 23.44 / Max 23.51)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: resnet18 (ms, Fewer Is Better)
  Run 2: 29.08 (SE +/- 0.14, N = 4; Min 28.92 / Max 29.49)
  Run 1: 29.17 (SE +/- 0.26, N = 3; Min 28.72 / Max 29.61)
  Run 3: 29.28 (SE +/- 0.02, N = 3; Min 29.24 / Max 29.32)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: vgg16 (ms, Fewer Is Better)
  Run 1: 117.46 (SE +/- 0.37, N = 3; Min 116.80 / Max 118.08)
  Run 3: 118.04 (SE +/- 0.16, N = 3; Min 117.87 / Max 118.36)
  Run 2: 118.91 (SE +/- 0.24, N = 4; Min 118.22 / Max 119.25)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better)
  Run 1: 32.56 (SE +/- 0.20, N = 3; Min 32.21 / Max 32.90)
  Run 3: 32.59 (SE +/- 0.13, N = 3; Min 32.46 / Max 32.85)
  Run 2: 32.64 (SE +/- 0.12, N = 4; Min 32.33 / Max 32.83)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: blazeface (ms, Fewer Is Better)
  Run 1: 3.25 (SE +/- 0.01, N = 3; Min 3.23 / Max 3.28)
  Run 2: 3.31 (SE +/- 0.02, N = 4; Min 3.26 / Max 3.35)
  Run 3: 3.33 (SE +/- 0.03, N = 3; Min 3.29 / Max 3.39)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  Run 3: 16.90 (SE +/- 0.14, N = 3; Min 16.75 / Max 17.18)
  Run 2: 16.95 (SE +/- 0.26, N = 4; Min 16.59 / Max 17.70)
  Run 1: 16.98 (SE +/- 0.07, N = 3; Min 16.91 / Max 17.11)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: mnasnet (ms, Fewer Is Better)
  Run 1: 10.38 (SE +/- 0.08, N = 3; Min 10.24 / Max 10.53)
  Run 3: 10.43 (SE +/- 0.08, N = 3; Min 10.30 / Max 10.56)
  Run 2: 10.44 (SE +/- 0.20, N = 4; Min 10.05 / Max 10.98)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  Run 1: 12.63 (SE +/- 0.09, N = 3; Min 12.48 / Max 12.80)
  Run 3: 12.75 (SE +/- 0.20, N = 3; Min 12.42 / Max 13.12)
  Run 2: 12.98 (SE +/- 0.16, N = 4; Min 12.66 / Max 13.43)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  Run 1: 9.59 (SE +/- 0.13, N = 3; Min 9.40 / Max 9.83)
  Run 3: 9.60 (SE +/- 0.09, N = 3; Min 9.47 / Max 9.78)
  Run 2: 9.69 (SE +/- 0.14, N = 4; Min 9.46 / Max 10.08)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  Run 3: 10.68 (SE +/- 0.38, N = 3; Min 9.93 / Max 11.17)
  Run 1: 10.88 (SE +/- 0.01, N = 3; Min 10.86 / Max 10.91)
  Run 2: 11.30 (SE +/- 0.21, N = 4; Min 10.97 / Max 11.92)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: mobilenet (ms, Fewer Is Better)
  Run 3: 46.32 (SE +/- 0.07, N = 3; Min 46.21 / Max 46.44)
  Run 1: 46.40 (SE +/- 0.03, N = 3; Min 46.35 / Max 46.45)
  Run 2: 47.01 (SE +/- 0.66, N = 4; Min 46.22 / Max 48.98)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite


TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
  Run 3: 5689070 (SE +/- 6688.08, N = 3; Min 5676430 / Max 5699180)
  Run 1: 5691310 (SE +/- 3601.25, N = 3; Min 5684170 / Max 5695700)
  Run 2: 5697083 (SE +/- 3137.20, N = 3; Min 5691440 / Max 5702280)

Kvazaar

This is a test of Kvazaar, a CPU-based H.265 video encoder written in C and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0, Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
  Run 3: 3.95 (SE +/- 0.01, N = 3; Min 3.93 / Max 3.96)
  Run 2: 3.94 (SE +/- 0.00, N = 3; Min 3.93 / Max 3.94)
  Run 1: 3.94 (SE +/- 0.00, N = 3; Min 3.94 / Max 3.95)
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.1, Model: inception-v3 (ms, Fewer Is Better)
  Run 3: 63.27 (SE +/- 0.19, N = 3; Min 63.01 / Max 63.64)
  Run 1: 63.42 (SE +/- 0.18, N = 3; Min 63.23 / Max 63.77)
  Run 2: 64.00 (SE +/- 0.32, N = 3; Min 63.62 / Max 64.64)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.1, Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
  Run 3: 7.313 (SE +/- 0.030, N = 3; Min 7.28 / Max 7.37)
  Run 1: 7.395 (SE +/- 0.021, N = 3; Min 7.35 / Max 7.42)
  Run 2: 7.526 (SE +/- 0.065, N = 3; Min 7.40 / Max 7.62)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.1, Model: MobileNetV2_224 (ms, Fewer Is Better)
  Run 3: 5.398 (SE +/- 0.037, N = 3; Min 5.33 / Max 5.44)
  Run 2: 5.419 (SE +/- 0.038, N = 3; Min 5.36 / Max 5.49)
  Run 1: 5.424 (SE +/- 0.019, N = 3; Min 5.40 / Max 5.46)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.1, Model: resnet-v2-50 (ms, Fewer Is Better)
  Run 1: 50.22 (SE +/- 0.51, N = 3; Min 49.54 / Max 51.22)
  Run 2: 50.27 (SE +/- 0.33, N = 3; Min 49.61 / Max 50.65)
  Run 3: 50.49 (SE +/- 0.27, N = 3; Min 50.03 / Max 50.97)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 1.1.1, Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  Run 2: 9.613 (SE +/- 0.039, N = 3; Min 9.56 / Max 9.69)
  Run 1: 9.732 (SE +/- 0.127, N = 3; Min 9.59 / Max 9.99)
  Run 3: 9.867 (SE +/- 0.066, N = 3; Min 9.76 / Max 9.98)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database, optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile uses InfluxDB Inch to facilitate the benchmarks. Learn more via the OpenBenchmarking.org test page.
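The val/sec metric is total field values written divided by elapsed wall time. A sketch of that accounting with made-up numbers (none of these figures come from the result file):

```python
points_written = 7_000_000  # hypothetical number of points inserted
values_per_point = 1        # hypothetical field values per point
elapsed_s = 10.0            # hypothetical wall time for the workload

vals_per_sec = points_written * values_per_point / elapsed_s
print(vals_per_sec)  # -> 700000.0
```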

InfluxDB 1.8.2, Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Run 1: 706035.5 (SE +/- 8641.46, N = 3; Min 695838.4 / Max 723218.7)
  Run 3: 700554.6 (SE +/- 5594.95, N = 3; Min 694830.4 / Max 711743.5)
  Run 2: 696009.5 (SE +/- 6558.55, N = 3; Min 689123.7 / Max 709121.1)

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0, Test: FLOAT (QUIPs, More Is Better)
  Run 2: 301687480.19 (SE +/- 109501.99, N = 3; Min 301490385.92 / Max 301868716.18)
  Run 1: 301333349.99 (SE +/- 252430.64, N = 3; Min 300870890.00 / Max 301739970.27)
  Run 3: 301185316.59 (SE +/- 702868.97, N = 3; Min 299847318.01 / Max 302227671.23)
  (CC) gcc options: -O3 -march=native -lm

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark, developed to measure OpenMP overheads and other threading-related performance impacts in order to influence future system designs. This test profile configuration measures the OpenMP static-schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
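The reported figure is a simple ratio: time for the serial loop divided by time for the OpenMP-threaded loop. A sketch with made-up timings (the 4.0 s and 2.0 s values are hypothetical, chosen to reproduce the 2.0 speedup this system reports):

```python
serial_s = 4.0    # hypothetical time for the serial loop
threaded_s = 2.0  # hypothetical time with the OpenMP static schedule

speedup = serial_s / threaded_s
print(speedup)  # -> 2.0
```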

CLOMP 1.2, Static OMP Speedup (Speedup, More Is Better)
  Run 3: 2.0
  Run 2: 2.0
  Run 1: 2.0
  One run reported SE +/- 0.03, N = 3 (Min 1.90 / Avg 1.97 / Max 2.00)
  (CC) gcc options: -fopenmp -O3 -lm

InfluxDB


InfluxDB 1.8.2, Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Run 2: 725224.2 (SE +/- 1666.30, N = 3; Min 722536.1 / Max 728274.2)
  Run 3: 723222.1 (SE +/- 3366.02, N = 3; Min 717363.4 / Max 729023.2)
  Run 1: 721428.3 (SE +/- 1902.33, N = 3; Min 718969.3 / Max 725172.1)

NCNN


NCNN 20201218, Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
  Run 3: 18.75 (SE +/- 0.18, N = 3; Min 18.41 / Max 19.02)
  Run 2: 18.87 (SE +/- 0.10, N = 3; Min 18.76 / Max 19.06)
  Run 1: 19.11 (SE +/- 0.07, N = 3; Min 19.00 / Max 19.25)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
  Run 3: 59.34 (SE +/- 0.27, N = 3; Min 58.87 / Max 59.82)
  Run 1: 59.39 (SE +/- 0.05, N = 3; Min 59.31 / Max 59.47)
  Run 2: 59.71 (SE +/- 0.20, N = 3; Min 59.34 / Max 60.01)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  Run 1: 59.28 (SE +/- 0.05, N = 3; Min 59.17 / Max 59.35)
  Run 3: 59.28 (SE +/- 0.06, N = 3; Min 59.20 / Max 59.40)
  Run 2: 59.42 (SE +/- 0.08, N = 3; Min 59.26 / Max 59.54)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  Run 3: 71.82 (SE +/- 0.19, N = 3; Min 71.45 / Max 72.10)
  Run 1: 72.55 (SE +/- 0.82, N = 3; Min 71.55 / Max 74.18)
  Run 2: 73.14 (SE +/- 0.39, N = 3; Min 72.36 / Max 73.58)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: alexnet (ms, Fewer Is Better)
  Run 1: 23.30 (SE +/- 0.11, N = 3; Min 23.15 / Max 23.51)
  Run 2: 23.42 (SE +/- 0.07, N = 3; Min 23.28 / Max 23.51)
  Run 3: 23.51 (SE +/- 0.07, N = 3; Min 23.40 / Max 23.64)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  Run 1: 29.04 (SE +/- 0.16, N = 3; Min 28.76 / Max 29.32)
  Run 2: 29.04 (SE +/- 0.12, N = 3; Min 28.84 / Max 29.24)
  Run 3: 29.18 (SE +/- 0.33, N = 3; Min 28.60 / Max 29.76)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  Run 1: 117.19 (SE +/- 0.18, N = 3; Min 116.85 / Max 117.45)
  Run 3: 117.45 (SE +/- 0.17, N = 3; Min 117.14 / Max 117.71)
  Run 2: 119.38 (SE +/- 0.23, N = 3; Min 118.95 / Max 119.72)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: googlenet (ms, Fewer Is Better)
  Run 3: 32.43 (SE +/- 0.18, N = 3; Min 32.19 / Max 32.77)
  Run 1: 32.59 (SE +/- 0.18, N = 3; Min 32.36 / Max 32.94)
  Run 2: 32.79 (SE +/- 0.05, N = 3; Min 32.72 / Max 32.90)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: blazeface (ms, Fewer Is Better)
  Run 3: 3.29 (SE +/- 0.03, N = 3; Min 3.23 / Max 3.35)
  Run 1: 3.31 (SE +/- 0.01, N = 3; Min 3.29 / Max 3.33)
  Run 2: 3.33 (SE +/- 0.03, N = 3; Min 3.27 / Max 3.38)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  Run 2: 16.76 (SE +/- 0.03, N = 3; Min 16.71 / Max 16.81)
  Run 3: 16.89 (SE +/- 0.12, N = 3; Min 16.65 / Max 17.05)
  Run 1: 17.07 (SE +/- 0.15, N = 3; Min 16.87 / Max 17.36)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  Run 3: 10.25 (SE +/- 0.06, N = 3; Min 10.15 / Max 10.35)
  Run 2: 10.34 (SE +/- 0.10, N = 3; Min 10.24 / Max 10.54)
  Run 1: 10.50 (SE +/- 0.05, N = 3; Min 10.40 / Max 10.57)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  Run 1: 12.65 (SE +/- 0.12, N = 3; Min 12.45 / Max 12.85)
  Run 3: 12.70 (SE +/- 0.14, N = 3; Min 12.48 / Max 12.96)
  Run 2: 12.89 (SE +/- 0.14, N = 3; Min 12.64 / Max 13.12)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  Run 2: 9.49 (SE +/- 0.05, N = 3; Min 9.44 / Max 9.59)
  Run 3: 9.59 (SE +/- 0.03, N = 3; Min 9.53 / Max 9.64)
  Run 1: 9.67 (SE +/- 0.07, N = 3; Min 9.54 / Max 9.75)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  Run 2: 10.91 (SE +/- 0.08, N = 3; Min 10.77 / Max 11.05)
  Run 3: 11.04 (SE +/- 0.05, N = 3; Min 10.94 / Max 11.13)
  Run 1: 11.20 (SE +/- 0.13, N = 3; Min 10.99 / Max 11.45)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  Run 3: 46.32 (SE +/- 0.05, N = 3; Min 46.22 / Max 46.38)
  Run 2: 46.49 (SE +/- 0.07, N = 3; Min 46.40 / Max 46.62)
  Run 1: 46.83 (SE +/- 0.46, N = 3; Min 46.24 / Max 47.73)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

VKMark

VKMark is a collection of Vulkan tests/benchmarks. Learn more via the OpenBenchmarking.org test page.

VKMark 2020-05-21, Resolution: 1920 x 1080 (VKMark Score, More Is Better)
  Run 2: 1199
  Run 1: 1199
  Run 3: 1196
  (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1, Pfam Database Search (Seconds, Fewer Is Better)
  Run 1: 127.15 (SE +/- 0.03, N = 3; Min 127.11 / Max 127.22)
  Run 3: 127.39 (SE +/- 0.08, N = 3; Min 127.27 / Max 127.53)
  Run 2: 127.65 (SE +/- 0.26, N = 3; Min 127.21 / Max 128.10)
  (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better)
  Run 2: 7.74 (SE +/- 0.03, N = 3; Min 7.69 / Max 7.80)
  Run 3: 7.38 (SE +/- 0.09, N = 4; Min 7.12 / Max 7.56)
  Run 1: 7.38 (SE +/- 0.08, N = 3; Min 7.22 / Max 7.46)
  Nodejs v12.18.2

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee, Total Benchmark Time (Seconds, Fewer Is Better)
  Run 3: 123.34 (SE +/- 0.14, N = 3; Min 123.06 / Max 123.54)
  Run 2: 123.41 (SE +/- 0.21, N = 3; Min 123.05 / Max 123.79)
  Run 1: 123.68 (SE +/- 0.06, N = 3; Min 123.58 / Max 123.79)
  RawTherapee, version 5.8, command line.

x265

This is a simple test of the x265 H.265 video encoder run on the CPU with 1080p and 4K input options. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
Run 3: 4.83 (SE +/- 0.02, N = 3; Min 4.8 / Max 4.87)
Run 2: 4.83 (SE +/- 0.01, N = 3; Min 4.82 / Max 4.85)
Run 1: 4.81 (SE +/- 0.01, N = 3; Min 4.79 / Max 4.84)
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, More Is Better)
Run 2: 35791748.5 (SE +/- 208729.08, N = 3; Min 35378389.7 / Max 36048968.8)
Run 1: 35499453.0 (SE +/- 289585.60, N = 3; Min 35091593.5 / Max 36059497.1)
Run 3: 35427649.2 (SE +/- 503813.27, N = 3; Min 34420043.2 / Max 35937019.7)
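Dhrystone reports loops per second (LPS): a fixed integer/string workload is repeated for a time slice and the iteration count is divided by the elapsed time. A minimal sketch of that measurement idea (this is not the actual Dhrystone 2 kernel, just the counting pattern):

```python
import time

def lps(duration=0.25):
    """Run a trivial integer/string workload for `duration` seconds and
    return iterations per second, Dhrystone-style."""
    count = 0
    end = time.perf_counter() + duration
    while time.perf_counter() < end:
        s = str(count)                 # stand-in for Dhrystone's string ops
        count += int(s) - count + 1    # net effect: count += 1
    return count / duration

print(f"{lps():.0f} loops per second")
```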

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, Fewer Is Better)
Run 3: 112.96 (SE +/- 0.21, N = 3; Min 112.66 / Max 113.36)
Run 1: 113.52 (SE +/- 0.54, N = 3; Min 112.85 / Max 114.58)
Run 2: 113.65 (SE +/- 0.19, N = 3; Min 113.29 / Max 113.92)

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
Run 1: 110084 (SE +/- 106.80, N = 3; Min 109887 / Max 110254)
Run 3: 110157 (SE +/- 223.03, N = 3; Min 109747 / Max 110514)
Run 2: 110320 (SE +/- 38.96, N = 3; Min 110276 / Max 110398)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 - Resolution: 1920 x 1080 (Score, More Is Better)
Run 3: 1852
Run 2: 1851
Run 1: 1849

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 7721.13 (SE +/- 11.25, N = 3; Min 7702.17 / Max 7741.11; MIN: 7547.72)
Run 2: 7794.13 (SE +/- 85.05, N = 3; Min 7708.43 / Max 7964.23; MIN: 7534.49)
Run 3: 7837.72 (SE +/- 30.28, N = 3; Min 7779.6 / Max 7881.52; MIN: 7613.07)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
Run 2: 7667.94 (SE +/- 22.70, N = 3; Min 7640.74 / Max 7713.03; MIN: 7509.56)
Run 3: 7750.49 (SE +/- 13.61, N = 3; Min 7723.31 / Max 7765.32; MIN: 7562.51)
Run 1: 7913.61 (SE +/- 101.76, N = 3; Min 7752.25 / Max 8101.69; MIN: 7617.25)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 3: 7701.75 (SE +/- 16.23, N = 3; Min 7674.81 / Max 7730.89; MIN: 7494.48)
Run 2: 7725.63 (SE +/- 27.04, N = 3; Min 7697.2 / Max 7779.68; MIN: 7520.66)
Run 1: 7746.87 (SE +/- 38.84, N = 3; Min 7698.52 / Max 7823.7; MIN: 7556.29)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 3: 33.06 (SE +/- 1.44, N = 12; Min 29.19 / Max 43.28; MIN: 26.99)
Run 2: 35.85 (SE +/- 1.77, N = 15; Min 29.58 / Max 44.96; MIN: 27.16)
Run 1: 36.56 (SE +/- 1.59, N = 15; Min 29.55 / Max 45.37; MIN: 27.16)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 2: 8277.53 (SE +/- 86.65, N = 3; Min 8112.62 / Max 8406.14; MIN: 7767.68)
Run 3: 8342.73 (SE +/- 61.07, N = 3; Min 8220.67 / Max 8407.38; MIN: 7929.12)
Run 1: 8437.12 (SE +/- 133.51, N = 3; Min 8172.26 / Max 8598.98; MIN: 7874.47)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 8193.61 (SE +/- 128.90, N = 3; Min 8013.08 / Max 8443.25; MIN: 7671.11)
Run 2: 8356.19 (SE +/- 144.66, N = 3; Min 8101.02 / Max 8601.87; MIN: 7776.27)
Run 3: 8419.82 (SE +/- 28.77, N = 3; Min 8373.46 / Max 8472.51; MIN: 8053.44)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better)
Run 2: 6.51 (SE +/- 0.01, N = 3; Min 6.49 / Max 6.53)
Run 1: 6.51 (SE +/- 0.01, N = 3; Min 6.49 / Max 6.53)
Run 3: 6.50 (SE +/- 0.02, N = 3; Min 6.47 / Max 6.53)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Thorough (Seconds, Fewer Is Better)
Run 1: 84.33 (SE +/- 0.21, N = 3; Min 83.94 / Max 84.65)
Run 3: 84.49 (SE +/- 0.04, N = 3; Min 84.41 / Max 84.56)
Run 2: 84.61 (SE +/- 0.21, N = 3; Min 84.2 / Max 84.82)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
Run 2: 6.85 (SE +/- 0.04, N = 3; Min 6.79 / Max 6.93)
Run 1: 6.84 (SE +/- 0.02, N = 3; Min 6.8 / Max 6.86)
Run 3: 6.83 (SE +/- 0.02, N = 3; Min 6.81 / Max 6.87)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 (Seconds, Fewer Is Better)
Run 3: 86.34 (SE +/- 0.16, N = 3; Min 86.05 / Max 86.59)
Run 1: 86.48 (SE +/- 0.24, N = 3; Min 86.04 / Max 86.86)
Run 2: 86.54 (SE +/- 0.18, N = 3; Min 86.19 / Max 86.74)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better)
Run 1: 82.01 (SE +/- 0.57, N = 3; Min 81.07 / Max 83.05)
Run 3: 82.19 (SE +/- 0.13, N = 3; Min 81.94 / Max 82.32)
Run 2: 83.58 (SE +/- 0.26, N = 3; Min 83.18 / Max 84.08)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: ETC1S (Seconds, Fewer Is Better)
Run 1: 82.06 (SE +/- 0.05, N = 3; Min 81.97 / Max 82.14)
Run 3: 82.08 (SE +/- 0.07, N = 3; Min 81.96 / Max 82.19)
Run 2: 82.18 (SE +/- 0.14, N = 3; Min 81.96 / Max 82.45)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

SQLite Speedtest

This is a test of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
Run 1: 81.22 (SE +/- 0.14, N = 3; Min 80.97 / Max 81.44)
Run 2: 81.42 (SE +/- 0.76, N = 3; Min 80.56 / Max 82.94)
Run 3: 81.93 (SE +/- 0.70, N = 3; Min 81.1 / Max 83.33)
1. (CC) gcc options: -O2 -ldl -lz -lpthread
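The speedtest1 workload times a fixed set of SQL operations against one database. A tiny stand-in using Python's built-in sqlite3 module illustrates the measurement pattern only; the real speedtest1 is a C program running roughly thirty varied query workloads, and the table name and row counts below are arbitrary:

```python
import sqlite3
import time

# In-memory database so only CPU/SQLite work is timed, not disk I/O.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a INTEGER PRIMARY KEY, b TEXT)")

start = time.perf_counter()
with conn:  # one transaction, as bulk-insert benchmarks do
    conn.executemany("INSERT INTO t(b) VALUES (?)",
                     (("row %d" % i,) for i in range(100_000)))
insert_s = time.perf_counter() - start

start = time.perf_counter()
rows = conn.execute("SELECT COUNT(*) FROM t WHERE b LIKE 'row 9%'").fetchone()
query_s = time.perf_counter() - start
print(f"insert: {insert_s:.3f}s  query: {query_s:.3f}s  matched: {rows[0]}")
```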

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta - Resolution: 1920 x 1080 (Frames Per Second, More Is Better)
Run 3: 159.4 (SE +/- 0.10, N = 3; Min 159.3 / Max 159.6)
Run 2: 159.4 (SE +/- 0.12, N = 3; Min 159.2 / Max 159.6)
Run 1: 158.1 (SE +/- 1.30, N = 3; Min 155.5 / Max 159.4)

Stockfish

This is a test of Stockfish, an advanced open-source C++ chess engine whose built-in benchmark can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12 - Total Time (Nodes Per Second, More Is Better)
Run 1: 5718169 (SE +/- 48644.36, N = 3; Min 5629479 / Max 5797146)
Run 3: 5648589 (SE +/- 39149.01, N = 3; Min 5574905 / Max 5708364)
Run 2: 5628220 (SE +/- 74806.22, N = 3; Min 5499526 / Max 5758645)
1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System. The Sunflow Rendering System is an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

Sunflow Rendering System 0.07.2 - Global Illumination + Image Synthesis (Seconds, Fewer Is Better)
Run 2: 3.148 (SE +/- 0.032, N = 3; Min 3.1 / Max 3.21; MIN: 2.89 / MAX: 3.84)
Run 1: 3.206 (SE +/- 0.041, N = 3; Min 3.15 / Max 3.29; MIN: 2.88 / MAX: 3.79)
Run 3: 3.302 (SE +/- 0.028, N = 15; Min 3.09 / Max 3.47; MIN: 2.87 / MAX: 4.18)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 4K (FPS, More Is Better)
Run 3: 52.09 (SE +/- 0.44, N = 3; Min 51.56 / Max 52.96; MIN: 48.07 / MAX: 61.57)
Run 2: 52.00 (SE +/- 0.37, N = 3; Min 51.56 / Max 52.74; MIN: 48.01 / MAX: 61.72)
Run 1: 51.92 (SE +/- 0.35, N = 3; Min 51.52 / Max 52.61; MIN: 48.08 / MAX: 61.73)
1. (CC) gcc options: -pthread -ldl -lm

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 5 (Frames Per Second, More Is Better)
Run 1: 0.848 (SE +/- 0.001, N = 3)
Run 2: 0.844 (SE +/- 0.001, N = 3)
Run 3: 0.839 (SE +/- 0.000, N = 3)

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier_benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (Ops/sec, More Is Better)
Run 3: 269044.48 (SE +/- 1852.38, N = 3; Min 265340.35 / Max 270955.21)
Run 2: 267212.95 (SE +/- 2051.97, N = 3; Min 264632.8 / Max 271266.88)
Run 1: 265074.45 (SE +/- 3138.20, N = 3; Min 259875.13 / Max 270718.81)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Chimera 1080p (FPS, More Is Better)
Run 1: 184.25 (SE +/- 0.04, N = 3; Min 184.21 / Max 184.33; MIN: 129.28 / MAX: 331.16)
Run 2: 184.17 (SE +/- 1.76, N = 3; Min 180.81 / Max 186.74; MIN: 127.79 / MAX: 333.51)
Run 3: 183.88 (SE +/- 1.89, N = 3; Min 180.11 / Max 186.03; MIN: 127.66 / MAX: 340.88)
1. (CC) gcc options: -pthread -ldl -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s, More Is Better)
Run 1: 8722.4 (SE +/- 6.35, N = 3; Min 8712.3 / Max 8734.1)
Run 3: 8690.1 (SE +/- 57.65, N = 3; Min 8605 / Max 8800)
Run 2: 8646.3 (SE +/- 8.66, N = 3; Min 8636 / Max 8663.5)
1. (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, More Is Better)
Run 3: 8022.48 (SE +/- 92.88, N = 3; Min 7839.29 / Max 8140.72)
Run 2: 8015.39 (SE +/- 41.71, N = 3; Min 7940.01 / Max 8084.04)
Run 1: 7994.30 (SE +/- 51.24, N = 3; Min 7940.21 / Max 8096.73)
1. (CC) gcc options: -O3
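Compressor throughput in MB/s is bytes processed divided by wall time for each direction. A minimal sketch of that pattern using zlib from the standard library as a stand-in for LZ4 (the real test links against liblz4 and streams an Ubuntu ISO; the payload below is synthetic):

```python
import time
import zlib

# Highly repetitive synthetic payload, ~8.8 MB
payload = b"the quick brown fox jumps over the lazy dog " * 200_000

start = time.perf_counter()
compressed = zlib.compress(payload, 1)  # fastest level, like "lz4 -1"
comp_mb_s = len(payload) / (time.perf_counter() - start) / 1e6

start = time.perf_counter()
restored = zlib.decompress(compressed)
decomp_mb_s = len(payload) / (time.perf_counter() - start) / 1e6

assert restored == payload  # the round trip must be lossless
print(f"compress: {comp_mb_s:.0f} MB/s  decompress: {decomp_mb_s:.0f} MB/s")
```

As in the results above, decompression is typically measured against the original (uncompressed) byte count, so it reports higher MB/s than compression.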

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4 - Test: Latency Under Load (usec, Fewer Is Better)
Run 3: 50.70 (SE +/- 1.93, N = 25; Min 30.2 / Max 73.48)
Run 2: 52.76 (SE +/- 1.88, N = 25; Min 33.2 / Max 77.37)
Run 1: 53.78 (SE +/- 2.25, N = 20; Min 33.81 / Max 73.27)
1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread
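Socket latency benchmarks like sockperf measure ping-pong round trips in microseconds. A minimal sketch of that idea over a local socketpair (sockperf itself drives UDP/TCP through the full network stack under load, which is why its numbers are far higher than a loopback socketpair's):

```python
import socket
import time

a, b = socket.socketpair()  # connected local socket pair
samples = []
for _ in range(1000):
    t0 = time.perf_counter()
    a.sendall(b"x")   # client sends one byte...
    b.recv(1)
    b.sendall(b"x")   # ...server echoes it back
    a.recv(1)
    samples.append((time.perf_counter() - t0) * 1e6)  # usec per round trip
a.close(); b.close()

samples.sort()
print(f"median round-trip: {samples[len(samples) // 2]:.1f} usec")
```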

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project and accelerated using the Vulkan API. RealSR is the Real-World Super Resolution via Kernel Estimation and Noise Injection. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds, Fewer Is Better)
Run 3: 63.00 (SE +/- 0.02, N = 3; Min 62.96 / Max 63.03)
Run 2: 63.02 (SE +/- 0.03, N = 3; Min 62.98 / Max 63.09)
Run 1: 63.03 (SE +/- 0.02, N = 3; Min 62.99 / Max 63.08)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better)
Run 3: 0.498 (SE +/- 0.002, N = 3)
Run 2: 0.494 (SE +/- 0.000, N = 3)
Run 1: 0.494 (SE +/- 0.001, N = 3)

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
Run 3: 19.79 (SE +/- 0.12, N = 3; Min 19.64 / Max 20.03)
Run 2: 19.66 (SE +/- 0.11, N = 3; Min 19.51 / Max 19.87)
Run 1: 19.36 (SE +/- 0.05, N = 3; Min 19.29 / Max 19.46)
1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better)
Run 1: 1.107 (SE +/- 0.004, N = 3)
Run 3: 1.106 (SE +/- 0.002, N = 3)
Run 2: 1.098 (SE +/- 0.009, N = 3)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better)
Run 2: 461955 (SE +/- 1459.42, N = 3; Min 460469 / Max 464874)
Run 3: 467404 (SE +/- 564.75, N = 3; Min 466822 / Max 468533)
Run 1: 467745 (SE +/- 158.13, N = 3; Min 467441 / Max 467972)

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, Fewer Is Better)
Run 3: 315790 (SE +/- 504.11, N = 3; Min 314867 / Max 316603)
Run 1: 316112 (SE +/- 995.45, N = 3; Min 314718 / Max 318040)
Run 2: 317213 (SE +/- 384.90, N = 3; Min 316460 / Max 317728)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
Run 2: 306270 (SE +/- 1840.34, N = 3; Min 302657 / Max 308685)
Run 1: 309946 (SE +/- 172.14, N = 3; Min 309629 / Max 310221)
Run 3: 313216 (SE +/- 2441.23, N = 3; Min 308497 / Max 316661)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
Run 2: 318685 (SE +/- 1025.34, N = 3; Min 316646 / Max 319896)
Run 3: 327187 (SE +/- 1866.22, N = 3; Min 325104 / Max 330911)
Run 1: 328733 (SE +/- 1183.02, N = 3; Min 326554 / Max 330621)

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, More Is Better)
Run 3: 10.25 (SE +/- 0.13, N = 3; Min 10.09 / Max 10.5)
Run 1: 10.13 (SE +/- 0.04, N = 3; Min 10.09 / Max 10.21)
Run 2: 10.12 (SE +/- 0.08, N = 3; Min 10.02 / Max 10.28)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, More Is Better)
Run 3: 0.35 (SE +/- 0.00, N = 3; Min 0.34 / Max 0.35)
Run 2: 0.35 (SE +/- 0.00, N = 3; Min 0.35 / Max 0.35)
Run 1: 0.35 (SE +/- 0.00, N = 3; Min 0.35 / Max 0.35)
1. (CXX) g++ options: -O3 -pthread

simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, More Is Better)
Run 3: 0.45 (SE +/- 0.00, N = 3; Min 0.45 / Max 0.45)
Run 2: 0.45 (SE +/- 0.00, N = 3; Min 0.45 / Max 0.45)
Run 1: 0.45 (SE +/- 0.00, N = 3; Min 0.44 / Max 0.45)
1. (CXX) g++ options: -O3 -pthread
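Parser-throughput results like these are computed as input bytes divided by parse time. A minimal sketch of that measurement using the standard library's json module as a stand-in for simdjson's SIMD-accelerated C++ parser (the document shape below is synthetic, not one of simdjson's test files):

```python
import json
import time

# Build a synthetic JSON document of 50,000 small records.
doc = json.dumps([{"id": i, "name": f"user{i}", "active": i % 2 == 0}
                  for i in range(50_000)]).encode()

start = time.perf_counter()
parsed = json.loads(doc)
gb_s = len(doc) / (time.perf_counter() - start) / 1e9  # bytes parsed per second
print(f"parsed {len(doc)} bytes at {gb_s:.3f} GB/s -> {len(parsed)} records")
```

Pure-Python parsing lands well below the ~0.35-0.45 GB/s the C++ parser reaches on this CPU; the point is only how the GB/s figure is derived.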