Core i5 6500 Xmas

Intel Core i5-6500 testing with a Gigabyte Z170M-D3H-CF (F22f BIOS) and Gigabyte Intel HD 530 3GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012230-HA-COREI565062
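
The same comparison can also be scripted. The sketch below simply invokes the command quoted above from Python; it assumes the phoronix-test-suite client is installed and on the PATH.

    # Minimal sketch: run the comparison command quoted above from Python.
    # Assumes the phoronix-test-suite client is installed and on the PATH.
    import subprocess

    result_id = "2012230-HA-COREI565062"  # identifier of this result file
    subprocess.run(["phoronix-test-suite", "benchmark", result_id], check=True)
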
This result file spans the following test categories:

Audio Encoding: 3 tests
Bioinformatics: 2 tests
Chess Test Suite: 3 tests
Timed Code Compilation: 3 tests
C/C++ Compiler Tests: 9 tests
CPU Massive: 11 tests
Creator Workloads: 10 tests
Database Test Suite: 2 tests
Encoding: 4 tests
Game Development: 2 tests
HPC - High Performance Computing: 9 tests
Machine Learning: 5 tests
Molecular Dynamics: 2 tests
MPI Benchmarks: 2 tests
Multi-Core: 12 tests
NVIDIA GPU Compute: 6 tests
Intel oneAPI: 2 tests
OpenMPI Tests: 2 tests
Programmer / Developer System Benchmarks: 6 tests
Python Tests: 3 tests
Scientific Computing: 4 tests
Server: 5 tests
Server CPU Tests: 6 tests
Single-Threaded: 4 tests
Texture Compression: 2 tests
Vulkan Compute: 4 tests

Test Runs

Run 1: December 22 2020 - Test Duration: 10 Hours, 2 Minutes
Run 2: December 23 2020 - Test Duration: 9 Hours, 59 Minutes
Run 3: December 23 2020 - Test Duration: 9 Hours, 58 Minutes
Average Test Duration: 9 Hours, 59 Minutes

Core i5 6500 Xmas - System Details (identical for runs 1, 2, and 3)

Processor: Intel Core i5-6500 @ 3.60GHz (4 Cores)
Motherboard: Gigabyte Z170M-D3H-CF (F22f BIOS)
Chipset: Intel Xeon E3-1200 v5/E3-1500
Memory: 8GB
Disk: 250GB Samsung SSD 850
Graphics: Gigabyte Intel HD 530 3GB (1050MHz)
Audio: Realtek ALC892
Monitor: G237HL
Network: Intel I219-V
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc7daily20200929-generic (x86_64) 20200928
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.6 Mesa 20.0.8
OpenCL: OpenCL 2.1
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xe2 - Thermald 1.9.1
Python Details: Python 3.8.5
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT disabled + mds: Mitigation of Clear buffers; SMT disabled + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: disabled RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT disabled

Result Overview (Phoronix Test Suite; relative performance of runs 1-3, scale 100% to 111%). Benchmarks covered: VkResample, Redis, Betsy GPU Compressor, asmFish, oneDNN, eSpeak-NG Speech Engine, GROMACS, Coremark, SQLite Speedtest, LAMMPS Molecular Dynamics Simulator, Stockfish, Timed MAFFT Alignment, Node.js V8 Web Tooling Benchmark, AI Benchmark Alpha, yquake2, Monkey Audio Encoding, simdjson, OpenVINO, Build2, Timed Eigen Compilation, Crafty, LZ4 Compression, VkFFT, NCNN, Opus Codec Encoding, IndigoBench, Timed HMMer Search, WavPack Audio Encoding, Basis Universal, rav1e, VKMark, Numpy Benchmark, PHPBench, Timed FFmpeg Compilation, CLOMP.

Detailed results table: the individual per-test results for runs 1, 2, and 3 are broken out in the sections below.

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: LPOP (Requests Per Second, more is better): Run 1: 2297378.17; Run 2: 1411549.46; Run 3: 1398436.25
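
The Redis test profile reports requests per second for the LPOP, GET, LPUSH, SET, and SADD commands (graphed in this and the following Redis sections). Purely as an illustration of those operations, here is a rough Python sketch using the redis-py client; it assumes redis-py is installed and a local server is listening on 127.0.0.1:6379, and it is not the benchmark client the test profile actually drives.

    # Illustrative only: times the same Redis commands the benchmark exercises.
    # Assumes redis-py and a local server on 127.0.0.1:6379.
    import time
    import redis

    r = redis.Redis(host="127.0.0.1", port=6379)

    def requests_per_second(op, n=100_000):
        start = time.perf_counter()
        for i in range(n):
            op(i)
        return n / (time.perf_counter() - start)

    print("SET   rps:", requests_per_second(lambda i: r.set(f"key:{i}", "value")))
    print("GET   rps:", requests_per_second(lambda i: r.get(f"key:{i}")))
    print("LPUSH rps:", requests_per_second(lambda i: r.lpush("mylist", i)))
    print("LPOP  rps:", requests_per_second(lambda i: r.lpop("mylist")))
    print("SADD  rps:", requests_per_second(lambda i: r.sadd("myset", i)))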

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample test upscales a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

VkResample 1.0, Upscale: 2x - Precision: Single (ms, fewer is better): Run 1: 563.07; Run 2: 486.86; Run 3: 487.51

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 10.95; Run 2: 10.59; Run 3: 10.10

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: GET (Requests Per Second, more is better): Run 1: 2147999.50; Run 2: 2013386.67; Run 3: 1999227.21

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (Nodes/second, more is better): Run 1: 8321461; Run 2: 8372794; Run 3: 8182398

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 16.04; Run 2: 15.72; Run 3: 15.91

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1, Throughput Test: PartialTweets (GB/s, more is better): Run 1: 0.53; Run 2: 0.52; Run 3: 0.52
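
The throughput figure is simply bytes of JSON parsed per unit time. As an illustration of that metric only, not of simdjson itself (a C++ library), here is a minimal Python sketch using the standard json module; the input file name is hypothetical.

    # Rough illustration of the GB/s metric only (bytes parsed per second).
    # Uses Python's standard json module, not simdjson.
    import json
    import time

    with open("twitter.json", "rb") as f:   # hypothetical sample document
        payload = f.read()

    iterations = 50
    start = time.perf_counter()
    for _ in range(iterations):
        json.loads(payload)
    elapsed = time.perf_counter() - start

    gb_parsed = len(payload) * iterations / 1e9
    print(f"{gb_parsed / elapsed:.3f} GB/s")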

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Face Detection 0106 FP16 - Device: CPU (FPS, more is better): Run 1: 1.05; Run 2: 1.05; Run 3: 1.07
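
OpenVINO ships a benchmark_app tool as its built-in benchmarking support. A minimal sketch of invoking it from Python is below; it assumes an OpenVINO install with benchmark_app on the PATH, the model file name is hypothetical, and this is an illustration rather than the exact harness used by the test profile.

    # Sketch: drive OpenVINO's bundled benchmark_app for a CPU throughput run.
    # Assumes benchmark_app is on the PATH; the model path is hypothetical.
    import subprocess

    subprocess.run(
        ["benchmark_app",
         "-m", "face-detection-0106.xml",  # hypothetical IR model file
         "-d", "CPU"],
        check=True,
    )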

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: LPUSH (Requests Per Second, more is better): Run 1: 1392673.46; Run 2: 1400821.83; Run 3: 1378146.83

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Person Detection 0106 FP32 - Device: CPU (FPS, more is better): Run 1: 0.63; Run 2: 0.64; Run 3: 0.63

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45, Renderer: OpenGL 3.x - Resolution: 1920 x 1080 (Frames Per Second, more is better): Run 1: 283.0; Run 2: 287.3; Run 3: 287.0

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Person Detection 0106 FP32 - Device: CPU (ms, fewer is better): Run 1: 6304.45; Run 2: 6219.77; Run 3: 6301.03

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis (Seconds, fewer is better): Run 1: 34.33; Run 2: 34.72; Run 3: 34.78
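
As a sketch of the operation being timed, the snippet below synthesizes a text file to a WAV with the espeak-ng command-line tool; it assumes espeak-ng is installed, the file names are hypothetical, and -f/-w are the usual read-from-file and write-WAV options.

    # Sketch of the timed operation: synthesize a text file to a WAV file.
    # Assumes the espeak-ng binary is installed; file names are hypothetical.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(["espeak-ng", "-f", "outline-of-science.txt", "-w", "speech.wav"],
                   check=True)
    print(f"Synthesis took {time.perf_counter() - start:.2f} seconds")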

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: SET (Requests Per Second, more is better): Run 1: 1577856.91; Run 2: 1598272.50; Run 3: 1588544.79

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3, Water Benchmark (Ns Per Day, more is better): Run 1: 0.473; Run 2: 0.469; Run 3: 0.467

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better): Run 1: 2364.77; Run 2: 2334.81; Run 3: 2340.68

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0, CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better): Run 1: 92339.76; Run 2: 91213.12; Run 3: 92299.11

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds, fewer is better): Run 1: 79.08; Run 2: 79.84; Run 3: 80.02
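
speedtest1 is a C program shipped with SQLite, so the sketch below is only an illustration of the kind of work being timed (bulk inserts plus indexed lookups), using Python's standard sqlite3 module against an in-memory database.

    # Illustrative only: bulk inserts plus indexed lookups, timed end to end.
    # This is not speedtest1; it just shows the flavor of the workload.
    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

    start = time.perf_counter()
    conn.executemany("INSERT INTO t (val) VALUES (?)",
                     ((f"row-{i}",) for i in range(100_000)))
    conn.commit()
    for i in range(0, 100_000, 1000):
        conn.execute("SELECT val FROM t WHERE id = ?", (i + 1,)).fetchone()
    print(f"{time.perf_counter() - start:.2f} seconds")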

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: Rhodopsin Protein (ns/day, more is better): Run 1: 2.606; Run 2: 2.615; Run 3: 2.587

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (FPS, more is better): Run 1: 2333.25; Run 2: 2348.81; Run 3: 2326.13

OpenVINO 2021.1, Model: Face Detection 0106 FP32 - Device: CPU (ms, fewer is better): Run 1: 3788.58; Run 2: 3753.41; Run 3: 3782.66

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device Inference Score (Score, more is better): Run 1: 570; Run 2: 574; Run 3: 569

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: resnet18 (ms, fewer is better): Run 1: 24.82; Run 2: 24.93; Run 3: 25.03

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: SADD (Requests Per Second, more is better): Run 1: 1794550.92; Run 2: 1788107.04; Run 3: 1779981.46

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: resnet18 (ms, fewer is better): Run 1: 24.85; Run 2: 24.94; Run 3: 25.05

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 1 - Decompression Speed (MB/s, more is better): Run 1: 8287.0; Run 2: 8234.1; Run 3: 8295.1
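
As a rough illustration of the MB/s metric, the sketch below compresses and decompresses a file with the python lz4 bindings; it assumes the lz4 package is installed and uses a hypothetical input path, whereas the benchmark itself exercises the reference LZ4 1.9.3 code against an Ubuntu ISO.

    # Rough sketch of compression/decompression throughput in MB/s.
    # Assumes the 'lz4' Python package is installed; input path is hypothetical.
    import time
    import lz4.frame

    with open("ubuntu.iso", "rb") as f:     # hypothetical sample file
        data = f.read()

    start = time.perf_counter()
    compressed = lz4.frame.compress(data)
    print(f"compress:   {len(data) / 1e6 / (time.perf_counter() - start):.1f} MB/s")

    start = time.perf_counter()
    lz4.frame.decompress(compressed)
    print(f"decompress: {len(data) / 1e6 / (time.perf_counter() - start):.1f} MB/s")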

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 4258.56; Run 2: 4234.39; Run 3: 4228.29

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, fewer is better): Run 1: 11.48; Run 2: 11.56; Run 3: 11.50

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 9.35795; Run 2: 9.29323; Run 3: 9.32407

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12, Total Time (Nodes Per Second, more is better): Run 1: 5866590; Run 2: 5906586; Run 3: 5876724
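
The nodes-per-second figure comes from Stockfish's built-in bench mode. A minimal sketch of invoking it from Python is below; it assumes a stockfish binary on the PATH, and the summary line it looks for may vary between builds.

    # Sketch: run Stockfish's built-in bench and print its nodes/second summary.
    # Assumes a 'stockfish' binary on the PATH; output format may vary by build.
    import subprocess

    out = subprocess.run(["stockfish", "bench"],
                         capture_output=True, text=True, check=True)
    for line in (out.stdout + out.stderr).splitlines():
        if "Nodes/second" in line:
            print(line.strip())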

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471, Multiple Sequence Alignment - LSU RNA (Seconds, fewer is better): Run 1: 12.76; Run 2: 12.78; Run 3: 12.85

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better): Run 1: 42.82; Run 2: 42.89; Run 3: 43.09

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (ms, fewer is better): Run 1: 1.61; Run 2: 1.60; Run 3: 1.60

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better): Run 1: 1.61; Run 2: 1.60; Run 3: 1.60

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better; Node.js v10.19.0): Run 1: 9.84; Run 2: 9.80; Run 3: 9.86

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 12.48; Run 2: 12.45; Run 3: 12.52

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device AI Score (Score, more is better): Run 1: 1195; Run 2: 1200; Run 3: 1193

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 5.29865; Run 2: 5.28213; Run 3: 5.26789

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45, Renderer: Software CPU - Resolution: 1920 x 1080 (Frames Per Second, more is better): Run 1: 87.7; Run 2: 87.5; Run 3: 87.2

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, fewer is better): Run 1: 42.81; Run 2: 43.04; Run 3: 43.00

NCNN 20201218, Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better): Run 1: 44.33; Run 2: 44.40; Run 3: 44.56

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13, Time To Compile (Seconds, fewer is better): Run 1: 366.37; Run 2: 366.60; Run 3: 368.27

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6, WAV To APE (Seconds, fewer is better): Run 1: 13.77; Run 2: 13.70; Run 3: 13.71

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: UASTC Level 2 + RDO Post-Processing (Seconds, fewer is better): Run 1: 932.00; Run 2: 936.53; Run 3: 931.83

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4, Acceleration: CPU - Scene: Bedroom (M samples/s, more is better): Run 1: 0.431; Run 2: 0.432; Run 3: 0.433

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: resnet50 (ms, fewer is better): Run 1: 52.43; Run 2: 52.63; Run 3: 52.66

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): Run 1: 6.95; Run 2: 6.98; Run 3: 6.97

NCNN 20201218, Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): Run 1: 6.99; Run 2: 7.01; Run 3: 6.98

NCNN 20201218, Target: Vulkan GPU - Model: googlenet (ms, fewer is better): Run 1: 23.95; Run 2: 24.02; Run 3: 24.05

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45, Renderer: OpenGL 1.x - Resolution: 1920 x 1080 (Frames Per Second, more is better): Run 1: 287.8; Run 2: 289.0; Run 3: 288.0

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: resnet50 (ms, fewer is better): Run 1: 52.41; Run 2: 52.62; Run 3: 52.62

NCNN 20201218, Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better): Run 1: 10.17; Run 2: 10.17; Run 3: 10.13

NCNN 20201218, Target: Vulkan GPU - Model: blazeface (ms, fewer is better): Run 1: 2.58; Run 2: 2.59; Run 3: 2.58

NCNN 20201218, Target: CPU - Model: blazeface (ms, fewer is better): Run 1: 2.60; Run 2: 2.60; Run 3: 2.59

NCNN 20201218, Target: Vulkan GPU - Model: vgg16 (ms, fewer is better): Run 1: 96.69; Run 2: 96.36; Run 3: 96.32

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 7407.36; Run 2: 7392.85; Run 3: 7381.22

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: mobilenet (ms, fewer is better): Run 1: 31.15; Run 2: 31.22; Run 3: 31.11

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): Run 1: 20.61; Run 2: 20.57; Run 3: 20.54

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9, Time To Compile (Seconds, fewer is better): Run 1: 94.72; Run 2: 94.65; Run 3: 94.98

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Face Detection 0106 FP16 - Device: CPU (ms, fewer is better): Run 1: 3760.52; Run 2: 3759.50; Run 3: 3747.78

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: googlenet (ms, fewer is better): Run 1: 23.96; Run 2: 24.04; Run 3: 24.01

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 9 - Compression Speed (MB/s, more is better): Run 1: 39.82; Run 2: 39.71; Run 3: 39.69

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device Training Score (Score, more is better): Run 1: 625; Run 2: 626; Run 3: 624

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha, Speed: 1 (Frames Per Second, more is better): Run 1: 0.320; Run 2: 0.321; Run 3: 0.320
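
rav1e is driven from the command line, so a rough sketch of an encode at this speed level is shown below; it assumes a rav1e binary on the PATH, the input and output names are hypothetical, and --speed/-o are the usual speed-preset and output options.

    # Sketch: time an AV1 encode with rav1e at speed level 1.
    # Assumes a rav1e binary on PATH; file names are hypothetical.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(["rav1e", "input.y4m", "--speed", "1", "-o", "output.ivf"],
                   check=True)
    print(f"Encode took {time.perf_counter() - start:.1f} seconds")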

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: vgg16 (ms, fewer is better): Run 1: 96.72; Run 2: 96.42; Run 3: 96.50

NCNN 20201218, Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better): Run 1: 16.77; Run 2: 16.82; Run 3: 16.78

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, fewer is better): Run 1: 10.15; Run 2: 10.15; Run 3: 10.12

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 1 - Compression Speed (MB/s, more is better): Run 1: 6932.94; Run 2: 6912.72; Run 3: 6922.74

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better): 1: 31.13, 2: 31.22, 3: 31.13 [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

NCNN 20201218 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better): 1: 21.10, 2: 21.16, 3: 21.12 [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, more is better): 1: 1.092, 2: 1.089, 3: 1.091

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better): 1: 44.35, 2: 44.47, 3: 44.43 [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

NCNN 20201218 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better): 1: 11.50, 2: 11.53, 3: 11.52 [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, more is better): 1: 6903601, 2: 6920071, 3: 6921435 [(CC) gcc options: -pthread -lstdc++ -fprofile-use -lm]

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): 1: 7.99, 2: 8.01, 3: 8.00 [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, fewer is better): 1: 16.78, 2: 16.79, 3: 16.75 [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

NCNN 20201218 - Target: CPU - Model: alexnet (ms, fewer is better): 1: 21.09, 2: 21.14, 3: 21.12 [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

VkFFT

VkFFT is a Fast Fourier Transform (FFT) library that is GPU-accelerated by means of the Vulkan API. The VkFFT benchmark measures FFT performance across many different transform sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

VkFFT 1.1.1 (Benchmark Score, more is better): 1: 1290, 2: 1290, 3: 1287 [(CXX) g++ options: -O3 -pthread]

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better): 1: 8175.3, 2: 8172.6, 3: 8156.8 [(CC) gcc options: -O3]

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 5 (Frames Per Second, more is better): 1: 0.927, 2: 0.927, 3: 0.929

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: ETC1S (Seconds, fewer is better): 1: 92.21, 2: 92.27, 3: 92.40 [(CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread]

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, more is better): 1: 40.79, 2: 40.71, 3: 40.74 [(CC) gcc options: -O3]

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
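For reference, here is a hedged sketch of the libopus encode loop that opusenc drives internally when converting WAV to Opus; the bitrate, frame size, and silent input are illustrative assumptions, not the exact opusenc defaults used by the test.

    // Minimal libopus encode sketch: create an encoder and encode one frame.
    #include <opus/opus.h>
    #include <vector>

    int main()
    {
        int err = 0;
        OpusEncoder* enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
        if (err != OPUS_OK) return 1;
        opus_encoder_ctl(enc, OPUS_SET_BITRATE(96000));      // illustrative bitrate

        const int frame_size = 960;                          // 20 ms at 48 kHz
        std::vector<opus_int16> pcm(frame_size * 2, 0);      // one stereo frame of silence
        unsigned char packet[4000];

        opus_int32 bytes = opus_encode(enc, pcm.data(), frame_size, packet, sizeof(packet));

        opus_encoder_destroy(enc);
        return bytes > 0 ? 0 : 1;
    }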

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, fewer is better): 1: 10.75, 2: 10.73, 3: 10.75 [(CXX) g++ options: -fvisibility=hidden -logg -lm]

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 10 (Frames Per Second, more is better): 1: 2.751, 2: 2.753, 3: 2.756

VKMark

VKMark is a collection of Vulkan tests/benchmarks. Learn more via the OpenBenchmarking.org test page.

VKMark 2020-05-21 - Resolution: 1920 x 1080 (VKMark Score, more is better): 1: 611, 2: 611, 3: 612 [(CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF]

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 6 (Frames Per Second, more is better): 1: 1.235, 2: 1.235, 3: 1.237

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, fewer is better): 1: 7.27, 2: 7.28, 3: 7.28 [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, fewer is better): 1: 130.87, 2: 131.04, 3: 130.99 [(CC) gcc options: -O3 -pthread -lhmmer -leasel -lm]

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, fewer is better): 1: 18.14, 2: 18.16, 3: 18.13 [(CXX) g++ options: -rdynamic]

OpenVINO

This is a test of the Intel OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
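As a hedged sketch of the Inference Engine C++ API that OpenVINO 2021.1 exposes for runs like these, the snippet below loads a model onto the CPU device and issues one inference; the model path is a placeholder, and the real benchmarking harness adds input filling, timing loops, and async requests.

    // Minimal OpenVINO Inference Engine sketch (placeholder model path).
    #include <inference_engine.hpp>

    int main()
    {
        InferenceEngine::Core core;

        // Placeholder IR path; the benchmark uses the person-detection-0106 model
        InferenceEngine::CNNNetwork network = core.ReadNetwork("person-detection-0106.xml");

        // Compile the network for the CPU plugin (the "Device: CPU" results above)
        InferenceEngine::ExecutableNetwork exec = core.LoadNetwork(network, "CPU");

        // One synchronous inference; the benchmark loops this and reports FPS/latency
        InferenceEngine::InferRequest request = exec.CreateInferRequest();
        request.Infer();
        return 0;
    }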

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (ms, fewer is better): 1: 6201.83, 2: 6199.55, 3: 6195.67

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 0 (Seconds, fewer is better): 1: 12.59, 2: 12.58, 3: 12.59 [(CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread]

Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds, fewer is better): 1: 185.45, 2: 185.53, 3: 185.42 [(CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread]

Basis Universal 1.12 - Settings: UASTC Level 2 (Seconds, fewer is better): 1: 90.84, 2: 90.86, 3: 90.81 [(CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread]

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, more is better): 1: 290.24, 2: 290.37, 3: 290.30

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better): 1: 604363, 2: 604394, 3: 604231

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, more is better): 1: 8151.5, 2: 8153.3, 3: 8152.6 [(CC) gcc options: -O3]

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, fewer is better): 1: 164.26, 2: 164.26, 3: 164.27

OpenVINO

This is a test of the Intel OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (FPS, more is better): 1: 0.64, 2: 0.64, 3: 0.64

OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (FPS, more is better): 1: 1.05, 2: 1.05, 3: 1.05

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better): 1: 7.29, 2: 7.29, 3: 7.29 [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

NCNN 20201218 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): 1: 8.01, 2: 8.01, 3: 8.01 [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
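To make concrete what the throughput tests parse, here is a hedged sketch using the simdjson 0.7 DOM API with exceptions enabled; "twitter.json" and the statuses/user/id fields mirror the DistinctUserID-style workload but are assumptions, as the benchmark supplies its own input documents.

    // Minimal simdjson DOM parse sketch (placeholder input file and fields).
    #include "simdjson.h"
    #include <cstdint>
    #include <iostream>

    int main()
    {
        simdjson::dom::parser parser;

        // Load and parse the document; throws simdjson::simdjson_error on failure
        simdjson::dom::element doc = parser.load("twitter.json");

        // Walk the tweets and pull out each user ID, similar to the DistinctUserID test
        simdjson::dom::array statuses = doc["statuses"];
        for (simdjson::dom::element tweet : statuses) {
            uint64_t id = tweet["user"]["id"];
            std::cout << id << "\n";
        }
        return 0;
    }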

simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, more is better): 1: 0.54, 2: 0.54, 3: 0.54 [(CXX) g++ options: -O3 -pthread]

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, more is better): 1: 0.35, 2: 0.35, 3: 0.35 [(CXX) g++ options: -O3 -pthread]

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, more is better): 1: 0.57, 2: 0.57, 3: 0.57 [(CXX) g++ options: -O3 -pthread]

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
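The toy example below shows the construct whose overhead CLOMP characterizes: the same loop run serially and under an OpenMP statically scheduled parallel for, with the ratio of the two times as a crude speedup figure. It is only an illustration of the scheduling clause, not the CLOMP kernel itself, and the array size is an arbitrary assumption.

    // Illustration of OpenMP static scheduling; build with -fopenmp.
    #include <omp.h>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const int n = 1 << 22;
        std::vector<double> a(n, 1.0), b(n, 2.0);

        double t0 = omp_get_wtime();
        for (int i = 0; i < n; ++i) a[i] += 0.5 * b[i];          // serial baseline
        double serial = omp_get_wtime() - t0;

        t0 = omp_get_wtime();
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < n; ++i) a[i] += 0.5 * b[i];          // statically scheduled version
        double parallel = omp_get_wtime() - t0;

        std::printf("static OMP speedup ~ %.2f\n", serial / parallel);
        return 0;
    }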

CLOMP 1.2 - Static OMP Speedup (Speedup, more is better): 1: 1.6, 2: 1.6, 3: 1.6 [(CC) gcc options: -fopenmp -O3 -lm]

VKMark

VKMark is a collection of Vulkan tests/benchmarks. Learn more via the OpenBenchmarking.org test page.

VKMark 2020-05-21 - Resolution: 1280 x 1024 (VKMark Score, more is better): 1: 897, 2: 897, 3: 897 [(CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF]

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor supporting various GPU compression techniques. Betsy is written in GLSL for Vulkan/OpenGL (compute shader) support for GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

Betsy GPU Compressor 1.1 Beta - Codec: ETC2 RGB - Quality: Highest (Seconds, fewer is better): 1: 7.653 (no results shown for 2 and 3) [(CXX) g++ options: -O3 -O2 -lpthread -ldl]

Betsy GPU Compressor 1.1 Beta - Codec: ETC1 - Quality: Highest (Seconds, fewer is better): 1: 10.81, 2: 10.69, 3: 11.07 [(CXX) g++ options: -O3 -O2 -lpthread -ldl]

108 Results Shown

Redis
VkResample
oneDNN
Redis
asmFish
oneDNN
simdjson
OpenVINO
Redis
OpenVINO
yquake2
OpenVINO
eSpeak-NG Speech Engine
Redis
GROMACS
OpenVINO
Coremark
SQLite Speedtest
LAMMPS Molecular Dynamics Simulator
OpenVINO:
  Age Gender Recognition Retail 0013 FP32 - CPU
  Face Detection 0106 FP32 - CPU
AI Benchmark Alpha
NCNN
Redis
NCNN
LZ4 Compression
oneDNN
NCNN
oneDNN
Stockfish
Timed MAFFT Alignment
NCNN
OpenVINO:
  Age Gender Recognition Retail 0013 FP32 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
Node.js V8 Web Tooling Benchmark
oneDNN
AI Benchmark Alpha
oneDNN
yquake2
NCNN:
  CPU - yolov4-tiny
  Vulkan GPU - squeezenet_ssd
Build2
Monkey Audio Encoding
Basis Universal
IndigoBench
NCNN:
  Vulkan GPU - resnet50
  CPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU - googlenet
yquake2
NCNN:
  CPU - resnet50
  Vulkan GPU - shufflenet-v2
  Vulkan GPU - blazeface
  CPU - blazeface
  Vulkan GPU - vgg16
oneDNN
NCNN
oneDNN
Timed Eigen Compilation
OpenVINO
NCNN
LZ4 Compression
AI Benchmark Alpha
rav1e
NCNN:
  CPU - vgg16
  Vulkan GPU - regnety_400m
  CPU - shufflenet-v2
LZ4 Compression
NCNN:
  Vulkan GPU - mobilenet
  Vulkan GPU - alexnet
IndigoBench
NCNN:
  CPU - squeezenet_ssd
  Vulkan GPU - efficientnet-b0
Crafty
NCNN:
  CPU-v2-v2 - mobilenet-v2
  CPU - regnety_400m
  CPU - alexnet
VkFFT
LZ4 Compression
rav1e
Basis Universal
LZ4 Compression
Opus Codec Encoding
rav1e
VKMark
rav1e
NCNN
Timed HMMer Search
WavPack Audio Encoding
OpenVINO
Basis Universal:
  UASTC Level 0
  UASTC Level 3
  UASTC Level 2
Numpy Benchmark
PHPBench
LZ4 Compression
Timed FFmpeg Compilation
OpenVINO:
  Person Detection 0106 FP16 - CPU
  Face Detection 0106 FP32 - CPU
NCNN:
  Vulkan GPU - mnasnet
  Vulkan GPU-v2-v2 - mobilenet-v2
simdjson:
  DistinctUserID
  LargeRandom
  Kostya
CLOMP
VKMark
Betsy GPU Compressor:
  ETC2 RGB - Highest
  ETC1 - Highest