Broadwell 2021

Intel Core i7-5600U testing with a LENOVO 20BSCTO1WW (N14ET49W 1.27 BIOS) and Intel HD 5500 3GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101027-HA-BROADWELL87
Test categories represented in this result file:

Audio Encoding: 4 Tests
Bioinformatics: 2 Tests
Timed Code Compilation: 3 Tests
C/C++ Compiler Tests: 5 Tests
CPU Massive: 7 Tests
Creator Workloads: 7 Tests
Encoding: 4 Tests
HPC - High Performance Computing: 4 Tests
Machine Learning: 2 Tests
Multi-Core: 5 Tests
NVIDIA GPU Compute: 5 Tests
Programmer / Developer System Benchmarks: 7 Tests
Scientific Computing: 2 Tests
Server: 4 Tests
Server CPU Tests: 2 Tests
Single-Threaded: 2 Tests
Vulkan Compute: 5 Tests

Test Runs

Result Identifier - Date - Test Duration
R1 - January 01 2021 - 10 Hours, 52 Minutes
2 - January 01 2021 - 10 Hours, 17 Minutes
3 - January 02 2021 - 10 Hours, 38 Minutes
Average test duration across the three runs: 10 Hours, 36 Minutes


System Configuration (common to runs R1, 2, and 3)

Processor: Intel Core i7-5600U @ 3.20GHz (2 Cores / 4 Threads)
Motherboard: LENOVO 20BSCTO1WW (N14ET49W 1.27 BIOS)
Chipset: Intel Broadwell-U-OPI
Memory: 8GB
Disk: 128GB SAMSUNG MZNTE128
Graphics: Intel HD 5500 3GB (950MHz)
Audio: Intel Broadwell-U Audio
Network: Intel I218-LM + Intel 7265
OS: Ubuntu 20.10
Kernel: 5.9.1-050901-generic (x86_64)
Desktop: GNOME Shell 3.38.1
Display Server: X Server 1.20.9
Display Driver: modesetting 1.20.9
OpenGL: 4.6 Mesa 21.0.0-devel (git-bd69765 2021-01-01 groovy-oibaf-ppa)
OpenCL: OpenCL 3.0
Vulkan: 1.2.145
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_cpufreq ondemand - CPU Microcode: 0x2f - Thermald 2.3

Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

Result Overview (Phoronix Test Suite, runs R1, 2, and 3; relative performance scale 100% to 106%). Benchmarks covered: CLOMP, Timed MAFFT Alignment, VkResample, SQLite Speedtest, Coremark, Node.js V8 Web Tooling Benchmark, Build2, NCNN, oneDNN, Warsow, Timed Eigen Compilation, Timed FFmpeg Compilation, BRL-CAD, VKMark, Betsy GPU Compressor, Unpacking Firefox, VkFFT, PHPBench, Cryptsetup, Opus Codec Encoding, Monkey Audio Encoding, Libplacebo, Timed HMMer Search, Ogg Audio Encoding, simdjson, WavPack Audio Encoding.

Combined results table for runs R1, 2, and 3 covering every test metric in this file; the same figures are broken out per test in the sections that follow.

VkFFT

VkFFT is a Fast Fourier Transform (FFT) library that is GPU accelerated by means of the Vulkan API. The VkFFT benchmark runs FFTs across many different transform sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBenchmark Score, More Is BetterVkFFT 1.1.123R12004006008001000SE +/- 2.00, N = 3SE +/- 1.45, N = 3SE +/- 2.52, N = 31122112411261. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgBenchmark Score, More Is BetterVkFFT 1.1.123R12004006008001000Min: 1118 / Avg: 1122 / Max: 1124Min: 1122 / Avg: 1124.33 / Max: 1127Min: 1123 / Avg: 1126 / Max: 11311. (CXX) g++ options: -O3 -pthread

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like functionality. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.13Time To CompileR1232004006008001000SE +/- 0.32, N = 3SE +/- 0.55, N = 3SE +/- 0.82, N = 3919.50918.46906.25
OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.13Time To CompileR123160320480640800Min: 919.14 / Avg: 919.5 / Max: 920.15Min: 917.48 / Avg: 918.46 / Max: 919.39Min: 904.95 / Avg: 906.25 / Max: 907.77

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To CompileR12370140210280350SE +/- 0.25, N = 3SE +/- 0.15, N = 3SE +/- 0.31, N = 3341.93340.64340.20
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To CompileR12360120180240300Min: 341.53 / Avg: 341.93 / Max: 342.38Min: 340.42 / Avg: 340.64 / Max: 340.94Min: 339.61 / Avg: 340.2 / Max: 340.65

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
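
As context for the speed-up figure below, here is a minimal sketch of the pattern CLOMP probes, assuming nothing beyond stock OpenMP: a statically scheduled parallel loop, built the same way as the test (g++ -O3 -fopenmp). This is an illustration, not the CLOMP source; the benchmark's reported value is the measured speed-up of such a loop over its serial execution.

// clomp_sketch.cpp - toy static-schedule OpenMP loop, not the CLOMP benchmark itself
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 22;
    std::vector<double> a(n, 1.0), b(n, 2.0);

    double t0 = omp_get_wtime();
    // schedule(static): iterations are split into equal contiguous chunks,
    // one per thread, which is the scheduling mode this test profile measures.
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < n; ++i)
        a[i] += 0.5 * b[i];
    double t1 = omp_get_wtime();

    std::printf("%d threads: %.3f ms\n", omp_get_max_threads(), (t1 - t0) * 1e3);
    return 0;
}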

OpenBenchmarking.orgSpeedup, More Is BetterCLOMP 1.2Static OMP Speedup2R130.29250.5850.87751.171.4625SE +/- 0.00, N = 3SE +/- 0.01, N = 12SE +/- 0.00, N = 31.21.31.31. (CC) gcc options: -fopenmp -O3 -lm
OpenBenchmarking.orgSpeedup, More Is BetterCLOMP 1.2Static OMP Speedup2R13246810Min: 1.2 / Avg: 1.2 / Max: 1.2Min: 1.2 / Avg: 1.26 / Max: 1.3Min: 1.3 / Avg: 1.3 / Max: 1.31. (CC) gcc options: -fopenmp -O3 -lm

BRL-CAD

BRL-CAD 7.30.8 is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgVGR Performance Metric, More Is BetterBRL-CAD 7.30.8VGR Performance Metric32R13K6K9K12K15K1450414562145801. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
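
The CPU and Vulkan GPU numbers below come from inference passes whose flow looks roughly like the following sketch of ncnn's C++ API; the file names and blob names here are placeholders rather than the test profile's actual models.

// ncnn_sketch.cpp - rough shape of an ncnn inference pass (placeholder model/blob names)
#include "net.h"   // ncnn

int main() {
    ncnn::Net net;
    net.opt.use_vulkan_compute = false;   // false for the CPU targets, true for the Vulkan GPU targets
    net.load_param("model.param");        // placeholder files, not the benchmark's models
    net.load_model("model.bin");

    ncnn::Mat in(224, 224, 3);            // dummy input blob
    in.fill(0.5f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);                 // blob names depend on the model
    ncnn::Mat out;
    ex.extract("output", out);
    return 0;
}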

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: regnety_400mR123714212835SE +/- 0.04, N = 3SE +/- 0.06, N = 3SE +/- 0.01, N = 327.8227.7227.64MIN: 27.57 / MAX: 37.34MIN: 27.48 / MAX: 31MIN: 27.49 / MAX: 29.981. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: regnety_400mR123612182430Min: 27.77 / Avg: 27.82 / Max: 27.89Min: 27.6 / Avg: 27.72 / Max: 27.81Min: 27.63 / Avg: 27.64 / Max: 27.651. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: squeezenet_ssdR1321632486480SE +/- 0.12, N = 3SE +/- 0.06, N = 3SE +/- 0.04, N = 371.3371.2671.03MIN: 70.87 / MAX: 74.75MIN: 70.52 / MAX: 78.34MIN: 70.28 / MAX: 76.921. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: squeezenet_ssdR1321428425670Min: 71.09 / Avg: 71.33 / Max: 71.45Min: 71.18 / Avg: 71.26 / Max: 71.38Min: 70.95 / Avg: 71.03 / Max: 71.081. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: yolov4-tinyR12320406080100SE +/- 0.27, N = 3SE +/- 0.14, N = 3SE +/- 0.14, N = 379.4479.0478.88MIN: 78.27 / MAX: 87.18MIN: 78.34 / MAX: 85.24MIN: 78.2 / MAX: 90.691. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: yolov4-tinyR1231530456075Min: 79 / Avg: 79.44 / Max: 79.94Min: 78.89 / Avg: 79.04 / Max: 79.32Min: 78.61 / Avg: 78.88 / Max: 79.051. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet502R1320406080100SE +/- 0.12, N = 3SE +/- 0.12, N = 3SE +/- 0.06, N = 3101.17101.11101.01MIN: 100.65 / MAX: 111.1MIN: 100.49 / MAX: 113.37MIN: 100.5 / MAX: 114.211. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet502R1320406080100Min: 101.03 / Avg: 101.17 / Max: 101.41Min: 100.92 / Avg: 101.11 / Max: 101.34Min: 100.91 / Avg: 101.01 / Max: 101.131. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnet2R13918273645SE +/- 0.15, N = 3SE +/- 0.18, N = 3SE +/- 0.09, N = 337.0236.8436.81MIN: 35.32 / MAX: 103.85MIN: 35.27 / MAX: 39.39MIN: 34.96 / MAX: 79.91. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnet2R13816243240Min: 36.82 / Avg: 37.02 / Max: 37.32Min: 36.52 / Avg: 36.84 / Max: 37.15Min: 36.64 / Avg: 36.81 / Max: 36.961. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet1823R11020304050SE +/- 0.13, N = 3SE +/- 0.03, N = 3SE +/- 0.09, N = 345.6545.5145.51MIN: 45.26 / MAX: 56.06MIN: 45.23 / MAX: 48.14MIN: 45.16 / MAX: 47.351. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet1823R1918273645Min: 45.51 / Avg: 45.65 / Max: 45.91Min: 45.48 / Avg: 45.51 / Max: 45.57Min: 45.42 / Avg: 45.51 / Max: 45.681. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16R132306090120150SE +/- 0.18, N = 3SE +/- 0.19, N = 3SE +/- 0.24, N = 3153.18152.91152.88MIN: 152.21 / MAX: 161.01MIN: 151.95 / MAX: 164.13MIN: 151.28 / MAX: 160.121. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16R132306090120150Min: 152.89 / Avg: 153.18 / Max: 153.5Min: 152.55 / Avg: 152.91 / Max: 153.18Min: 152.4 / Avg: 152.88 / Max: 153.21. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetR1321122334455SE +/- 0.09, N = 3SE +/- 0.12, N = 3SE +/- 0.03, N = 347.1847.1647.01MIN: 46.85 / MAX: 56.58MIN: 46.7 / MAX: 59.52MIN: 46.74 / MAX: 49.941. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetR1321020304050Min: 47.01 / Avg: 47.18 / Max: 47.29Min: 46.92 / Avg: 47.16 / Max: 47.29Min: 46.97 / Avg: 47.01 / Max: 47.071. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: blazefaceR1321.2692.5383.8075.0766.345SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 35.645.635.62MIN: 5.57 / MAX: 5.86MIN: 5.5 / MAX: 5.84MIN: 5.54 / MAX: 6.011. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: blazefaceR132246810Min: 5.63 / Avg: 5.64 / Max: 5.65Min: 5.62 / Avg: 5.63 / Max: 5.64Min: 5.61 / Avg: 5.62 / Max: 5.631. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: efficientnet-b0R132510152025SE +/- 0.41, N = 3SE +/- 0.39, N = 3SE +/- 0.35, N = 322.5022.3022.11MIN: 20.99 / MAX: 25.9MIN: 21.02 / MAX: 25.56MIN: 20.59 / MAX: 33.721. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: efficientnet-b0R132510152025Min: 21.69 / Avg: 22.5 / Max: 22.99Min: 21.53 / Avg: 22.3 / Max: 22.77Min: 21.4 / Avg: 22.11 / Max: 22.511. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mnasnetR13248121620SE +/- 0.37, N = 3SE +/- 0.40, N = 3SE +/- 0.41, N = 314.2614.1614.05MIN: 13.24 / MAX: 17.01MIN: 13.06 / MAX: 28.01MIN: 13.01 / MAX: 18.861. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mnasnetR13248121620Min: 13.51 / Avg: 14.26 / Max: 14.64Min: 13.35 / Avg: 14.16 / Max: 14.6Min: 13.23 / Avg: 14.05 / Max: 14.531. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: shufflenet-v2R123510152025SE +/- 0.57, N = 3SE +/- 0.63, N = 3SE +/- 0.57, N = 321.4021.3421.30MIN: 20.12 / MAX: 23.22MIN: 20 / MAX: 24.44MIN: 20.01 / MAX: 23.241. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: shufflenet-v2R123510152025Min: 20.25 / Avg: 21.4 / Max: 21.97Min: 20.09 / Avg: 21.34 / Max: 21.99Min: 20.18 / Avg: 21.3 / Max: 22.081. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v3-v3 - Model: mobilenet-v33R123691215SE +/- 0.19, N = 3SE +/- 0.19, N = 3SE +/- 0.13, N = 312.6512.6312.56MIN: 12.18 / MAX: 14.68MIN: 12.19 / MAX: 14.68MIN: 12.21 / MAX: 15.081. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v3-v3 - Model: mobilenet-v33R1248121620Min: 12.27 / Avg: 12.65 / Max: 12.87Min: 12.26 / Avg: 12.63 / Max: 12.84Min: 12.29 / Avg: 12.56 / Max: 12.691. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v2-v2 - Model: mobilenet-v2R13248121620SE +/- 0.19, N = 3SE +/- 0.18, N = 3SE +/- 0.13, N = 314.5514.5314.44MIN: 14.04 / MAX: 28.86MIN: 13.97 / MAX: 26.42MIN: 14.04 / MAX: 16.11. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v2-v2 - Model: mobilenet-v2R13248121620Min: 14.17 / Avg: 14.55 / Max: 14.79Min: 14.18 / Avg: 14.53 / Max: 14.72Min: 14.17 / Avg: 14.44 / Max: 14.61. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mobilenet3R121428425670SE +/- 0.16, N = 3SE +/- 0.08, N = 3SE +/- 0.06, N = 361.6161.4961.38MIN: 61.11 / MAX: 113.17MIN: 61.05 / MAX: 63.53MIN: 61.05 / MAX: 87.541. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mobilenet3R121224364860Min: 61.36 / Avg: 61.61 / Max: 61.9Min: 61.37 / Avg: 61.49 / Max: 61.65Min: 61.29 / Avg: 61.38 / Max: 61.491. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: regnety_400mR132714212835SE +/- 0.07, N = 3SE +/- 0.53, N = 3SE +/- 0.60, N = 327.8127.2527.13MIN: 26.38 / MAX: 30.33MIN: 26 / MAX: 29.97MIN: 25.79 / MAX: 37.931. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: regnety_400mR132612182430Min: 27.72 / Avg: 27.81 / Max: 27.94Min: 26.19 / Avg: 27.25 / Max: 27.82Min: 25.93 / Avg: 27.13 / Max: 27.751. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: squeezenet_ssdR1321632486480SE +/- 0.10, N = 3SE +/- 0.09, N = 3SE +/- 0.10, N = 371.2371.1870.97MIN: 70.8 / MAX: 121.35MIN: 70.63 / MAX: 78.41MIN: 70.58 / MAX: 80.981. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: squeezenet_ssdR1321428425670Min: 71.11 / Avg: 71.23 / Max: 71.42Min: 71.02 / Avg: 71.18 / Max: 71.34Min: 70.79 / Avg: 70.97 / Max: 71.151. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: yolov4-tinyR13220406080100SE +/- 0.03, N = 3SE +/- 0.10, N = 3SE +/- 0.06, N = 379.3579.1278.90MIN: 78.46 / MAX: 91.83MIN: 78.31 / MAX: 86.73MIN: 78.21 / MAX: 91.761. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: yolov4-tinyR1321530456075Min: 79.3 / Avg: 79.35 / Max: 79.41Min: 78.95 / Avg: 79.12 / Max: 79.3Min: 78.79 / Avg: 78.9 / Max: 79.011. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: resnet50R13220406080100SE +/- 0.16, N = 3SE +/- 0.28, N = 3SE +/- 0.21, N = 3101.55101.23101.22MIN: 100.72 / MAX: 112.98MIN: 100.55 / MAX: 114.34MIN: 100.01 / MAX: 110.271. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: resnet50R13220406080100Min: 101.36 / Avg: 101.55 / Max: 101.86Min: 100.83 / Avg: 101.23 / Max: 101.77Min: 100.95 / Avg: 101.22 / Max: 101.621. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: alexnetR132918273645SE +/- 0.27, N = 3SE +/- 0.12, N = 3SE +/- 0.23, N = 337.0836.7636.70MIN: 34.94 / MAX: 39.37MIN: 35.25 / MAX: 46.5MIN: 35.28 / MAX: 38.631. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: alexnetR132816243240Min: 36.54 / Avg: 37.08 / Max: 37.41Min: 36.53 / Avg: 36.76 / Max: 36.91Min: 36.43 / Avg: 36.7 / Max: 37.151. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: resnet183R121020304050SE +/- 0.29, N = 3SE +/- 0.12, N = 3SE +/- 0.05, N = 345.7345.6445.41MIN: 45.08 / MAX: 48.32MIN: 45.25 / MAX: 47.78MIN: 45.11 / MAX: 48.071. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: resnet183R12918273645Min: 45.36 / Avg: 45.73 / Max: 46.31Min: 45.51 / Avg: 45.64 / Max: 45.87Min: 45.35 / Avg: 45.41 / Max: 45.51. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: vgg16R123306090120150SE +/- 0.07, N = 3SE +/- 0.16, N = 3SE +/- 0.31, N = 3153.00152.98152.93MIN: 151.95 / MAX: 164.37MIN: 152.01 / MAX: 164.13MIN: 151.38 / MAX: 166.091. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: vgg16R123306090120150Min: 152.88 / Avg: 153 / Max: 153.11Min: 152.77 / Avg: 152.98 / Max: 153.3Min: 152.32 / Avg: 152.93 / Max: 153.331. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: googlenetR1321122334455SE +/- 0.16, N = 3SE +/- 0.77, N = 3SE +/- 0.81, N = 347.3346.7046.26MIN: 46.92 / MAX: 49.74MIN: 44.45 / MAX: 57.93MIN: 43.6 / MAX: 59.761. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: googlenetR1321020304050Min: 47.07 / Avg: 47.33 / Max: 47.61Min: 45.18 / Avg: 46.7 / Max: 47.65Min: 44.63 / Avg: 46.26 / Max: 47.11. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: blazefaceR1231.27132.54263.81395.08526.3565SE +/- 0.01, N = 3SE +/- 0.15, N = 3SE +/- 0.16, N = 35.655.485.47MIN: 5.56 / MAX: 7.88MIN: 5.14 / MAX: 5.72MIN: 5.12 / MAX: 5.791. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: blazefaceR123246810Min: 5.63 / Avg: 5.65 / Max: 5.66Min: 5.17 / Avg: 5.48 / Max: 5.64Min: 5.15 / Avg: 5.47 / Max: 5.641. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: efficientnet-b0R132510152025SE +/- 0.30, N = 3SE +/- 0.59, N = 3SE +/- 0.66, N = 322.4622.0121.97MIN: 21.52 / MAX: 24.07MIN: 20.61 / MAX: 33.2MIN: 20.55 / MAX: 27.241. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: efficientnet-b0R132510152025Min: 21.87 / Avg: 22.46 / Max: 22.87Min: 20.83 / Avg: 22.01 / Max: 22.7Min: 20.64 / Avg: 21.97 / Max: 22.71. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: mnasnetR12348121620SE +/- 0.26, N = 3SE +/- 0.47, N = 3SE +/- 0.45, N = 314.4514.1313.98MIN: 13.46 / MAX: 16.94MIN: 13.1 / MAX: 14.93MIN: 13.03 / MAX: 14.741. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: mnasnetR12348121620Min: 13.93 / Avg: 14.45 / Max: 14.76Min: 13.19 / Avg: 14.13 / Max: 14.67Min: 13.09 / Avg: 13.98 / Max: 14.521. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: shufflenet-v2R123510152025SE +/- 0.37, N = 3SE +/- 0.58, N = 3SE +/- 0.57, N = 321.5321.3621.21MIN: 20.55 / MAX: 22.52MIN: 19.98 / MAX: 35.94MIN: 19.96 / MAX: 25.671. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: shufflenet-v2R123510152025Min: 20.79 / Avg: 21.53 / Max: 21.93Min: 20.2 / Avg: 21.36 / Max: 21.97Min: 20.11 / Avg: 21.21 / Max: 22.041. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3R1323691215SE +/- 0.07, N = 3SE +/- 0.14, N = 3SE +/- 0.14, N = 312.6912.6112.57MIN: 12.21 / MAX: 24.64MIN: 12.23 / MAX: 13.54MIN: 12.24 / MAX: 14.081. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3R13248121620Min: 12.57 / Avg: 12.69 / Max: 12.8Min: 12.33 / Avg: 12.61 / Max: 12.79Min: 12.3 / Avg: 12.57 / Max: 12.741. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2R13248121620SE +/- 0.14, N = 3SE +/- 0.20, N = 3SE +/- 0.15, N = 314.5614.5414.47MIN: 14.07 / MAX: 17.44MIN: 14.01 / MAX: 17.8MIN: 14.06 / MAX: 28.341. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2R13248121620Min: 14.3 / Avg: 14.56 / Max: 14.77Min: 14.15 / Avg: 14.54 / Max: 14.75Min: 14.18 / Avg: 14.47 / Max: 14.621. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: mobilenetR1321428425670SE +/- 0.06, N = 3SE +/- 0.02, N = 3SE +/- 0.05, N = 361.5561.4761.41MIN: 61.05 / MAX: 64.2MIN: 61.15 / MAX: 64.55MIN: 61.02 / MAX: 63.861. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: Vulkan GPU - Model: mobilenetR1321224364860Min: 61.43 / Avg: 61.55 / Max: 61.65Min: 61.44 / Avg: 61.47 / Max: 61.5Min: 61.31 / Avg: 61.41 / Max: 61.481. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
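
As a rough sketch of the oneDNN 2.x C++ API flow behind these harnesses (engine, memory, primitive descriptor, execute), here is a minimal ReLU forward pass. It is an illustration under assumed shapes, not what benchdnn itself runs.

// onednn_sketch.cpp - minimal oneDNN 2.x eltwise (ReLU) forward pass, illustrative only
#include "dnnl.hpp"
using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);
    stream s(eng);

    memory::desc md({1, 64, 56, 56}, memory::data_type::f32, memory::format_tag::nchw);
    memory src(md, eng), dst(md, eng);

    // oneDNN 2.x style: op descriptor -> primitive descriptor -> primitive
    eltwise_forward::desc d(prop_kind::forward_inference, algorithm::eltwise_relu, md, 0.f);
    eltwise_forward::primitive_desc pd(d, eng);
    eltwise_forward(pd).execute(s, {{DNNL_ARG_SRC, src}, {DNNL_ARG_DST, dst}});
    s.wait();
    return 0;
}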

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU2R134K8K12K16K20KSE +/- 36.91, N = 3SE +/- 16.92, N = 3SE +/- 5.57, N = 320912.720853.020847.7MIN: 20792.3MIN: 20801.4MIN: 20792.31. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU2R134K8K12K16K20KMin: 20843.4 / Avg: 20912.7 / Max: 20969.4Min: 20836 / Avg: 20852.97 / Max: 20886.8Min: 20837 / Avg: 20847.73 / Max: 20855.71. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUR1324K8K12K16K20KSE +/- 20.57, N = 3SE +/- 3.46, N = 3SE +/- 9.04, N = 320903.320858.020853.4MIN: 20835.5MIN: 20799.8MIN: 20778.61. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUR1324K8K12K16K20KMin: 20863.2 / Avg: 20903.3 / Max: 20931.3Min: 20852.4 / Avg: 20857.97 / Max: 20864.3Min: 20842.7 / Avg: 20853.43 / Max: 20871.41. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU2R134K8K12K16K20KSE +/- 51.92, N = 3SE +/- 29.30, N = 3SE +/- 37.82, N = 320852.020830.520812.2MIN: 20691.5MIN: 20740.9MIN: 20694.51. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU2R134K8K12K16K20KMin: 20748.4 / Avg: 20852.03 / Max: 20909.4Min: 20772.5 / Avg: 20830.47 / Max: 20866.9Min: 20738.8 / Avg: 20812.17 / Max: 20864.81. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterWarsow 2.5 BetaResolution: 1280 x 102432R11632486480SE +/- 0.63, N = 13SE +/- 0.44, N = 3SE +/- 0.09, N = 369.570.570.7
OpenBenchmarking.orgFrames Per Second, More Is BetterWarsow 2.5 BetaResolution: 1280 x 102432R11428425670Min: 62.1 / Avg: 69.5 / Max: 70.7Min: 70 / Avg: 70.53 / Max: 71.4Min: 70.5 / Avg: 70.67 / Max: 70.8

VKMark

VKMark is a collection of Vulkan tests/benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgVKMark Score, More Is BetterVKMark 2020-05-21Resolution: 1920 x 108023R1100200300400500SE +/- 0.67, N = 3SE +/- 2.33, N = 3SE +/- 1.76, N = 34614634651. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF
OpenBenchmarking.orgVKMark Score, More Is BetterVKMark 2020-05-21Resolution: 1920 x 108023R180160240320400Min: 460 / Avg: 461.33 / Max: 462Min: 459 / Avg: 462.67 / Max: 467Min: 462 / Avg: 464.67 / Max: 4681. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF

OpenBenchmarking.orgVKMark Score, More Is BetterVKMark 2020-05-21Resolution: 1280 x 102423R1150300450600750SE +/- 2.19, N = 3SE +/- 1.53, N = 3SE +/- 0.67, N = 36856896931. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF
OpenBenchmarking.orgVKMark Score, More Is BetterVKMark 2020-05-21Resolution: 1280 x 102423R1120240360480600Min: 681 / Avg: 685.33 / Max: 688Min: 686 / Avg: 689 / Max: 691Min: 692 / Avg: 693.33 / Max: 6941. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF

OpenBenchmarking.orgVKMark Score, More Is BetterVKMark 2020-05-21Resolution: 800 x 6003R12400800120016002000SE +/- 10.20, N = 3SE +/- 7.54, N = 3SE +/- 0.88, N = 31670167616911. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF
OpenBenchmarking.orgVKMark Score, More Is BetterVKMark 2020-05-21Resolution: 800 x 6003R1230060090012001500Min: 1658 / Avg: 1669.67 / Max: 1690Min: 1664 / Avg: 1676.33 / Max: 1690Min: 1690 / Avg: 1691.33 / Max: 16931. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF

OpenBenchmarking.orgVKMark Score, More Is BetterVKMark 2020-05-21Resolution: 1024 x 7683R122004006008001000SE +/- 4.18, N = 3SE +/- 6.69, N = 3SE +/- 6.11, N = 31092109610971. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF
OpenBenchmarking.orgVKMark Score, More Is BetterVKMark 2020-05-21Resolution: 1024 x 7683R122004006008001000Min: 1087 / Avg: 1091.67 / Max: 1100Min: 1088 / Avg: 1095.67 / Max: 1109Min: 1089 / Avg: 1097 / Max: 11091. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed HMMer Search 3.3.1Pfam Database Search32R1306090120150SE +/- 0.13, N = 3SE +/- 0.16, N = 3SE +/- 0.14, N = 3129.22129.09129.011. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed HMMer Search 3.3.1Pfam Database Search32R120406080100Min: 129.01 / Avg: 129.22 / Max: 129.45Min: 128.9 / Avg: 129.09 / Max: 129.42Min: 128.74 / Avg: 129.01 / Max: 129.181. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU2R132K4K6K8K10KSE +/- 15.69, N = 3SE +/- 48.62, N = 3SE +/- 44.20, N = 311332.211261.411243.5MIN: 11227.9MIN: 11146.6MIN: 11158.21. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU2R132K4K6K8K10KMin: 11300.8 / Avg: 11332.17 / Max: 11348.8Min: 11168.4 / Avg: 11261.4 / Max: 11332.5Min: 11195.2 / Avg: 11243.53 / Max: 11331.81. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU2R132K4K6K8K10KSE +/- 44.81, N = 3SE +/- 65.97, N = 3SE +/- 13.64, N = 311320.111309.111214.9MIN: 11234.3MIN: 11154.3MIN: 11171.51. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU2R132K4K6K8K10KMin: 11258.3 / Avg: 11320.1 / Max: 11407.2Min: 11184.2 / Avg: 11309.07 / Max: 11408.4Min: 11198.4 / Avg: 11214.93 / Max: 112421. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU2R132K4K6K8K10KSE +/- 15.94, N = 3SE +/- 25.05, N = 3SE +/- 18.25, N = 311316.911297.411200.4MIN: 11169.5MIN: 11225.5MIN: 11129.91. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU2R132K4K6K8K10KMin: 11293 / Avg: 11316.87 / Max: 11347.1Min: 11255.8 / Avg: 11297.43 / Max: 11342.4Min: 11166.5 / Avg: 11200.37 / Max: 11229.11. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Eigen Compilation 3.3.9Time To Compile2R13306090120150SE +/- 0.07, N = 3SE +/- 0.07, N = 3SE +/- 0.12, N = 3116.86116.41116.04
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Eigen Compilation 3.3.9Time To Compile2R1320406080100Min: 116.74 / Avg: 116.86 / Max: 116.98Min: 116.3 / Avg: 116.41 / Max: 116.54Min: 115.84 / Avg: 116.04 / Max: 116.26

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile measures the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgruns/s, More Is BetterNode.js V8 Web Tooling BenchmarkR123246810SE +/- 0.02, N = 3SE +/- 0.07, N = 3SE +/- 0.02, N = 37.247.307.311. Nodejs v12.18.2
OpenBenchmarking.orgruns/s, More Is BetterNode.js V8 Web Tooling BenchmarkR1233691215Min: 7.2 / Avg: 7.24 / Max: 7.27Min: 7.2 / Avg: 7.3 / Max: 7.44Min: 7.27 / Avg: 7.31 / Max: 7.331. Nodejs v12.18.2

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
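
The flavor of what speedtest1 measures can be seen in a small timing loop against the public SQLite C API; the following is an illustrative sketch (in-memory database, arbitrary row count), not the speedtest1 program itself.

// sqlite_sketch.cpp - times a simple insert workload, illustrative only (g++ sqlite_sketch.cpp -lsqlite3)
#include <sqlite3.h>
#include <chrono>
#include <cstdio>

int main() {
    sqlite3 *db = nullptr;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t(a INTEGER, b TEXT);", nullptr, nullptr, nullptr);

    auto t0 = std::chrono::steady_clock::now();
    sqlite3_exec(db, "BEGIN;", nullptr, nullptr, nullptr);
    for (int i = 0; i < 100000; ++i)
        sqlite3_exec(db, "INSERT INTO t VALUES(1, 'x');", nullptr, nullptr, nullptr);
    sqlite3_exec(db, "COMMIT;", nullptr, nullptr, nullptr);
    auto t1 = std::chrono::steady_clock::now();

    std::printf("inserts took %.2f s\n", std::chrono::duration<double>(t1 - t0).count());
    sqlite3_close(db);
    return 0;
}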

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite Speedtest 3.30Timed Time - Size 1,000R13220406080100SE +/- 0.57, N = 3SE +/- 0.07, N = 3SE +/- 0.08, N = 395.5895.1994.581. (CC) gcc options: -O2 -ldl -lz -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite Speedtest 3.30Timed Time - Size 1,000R13220406080100Min: 94.77 / Avg: 95.58 / Max: 96.69Min: 95.05 / Avg: 95.19 / Max: 95.26Min: 94.46 / Avg: 94.58 / Max: 94.731. (CC) gcc options: -O2 -ldl -lz -lpthread

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
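
The measurement itself is just wall-clock timing of a tar extraction; a minimal sketch follows, assuming the tarball sits in the working directory and that std::system with the system tar is an acceptable stand-in for the test profile's own scripting.

// unpack_sketch.cpp - wall-clock timing of the kind of extraction this profile performs
#include <chrono>
#include <cstdio>
#include <cstdlib>

int main() {
    auto t0 = std::chrono::steady_clock::now();
    int rc = std::system("tar xf firefox-84.0.source.tar.xz");
    auto t1 = std::chrono::steady_clock::now();
    std::printf("exit=%d, %.2f s\n", rc, std::chrono::duration<double>(t1 - t0).count());
    return 0;
}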

OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking Firefox 84.0Extracting: firefox-84.0.source.tar.xzR123612182430SE +/- 0.33, N = 6SE +/- 0.32, N = 6SE +/- 0.16, N = 1927.4627.4027.36
OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking Firefox 84.0Extracting: firefox-84.0.source.tar.xzR123612182430Min: 26.82 / Avg: 27.46 / Max: 29.06Min: 26.98 / Avg: 27.4 / Max: 29Min: 26.71 / Avg: 27.36 / Max: 29.89

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterWarsow 2.5 BetaResolution: 1920 x 1080R1231020304050SE +/- 0.13, N = 3SE +/- 0.13, N = 3SE +/- 0.20, N = 345.645.845.8
OpenBenchmarking.orgFrames Per Second, More Is BetterWarsow 2.5 BetaResolution: 1920 x 1080R123918273645Min: 45.5 / Avg: 45.63 / Max: 45.9Min: 45.7 / Avg: 45.83 / Max: 46.1Min: 45.6 / Avg: 45.8 / Max: 46.2

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
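
For reference, here is a hedged sketch of the simdjson DOM API from the 0.7.x era, parsing a placeholder file rather than this test profile's actual inputs.

// simdjson_sketch.cpp - parse a JSON document with the DOM API (placeholder input file)
#include "simdjson.h"
#include <iostream>

int main() {
    simdjson::dom::parser parser;
    simdjson::dom::element doc;
    auto error = parser.load("twitter.json").get(doc);   // load and parse straight from disk
    if (error) {
        std::cerr << "parse failed: " << simdjson::error_message(error) << "\n";
        return 1;
    }
    std::cout << "parsed OK, root is an object: " << std::boolalpha << doc.is_object() << "\n";
    return 0;
}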

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: KostyaR1230.11030.22060.33090.44120.5515SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.490.490.491. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: KostyaR123246810Min: 0.49 / Avg: 0.49 / Max: 0.49Min: 0.49 / Avg: 0.49 / Max: 0.49Min: 0.49 / Avg: 0.49 / Max: 0.491. (CXX) g++ options: -O3 -pthread

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: LargeRandomR1230.07430.14860.22290.29720.3715SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.330.330.331. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: LargeRandomR12312345Min: 0.33 / Avg: 0.33 / Max: 0.33Min: 0.33 / Avg: 0.33 / Max: 0.33Min: 0.33 / Avg: 0.33 / Max: 0.331. (CXX) g++ options: -O3 -pthread

Libplacebo

Libplacebo is a multimedia rendering library based on the core rendering code of the MPV player. The libplacebo benchmark relies on the Vulkan API and tests various primitives. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: av1_grain_lap32R1120240360480600SE +/- 1.17, N = 3SE +/- 1.77, N = 3SE +/- 0.34, N = 3534.15534.51537.111. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: av1_grain_lap32R1100200300400500Min: 532.83 / Avg: 534.15 / Max: 536.49Min: 531.04 / Avg: 534.51 / Max: 536.9Min: 536.76 / Avg: 537.11 / Max: 537.81. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF

OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: hdr_peakdetectR1237K14K21K28K35KSE +/- 290.63, N = 3SE +/- 358.35, N = 3SE +/- 401.65, N = 332875.8932911.3332916.081. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: hdr_peakdetectR1236K12K18K24K30KMin: 32453.94 / Avg: 32875.89 / Max: 33433.07Min: 32537.26 / Avg: 32911.33 / Max: 33627.79Min: 32344.81 / Avg: 32916.08 / Max: 33690.791. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF

OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: polar_nocompute2R13612182430SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 323.7223.7323.731. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: polar_nocompute2R13612182430Min: 23.71 / Avg: 23.72 / Max: 23.74Min: 23.72 / Avg: 23.73 / Max: 23.75Min: 23.71 / Avg: 23.73 / Max: 23.741. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF

OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: deband_heavy23R1918273645SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 338.6138.6338.651. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: deband_heavy23R1816243240Min: 38.61 / Avg: 38.61 / Max: 38.61Min: 38.6 / Avg: 38.63 / Max: 38.65Min: 38.62 / Avg: 38.65 / Max: 38.671. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: PartialTweetsR1230.13050.2610.39150.5220.6525SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.580.580.581. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: PartialTweetsR123246810Min: 0.58 / Avg: 0.58 / Max: 0.58Min: 0.58 / Avg: 0.58 / Max: 0.58Min: 0.58 / Avg: 0.58 / Max: 0.581. (CXX) g++ options: -O3 -pthread

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserIDR1230.13280.26560.39840.53120.664SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.590.590.591. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserIDR123246810Min: 0.59 / Avg: 0.59 / Max: 0.59Min: 0.59 / Avg: 0.59 / Max: 0.59Min: 0.59 / Avg: 0.59 / Max: 0.591. (CXX) g++ options: -O3 -pthread

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark Suite2R13110K220K330K440K550KSE +/- 388.18, N = 3SE +/- 498.65, N = 3SE +/- 222.67, N = 3528447529100530230
OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark Suite2R1390K180K270K360K450KMin: 527998 / Avg: 528447 / Max: 529220Min: 528108 / Avg: 529100 / Max: 529685Min: 529799 / Avg: 530230.33 / Max: 530542

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUR132246810SE +/- 0.06965, N = 15SE +/- 0.10395, N = 15SE +/- 0.09974, N = 37.019526.814236.80809MIN: 6.61MIN: 6.35MIN: 6.361. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUR1323691215Min: 6.83 / Avg: 7.02 / Max: 7.75Min: 6.5 / Avg: 6.81 / Max: 7.81Min: 6.61 / Avg: 6.81 / Max: 6.921. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor supporting various GPU compression techniques. Betsy is written in GLSL for Vulkan/OpenGL (compute shader) support for GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC1 - Quality: Highest32R148121620SE +/- 0.27, N = 3SE +/- 0.23, N = 4SE +/- 0.14, N = 1316.2116.1816.151. (CXX) g++ options: -O3 -O2 -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC1 - Quality: Highest32R148121620Min: 15.9 / Avg: 16.21 / Max: 16.75Min: 15.9 / Avg: 16.18 / Max: 16.87Min: 15.95 / Avg: 16.15 / Max: 17.81. (CXX) g++ options: -O3 -O2 -lpthread -ldl

OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC2 RGB - Quality: Highest32R148121620SE +/- 0.28, N = 3SE +/- 0.26, N = 3SE +/- 0.20, N = 1316.2416.2216.191. (CXX) g++ options: -O3 -O2 -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC2 RGB - Quality: Highest32R148121620Min: 15.96 / Avg: 16.24 / Max: 16.8Min: 15.94 / Avg: 16.22 / Max: 16.74Min: 15.94 / Avg: 16.18 / Max: 18.531. (CXX) g++ options: -O3 -O2 -lpthread -ldl

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWavPack Audio Encoding 5.3WAV To WavPack23R1510152025SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 518.9518.9518.951. (CXX) g++ options: -rdynamic
OpenBenchmarking.orgSeconds, Fewer Is BetterWavPack Audio Encoding 5.3WAV To WavPack23R1510152025Min: 18.94 / Avg: 18.95 / Max: 18.97Min: 18.94 / Avg: 18.95 / Max: 18.96Min: 18.94 / Avg: 18.95 / Max: 18.951. (CXX) g++ options: -rdynamic

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
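
cryptsetup's built-in benchmark exercises the kernel crypto API for the XTS ciphers and PBKDF results below. Purely as a user-space analogue of what an AES-XTS MiB/s figure means, here is a hedged OpenSSL sketch; it is not how cryptsetup performs its measurement.

// xts_sketch.cpp - user-space AES-256-XTS throughput analogue via OpenSSL, not cryptsetup's method
// Build: g++ xts_sketch.cpp -lcrypto
#include <openssl/evp.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    std::vector<unsigned char> key(64, 0x11);   // AES-256-XTS uses two 256-bit keys
    std::vector<unsigned char> iv(16, 0x22);    // tweak / sector number
    std::vector<unsigned char> in(1 << 20), out((1 << 20) + 16);

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, units = 256;                   // encrypt 256 x 1 MiB data units
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < units; ++i) {
        EVP_EncryptInit_ex(ctx, EVP_aes_256_xts(), nullptr, key.data(), iv.data());
        EVP_EncryptUpdate(ctx, out.data(), &len, in.data(), (int)in.size());
    }
    auto t1 = std::chrono::steady_clock::now();
    double secs = std::chrono::duration<double>(t1 - t0).count();
    std::printf("AES-256-XTS: %.1f MiB/s\n", units / secs);

    EVP_CIPHER_CTX_free(ctx);
    return 0;
}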

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 512b DecryptionR13270140210280350SE +/- 0.58, N = 3SE +/- 0.48, N = 3SE +/- 1.09, N = 3315.0315.4315.5
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 512b DecryptionR13260120180240300Min: 314.4 / Avg: 315.03 / Max: 316.2Min: 314.5 / Avg: 315.43 / Max: 316.1Min: 313.9 / Avg: 315.53 / Max: 317.6

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 512b Encryption3R1270140210280350SE +/- 0.15, N = 3SE +/- 0.73, N = 3SE +/- 0.57, N = 3314.4314.9315.7
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 512b Encryption3R1260120180240300Min: 314.2 / Avg: 314.4 / Max: 314.7Min: 313.5 / Avg: 314.93 / Max: 315.9Min: 314.6 / Avg: 315.7 / Max: 316.5

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 512b Decryption3R12110220330440550SE +/- 0.51, N = 3SE +/- 1.22, N = 3SE +/- 1.01, N = 3487.8488.7490.5
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 512b Decryption3R1290180270360450Min: 486.8 / Avg: 487.8 / Max: 488.5Min: 487.1 / Avg: 488.7 / Max: 491.1Min: 488.8 / Avg: 490.47 / Max: 492.3

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 512b EncryptionR132110220330440550SE +/- 0.87, N = 3SE +/- 1.01, N = 3SE +/- 0.47, N = 3505.2506.4506.8
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 512b EncryptionR13290180270360450Min: 503.7 / Avg: 505.17 / Max: 506.7Min: 505.1 / Avg: 506.4 / Max: 508.4Min: 506.2 / Avg: 506.77 / Max: 507.7

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 512b Decryption3R1230060090012001500SE +/- 6.22, N = 3SE +/- 7.00, N = 3SE +/- 13.32, N = 31273.41285.11289.3
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 512b Decryption3R122004006008001000Min: 1261.4 / Avg: 1273.43 / Max: 1282.2Min: 1271.9 / Avg: 1285.13 / Max: 1295.7Min: 1271.6 / Avg: 1289.3 / Max: 1315.4

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 512b Encryption32R130060090012001500SE +/- 2.40, N = 3SE +/- 6.69, N = 3SE +/- 6.50, N = 31285.91287.51304.9
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 512b Encryption32R12004006008001000Min: 1283.4 / Avg: 1285.9 / Max: 1290.7Min: 1274.1 / Avg: 1287.47 / Max: 1294.8Min: 1292.9 / Avg: 1304.93 / Max: 1315.2

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 256b DecryptionR13270140210280350SE +/- 0.48, N = 3SE +/- 0.86, N = 3SE +/- 0.26, N = 3314.8315.9316.0
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 256b DecryptionR13260120180240300Min: 314.3 / Avg: 314.83 / Max: 315.8Min: 314.6 / Avg: 315.87 / Max: 317.5Min: 315.5 / Avg: 316 / Max: 316.4

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 256b EncryptionR12370140210280350SE +/- 0.84, N = 3SE +/- 0.43, N = 3SE +/- 0.35, N = 3313.2314.3315.2
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 256b EncryptionR12360120180240300Min: 311.7 / Avg: 313.23 / Max: 314.6Min: 313.5 / Avg: 314.27 / Max: 315Min: 314.6 / Avg: 315.2 / Max: 315.8

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 256b Decryption23R1110220330440550SE +/- 1.59, N = 3SE +/- 1.51, N = 3SE +/- 0.20, N = 3488.6488.7489.0
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 256b Decryption23R190180270360450Min: 485.5 / Avg: 488.6 / Max: 490.8Min: 485.7 / Avg: 488.67 / Max: 490.6Min: 488.6 / Avg: 488.97 / Max: 489.3

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 256b Encryption3R12110220330440550SE +/- 1.12, N = 3SE +/- 0.99, N = 3SE +/- 0.92, N = 3503.4505.1505.4
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 256b Encryption3R1290180270360450Min: 501.2 / Avg: 503.43 / Max: 504.6Min: 503.2 / Avg: 505.13 / Max: 506.5Min: 503.8 / Avg: 505.4 / Max: 507

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 256b Decryption3R1230060090012001500SE +/- 11.68, N = 3SE +/- 11.32, N = 3SE +/- 15.21, N = 31559.61577.41586.7
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 256b Decryption3R1230060090012001500Min: 1538.7 / Avg: 1559.57 / Max: 1579.1Min: 1554.8 / Avg: 1577.37 / Max: 1590.3Min: 1556.6 / Avg: 1586.67 / Max: 1605.7

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 256b EncryptionR12330060090012001500SE +/- 19.58, N = 3SE +/- 15.55, N = 3SE +/- 1.02, N = 31577.51581.41590.4
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 256b EncryptionR12330060090012001500Min: 1539.1 / Avg: 1577.47 / Max: 1603.4Min: 1550.3 / Avg: 1581.4 / Max: 1597.1Min: 1588.5 / Avg: 1590.4 / Max: 1592

OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-whirlpool23R1110K220K330K440K550KSE +/- 335.33, N = 3SE +/- 335.33, N = 3513001513336513672
OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-whirlpool23R190K180K270K360K450KMin: 513001 / Avg: 513336.33 / Max: 514007Min: 513001 / Avg: 513671.67 / Max: 514007

OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-sha51223R1300K600K900K1200K1500KSE +/- 2024.67, N = 3SE +/- 878.73, N = 3SE +/- 1520.33, N = 3126284312633451263348
OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-sha51223R1200K400K600K800K1000KMin: 1258794 / Avg: 1262843.33 / Max: 1264868Min: 1261824 / Avg: 1263345.33 / Max: 1264868Min: 1260307 / Avg: 1263347.67 / Max: 1264868

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMonkey Audio Encoding 3.99.6WAV To APE3R1248121620SE +/- 0.04, N = 5SE +/- 0.06, N = 5SE +/- 0.06, N = 516.8716.8516.841. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt
OpenBenchmarking.orgSeconds, Fewer Is BetterMonkey Audio Encoding 3.99.6WAV To APE3R1248121620Min: 16.77 / Avg: 16.87 / Max: 17Min: 16.73 / Avg: 16.85 / Max: 17.01Min: 16.75 / Avg: 16.84 / Max: 17.091. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

Ogg Audio Encoding

This test times how long it takes to encode a sample WAV file to Ogg format using the reference Xiph.org tools/libraries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOgg Audio Encoding 1.3.4WAV To OggR123714212835SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 327.9627.9427.921. (CC) gcc options: -O2 -ffast-math -fsigned-char
OpenBenchmarking.orgSeconds, Fewer Is BetterOgg Audio Encoding 1.3.4WAV To OggR123612182430Min: 27.92 / Avg: 27.96 / Max: 27.99Min: 27.92 / Avg: 27.94 / Max: 27.96Min: 27.9 / Avg: 27.92 / Max: 27.951. (CC) gcc options: -O2 -ffast-math -fsigned-char

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
  2:  60082.67   SE +/- 445.62, N = 3   Min: 59447.14 / Avg: 60082.67 / Max: 60941.55
  3:  60345.61   SE +/- 283.31, N = 3   Min: 59838.44 / Avg: 60345.61 / Max: 60818
  R1: 60703.73   SE +/- 220.80, N = 3   Min: 60262.14 / Avg: 60703.73 / Max: 60926.08
  1. (CC) gcc options: -O2 -lrt" -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
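
As a rough sketch of what these harnesses do, benchdnn is invoked with a primitive driver, a performance mode, and a problem or batch descriptor; the batch-file path below is illustrative and may not match the exact inputs used by this test profile:

    ./tests/benchdnn/benchdnn --ip --mode=P --cfg=f32 --batch=inputs/ip/shapes_1d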

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  R1: 32.11   SE +/- 0.29, N = 3   MIN: 30.06   Min: 31.59 / Avg: 32.11 / Max: 32.57
  3:  31.97   SE +/- 0.27, N = 3   MIN: 30.65   Min: 31.5 / Avg: 31.97 / Max: 32.41
  2:  31.70   SE +/- 0.34, N = 3   MIN: 30.17   Min: 31.06 / Avg: 31.7 / Max: 32.21
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  3:  29.48   SE +/- 0.24, N = 3   MIN: 27.97   Min: 29.09 / Avg: 29.48 / Max: 29.92
  2:  29.40   SE +/- 0.29, N = 3   MIN: 28      Min: 28.87 / Avg: 29.4 / Max: 29.85
  R1: 29.15   SE +/- 0.32, N = 3   MIN: 27.38   Min: 28.54 / Avg: 29.15 / Max: 29.65
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample workload upscales a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.
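
Since VkResample's own command-line options are not recorded here, the simplest way to repeat this workload is through the same harness that produced this result file; the test-profile name is assumed to be the usual vkresample identifier:

    phoronix-test-suite benchmark vkresample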

VkResample 1.0 - Upscale: 2x - Precision: Double (ms, Fewer Is Better)
  3:  156.01   SE +/- 0.61, N = 3   Min: 155.36 / Avg: 156.01 / Max: 157.23
  2:  155.24   SE +/- 1.45, N = 3   Min: 152.46 / Avg: 155.24 / Max: 157.35
  R1: 152.68   SE +/- 1.71, N = 3   Min: 150.93 / Avg: 152.68 / Max: 156.09
  1. (CXX) g++ options: -O3 -pthread

VkResample 1.0 - Upscale: 2x - Precision: Single (ms, Fewer Is Better)
  3:  153.25   SE +/- 1.40, N = 3   Min: 151.81 / Avg: 153.25 / Max: 156.05
  2:  152.38   SE +/- 1.86, N = 3   Min: 149.37 / Avg: 152.38 / Max: 155.79
  R1: 152.24   SE +/- 0.49, N = 3   Min: 151.75 / Avg: 152.24 / Max: 153.23
  1. (CXX) g++ options: -O3 -pthread

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
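
The opusenc front-end from Opus-Tools performs this kind of WAV-to-Opus conversion directly; the bitrate shown is only an example:

    opusenc --bitrate 96 sample.wav sample.opus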

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)
  3:  10.92   SE +/- 0.04, N = 5   Min: 10.87 / Avg: 10.92 / Max: 11.06
  2:  10.90   SE +/- 0.02, N = 5   Min: 10.85 / Avg: 10.9 / Max: 10.99
  R1: 10.89   SE +/- 0.03, N = 5   Min: 10.85 / Avg: 10.89 / Max: 11.03
  1. (CXX) g++ options: -fvisibility=hidden -logg -lm

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.
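
A comparable alignment can be run directly with the mafft front-end; the FASTA filenames are placeholders:

    mafft --auto sequences.fasta > aligned.fasta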

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, Fewer Is Better)
  3:  17.37   SE +/- 0.01, N = 3   Min: 17.36 / Avg: 17.37 / Max: 17.4
  2:  17.20   SE +/- 0.14, N = 3   Min: 16.98 / Avg: 17.2 / Max: 17.46
  R1: 17.01   SE +/- 0.11, N = 3   Min: 16.79 / Avg: 17.01 / Max: 17.13
  1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
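
The remaining oneDNN harnesses below can all be reproduced in one pass through the same harness used for this result file; the test-profile name is assumed to be the standard onednn identifier:

    phoronix-test-suite benchmark onednn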

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  2:  21.09   SE +/- 0.27, N = 3   MIN: 20.08   Min: 20.66 / Avg: 21.09 / Max: 21.57
  3:  21.08   SE +/- 0.23, N = 3   MIN: 19.85   Min: 20.66 / Avg: 21.08 / Max: 21.43
  R1: 21.00   SE +/- 0.31, N = 3   MIN: 19.61   Min: 20.55 / Avg: 21 / Max: 21.6
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  3:  13.46   SE +/- 0.08, N = 3   MIN: 13.01   Min: 13.3 / Avg: 13.46 / Max: 13.55
  2:  13.33   SE +/- 0.08, N = 3   MIN: 13.01   Min: 13.17 / Avg: 13.33 / Max: 13.44
  R1: 13.29   SE +/- 0.08, N = 3   MIN: 12.99   Min: 13.13 / Avg: 13.29 / Max: 13.4
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  R1: 10.10940   SE +/- 0.04347, N = 3   MIN: 9.65   Min: 10.03 / Avg: 10.11 / Max: 10.18
  3:  10.07928   SE +/- 0.13687, N = 4   MIN: 9.14   Min: 9.73 / Avg: 10.08 / Max: 10.4
  2:  9.12287    SE +/- 0.02320, N = 3   MIN: 8.62   Min: 9.08 / Avg: 9.12 / Max: 9.15
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  3:  15.56   SE +/- 0.01, N = 3   MIN: 14.82   Min: 15.54 / Avg: 15.56 / Max: 15.59
  2:  15.54   SE +/- 0.06, N = 3   MIN: 14.82   Min: 15.46 / Avg: 15.54 / Max: 15.66
  R1: 15.45   SE +/- 0.04, N = 3   MIN: 14.57   Min: 15.38 / Avg: 15.45 / Max: 15.5
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  R1: 20.91   SE +/- 0.18, N = 3   MIN: 20.39   Min: 20.57 / Avg: 20.91 / Max: 21.2
  3:  20.63   SE +/- 0.05, N = 3   MIN: 20.29   Min: 20.57 / Avg: 20.63 / Max: 20.72
  2:  20.52   SE +/- 0.13, N = 3   MIN: 20.16   Min: 20.28 / Avg: 20.52 / Max: 20.74
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  R1: 33.88   SE +/- 0.13, N = 3   MIN: 33.42   Min: 33.64 / Avg: 33.88 / Max: 34.1
  2:  33.50   SE +/- 0.06, N = 3   MIN: 33.31   Min: 33.43 / Avg: 33.5 / Max: 33.61
  3:  33.47   SE +/- 0.04, N = 3   MIN: 33.25   Min: 33.41 / Avg: 33.47 / Max: 33.54
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  R1: 27.80   SE +/- 0.13, N = 3   MIN: 27.49   Min: 27.63 / Avg: 27.8 / Max: 28.06
  2:  27.75   SE +/- 0.09, N = 3   MIN: 27.23   Min: 27.56 / Avg: 27.75 / Max: 27.84
  3:  27.30   SE +/- 0.02, N = 3   MIN: 26.98   Min: 27.27 / Avg: 27.3 / Max: 27.32
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  3:  26.25   SE +/- 0.02, N = 3   MIN: 26.08   Min: 26.22 / Avg: 26.25 / Max: 26.29
  2:  26.22   SE +/- 0.01, N = 3   MIN: 26.12   Min: 26.2 / Avg: 26.22 / Max: 26.25
  R1: 26.20   SE +/- 0.03, N = 3   MIN: 26.09   Min: 26.14 / Avg: 26.2 / Max: 26.24
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  3:  35.80   SE +/- 0.03, N = 3   MIN: 33.94   Min: 35.77 / Avg: 35.8 / Max: 35.86
  2:  35.64   SE +/- 0.04, N = 3   MIN: 33.73   Min: 35.56 / Avg: 35.64 / Max: 35.69
  R1: 35.35   SE +/- 0.04, N = 3   MIN: 33.41   Min: 35.26 / Avg: 35.35 / Max: 35.4
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

97 Results Shown

VkFFT
Build2
Timed FFmpeg Compilation
CLOMP
BRL-CAD
NCNN:
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
  Vulkan GPU - regnety_400m
  Vulkan GPU - squeezenet_ssd
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - resnet50
  Vulkan GPU - alexnet
  Vulkan GPU - resnet18
  Vulkan GPU - vgg16
  Vulkan GPU - googlenet
  Vulkan GPU - blazeface
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - mnasnet
  Vulkan GPU - shufflenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - mobilenet
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
Warsow
VKMark:
  1920 x 1080
  1280 x 1024
  800 x 600
  1024 x 768
Timed HMMer Search
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
Timed Eigen Compilation
Node.js V8 Web Tooling Benchmark
SQLite Speedtest
Unpacking Firefox
Warsow
simdjson:
  Kostya
  LargeRand
Libplacebo:
  av1_grain_lap
  hdr_peakdetect
  polar_nocompute
  deband_heavy
simdjson:
  PartialTweets
  DistinctUserID
PHPBench
oneDNN
Betsy GPU Compressor:
  ETC1 - Highest
  ETC2 RGB - Highest
WavPack Audio Encoding
Cryptsetup:
  Twofish-XTS 512b Decryption
  Twofish-XTS 512b Encryption
  Serpent-XTS 512b Decryption
  Serpent-XTS 512b Encryption
  AES-XTS 512b Decryption
  AES-XTS 512b Encryption
  Twofish-XTS 256b Decryption
  Twofish-XTS 256b Encryption
  Serpent-XTS 256b Decryption
  Serpent-XTS 256b Encryption
  AES-XTS 256b Decryption
  AES-XTS 256b Encryption
  PBKDF2-whirlpool
  PBKDF2-sha512
Monkey Audio Encoding
Ogg Audio Encoding
Coremark
oneDNN:
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
VkResample:
  2x - Double
  2x - Single
Opus Codec Encoding
Timed MAFFT Alignment
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  IP Shapes 3D - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU