Ryzen 7 1700 + RX 480

AMD Ryzen 7 1700 Eight-Core testing with an MSI B350 TOMAHAWK (MS-7A34) v1.0 (1.H0 BIOS) and AMD Radeon RX 470/480/570/570X/580/580X/590 8GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012315-HA-RYZEN717043

Test categories represented in this result file: Audio Encoding (5 tests), AV1 (2), Bioinformatics (2), Chess Test Suite (2), Timed Code Compilation (5), C/C++ Compiler Tests (14), Compression (3), CPU Massive (20), Creator Workloads (22), Database Test Suite (2), Encoding (7), Fortran Tests (3), Game Development (3), HPC - High Performance Computing (14), Imaging (6), Common Kernel Benchmarks (2), Machine Learning (9), Molecular Dynamics (2), MPI Benchmarks (2), Multi-Core (18), NVIDIA GPU Compute (5), Intel oneAPI (2), OpenMPI Tests (2), Productivity (2), Programmer / Developer System Benchmarks (11), Python (2), Renderers (3), Scientific Computing (5), Server (5), Server CPU Tests (12), Single-Threaded (10), Speech (3), Telephony (3), Texture Compression (2), Video Encoding (2), Vulkan Compute (2).

Test runs:

  Run             Date               Test Duration
  Default Kernel  December 29 2020   19 Hours
  Linux 5.10.4    December 30 2020   23 Hours, 5 Minutes
  Linux 5.11-rc1  December 31 2020   17 Hours, 14 Minutes



Ryzen 7 1700 + RX 480 system configuration (OpenBenchmarking.org / Phoronix Test Suite):

  Processor:         AMD Ryzen 7 1700 Eight-Core @ 3.00GHz (8 Cores / 16 Threads)
  Motherboard:       MSI B350 TOMAHAWK (MS-7A34) v1.0 (1.H0 BIOS)
  Chipset:           AMD 17h
  Memory:            16GB
  Disk:              120GB Samsung SSD 840
  Graphics:          AMD Radeon RX 470/480/570/570X/580/580X/590 8GB (1266/2000MHz)
  Audio:             AMD Ellesmere HDMI Audio
  Monitor:           VA2431
  Network:           Realtek RTL8111/8168/8411
  OS:                Ubuntu 20.10
  Kernels:           5.8.0-33-generic (x86_64), 5.10.4-051004-generic (x86_64), 5.11.0-rc1-phx (x86_64) 20201228
  Desktop:           GNOME Shell 3.38.1
  Display Server:    X Server 1.20.9
  Display Driver:    amdgpu 19.1.0
  OpenGL:            4.6 Mesa 20.2.1 (LLVM 11.0.0)
  Vulkan:            1.2.131
  Compiler:          GCC 10.2.0
  File-System:       ext4
  Screen Resolution: 1920x1080

System notes:
  - Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Disk: MQ-DEADLINE scheduler; mount options errors=remount-ro,relatime,rw; Block Size: 4096
  - Default Kernel: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled); CPU Microcode: 0x8001137
  - Linux 5.10.4: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x8001137
  - Linux 5.11-rc1: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x8001137
  - Display: GLAMOR
  - Java: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.10)
  - Python: 3.8.6
  - Security: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; Default Kernel vs. Linux 5.10.4 vs. Linux 5.11-rc1, normalized scale roughly 100% to 121%). Tests covered: Redis, Timed HMMer Search, CLOMP, DeepSpeech, NCNN, TensorFlow Lite, LAMMPS Molecular Dynamics Simulator, WebP Image Encode, Timed Eigen Compilation, Cryptsetup, RNNoise, Opus Codec Encoding, BYTE Unix Benchmark, LAME MP3 Encoding, WavPack Audio Encoding, PHPBench, Monkey Audio Encoding, SQLite Speedtest, FLAC Audio Encoding, GLmark2, Hierarchical INTegration, Mobile Neural Network, TNN, oneDNN, Numpy Benchmark, FFTE, Node.js V8 Web Tooling Benchmark, Polyhedron Fortran Benchmarks, LZ4 Compression, eSpeak-NG Speech Engine, libavif avifenc, Timed MAFFT Alignment, Stockfish, GIMP, librsvg, Unpacking The Linux Kernel, VKMark, simdjson, Basis Universal, PyPerformance, XZ Compression, VkFFT, Sunflow Rendering System, GROMACS, Unpacking Firefox, WireGuard + Linux Networking Stack Stress Test, asmFish, AI Benchmark Alpha, Timed FFmpeg Compilation, yquake2, RawTherapee, ASTC Encoder, Darktable, Blender, Coremark, Zstd Compression, IndigoBench, Timed Linux Kernel Compilation, Appleseed, Embree, Timed LLVM Compilation, Build2, rav1e.

Ryzen 7 1700 + RX 480: condensed side-by-side results table for Default Kernel, Linux 5.10.4, and Linux 5.11-rc1 across all benchmarks (the same figures are presented in readable per-test form below).

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: LPOP (Requests Per Second, more is better)
  Default Kernel: 2014169.29  (SE +/- 30740.73, N = 12; Min: 1876172.62 / Avg: 2014169.29 / Max: 2156207)
  Linux 5.11-rc1: 1114626.25  (SE +/- 5120.03, N = 3; Min: 1105007.75 / Avg: 1114626.25 / Max: 1122478.12)
  Linux 5.10.4:   1075736.66  (SE +/- 2684.98, N = 3; Min: 1070766.62 / Avg: 1075736.66 / Max: 1079982.75)
  (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
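Overview charts like the one above express each run relative to a baseline. A minimal sketch of that per-test normalization, using the LPOP numbers from this section; the formula here (each value as a percentage of the slowest run) is an illustrative assumption, not the suite's exact geometric-mean pipeline:

```python
def normalize(results, higher_is_better=True):
    # Express every result as a percentage of the worst-performing run,
    # so the baseline sits at 100% and faster runs exceed it.
    baseline = min(results.values()) if higher_is_better else max(results.values())
    return {name: round(value / baseline * 100, 1) for name, value in results.items()}

# Redis LPOP requests/second from the entry above.
lpop = {
    "Default Kernel": 2014169.29,
    "Linux 5.11-rc1": 1114626.25,
    "Linux 5.10.4": 1075736.66,
}
print(normalize(lpop))  # Default Kernel lands at roughly 187% of the 5.10.4 baseline
```

For a lower-is-better metric (most of the ms/seconds tests here), `higher_is_better=False` makes the slowest (largest) time the 100% baseline instead.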

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup (Milliseconds, fewer is better)
  Default Kernel: 12.2  (SE +/- 0.00, N = 3; Min: 12.2 / Avg: 12.2 / Max: 12.2)
  Linux 5.10.4:   20.7  (SE +/- 0.07, N = 3; Min: 20.6 / Avg: 20.67 / Max: 20.8)
  Linux 5.11-rc1: 20.7  (SE +/- 0.07, N = 3; Min: 20.6 / Avg: 20.73 / Max: 20.8)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Linux 5.11-rc1: 8.61372   (SE +/- 0.01700, N = 3; Min: 8.58 / Avg: 8.61 / Max: 8.64; MIN: 8.27)
  Linux 5.10.4:   8.81062   (SE +/- 0.02920, N = 3; Min: 8.77 / Avg: 8.81 / Max: 8.87; MIN: 8.51)
  Default Kernel: 11.56730  (SE +/- 0.03621, N = 3; Min: 11.5 / Avg: 11.57 / Max: 11.61; MIN: 10.87)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Linux 5.11-rc1: 4.16855  (SE +/- 0.00397, N = 3; Min: 4.16 / Avg: 4.17 / Max: 4.17; MIN: 4.06)
  Linux 5.10.4:   4.24151  (SE +/- 0.00670, N = 3; Min: 4.23 / Avg: 4.24 / Max: 4.25; MIN: 4.14)
  Default Kernel: 5.33146  (SE +/- 0.00649, N = 3; Min: 5.32 / Avg: 5.33 / Max: 5.34; MIN: 4.84)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, fewer is better)
  Linux 5.10.4:   46.37  (SE +/- 0.32, N = 15; Min: 45.25 / Avg: 46.37 / Max: 49.23; MIN: 45.13 / MAX: 83.86)
  Linux 5.11-rc1: 46.40  (SE +/- 0.09, N = 3; Min: 46.23 / Avg: 46.4 / Max: 46.49; MIN: 46.13 / MAX: 46.99)
  Default Kernel: 58.38  (SE +/- 0.59, N = 3; Min: 57.77 / Avg: 58.38 / Max: 59.56; MIN: 48.11 / MAX: 109.51)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better)
  Linux 5.10.4:   47.21  (SE +/- 1.01, N = 3; Min: 45.51 / Avg: 47.21 / Max: 49.01; MIN: 45.3 / MAX: 81.34)
  Linux 5.11-rc1: 47.65  (SE +/- 1.19, N = 3; Min: 46.32 / Avg: 47.65 / Max: 50.02; MIN: 46.22 / MAX: 82.46)
  Default Kernel: 59.31  (SE +/- 0.58, N = 3; Min: 58.63 / Avg: 59.31 / Max: 60.47; MIN: 48.12 / MAX: 109.05)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
  Linux 5.11-rc1: 25.27  (SE +/- 0.76, N = 3; Min: 23.85 / Avg: 25.27 / Max: 26.47; MIN: 22.73 / MAX: 45.56)
  Linux 5.10.4:   26.28  (SE +/- 0.11, N = 3; Min: 26.1 / Avg: 26.28 / Max: 26.49; MIN: 25.18 / MAX: 37.2)
  Default Kernel: 31.60  (SE +/- 0.32, N = 3; Min: 30.97 / Avg: 31.6 / Max: 31.97; MIN: 24.42 / MAX: 69.85)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: googlenet (ms, fewer is better)
  Linux 5.10.4:   25.83  (SE +/- 0.32, N = 15; Min: 22.84 / Avg: 25.83 / Max: 26.74; MIN: 22.76 / MAX: 58.24)
  Linux 5.11-rc1: 26.29  (SE +/- 0.11, N = 3; Min: 26.08 / Avg: 26.29 / Max: 26.45; MIN: 23.04 / MAX: 29.98)
  Default Kernel: 31.77  (SE +/- 0.18, N = 3; Min: 31.44 / Avg: 31.77 / Max: 32.06; MIN: 23.25 / MAX: 69.93)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, fewer is better)
  Linux 5.10.4:   20.97  (SE +/- 0.28, N = 15; Min: 19 / Avg: 20.97 / Max: 21.72; MIN: 18.91 / MAX: 67.27)
  Linux 5.11-rc1: 21.96  (SE +/- 0.20, N = 3; Min: 21.57 / Avg: 21.96 / Max: 22.24; MIN: 19.15 / MAX: 54.43)
  Default Kernel: 25.46  (SE +/- 0.22, N = 3; Min: 25.22 / Avg: 25.46 / Max: 25.89; MIN: 19.81 / MAX: 59.82)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, fewer is better)
  Linux 5.11-rc1: 127.07  (SE +/- 0.04, N = 3; Min: 127 / Avg: 127.07 / Max: 127.14)
  Linux 5.10.4:   128.73  (SE +/- 0.02, N = 3; Min: 128.71 / Avg: 128.73 / Max: 128.77)
  Default Kernel: 151.89  (SE +/- 0.27, N = 3; Min: 151.43 / Avg: 151.89 / Max: 152.38)
  (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm
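Each entry in this file reports a standard error over N runs; that figure is simply the sample standard deviation divided by sqrt(N). A small sketch: with N = 3, the reported Min/Avg/Max for Linux 5.11-rc1 pin down the three individual run times (the middle run must equal 3*Avg - Min - Max), so the SE can be recomputed directly:

```python
import math
import statistics

def standard_error(samples):
    # Standard error of the mean: sample standard deviation / sqrt(N).
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Linux 5.11-rc1 run times in seconds, recovered from Min / Avg / Max above.
runs = [127.00, 127.07, 127.14]
print(round(standard_error(runs), 2))  # matches the reported SE +/- 0.04
```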

NCNN


NCNN 20201218 - Target: CPU - Model: mobilenet (ms, fewer is better)
  Linux 5.10.4:   32.21  (SE +/- 0.29, N = 15; Min: 30.4 / Avg: 32.21 / Max: 34.12; MIN: 30.06 / MAX: 44.1)
  Linux 5.11-rc1: 32.63  (SE +/- 0.27, N = 3; Min: 32.32 / Avg: 32.63 / Max: 33.17; MIN: 31.52 / MAX: 114.19)
  Default Kernel: 38.47  (SE +/- 0.44, N = 3; Min: 37.72 / Avg: 38.47 / Max: 39.23; MIN: 33.17 / MAX: 75.32)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  Linux 5.10.4:   41.80  (SE +/- 0.49, N = 15; Min: 36.87 / Avg: 41.8 / Max: 43.7; MIN: 36.06 / MAX: 63.79)
  Linux 5.11-rc1: 43.19  (SE +/- 0.01, N = 3; Min: 43.17 / Avg: 43.19 / Max: 43.22; MIN: 41.97 / MAX: 48.03)
  Default Kernel: 49.92  (SE +/- 0.08, N = 3; Min: 49.77 / Avg: 49.92 / Max: 50.05; MIN: 41.04 / MAX: 67.32)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better)
  Linux 5.11-rc1: 32.20  (SE +/- 0.20, N = 3; Min: 32 / Avg: 32.2 / Max: 32.61; MIN: 30.63 / MAX: 39)
  Linux 5.10.4:   32.79  (SE +/- 0.19, N = 3; Min: 32.42 / Avg: 32.79 / Max: 33.02; MIN: 30.27 / MAX: 45.97)
  Default Kernel: 38.45  (SE +/- 0.03, N = 3; Min: 38.41 / Avg: 38.45 / Max: 38.5; MIN: 33.4 / MAX: 77.43)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2 - Static OMP Speedup (Speedup, more is better)
  Linux 5.10.4:   9.3  (SE +/- 0.11, N = 15; Min: 8.7 / Avg: 9.35 / Max: 10)
  Linux 5.11-rc1: 9.1  (SE +/- 0.12, N = 3; Min: 8.9 / Avg: 9.1 / Max: 9.3)
  Default Kernel: 7.8  (SE +/- 0.09, N = 4; Min: 7.6 / Avg: 7.83 / Max: 8)
  (CC) gcc options: -fopenmp -O3 -lm
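CLOMP's metric is a classic parallel speedup: serial runtime divided by threaded runtime, ideally approaching the core count. A toy illustration; the timings below are hypothetical, chosen only to reproduce the ~9.3x figure seen on Linux 5.10.4 (CLOMP measures these internally):

```python
def omp_speedup(serial_seconds, threaded_seconds):
    # Speedup = time of the serial loop / time under the full OpenMP static schedule.
    return serial_seconds / threaded_seconds

# Hypothetical timings: an 18.6 s serial workload finishing in 2.0 s across the
# Ryzen 7 1700's 8 cores / 16 threads would score the reported 9.3x.
print(omp_speedup(18.6, 2.0))
```

That the speedup exceeds the 8 physical cores is plausible here because SMT exposes 16 hardware threads.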

NCNN


NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  Linux 5.10.4:   8.50   (SE +/- 0.03, N = 15; Min: 8.28 / Avg: 8.5 / Max: 8.73; MIN: 8.19 / MAX: 20.56)
  Linux 5.11-rc1: 8.60   (SE +/- 0.09, N = 3; Min: 8.42 / Avg: 8.6 / Max: 8.73; MIN: 8.28 / MAX: 22.71)
  Default Kernel: 10.09  (SE +/- 0.20, N = 3; Min: 9.8 / Avg: 10.09 / Max: 10.48; MIN: 8.08 / MAX: 57.72)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  Linux 5.11-rc1: 9.81   (SE +/- 0.04, N = 3; Min: 9.75 / Avg: 9.81 / Max: 9.89; MIN: 9.63 / MAX: 13.77)
  Linux 5.10.4:   10.14  (SE +/- 0.21, N = 3; Min: 9.91 / Avg: 10.14 / Max: 10.56; MIN: 9.38 / MAX: 12.94)
  Default Kernel: 11.62  (SE +/- 0.19, N = 3; Min: 11.25 / Avg: 11.62 / Max: 11.91; MIN: 9.25 / MAX: 43.5)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  Linux 5.10.4:   10.09  (SE +/- 0.11, N = 15; Min: 9.65 / Avg: 10.09 / Max: 10.87; MIN: 9.47 / MAX: 59.9)
  Linux 5.11-rc1: 10.31  (SE +/- 0.21, N = 3; Min: 10.04 / Avg: 10.31 / Max: 10.72; MIN: 9.57 / MAX: 22.62)
  Default Kernel: 11.91  (SE +/- 0.15, N = 3; Min: 11.63 / Avg: 11.91 / Max: 12.14; MIN: 9.52 / MAX: 44.02)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better)
  Linux 5.10.4:   31.61  (SE +/- 0.02, N = 3; Min: 31.58 / Avg: 31.61 / Max: 31.65; MIN: 30.89 / MAX: 38.84)
  Linux 5.11-rc1: 31.76  (SE +/- 0.03, N = 3; Min: 31.7 / Avg: 31.76 / Max: 31.82; MIN: 30.84 / MAX: 39.08)
  Default Kernel: 37.31  (SE +/- 0.06, N = 3; Min: 37.21 / Avg: 37.31 / Max: 37.41; MIN: 32.73 / MAX: 94.67)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  Linux 5.10.4:   9.82   (SE +/- 0.03, N = 15; Min: 9.65 / Avg: 9.82 / Max: 9.99; MIN: 9.61 / MAX: 11.52)
  Linux 5.11-rc1: 9.86   (SE +/- 0.05, N = 3; Min: 9.77 / Avg: 9.86 / Max: 9.93; MIN: 9.73 / MAX: 10.13)
  Default Kernel: 11.59  (SE +/- 0.14, N = 3; Min: 11.4 / Avg: 11.59 / Max: 11.86; MIN: 9.62 / MAX: 57.85)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, fewer is better)
  Linux 5.10.4:   9.22   (SE +/- 0.07, N = 15; Min: 8.79 / Avg: 9.22 / Max: 9.67; MIN: 8.76 / MAX: 13.65)
  Linux 5.11-rc1: 9.35   (SE +/- 0.16, N = 3; Min: 9.16 / Avg: 9.35 / Max: 9.68; MIN: 9.12 / MAX: 10.89)
  Default Kernel: 10.85  (SE +/- 0.27, N = 3; Min: 10.51 / Avg: 10.85 / Max: 11.38; MIN: 8.7 / MAX: 50.23)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  Linux 5.10.4:   13.24  (SE +/- 0.11, N = 15; Min: 12.72 / Avg: 13.24 / Max: 14.02; MIN: 12.68 / MAX: 56.97)
  Linux 5.11-rc1: 13.51  (SE +/- 0.29, N = 3; Min: 13.2 / Avg: 13.51 / Max: 14.08; MIN: 13.14 / MAX: 14.47)
  Default Kernel: 15.55  (SE +/- 0.25, N = 3; Min: 15.12 / Avg: 15.55 / Max: 15.98; MIN: 12.82 / MAX: 44.91)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  Linux 5.11-rc1: 31.73  (SE +/- 0.04, N = 3; Min: 31.65 / Avg: 31.73 / Max: 31.79; MIN: 30.96 / MAX: 37.38)
  Linux 5.10.4:   31.80  (SE +/- 0.14, N = 15; Min: 31.49 / Avg: 31.8 / Max: 33.66; MIN: 30.81 / MAX: 42.83)
  Default Kernel: 37.15  (SE +/- 0.35, N = 3; Min: 36.52 / Avg: 37.15 / Max: 37.73; MIN: 32.74 / MAX: 88.45)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, fewer is better)
  Default Kernel: 94.00   (SE +/- 0.16, N = 3; Min: 93.73 / Avg: 94 / Max: 94.27)
  Linux 5.11-rc1: 108.33  (SE +/- 0.94, N = 8; Min: 103.35 / Avg: 108.33 / Max: 110.59)
  Linux 5.10.4:   109.73  (SE +/- 0.78, N = 15; Min: 103.53 / Avg: 109.73 / Max: 112.6)

NCNN


NCNN 20201218 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better)
  Linux 5.11-rc1: 9.05   (SE +/- 0.13, N = 3; Min: 8.81 / Avg: 9.05 / Max: 9.23; MIN: 8.77 / MAX: 32.06)
  Linux 5.10.4:   9.06   (SE +/- 0.12, N = 3; Min: 8.84 / Avg: 9.06 / Max: 9.26; MIN: 8.8 / MAX: 9.55)
  Default Kernel: 10.56  (SE +/- 0.25, N = 3; Min: 10.06 / Avg: 10.56 / Max: 10.82; MIN: 8.6 / MAX: 49.35)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
  Linux 5.11-rc1: 3.30 (SE +/- 0.06, N = 3; min 3.23 / max 3.42)
  Linux 5.10.4: 3.46 (SE +/- 0.02, N = 3; min 3.42 / max 3.5)
  Default Kernel: 3.85 (SE +/- 0.07, N = 3; min 3.75 / max 3.99)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are used for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks, Benchmark: tfft2 (Seconds, fewer is better)
  Default Kernel: 25.09
  Linux 5.10.4: 29.02
  Linux 5.11-rc1: 29.22

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
  Linux 5.11-rc1: 13.09 (SE +/- 0.10, N = 3; min 12.98 / max 13.28)
  Linux 5.10.4: 13.28 (SE +/- 0.03, N = 3; min 13.22 / max 13.31)
  Default Kernel: 15.23 (SE +/- 0.14, N = 3; min 14.99 / max 15.47)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: vgg16 (ms, fewer is better)
  Linux 5.11-rc1: 74.00 (SE +/- 0.12, N = 3; min 73.85 / max 74.23)
  Linux 5.10.4: 74.43 (SE +/- 0.16, N = 15; min 73.66 / max 75.5)
  Default Kernel: 85.70 (SE +/- 0.20, N = 3; min 85.3 / max 85.98)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: GET (Requests Per Second, more is better)
  Default Kernel: 1863679.11 (SE +/- 22737.24, N = 15; min 1689405.38 / max 1996071.88)
  Linux 5.11-rc1: 1727854.75 (SE +/- 16571.48, N = 3; min 1709894.12 / max 1760957.75)
  Linux 5.10.4: 1611434.08 (SE +/- 11369.12, N = 3; min 1597648.62 / max 1633987)
  (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
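For this more-is-better metric, the reported means can be normalized against the slowest configuration; a sketch:

```python
# GET requests per second, as reported above (more is better).
get_rps = {"Default Kernel": 1863679.11, "Linux 5.11-rc1": 1727854.75, "Linux 5.10.4": 1611434.08}
baseline = get_rps["Linux 5.10.4"]
for kernel, rps in get_rps.items():
    print(f"{kernel}: {rps / baseline:.3f}x vs Linux 5.10.4")
```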

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: vgg16 (ms, fewer is better)
  Linux 5.10.4: 74.53 (SE +/- 0.36, N = 3; min 73.8 / max 74.95)
  Linux 5.11-rc1: 74.67 (SE +/- 0.33, N = 3; min 74.18 / max 75.29)
  Default Kernel: 85.39 (SE +/- 0.33, N = 3; min 84.86 / max 86)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: NASNet Mobile (Microseconds, fewer is better)
  Linux 5.10.4: 208952 (SE +/- 207.90, N = 3; min 208537 / max 209177)
  Default Kernel: 223426 (SE +/- 1532.37, N = 3; min 221478 / max 226449)
  Linux 5.11-rc1: 239203 (SE +/- 1205.42, N = 3; min 236841 / max 240803)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Linux 5.10.4: 5.78248 (SE +/- 0.05897, N = 15; min 5.32 / max 5.95)
  Linux 5.11-rc1: 5.87713 (SE +/- 0.01520, N = 3; min 5.85 / max 5.9)
  Default Kernel: 6.60753 (SE +/- 0.07065, N = 5; min 6.33 / max 6.71)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Linux 5.11-rc1: 20.87 (SE +/- 0.04, N = 3; min 20.8 / max 20.93)
  Linux 5.10.4: 20.99 (SE +/- 0.03, N = 3; min 20.94 / max 21.03)
  Default Kernel: 23.83 (SE +/- 0.01, N = 3; min 23.8 / max 23.84)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: regnety_400m (ms, fewer is better)
  Linux 5.10.4: 29.37 (SE +/- 0.09, N = 15; min 28.55 / max 29.75)
  Linux 5.11-rc1: 29.60 (SE +/- 0.06, N = 3; min 29.52 / max 29.73)
  Default Kernel: 33.49 (SE +/- 0.53, N = 3; min 32.69 / max 34.49)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  Linux 5.10.4: 8.45 (SE +/- 0.14, N = 3; min 8.17 / max 8.62)
  Linux 5.11-rc1: 8.69 (SE +/- 0.12, N = 3; min 8.55 / max 8.93)
  Default Kernel: 9.59 (SE +/- 0.10, N = 3; min 9.42 / max 9.75)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better)
  Linux 5.10.4: 44.85 (SE +/- 0.02, N = 3; min 44.83 / max 44.89)
  Linux 5.11-rc1: 45.16 (SE +/- 0.15, N = 3; min 44.9 / max 45.42)
  Default Kernel: 50.89 (SE +/- 0.27, N = 3; min 50.38 / max 51.28)
  (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
  Linux 5.10.4: 9.71 (SE +/- 0.09, N = 3; min 9.55 / max 9.87)
  Linux 5.11-rc1: 10.04 (SE +/- 0.10, N = 3; min 9.85 / max 10.15)
  Default Kernel: 11.00 (SE +/- 0.30, N = 3; min 10.58 / max 11.59)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, fewer is better)
  Linux 5.10.4: 3174873 (SE +/- 713.33, N = 3; min 3174160 / max 3176300)
  Default Kernel: 3299807 (SE +/- 1161.41, N = 3; min 3297750 / max 3301770)
  Linux 5.11-rc1: 3580783 (SE +/- 2970.22, N = 3; min 3577470 / max 3586710)

TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, fewer is better)
  Linux 5.10.4: 3512430 (SE +/- 2619.34, N = 3; min 3508490 / max 3517390)
  Default Kernel: 3649700 (SE +/- 2124.36, N = 3; min 3646130 / max 3653480)
  Linux 5.11-rc1: 3947703 (SE +/- 912.82, N = 3; min 3946370 / max 3949450)
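The TensorFlow Lite results are average inference times in microseconds; converting the Inception V4 figures above to seconds per inference makes the scale easier to read:

```python
# Average inference times in microseconds, as reported above.
inception_v4_us = {"Linux 5.10.4": 3512430, "Default Kernel": 3649700, "Linux 5.11-rc1": 3947703}
for kernel, us in inception_v4_us.items():
    print(f"{kernel}: {us / 1e6:.2f} s per inference")
```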

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
  Linux 5.11-rc1: 29.35 (SE +/- 0.32, N = 3; min 29.03 / max 29.98)
  Linux 5.10.4: 29.49 (SE +/- 0.25, N = 3; min 28.99 / max 29.76)
  Default Kernel: 32.86 (SE +/- 0.30, N = 3; min 32.32 / max 33.34)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, Twofish-XTS 256b Decryption (MiB/s, more is better)
  Linux 5.11-rc1: 369.8 (SE +/- 0.24, N = 3; min 369.5 / max 370.3)
  Linux 5.10.4: 367.1 (SE +/- 1.64, N = 3; min 365.2 / max 370.4)
  Default Kernel: 331.7 (SE +/- 1.40, N = 3; min 329 / max 333.6)

Cryptsetup, Serpent-XTS 256b Decryption (MiB/s, more is better)
  Linux 5.11-rc1: 357.3 (SE +/- 1.32, N = 3; min 354.7 / max 358.7)
  Linux 5.10.4: 354.9 (SE +/- 1.70, N = 3; min 352.9 / max 358.3)
  Default Kernel: 320.6 (SE +/- 2.23, N = 3; min 318 / max 325)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Mobilenet Quant (Microseconds, fewer is better)
  Linux 5.10.4: 170472 (SE +/- 30.12, N = 3; min 170413 / max 170511)
  Default Kernel: 177091 (SE +/- 75.51, N = 3; min 176947 / max 177202)
  Linux 5.11-rc1: 189637 (SE +/- 424.39, N = 3; min 188790 / max 190104)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, Twofish-XTS 512b Encryption (MiB/s, more is better)
  Linux 5.11-rc1: 370.6 (SE +/- 0.25, N = 3; min 370.1 / max 370.9)
  Linux 5.10.4: 369.9 (SE +/- 1.00, N = 3; min 367.9 / max 371)
  Default Kernel: 333.3 (SE +/- 1.79, N = 3; min 330.6 / max 336.7)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Default Kernel: 11.38 (SE +/- 0.05, N = 3; min 11.31 / max 11.49)
  Linux 5.11-rc1: 12.42 (SE +/- 0.16, N = 15; min 11.37 / max 13.52)
  Linux 5.10.4: 12.64 (SE +/- 0.12, N = 15; min 11.65 / max 13.4)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, Serpent-XTS 512b Decryption (MiB/s, more is better)
  Linux 5.10.4: 356.7 (SE +/- 0.91, N = 3; min 355.6 / max 358.5)
  Linux 5.11-rc1: 355.3 (SE +/- 1.71, N = 3; min 352.4 / max 358.3)
  Default Kernel: 321.6 (SE +/- 2.22, N = 3; min 317.2 / max 324.3)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Mobilenet Float (Microseconds, fewer is better)
  Linux 5.10.4: 163162 (SE +/- 89.09, N = 3; min 162985 / max 163266)
  Default Kernel: 169941 (SE +/- 82.73, N = 3; min 169803 / max 170089)
  Linux 5.11-rc1: 180798 (SE +/- 51.97, N = 3; min 180695 / max 180863)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, Twofish-XTS 512b Decryption (MiB/s, more is better)
  Linux 5.11-rc1: 369.3 (SE +/- 0.61, N = 3; min 368.3 / max 370.4)
  Linux 5.10.4: 368.2 (SE +/- 0.35, N = 3; min 367.7 / max 368.9)
  Default Kernel: 334.4 (SE +/- 1.69, N = 3; min 332.1 / max 337.7)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time - Seconds, fewer is better)
  Linux 5.11-rc1: 22.16 (SE +/- 0.19, N = 3; min 21.91 / max 22.53)
  Linux 5.10.4: 22.50 (SE +/- 0.10, N = 3; min 22.32 / max 22.65)
  Default Kernel: 24.46 (SE +/- 0.26, N = 3; min 23.98 / max 24.86)
  (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: inception-v3 (ms, fewer is better)
  Linux 5.10.4: 69.87 (SE +/- 0.35, N = 3; min 69.17 / max 70.25)
  Linux 5.11-rc1: 69.90 (SE +/- 0.07, N = 3; min 69.77 / max 69.98)
  Default Kernel: 77.14 (SE +/- 0.46, N = 3; min 76.51 / max 78.04)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better)
  Linux 5.11-rc1: 2.568 (SE +/- 0.006, N = 3; min 2.56 / max 2.58)
  Linux 5.10.4: 2.575 (SE +/- 0.009, N = 3; min 2.56 / max 2.59)
  Default Kernel: 2.831 (SE +/- 0.034, N = 3; min 2.77 / max 2.89)
  (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16
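Given the 6000x4000 pixel (24-megapixel) input described above, the Quality 100 encode times can be restated as megapixels per second; a sketch using the reported means:

```python
# Quality 100 encode times in seconds for a 6000x4000 input, as reported above.
megapixels = 6000 * 4000 / 1e6
encode_seconds = {"Linux 5.11-rc1": 2.568, "Linux 5.10.4": 2.575, "Default Kernel": 2.831}
for kernel, secs in encode_seconds.items():
    print(f"{kernel}: {megapixels / secs:.2f} MP/s")
```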

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, Serpent-XTS 512b Encryption (MiB/s, more is better)
  Linux 5.10.4: 359.3 (SE +/- 1.57, N = 3; min 356.2 / max 361.1)
  Linux 5.11-rc1: 358.0 (SE +/- 2.00, N = 2; min 356 / max 360)
  Default Kernel: 326.1 (SE +/- 2.94, N = 3; min 321 / max 331.2)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: pathlib (Milliseconds, fewer is better)
  Linux 5.11-rc1: 22.7 (SE +/- 0.00, N = 3; min 22.7 / max 22.7)
  Linux 5.10.4: 22.8 (SE +/- 0.03, N = 3; min 22.7 / max 22.8)
  Default Kernel: 25.0 (SE +/- 0.06, N = 3; min 24.9 / max 25.1)

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18, Test: resize (Seconds, fewer is better)
  Default Kernel: 10.75 (SE +/- 0.09, N = 3; min 10.65 / max 10.94)
  Linux 5.10.4: 11.83 (SE +/- 0.09, N = 3; min 11.65 / max 11.97)
  Linux 5.11-rc1: 11.83 (SE +/- 0.04, N = 3; min 11.76 / max 11.89)

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9, Time To Compile (Seconds, fewer is better)
  Linux 5.11-rc1: 107.16 (SE +/- 0.13, N = 3; min 107 / max 107.43)
  Linux 5.10.4: 109.47 (SE +/- 0.03, N = 3; min 109.42 / max 109.51)
  Default Kernel: 117.96 (SE +/- 0.40, N = 3; min 117.29 / max 118.67)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, Twofish-XTS 256b Encryption (MiB/s, more is better)
  Linux 5.11-rc1: 366.2 (SE +/- 4.26, N = 3; min 357.9 / max 372)
  Linux 5.10.4: 364.8 (SE +/- 5.33, N = 3; min 354.3 / max 371.6)
  Default Kernel: 333.2 (SE +/- 1.62, N = 3; min 330.1 / max 335.6)

Cryptsetup, AES-XTS 512b Encryption (MiB/s, more is better)
  Linux 5.11-rc1: 1488.7 (SE +/- 0.68, N = 3; min 1487.4 / max 1489.7)
  Linux 5.10.4: 1485.4 (SE +/- 3.09, N = 3; min 1479.3 / max 1489.2)
  Default Kernel: 1356.0 (SE +/- 11.60, N = 3; min 1333.9 / max 1373.2)

Cryptsetup, PBKDF2-whirlpool (Iterations Per Second, more is better)
  Linux 5.11-rc1: 603787 (SE +/- 613.12, N = 3; min 602629 / max 604715)
  Linux 5.10.4: 603098 (SE +/- 1406.03, N = 3; min 600558 / max 605413)
  Default Kernel: 550533 (SE +/- 1020.69, N = 3; min 548992 / max 552463)

Cryptsetup, Serpent-XTS 256b Encryption (MiB/s, more is better)
  Linux 5.11-rc1: 356.5 (SE +/- 4.86, N = 3; min 346.8 / max 362)
  Linux 5.10.4: 356.4 (SE +/- 2.62, N = 3; min 351.5 / max 360.5)
  Default Kernel: 325.1 (SE +/- 1.68, N = 3; min 321.8 / max 327.1)
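The result viewer offers an overall geometric mean; as a sketch of how such a summary is computed, here is the geometric mean of the Linux 5.11-rc1 versus Default Kernel ratios for the four cryptsetup results above:

```python
import math

# Ratios of Linux 5.11-rc1 to Default Kernel, from the four results above
# (Twofish-XTS 256b Enc, AES-XTS 512b Enc, PBKDF2-whirlpool, Serpent-XTS 256b Enc).
ratios = [366.2 / 333.2, 1488.7 / 1356.0, 603787 / 550533, 356.5 / 325.1]
geo_mean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(f"Linux 5.11-rc1: ~{(geo_mean - 1) * 100:.1f}% faster overall")
```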

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: SET (Requests Per Second, more is better)
  Default Kernel: 1363650.75 (SE +/- 16871.12, N = 4; min 1317776 / max 1389066.62)
  Linux 5.11-rc1: 1297244.29 (SE +/- 2028.99, N = 3; min 1293826.62 / max 1300847.88)
  Linux 5.10.4: 1243969.70 (SE +/- 8535.72, N = 15; min 1169628 / max 1285593.88)
  (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, AES-XTS 256b Decryption (MiB/s, more is better)
  Linux 5.11-rc1: 1678.3 (SE +/- 1.79, N = 3; min 1675.1 / max 1681.3)
  Linux 5.10.4: 1668.2 (SE +/- 1.07, N = 3; min 1666.6 / max 1670.2)
  Default Kernel: 1531.1 (SE +/- 7.30, N = 3; min 1517.9 / max 1543.1)

Cryptsetup, AES-XTS 512b Decryption (MiB/s, more is better)
  Linux 5.10.4: 1488.7 (SE +/- 2.77, N = 3; min 1483.3 / max 1492.4)
  Linux 5.11-rc1: 1485.4 (SE +/- 3.76, N = 3; min 1479.4 / max 1492.3)
  Default Kernel: 1359.7 (SE +/- 9.75, N = 3; min 1342.3 / max 1376)

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, fewer is better)
  Linux 5.11-rc1: 22.09 (SE +/- 0.07, N = 3; min 21.95 / max 22.18)
  Linux 5.10.4: 22.11 (SE +/- 0.07, N = 3; min 21.98 / max 22.22)
  Default Kernel: 24.18 (SE +/- 0.17, N = 3; min 23.91 / max 24.5)
  (CC) gcc options: -O2 -pedantic -fvisibility=hidden

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: SqueezeNetV1.0 (ms, fewer is better)
  Linux 5.11-rc1: 11.60 (SE +/- 0.03, N = 3; min 11.55 / max 11.65)
  Linux 5.10.4: 11.66 (SE +/- 0.08, N = 3; min 11.54 / max 11.8)
  Default Kernel: 12.69 (SE +/- 0.03, N = 3; min 12.64 / max 12.74)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1, WAV To Opus Encode (Seconds, fewer is better)
  Linux 5.10.4: 8.721 (SE +/- 0.015, N = 5; min 8.7 / max 8.78)
  Linux 5.11-rc1: 8.827 (SE +/- 0.063, N = 12; min 8.71 / max 9.29)
  Default Kernel: 9.542 (SE +/- 0.037, N = 5; min 9.45 / max 9.64)
  (CXX) g++ options: -fvisibility=hidden -logg -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: json_loads (Milliseconds, fewer is better)
  Linux 5.11-rc1: 34.2 (SE +/- 0.03, N = 3; min 34.1 / max 34.2)
  Linux 5.10.4: 34.3 (SE +/- 0.07, N = 3; min 34.2 / max 34.4)
  Default Kernel: 37.4 (SE +/- 0.03, N = 3; min 37.3 / max 37.4)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, AES-XTS 256b Encryption (MiB/s, more is better)
  Linux 5.11-rc1: 1663.4 (SE +/- 9.69, N = 3; min 1644.5 / max 1676.6)
  Linux 5.10.4: 1658.9 (SE +/- 12.75, N = 3; min 1638.3 / max 1682.2)
  Default Kernel: 1521.6 (SE +/- 2.80, N = 3; min 1517.2 / max 1526.8)

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6, Computational Test: Dhrystone 2 (LPS, more is better)
  Linux 5.11-rc1: 36408424.2 (SE +/- 77245.63, N = 3; min 36313928.7 / max 36561519.1)
  Linux 5.10.4: 35909522.3 (SE +/- 207274.15, N = 3; min 35639526.9 / max 36316943.5)
  Default Kernel: 33310090.4 (SE +/- 91702.31, N = 3; min 33126709.9 / max 33404356.4)
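A quick way to judge whether the 5.11-rc1 gain over 5.10.4 above exceeds run-to-run noise is to compare the difference of means against the combined standard errors; a rough sketch, not a formal significance test:

```python
# Means and standard errors in LPS, as reported above (more is better).
mean_511, se_511 = 36408424.2, 77245.63
mean_5104, se_5104 = 35909522.3, 207274.15
diff = mean_511 - mean_5104
# True when the gap is larger than the sum of both standard errors.
print(diff > se_511 + se_5104)
```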

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100, WAV To MP3 (Seconds, fewer is better)
  Linux 5.10.4: 9.490 (SE +/- 0.005, N = 3; min 9.48 / max 9.5)
  Linux 5.11-rc1: 9.496 (SE +/- 0.014, N = 3; min 9.47 / max 9.51)
  Default Kernel: 10.370 (SE +/- 0.054, N = 3; min 10.29 / max 10.47)
  (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, more is better):
  Linux 5.11-rc1: 39.36 (SE ±0.08, N = 3; min 39.27 / max 39.51)
  Linux 5.10.4: 39.19 (SE ±0.02, N = 3; min 39.15 / max 39.22)
  Default Kernel: 36.03 (SE ±0.41, N = 3; min 35.29 / max 36.7)
  Compiler: (CC) gcc options: -O3
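The "Normalize Results" option in the View panel expresses each configuration's result as a ratio of the best one. A minimal sketch of that normalization for the level-9 compression numbers above:

```python
# Reported LZ4 1.9.3 level-9 compression speeds (MB/s, more is better)
results = {"Linux 5.11-rc1": 39.36, "Linux 5.10.4": 39.19, "Default Kernel": 36.03}

# Normalize against the fastest result, as the "Normalize Results" view does
best = max(results.values())
normalized = {name: round(mbps / best, 3) for name, mbps in results.items()}
print(normalized)  # the Default Kernel lands at 0.915x of Linux 5.11-rc1
```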

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, fewer is better):
  Linux 5.11-rc1: 14.85 (SE ±0.01, N = 5; min 14.83 / max 14.9)
  Linux 5.10.4: 14.87 (SE ±0.01, N = 5; min 14.85 / max 14.91)
  Default Kernel: 16.21 (SE ±0.10, N = 5; min 15.99 / max 16.44)
  Compiler: (CXX) g++ options: -rdynamic

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 - Resolution: 1280 x 1024 (Score, more is better):
  Linux 5.11-rc1: 6587
  Linux 5.10.4: 6542
  Default Kernel: 6038

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: mdbx (Seconds, fewer is better):
  Linux 5.11-rc1: 5.39
  Linux 5.10.4: 5.42
  Default Kernel: 5.88

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better):
  Linux 5.11-rc1: 282.82 (SE ±0.09, N = 3; runs 282.64-282.92; MIN 281.29 / MAX 287.16)
  Linux 5.10.4: 282.84 (SE ±0.18, N = 3; runs 282.49-283.11; MIN 281.31 / MAX 287.35)
  Default Kernel: 308.50 (SE ±1.02, N = 3; runs 307.02-310.46; MIN 297.18 / MAX 319.3)
  Compiler: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: ac (Seconds, fewer is better):
  Linux 5.10.4: 7.10
  Linux 5.11-rc1: 7.10
  Default Kernel: 7.74

Polyhedron Fortran Benchmarks - Benchmark: fatigue2 (Seconds, fewer is better):
  Linux 5.10.4: 58.15
  Linux 5.11-rc1: 58.15
  Default Kernel: 63.32

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, fewer is better):
  Linux 5.11-rc1: 147 (SE ±0.33, N = 3; min 146 / max 147)
  Linux 5.10.4: 151
  Default Kernel: 160 (SE ±0.67, N = 3; min 159 / max 161)

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better):
  Linux 5.10.4: 517024 (SE ±700.71, N = 3; min 515722 / max 518124)
  Linux 5.11-rc1: 513740 (SE ±1896.86, N = 3; min 511165 / max 517440)
  Default Kernel: 475137 (SE ±1118.13, N = 3; min 472917 / max 476482)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, fewer is better):
  Linux 5.10.4: 6.402 (SE ±0.054, N = 3; runs 6.3-6.48; MIN 6.26 / MAX 15.65)
  Linux 5.11-rc1: 6.429 (SE ±0.017, N = 3; runs 6.4-6.46; MIN 6.35 / MAX 8.72)
  Default Kernel: 6.965 (SE ±0.024, N = 3; runs 6.92-7; MIN 6.27 / MAX 22.38)
  Compiler: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6 - WAV To APE (Seconds, fewer is better):
  Linux 5.11-rc1: 15.82 (SE ±0.02, N = 5; min 15.78 / max 15.91)
  Linux 5.10.4: 15.84 (SE ±0.04, N = 5; min 15.78 / max 15.96)
  Default Kernel: 17.21 (SE ±0.07, N = 5; min 17.05 / max 17.43)
  Compiler: (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: induct2 (Seconds, fewer is better):
  Linux 5.11-rc1: 23.13
  Linux 5.10.4: 23.15
  Default Kernel: 25.16

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Linux 5.11-rc1: 2.70498 (SE ±0.03607, N = 3; runs 2.65-2.77; MIN 2.43)
  Linux 5.10.4: 2.87865 (SE ±0.02712, N = 3; runs 2.83-2.93; MIN 2.53)
  Default Kernel: 2.94203 (SE ±0.01399, N = 3; runs 2.92-2.96; MIN 2.51)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: protein (Seconds, fewer is better):
  Linux 5.10.4: 15.65
  Linux 5.11-rc1: 15.65
  Default Kernel: 17.02

Polyhedron Fortran Benchmarks - Benchmark: doduc (Seconds, fewer is better):
  Linux 5.11-rc1: 8.59
  Linux 5.10.4: 8.61
  Default Kernel: 9.34

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time in Seconds, fewer is better):
  Linux 5.11-rc1: 1.632 (SE ±0.001, N = 3; min 1.63 / max 1.63)
  Linux 5.10.4: 1.639 (SE ±0.011, N = 3; min 1.63 / max 1.66)
  Default Kernel: 1.774 (SE ±0.008, N = 3; min 1.76 / max 1.79)
  Compiler: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: mp_prop_design (Seconds, fewer is better):
  Linux 5.10.4: 68.88
  Linux 5.11-rc1: 68.93
  Default Kernel: 74.87

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, fewer is better):
  Linux 5.11-rc1: 657
  Linux 5.10.4: 661 (SE ±1.73, N = 3; min 658 / max 664)
  Default Kernel: 714 (SE ±1.20, N = 3; min 712 / max 716)

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, fewer is better):
  Linux 5.10.4: 151
  Linux 5.11-rc1: 151
  Default Kernel: 164

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: air (Seconds, fewer is better):
  Linux 5.11-rc1: 1.64
  Linux 5.10.4: 1.66
  Default Kernel: 1.78

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, fewer is better):
  Linux 5.11-rc1: 142
  Linux 5.10.4: 143 (SE ±0.33, N = 3; min 142 / max 143)
  Default Kernel: 154

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, fewer is better):
  Linux 5.10.4: 215
  Linux 5.11-rc1: 215
  Default Kernel: 233 (SE ±0.33, N = 3; min 232 / max 233)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, more is better):
  Linux 5.11-rc1: 40.03 (SE ±0.01, N = 3; min 40.01 / max 40.05)
  Linux 5.10.4: 40.01 (SE ±0.15, N = 3; min 39.8 / max 40.29)
  Default Kernel: 36.94 (SE ±0.26, N = 3; min 36.43 / max 37.27)
  Compiler: (CC) gcc options: -O3

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: test_fpu2 (Seconds, fewer is better):
  Linux 5.10.4: 34.24
  Linux 5.11-rc1: 34.31
  Default Kernel: 37.10

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, more is better):
  Linux 5.11-rc1: 0.39 (SE ±0.00, N = 3)
  Linux 5.10.4: 0.39 (SE ±0.00, N = 3)
  Default Kernel: 0.36 (SE ±0.00, N = 3)
  Compiler: (CXX) g++ options: -O3 -pthread
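simdjson's GB/s metric is simply bytes parsed divided by wall time. Python's stdlib json is far slower than simdjson's SIMD parser, but the same throughput calculation can be sketched with it (the payload below is hypothetical, not the kostya.json file the real test uses):

```python
import json
import time

# Build a hypothetical coordinate-style JSON payload, then time parsing it
doc = json.dumps({"coordinates": [
    {"x": i * 0.5, "y": i * 0.25, "z": i * 0.125} for i in range(10000)
]})
data = doc.encode()

start = time.perf_counter()
json.loads(data)
elapsed = time.perf_counter() - start

# Throughput = bytes parsed / seconds, reported in GB/s like the graphs above
gbps = len(data) / elapsed / 1e9
print(f"{gbps:.3f} GB/s")
```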

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, fewer is better):
  Linux 5.11-rc1: 157 (SE ±0.67, N = 3; min 156 / max 158)
  Linux 5.10.4: 159
  Default Kernel: 170 (SE ±0.67, N = 3; min 169 / max 171)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better):
  Linux 5.10.4: 77.52 (SE ±0.20, N = 3; min 77.12 / max 77.73)
  Linux 5.11-rc1: 78.39 (SE ±0.35, N = 3; min 77.8 / max 79.02)
  Default Kernel: 83.93 (SE ±0.66, N = 3; min 83.13 / max 85.23)
  Compiler: (CC) gcc options: -O2 -ldl -lz -lpthread
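speedtest1 itself is a C program bundled with SQLite; a loose stdlib-only analogue of the kind of work it times (bulk inserts plus a query against a fresh database) might look like:

```python
import sqlite3
import time

# Minimal analogue of a speedtest1-style run, not the actual benchmark:
# time a batch of inserts and an aggregate query on an in-memory database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (a INTEGER, b TEXT)")

start = time.perf_counter()
con.executemany("INSERT INTO t1 VALUES (?, ?)",
                ((i, f"row-{i}") for i in range(100000)))
con.commit()
total, = con.execute("SELECT count(*) FROM t1").fetchone()
elapsed = time.perf_counter() - start

print(total, f"{elapsed:.3f}s")
```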

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, fewer is better):
  Linux 5.10.4: 10.71 (SE ±0.03, N = 5; min 10.6 / max 10.79)
  Linux 5.11-rc1: 10.93 (SE ±0.08, N = 5; min 10.77 / max 11.22)
  Default Kernel: 11.59 (SE ±0.06, N = 5; min 11.44 / max 11.81)
  Compiler: (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time in Seconds, fewer is better):
  Linux 5.11-rc1: 8.756 (SE ±0.008, N = 3; min 8.74 / max 8.77)
  Linux 5.10.4: 8.872 (SE ±0.094, N = 5; min 8.74 / max 9.24)
  Default Kernel: 9.478 (SE ±0.033, N = 3; min 9.42 / max 9.54)
  Compiler: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Linux 5.10.4: 21.05 (SE ±0.10, N = 3; runs 20.86-21.18; MIN 20.7)
  Linux 5.11-rc1: 21.09 (SE ±0.13, N = 3; runs 20.83-21.22; MIN 20.58)
  Default Kernel: 22.79 (SE ±0.02, N = 3; runs 22.75-22.82; MIN 21.5)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, more is better):
  Linux 5.10.4: 305670769.36 (SE ±80446.54, N = 3; min 305546362.9 / max 305821331.03)
  Linux 5.11-rc1: 305438957.42 (SE ±262122.57, N = 3; min 305139599.04 / max 305961347.15)
  Default Kernel: 282568835.30 (SE ±1313862.73, N = 3; min 279963270.52 / max 284166541.35)
  Compiler: (CC) gcc options: -O3 -march=native -lm

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: aermod (Seconds, fewer is better):
  Linux 5.11-rc1: 7.01
  Linux 5.10.4: 7.09
  Default Kernel: 7.58

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better):
  Linux 5.10.4: 15.81 (SE ±0.01, N = 3; runs 15.78-15.82; MIN 15.64 / MAX 24.91)
  Linux 5.11-rc1: 15.86 (SE ±0.03, N = 3; runs 15.83-15.92; MIN 15.71 / MAX 16.75)
  Default Kernel: 17.09 (SE ±0.06, N = 3; runs 16.97-17.16; MIN 15.85 / MAX 45.86)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - PBKDF2-sha512 (Iterations Per Second, more is better):
  Linux 5.11-rc1: 1404973 (SE ±1659.14, N = 3; min 1401839 / max 1407484)
  Linux 5.10.4: 1403727 (SE ±2865.42, N = 3; min 1398101 / max 1407484)
  Default Kernel: 1300115 (SE ±12175.94, N = 3; min 1280312 / max 1322290)
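cryptsetup's PBKDF2 benchmark measures how many key-derivation iterations the CPU sustains per second. A rough single-threaded analogue using Python's hashlib (the password, salt, and iteration count here are arbitrary, not cryptsetup's defaults):

```python
import hashlib
import time

# Time a fixed number of PBKDF2-HMAC-SHA512 iterations (hashlib's C backend)
iterations = 200_000
start = time.perf_counter()
hashlib.pbkdf2_hmac("sha512", b"passphrase", b"0123456789abcdef", iterations)
elapsed = time.perf_counter() - start

# Report iterations per second, the same unit as the graph above
print(f"{iterations / elapsed:,.0f} iterations/sec")
```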

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, fewer is better):
  Linux 5.11-rc1: 646
  Linux 5.10.4: 651 (SE ±0.67, N = 3; min 650 / max 652)
  Default Kernel: 697

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: rnflow (Seconds, fewer is better):
  Linux 5.11-rc1: 15.55
  Linux 5.10.4: 15.65
  Default Kernel: 16.77

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SADD (Requests Per Second, more is better):
  Default Kernel: 1540737.67 (SE ±21666.11, N = 3; min 1499298.38 / max 1572427.75)
  Linux 5.11-rc1: 1465965.54 (SE ±11716.13, N = 3; min 1443093.75 / max 1481813.25)
  Linux 5.10.4: 1429878.04 (SE ±19698.88, N = 3; min 1404494.38 / max 1468663.75)
  Compiler: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, fewer is better):
  Linux 5.11-rc1: 328
  Linux 5.10.4: 336
  Default Kernel: 353 (SE ±0.33, N = 3; min 353 / max 354)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 0 (Seconds, fewer is better):
  Linux 5.11-rc1: 9.717 (SE ±0.040, N = 3; min 9.65 / max 9.79)
  Linux 5.10.4: 9.860 (SE ±0.010, N = 3; min 9.84 / max 9.87)
  Default Kernel: 10.451 (SE ±0.048, N = 3; min 10.36 / max 10.52)
  Compiler: (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: alexnet (ms, fewer is better):
  Linux 5.10.4: 15.78 (SE ±0.01, N = 15; runs 15.7-15.88; MIN 15.63 / MAX 27.03)
  Linux 5.11-rc1: 15.87 (SE ±0.03, N = 3; runs 15.84-15.93; MIN 15.7 / MAX 18.04)
  Default Kernel: 16.96 (SE ±0.10, N = 3; runs 16.8-17.15; MIN 15.86 / MAX 33.88)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Server Rack - Acceleration: CPU-only (Seconds, fewer is better):
  Default Kernel: 0.218 (SE ±0.002, N = 3; min 0.22 / max 0.22)
  Linux 5.10.4: 0.233 (SE ±0.002, N = 3; min 0.23 / max 0.24)
  Linux 5.11-rc1: 0.234 (SE ±0.001, N = 3; min 0.23 / max 0.24)

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 - Resolution: 1920 x 1080 (Score, more is better):
  Linux 5.11-rc1: 4755
  Linux 5.10.4: 4722
  Default Kernel: 4432

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, fewer is better):
  Linux 5.11-rc1: 75.5 (SE ±0.07, N = 3; min 75.4 / max 75.6)
  Linux 5.10.4: 75.9 (SE ±0.09, N = 3; min 75.8 / max 76.1)
  Default Kernel: 80.9 (SE ±0.35, N = 3; min 80.2 / max 81.4)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, fewer is better):
  Linux 5.11-rc1: 66.98 (SE ±0.14, N = 3; runs 66.72-67.2; MIN 66.5 / MAX 78.24)
  Linux 5.10.4: 67.18 (SE ±0.30, N = 3; runs 66.83-67.78; MIN 66.68 / MAX 78.44)
  Default Kernel: 71.66 (SE ±0.91, N = 3; runs 70.18-73.32; MIN 63.38 / MAX 108.99)
  Compiler: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, more is better):
  Linux 5.11-rc1: 262.75 (SE ±0.21, N = 3; min 262.35 / max 263.05)
  Linux 5.10.4: 262.15 (SE ±0.10, N = 3; min 261.96 / max 262.32)
  Default Kernel: 246.26 (SE ±2.11, N = 3; min 242.24 / max 249.37)

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, more is better):
  Linux 5.10.4: 31684.74 (SE ±40.89, N = 3; min 31613.38 / max 31755.02)
  Linux 5.11-rc1: 31043.73 (SE ±1.43, N = 3; min 31041 / max 31045.84)
  Default Kernel: 29782.88 (SE ±35.57, N = 3; min 29742.09 / max 29853.75)
  Compiler: (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  Linux 5.11-rc1: 4470.71 (SE ±9.99, N = 3; runs 4452.75-4487.28; MIN 4441.33)
  Linux 5.10.4: 4488.10 (SE ±5.53, N = 3; runs 4477.08-4494.37; MIN 4464.06)
  Default Kernel: 4749.06 (SE ±2.03, N = 3; runs 4745.34-4752.33; MIN 4699.81)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, fewer is better):
  Linux 5.10.4: 244504 (SE ±290.99, N = 3; min 244028 / max 245032)
  Default Kernel: 251499 (SE ±272.99, N = 3; min 251144 / max 252036)
  Linux 5.11-rc1: 259645 (SE ±354.62, N = 3; min 259246 / max 260352)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 8 (Seconds, fewer is better):
  Linux 5.11-rc1: 6.639 (SE ±0.006, N = 3; min 6.63 / max 6.65)
  Linux 5.10.4: 6.700 (SE ±0.018, N = 3; min 6.67 / max 6.73)
  Default Kernel: 7.048 (SE ±0.031, N = 3; min 6.99 / max 7.1)
  Compiler: (CXX) g++ options: -O3 -fPIC

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better):
  Linux 5.10.4: 305.64 (SE ±0.20, N = 3; runs 305.39-306.03; MIN 291.26 / MAX 314.5)
  Linux 5.11-rc1: 310.42 (SE ±0.45, N = 3; runs 309.53-310.87; MIN 290.92 / MAX 337.97)
  Default Kernel: 324.27 (SE ±0.50, N = 3; runs 323.56-325.23; MIN 310.87 / MAX 349.12)
  Compiler: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, more is better):
  Linux 5.11-rc1: 0.35 (SE ±0.00, N = 3)
  Linux 5.10.4: 0.35 (SE ±0.00, N = 3)
  Default Kernel: 0.33 (SE ±0.00, N = 3)
  Compiler: (CXX) g++ options: -O3 -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
Linux 5.11-rc1: 4468.08 (SE +/- 13.33, N = 3; min 4444.05 / max 4490.10)
Linux 5.10.4: 4482.27 (SE +/- 5.62, N = 3; min 4473.52 / max 4492.76)
Default Kernel: 4737.79 (SE +/- 8.86, N = 3; min 4720.16 / max 4748.22)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better)
Linux 5.10.4: 7.67 (SE +/- 0.03, N = 3; min 7.61 / max 7.72)
Linux 5.11-rc1: 7.62 (SE +/- 0.07, N = 3; min 7.49 / max 7.71)
Default Kernel: 7.24 (SE +/- 0.01, N = 3; min 7.23 / max 7.26)
Nodejs v12.18.2

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: ETC1S (Seconds, fewer is better)
Linux 5.11-rc1: 60.95 (SE +/- 0.22, N = 3; min 60.52 / max 61.26)
Linux 5.10.4: 61.00 (SE +/- 0.24, N = 3; min 60.69 / max 61.48)
Default Kernel: 64.56 (SE +/- 0.35, N = 3; min 64.00 / max 65.19)
(CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: auto-levels (Seconds, fewer is better)
Default Kernel: 16.03 (SE +/- 0.01, N = 3; min 16.01 / max 16.05)
Linux 5.10.4: 16.90 (SE +/- 0.07, N = 3; min 16.78 / max 17.01)
Linux 5.11-rc1: 16.98 (SE +/- 0.12, N = 3; min 16.79 / max 17.20)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
Linux 5.11-rc1: 8540.48 (SE +/- 17.59, N = 3; min 8518.57 / max 8575.28)
Linux 5.10.4: 8564.54 (SE +/- 14.45, N = 3; min 8544.13 / max 8592.46)
Default Kernel: 9027.63 (SE +/- 9.60, N = 3; min 9009.65 / max 9042.44)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
Linux 5.11-rc1: 4486.34 (SE +/- 8.44, N = 3; min 4469.46 / max 4494.95)
Linux 5.10.4: 4497.06 (SE +/- 7.62, N = 3; min 4482.06 / max 4506.90)
Default Kernel: 4742.11 (SE +/- 20.61, N = 3; min 4702.32 / max 4771.32)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s, more is better)
Linux 5.11-rc1: 8592.3 (SE +/- 40.00, N = 3; min 8528.5 / max 8666.0)
Linux 5.10.4: 8560.1 (SE +/- 9.29, N = 3; min 8550.1 / max 8578.7)
Default Kernel: 8129.2 (SE +/- 55.92, N = 3; min 8028.9 / max 8222.2)
(CC) gcc options: -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
Linux 5.11-rc1: 8545.33 (SE +/- 17.36, N = 3; min 8519.22 / max 8578.21)
Linux 5.10.4: 8575.95 (SE +/- 15.87, N = 3; min 8544.37 / max 8594.51)
Default Kernel: 9028.50 (SE +/- 23.24, N = 3; min 8998.40 / max 9074.22)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Masskrug - Acceleration: CPU-only (Seconds, fewer is better)
Linux 5.10.4: 6.392 (SE +/- 0.004, N = 3; min 6.38 / max 6.40)
Linux 5.11-rc1: 6.397 (SE +/- 0.006, N = 3; min 6.39 / max 6.41)
Default Kernel: 6.745 (SE +/- 0.017, N = 3; min 6.72 / max 6.78)

Darktable 3.2.1 - Test: Server Room - Acceleration: CPU-only (Seconds, fewer is better)
Linux 5.10.4: 4.940 (SE +/- 0.006, N = 3; min 4.93 / max 4.95)
Linux 5.11-rc1: 4.959 (SE +/- 0.006, N = 3; min 4.95 / max 4.97)
Default Kernel: 5.210 (SE +/- 0.017, N = 3; min 5.19 / max 5.24)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 10 (Seconds, fewer is better)
Linux 5.11-rc1: 6.284 (SE +/- 0.005, N = 3; min 6.28 / max 6.29)
Linux 5.10.4: 6.312 (SE +/- 0.017, N = 3; min 6.28 / max 6.34)
Default Kernel: 6.622 (SE +/- 0.026, N = 3; min 6.60 / max 6.67)
(CXX) g++ options: -O3 -fPIC

libavif avifenc 0.7.3 - Encoder Speed: 0 (Seconds, fewer is better)
Linux 5.11-rc1: 124.54 (SE +/- 0.15, N = 3; min 124.29 / max 124.82)
Linux 5.10.4: 126.03 (SE +/- 0.38, N = 3; min 125.48 / max 126.75)
Default Kernel: 131.14 (SE +/- 1.72, N = 3; min 128.97 / max 134.54)
(CXX) g++ options: -O3 -fPIC

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, fewer is better)
Linux 5.11-rc1: 12.94 (SE +/- 0.02, N = 3; min 12.89 / max 12.97)
Linux 5.10.4: 12.96 (SE +/- 0.04, N = 3; min 12.89 / max 13.04)
Default Kernel: 13.62 (SE +/- 0.14, N = 3; min 13.35 / max 13.77)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, more is better)
Linux 5.11-rc1: 7572.23 (SE +/- 46.06, N = 3; min 7480.69 / max 7626.85)
Linux 5.10.4: 7525.63 (SE +/- 92.18, N = 3; min 7363.70 / max 7682.93)
Default Kernel: 7192.29 (SE +/- 73.87, N = 3; min 7087.40 / max 7334.85)
(CC) gcc options: -O3

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Fast (Seconds, fewer is better)
Linux 5.10.4: 7.35 (SE +/- 0.00, N = 3; min 7.34 / max 7.35)
Linux 5.11-rc1: 7.42 (SE +/- 0.06, N = 8; min 7.33 / max 7.87)
Default Kernel: 7.72 (SE +/- 0.01, N = 3; min 7.71 / max 7.73)
(CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, fewer is better)
Linux 5.11-rc1: 439 (SE +/- 0.67, N = 3; min 438 / max 440)
Linux 5.10.4: 444
Default Kernel: 461 (SE +/- 0.33, N = 3; min 460 / max 461)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
Linux 5.11-rc1: 8583.12 (SE +/- 10.93, N = 3; min 8570.25 / max 8604.85)
Linux 5.10.4: 8588.64 (SE +/- 9.21, N = 3; min 8576.93 / max 8606.82)
Default Kernel: 9002.53 (SE +/- 4.57, N = 3; min 8993.40 / max 9007.34)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
Default Kernel: 9.34927 (SE +/- 0.06535, N = 3; min 9.24 / max 9.46)
Linux 5.11-rc1: 9.49354 (SE +/- 0.04049, N = 3; min 9.42 / max 9.56)
Linux 5.10.4: 9.80460 (SE +/- 0.12964, N = 3; min 9.65 / max 10.06)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, fewer is better)
Linux 5.11-rc1: 38.42 (SE +/- 0.38, N = 4; min 37.66 / max 39.18)
Default Kernel: 38.91 (SE +/- 0.24, N = 16; min 37.30 / max 40.49)
Linux 5.10.4: 40.27 (SE +/- 0.54, N = 20; min 37.72 / max 44.46)
(CC) gcc options: -O2 -std=c99

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, fewer is better)
Default Kernel: 14.56 (SE +/- 0.06, N = 3; min 14.47 / max 14.69)
Linux 5.10.4: 14.60 (SE +/- 0.10, N = 3; min 14.47 / max 14.79)
Linux 5.11-rc1: 15.25 (SE +/- 0.20, N = 3; min 15.04 / max 15.66)
(CC) gcc options: -std=c99 -O3 -lm -lpthread

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are a suite for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: gas_dyn2 (Seconds, fewer is better)
Linux 5.11-rc1: 59.08
Linux 5.10.4: 59.15
Default Kernel: 61.73

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12 - Total Time (Nodes Per Second, more is better)
Linux 5.11-rc1: 13897759 (SE +/- 38144.94, N = 3; min 13853639 / max 13973719)
Linux 5.10.4: 13559816 (SE +/- 168497.64, N = 3; min 13230986 / max 13788087)
Default Kernel: 13361457 (SE +/- 170465.83, N = 3; min 13101996 / max 13682723)
(CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

VKMark

VKMark is a collection of Vulkan tests/benchmarks. Learn more via the OpenBenchmarking.org test page.

VKMark 2020-05-21 - Resolution: 1280 x 1024 (VKMark Score, more is better)
Linux 5.11-rc1: 6426 (SE +/- 8.41, N = 3; min 6416 / max 6443)
Linux 5.10.4: 6350 (SE +/- 13.48, N = 3; min 6329 / max 6375)
Default Kernel: 6184 (SE +/- 3.28, N = 3; min 6179 / max 6190)
(CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.

librsvg - Operation: SVG Files To PNG (Seconds, fewer is better)
Default Kernel: 34.32 (SE +/- 0.15, N = 3; min 34.06 / max 34.57)
Linux 5.11-rc1: 35.00 (SE +/- 0.03, N = 3; min 34.97 / max 35.05)
Linux 5.10.4: 35.65 (SE +/- 0.31, N = 3; min 35.20 / max 36.25)
rsvg-convert version 2.50.1

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel package. Learn more via the OpenBenchmarking.org test page.

Unpacking The Linux Kernel - linux-4.15.tar.xz (Seconds, fewer is better)
Linux 5.10.4: 7.131 (SE +/- 0.040, N = 20; min 6.96 / max 7.81)
Linux 5.11-rc1: 7.160 (SE +/- 0.037, N = 4; min 7.10 / max 7.27)
Default Kernel: 7.403 (SE +/- 0.047, N = 4; min 7.28 / max 7.51)
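The timed operation above amounts to extracting a .tar.xz archive. A minimal sketch with Python's standard tarfile module, building a small throwaway archive on the fly since the actual linux-4.15.tar.xz is not assumed to be present:

```python
import tarfile
import tempfile
import time
from pathlib import Path

# Build a small .tar.xz archive as a stand-in for linux-4.15.tar.xz.
work = Path(tempfile.mkdtemp())
src = work / "src"
src.mkdir()
for i in range(50):
    (src / f"file{i}.c").write_text(f"/* file {i} */\n" * 100)
archive = work / "sample.tar.xz"
with tarfile.open(archive, "w:xz") as tar:
    tar.add(src, arcname="sample")

# Time the extraction, which is the operation this test measures.
dest = work / "out"
start = time.perf_counter()
with tarfile.open(archive, "r:xz") as tar:
    tar.extractall(dest)
elapsed = time.perf_counter() - start
print(f"extracted in {elapsed:.3f}s")
```

The wall-clock timings reported above are dominated by single-threaded xz decompression plus filesystem writes, which is why kernel scheduling and I/O changes show up in this test.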

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: Software CPU - Resolution: 1920 x 1080 (Frames Per Second, more is better)
Linux 5.11-rc1: 88.6 (SE +/- 0.29, N = 3; min 88.1 / max 89.1)
Linux 5.10.4: 88.2 (SE +/- 0.27, N = 3; min 87.7 / max 88.6)
Default Kernel: 85.4 (SE +/- 0.52, N = 3; min 84.4 / max 86.1)
(CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

Appleseed

Appleseed is an open-source production renderer with a physically-based global illumination rendering engine, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, fewer is better)
Linux 5.11-rc1: 311.49
Linux 5.10.4: 316.76
Default Kernel: 322.77

VKMark

VKMark is a collection of Vulkan tests/benchmarks. Learn more via the OpenBenchmarking.org test page.

VKMark 2020-05-21 - Resolution: 1920 x 1080 (VKMark Score, more is better)
Linux 5.11-rc1: 4827 (SE +/- 3.61, N = 3; min 4820 / max 4832)
Linux 5.10.4: 4732 (SE +/- 3.71, N = 3; min 4727 / max 4739)
Default Kernel: 4664 (SE +/- 4.93, N = 3; min 4656 / max 4673)
(CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, more is better)
Linux 5.10.4: 689
Linux 5.11-rc1: 685
Default Kernel: 667

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, fewer is better)
Linux 5.11-rc1: 46.09 (SE +/- 0.12, N = 3; min 45.91 / max 46.32)
Linux 5.10.4: 46.21 (SE +/- 0.08, N = 3; min 46.08 / max 46.34)
Default Kernel: 47.57 (SE +/- 0.30, N = 3; min 47.15 / max 48.16)
(CC) gcc options: -pthread -fvisibility=hidden -O2

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Boat - Acceleration: CPU-only (Seconds, fewer is better)
Linux 5.10.4: 13.32 (SE +/- 0.01, N = 3; min 13.30 / max 13.35)
Linux 5.11-rc1: 13.34 (SE +/- 0.03, N = 3; min 13.30 / max 13.39)
Default Kernel: 13.75 (SE +/- 0.02, N = 3; min 13.71 / max 13.79)

VkFFT

VkFFT is a Fast Fourier Transform (FFT) Library that is GPU accelerated by means of the Vulkan API. The VkFFT benchmark runs FFT performance differences of many different sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

VkFFT 1.1.1 (Benchmark Score, more is better)
Default Kernel: 10274 (SE +/- 23.03, N = 3; min 10228 / max 10299)
Linux 5.10.4: 9972 (SE +/- 67.59, N = 3; min 9844 / max 10074)
Linux 5.11-rc1: 9967 (SE +/- 46.59, N = 3; min 9907 / max 10059)
(CXX) g++ options: -O3 -pthread

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System. The Sunflow Rendering System is an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

Sunflow Rendering System 0.07.2 - Global Illumination + Image Synthesis (Seconds, fewer is better)
Linux 5.10.4: 1.537 (SE +/- 0.013, N = 3; min 1.52 / max 1.56)
Default Kernel: 1.581 (SE +/- 0.014, N = 8; min 1.53 / max 1.64)
Linux 5.11-rc1: 1.582 (SE +/- 0.022, N = 3; min 1.55 / max 1.62)

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, more is better)
Linux 5.10.4: 0.539 (SE +/- 0.001, N = 3; min 0.54 / max 0.54)
Linux 5.11-rc1: 0.535 (SE +/- 0.001, N = 3; min 0.53 / max 0.54)
Default Kernel: 0.524 (SE +/- 0.005, N = 3; min 0.52 / max 0.53)
(CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds, fewer is better)
Linux 5.10.4: 25.74 (SE +/- 0.14, N = 20; min 25.28 / max 28.20)
Linux 5.11-rc1: 25.99 (SE +/- 0.16, N = 20; min 25.45 / max 28.75)
Default Kernel: 26.47 (SE +/- 0.22, N = 8; min 26.16 / max 28.00)

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, more is better)
Linux 5.10.4: 19840743 (SE +/- 76230.74, N = 3; min 19733463 / max 19988200)
Linux 5.11-rc1: 19661267 (SE +/- 97740.42, N = 3; min 19553676 / max 19856405)
Default Kernel: 19306833 (SE +/- 237774.54, N = 3; min 18877888 / max 19699111)

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and a Linux networking stack stress test. The test runs on the local host but does require root permissions to run. It works by creating three namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices. Those two WireGuard devices send traffic through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time, a pretty CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, fewer is better)
Default Kernel: 300.19 (SE +/- 1.15, N = 3; min 298.60 / max 302.43)
Linux 5.11-rc1: 308.38 (SE +/- 0.88, N = 3; min 306.74 / max 309.78)
Linux 5.10.4: 308.47 (SE +/- 3.52, N = 3; min 302.31 / max 314.50)
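The namespace topology this test describes can be sketched with standard iproute2 commands. The following is an illustrative dry run that only prints the command lines rather than executing them, since device creation requires root; the exact device names (wg1, wg2) and command order are assumptions of this sketch, not the benchmark's actual script:

```python
import shlex

# Commands mirroring the described topology: ns0 with a loopback device,
# ns1/ns2 each given a WireGuard device whose traffic crosses ns0.
commands = [
    "ip netns add ns0",
    "ip netns add ns1",
    "ip netns add ns2",
    "ip -n ns0 link set lo up",
    "ip link add wg1 type wireguard",
    "ip link add wg2 type wireguard",
    "ip link set wg1 netns ns1",
    "ip link set wg2 netns ns2",
]

for cmd in commands:
    # Dry run: tokenize and print instead of passing to subprocess.run.
    print(shlex.split(cmd))
```

In the real test, keys and peer endpoints are then configured with wg(8) and traffic pushed through the tunnel, so every packet is encrypted in one namespace and decrypted in the other.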

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 2 (Seconds, fewer is better)
Linux 5.10.4: 74.66 (SE +/- 0.50, N = 3; min 73.84 / max 75.57)
Linux 5.11-rc1: 75.06 (SE +/- 0.18, N = 3; min 74.75 / max 75.37)
Default Kernel: 76.65 (SE +/- 0.08, N = 3; min 76.52 / max 76.79)
(CXX) g++ options: -O3 -fPIC

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: OpenGL 1.x - Resolution: 1920 x 1080 (Frames Per Second, more is better)
Linux 5.10.4: 641.4 (SE +/- 8.16, N = 3; min 625.1 / max 650.2)
Default Kernel: 631.8 (SE +/- 3.40, N = 3; min 627.0 / max 638.4)
Linux 5.11-rc1: 625.3 (SE +/- 7.06, N = 3; min 614.2 / max 638.4)
(CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
Linux 5.10.4: 6.85975 (SE +/- 0.03024, N = 3; min 6.83 / max 6.92)
Linux 5.11-rc1: 6.92207 (SE +/- 0.04602, N = 3; min 6.85 / max 7.01)
Default Kernel: 7.03197 (SE +/- 0.00187, N = 3; min 7.03 / max 7.04)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better)
Linux 5.10.4: 8.1739 (SE +/- 0.0788, N = 3; min 8.06 / max 8.33)
Linux 5.11-rc1: 8.1022 (SE +/- 0.0259, N = 3; min 8.06 / max 8.15)
Default Kernel: 7.9759 (SE +/- 0.0448, N = 3; min 7.89 / max 8.05)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
Default Kernel: 17.00 (SE +/- 0.00, N = 3; min 17.00 / max 17.00)
Linux 5.10.4: 17.38 (SE +/- 0.07, N = 3; min 17.30 / max 17.51)
Linux 5.11-rc1: 17.41 (SE +/- 0.01, N = 3; min 17.39 / max 17.42)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, More Is Better)
  Linux 5.10.4:   1277
  Linux 5.11-rc1: 1269
  Default Kernel: 1248

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: unsharp-mask (Seconds, Fewer Is Better)
  Default Kernel: 19.02 (SE +/- 0.01, N = 3; Min: 19.01 / Avg: 19.02 / Max: 19.03)
  Linux 5.10.4:   19.19 (SE +/- 0.08, N = 3; Min: 19.07 / Avg: 19.19 / Max: 19.33)
  Linux 5.11-rc1: 19.45 (SE +/- 0.03, N = 3; Min: 19.4 / Avg: 19.45 / Max: 19.5)

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, Fewer Is Better)
  Linux 5.10.4:   83.81 (SE +/- 0.12, N = 3; Min: 83.61 / Avg: 83.81 / Max: 84.03)
  Linux 5.11-rc1: 83.88 (SE +/- 0.17, N = 3; Min: 83.71 / Avg: 83.88 / Max: 84.21)
  Default Kernel: 85.71 (SE +/- 0.07, N = 3; Min: 85.61 / Avg: 85.71 / Max: 85.86)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: channel2 (Seconds, Fewer Is Better)
  Linux 5.10.4:   58.21
  Linux 5.11-rc1: 58.21
  Default Kernel: 59.51

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better)
  Linux 5.11-rc1: 23.0 (SE +/- 0.03, N = 3; Min: 22.9 / Avg: 22.97 / Max: 23)
  Linux 5.10.4:   22.9 (SE +/- 0.00, N = 3; Min: 22.9 / Avg: 22.9 / Max: 22.9)
  Default Kernel: 22.5 (SE +/- 0.00, N = 3; Min: 22.5 / Avg: 22.5 / Max: 22.5)
  1. (CC) gcc options: -O3 -pthread -lz
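The MB/s figures for these compression tests divide the uncompressed input size by the wall-clock compression time. A minimal sketch of that measurement, using zlib from the Python standard library as a stand-in (the actual test compresses an Ubuntu ISO with zstd, not zlib):

```python
# Measure compression throughput in MB/s: bytes in / seconds elapsed.
# zlib is used here only as a stand-in codec for illustration.
import time
import zlib


def compress_throughput_mb_s(data: bytes, level: int) -> float:
    start = time.perf_counter()
    zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return (len(data) / 1e6) / elapsed


sample = b"benchmark payload " * 500_000  # ~9 MB of compressible data
print(f"{compress_throughput_mb_s(sample, 6):.1f} MB/s")
```

Absolute numbers from this sketch are not comparable to the zstd results above; the point is only the unit's definition.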

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  Linux 5.10.4:   232.46 (SE +/- 0.81, N = 3; Min: 231.5 / Avg: 232.46 / Max: 234.07)
  Linux 5.11-rc1: 236.57 (SE +/- 1.16, N = 3; Min: 234.42 / Avg: 236.57 / Max: 238.39)
  Default Kernel: 237.53 (SE +/- 2.78, N = 4; Min: 234.75 / Avg: 237.53 / Max: 245.86)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 (Seconds, Fewer Is Better)
  Linux 5.10.4:   43.09 (SE +/- 0.01, N = 3; Min: 43.08 / Avg: 43.09 / Max: 43.12)
  Linux 5.11-rc1: 43.28 (SE +/- 0.14, N = 3; Min: 43.02 / Avg: 43.28 / Max: 43.51)
  Default Kernel: 43.98 (SE +/- 0.11, N = 3; Min: 43.78 / Avg: 43.98 / Max: 44.15)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better)
  Linux 5.10.4:   77.08 (SE +/- 0.04, N = 3; Min: 77.02 / Avg: 77.08 / Max: 77.15)
  Linux 5.11-rc1: 78.29 (SE +/- 0.08, N = 3; Min: 78.18 / Avg: 78.29 / Max: 78.46)
  Default Kernel: 78.65 (SE +/- 0.07, N = 3; Min: 78.51 / Avg: 78.65 / Max: 78.76)
  1. RawTherapee, version 5.8, command line.

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: rotate (Seconds, Fewer Is Better)
  Linux 5.11-rc1: 15.10 (SE +/- 0.06, N = 3; Min: 14.99 / Avg: 15.1 / Max: 15.18)
  Linux 5.10.4:   15.28 (SE +/- 0.08, N = 3; Min: 15.12 / Avg: 15.28 / Max: 15.39)
  Default Kernel: 15.41 (SE +/- 0.05, N = 3; Min: 15.3 / Avg: 15.41 / Max: 15.47)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 1 (Frames Per Second, More Is Better)
  Linux 5.10.4:   0.261 (SE +/- 0.001, N = 3; Min: 0.26 / Avg: 0.26 / Max: 0.26)
  Linux 5.11-rc1: 0.259 (SE +/- 0.001, N = 3; Min: 0.26 / Avg: 0.26 / Max: 0.26)
  Default Kernel: 0.256 (SE +/- 0.001, N = 3; Min: 0.26 / Avg: 0.26 / Max: 0.26)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: linpk (Seconds, Fewer Is Better)
  Linux 5.11-rc1: 4.72
  Linux 5.10.4:   4.73
  Default Kernel: 4.81

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 5 (Frames Per Second, More Is Better)
  Linux 5.10.4:   0.779 (SE +/- 0.005, N = 3; Min: 0.77 / Avg: 0.78 / Max: 0.79)
  Linux 5.11-rc1: 0.774 (SE +/- 0.001, N = 3; Min: 0.77 / Avg: 0.77 / Max: 0.78)
  Default Kernel: 0.765 (SE +/- 0.001, N = 3; Min: 0.76 / Avg: 0.77 / Max: 0.77)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better)
  Linux 5.11-rc1: 1.359 (SE +/- 0.001, N = 3; Min: 1.36 / Avg: 1.36 / Max: 1.36)
  Linux 5.10.4:   1.354 (SE +/- 0.003, N = 3; Min: 1.35 / Avg: 1.35 / Max: 1.36)
  Default Kernel: 1.335 (SE +/- 0.001, N = 3; Min: 1.33 / Avg: 1.33 / Max: 1.34)

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: OpenGL 3.x - Resolution: 1920 x 1080 (Frames Per Second, More Is Better)
  Linux 5.10.4:   960.5 (SE +/- 2.23, N = 3; Min: 956.1 / Avg: 960.47 / Max: 963.4)
  Linux 5.11-rc1: 947.3 (SE +/- 9.60, N = 3; Min: 936.3 / Avg: 947.27 / Max: 966.4)
  Default Kernel: 944.3 (SE +/- 7.08, N = 3; Min: 930.8 / Avg: 944.33 / Max: 954.7)
  1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile times both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Thorough (Seconds, Fewer Is Better)
  Linux 5.10.4:   36.58 (SE +/- 0.01, N = 3; Min: 36.56 / Avg: 36.58 / Max: 36.6)
  Default Kernel: 36.95 (SE +/- 0.01, N = 3; Min: 36.93 / Avg: 36.95 / Max: 36.97)
  Linux 5.11-rc1: 37.20 (SE +/- 0.00, N = 3; Min: 37.2 / Avg: 37.2 / Max: 37.2)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
  Linux 5.10.4:   8.5188 (SE +/- 0.0500, N = 3; Min: 8.43 / Avg: 8.52 / Max: 8.6; MIN: 8.4 / MAX: 8.7)
  Default Kernel: 8.3970 (SE +/- 0.0725, N = 3; Min: 8.29 / Avg: 8.4 / Max: 8.53; MIN: 8.23 / MAX: 8.61)
  Linux 5.11-rc1: 8.3775 (SE +/- 0.0404, N = 3; Min: 8.33 / Avg: 8.38 / Max: 8.46; MIN: 8.3 / MAX: 8.55)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile times both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, Fewer Is Better)
  Linux 5.10.4:   300.62 (SE +/- 0.05, N = 3; Min: 300.54 / Avg: 300.62 / Max: 300.71)
  Default Kernel: 303.53 (SE +/- 0.08, N = 3; Min: 303.43 / Avg: 303.53 / Max: 303.68)
  Linux 5.11-rc1: 305.64 (SE +/- 0.10, N = 3; Min: 305.48 / Avg: 305.64 / Max: 305.81)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
  Linux 5.11-rc1: 271886.25 (SE +/- 802.31, N = 3; Min: 270281.68 / Avg: 271886.25 / Max: 272700.16)
  Linux 5.10.4:   270279.69 (SE +/- 872.17, N = 3; Min: 269269.61 / Avg: 270279.69 / Max: 272016.32)
  Default Kernel: 267555.23 (SE +/- 1434.22, N = 3; Min: 265273.98 / Avg: 267555.23 / Max: 270201.81)
  1. (CC) gcc options: -O2 -lrt" -lrt
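CoreMark's score is the number of iterations of a fixed workload completed per second of wall-clock time. A minimal sketch of that kind of measurement, with a hypothetical stand-in workload (the real benchmark runs list-processing, matrix, and state-machine kernels written in C):

```python
# Iterations-per-second measurement: run a fixed workload repeatedly
# and divide the iteration count by the elapsed wall-clock time.
# The workload here is a placeholder, not the CoreMark kernels.
import time


def iterations_per_second(workload, iterations: int) -> float:
    start = time.perf_counter()
    for _ in range(iterations):
        workload()
    return iterations / (time.perf_counter() - start)


rate = iterations_per_second(lambda: sum(range(1000)), 2000)
print(f"{rate:.0f} iterations/sec")
```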

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: LPUSH (Requests Per Second, More Is Better)
  Default Kernel: 1065654.10 (SE +/- 9875.40, N = 15; Min: 1006229.38 / Avg: 1065654.1 / Max: 1117390)
  Linux 5.11-rc1: 1053853.02 (SE +/- 11649.30, N = 3; Min: 1037775.94 / Avg: 1053853.02 / Max: 1076495.12)
  Linux 5.10.4:   1049901.52 (SE +/- 14750.43, N = 3; Min: 1022560.31 / Avg: 1049901.52 / Max: 1073167.38)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile times both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Medium (Seconds, Fewer Is Better)
  Linux 5.10.4:   6.05 (SE +/- 0.01, N = 3; Min: 6.04 / Avg: 6.05 / Max: 6.06)
  Default Kernel: 6.11 (SE +/- 0.01, N = 3; Min: 6.1 / Avg: 6.11 / Max: 6.12)
  Linux 5.11-rc1: 6.14 (SE +/- 0.00, N = 3; Min: 6.13 / Avg: 6.14 / Max: 6.14)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing (Seconds, Fewer Is Better)
  Linux 5.10.4:   855.36 (SE +/- 0.82, N = 3; Min: 854.44 / Avg: 855.36 / Max: 856.98)
  Linux 5.11-rc1: 857.23 (SE +/- 1.15, N = 3; Min: 855.34 / Avg: 857.23 / Max: 859.31)
  Default Kernel: 867.81 (SE +/- 0.27, N = 3; Min: 867.3 / Avg: 867.81 / Max: 868.2)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 10 (Frames Per Second, More Is Better)
  Default Kernel: 2.349 (SE +/- 0.007, N = 3; Min: 2.34 / Avg: 2.35 / Max: 2.36)
  Linux 5.11-rc1: 2.347 (SE +/- 0.019, N = 3; Min: 2.32 / Avg: 2.35 / Max: 2.38)
  Linux 5.10.4:   2.316 (SE +/- 0.002, N = 3; Min: 2.31 / Avg: 2.32 / Max: 2.32)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better)
  Linux 5.10.4:   2.867 (SE +/- 0.003, N = 3; Min: 2.86 / Avg: 2.87 / Max: 2.87)
  Linux 5.11-rc1: 2.843 (SE +/- 0.028, N = 3; Min: 2.79 / Avg: 2.84 / Max: 2.87)
  Default Kernel: 2.828 (SE +/- 0.004, N = 3; Min: 2.82 / Avg: 2.83 / Max: 2.84)

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better)
  Default Kernel: 287.58
  Linux 5.11-rc1: 289.58
  Linux 5.10.4:   291.47

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
  Linux 5.10.4:   325.24 (SE +/- 0.38, N = 3; Min: 324.49 / Avg: 325.24 / Max: 325.77)
  Linux 5.11-rc1: 326.50 (SE +/- 0.76, N = 3; Min: 325.43 / Avg: 326.5 / Max: 327.96)
  Default Kernel: 329.39 (SE +/- 0.28, N = 3; Min: 328.84 / Avg: 329.39 / Max: 329.78)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
  Linux 5.11-rc1: 129.68 (SE +/- 0.86, N = 3; Min: 128.61 / Avg: 129.68 / Max: 131.38)
  Linux 5.10.4:   129.83 (SE +/- 0.92, N = 3; Min: 128.85 / Avg: 129.82 / Max: 131.67)
  Default Kernel: 131.32 (SE +/- 0.91, N = 3; Min: 130.25 / Avg: 131.32 / Max: 133.13)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 10.0 - Time To Compile (Seconds, Fewer Is Better)
  Linux 5.10.4:   1044.46 (SE +/- 0.83, N = 3; Min: 1043.16 / Avg: 1044.46 / Max: 1045.99)
  Linux 5.11-rc1: 1050.45 (SE +/- 3.21, N = 3; Min: 1044.74 / Avg: 1050.45 / Max: 1055.84)
  Default Kernel: 1057.63 (SE +/- 0.68, N = 3; Min: 1056.3 / Avg: 1057.63 / Max: 1058.59)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
  Linux 5.11-rc1: 7.4224 (SE +/- 0.0326, N = 3; Min: 7.38 / Avg: 7.42 / Max: 7.49; MIN: 7.34 / MAX: 7.57)
  Linux 5.10.4:   7.3497 (SE +/- 0.0471, N = 3; Min: 7.26 / Avg: 7.35 / Max: 7.43; MIN: 7.23 / MAX: 7.53)
  Default Kernel: 7.3312 (SE +/- 0.0587, N = 3; Min: 7.22 / Avg: 7.33 / Max: 7.42; MIN: 7.18 / MAX: 7.52)

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, More Is Better)
  Linux 5.10.4:   588
  Linux 5.11-rc1: 584
  Default Kernel: 581

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, More Is Better)
  Linux 5.11-rc1: 7920.9 (SE +/- 13.60, N = 3; Min: 7907.2 / Avg: 7920.9 / Max: 7948.1)
  Linux 5.10.4:   7895.5 (SE +/- 8.68, N = 3; Min: 7884.5 / Avg: 7895.47 / Max: 7912.6)
  Default Kernel: 7828.5 (SE +/- 8.31, N = 3; Min: 7814.1 / Avg: 7828.53 / Max: 7842.9)
  1. (CC) gcc options: -O3

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better)
  Linux 5.11-rc1: 2872.0 (SE +/- 6.18, N = 3; Min: 2859.7 / Avg: 2872.03 / Max: 2878.9)
  Default Kernel: 2849.0 (SE +/- 14.17, N = 3; Min: 2825.7 / Avg: 2848.97 / Max: 2874.6)
  Linux 5.10.4:   2838.6 (SE +/- 7.47, N = 3; Min: 2828.9 / Avg: 2838.6 / Max: 2853.3)
  1. (CC) gcc options: -O3 -pthread -lz

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
  Linux 5.11-rc1: 6.8458 (SE +/- 0.0114, N = 3; Min: 6.82 / Avg: 6.85 / Max: 6.86; MIN: 6.79 / MAX: 6.96)
  Linux 5.10.4:   6.7934 (SE +/- 0.0390, N = 3; Min: 6.72 / Avg: 6.79 / Max: 6.83; MIN: 6.68 / MAX: 6.95)
  Default Kernel: 6.7689 (SE +/- 0.0323, N = 3; Min: 6.71 / Avg: 6.77 / Max: 6.81; MIN: 6.67 / MAX: 6.9)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds, Fewer Is Better)
  Linux 5.10.4:   81.48 (SE +/- 0.01, N = 3; Min: 81.46 / Avg: 81.47 / Max: 81.49)
  Linux 5.11-rc1: 81.62 (SE +/- 0.01, N = 3; Min: 81.6 / Avg: 81.62 / Max: 81.64)
  Default Kernel: 82.39 (SE +/- 0.08, N = 3; Min: 82.27 / Avg: 82.39 / Max: 82.53)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
  Linux 5.11-rc1: 7.8824 (SE +/- 0.0090, N = 3; Min: 7.87 / Avg: 7.88 / Max: 7.9; MIN: 7.84 / MAX: 7.97)
  Linux 5.10.4:   7.8722 (SE +/- 0.0234, N = 3; Min: 7.83 / Avg: 7.87 / Max: 7.91; MIN: 7.8 / MAX: 7.98)
  Default Kernel: 7.8014 (SE +/- 0.0127, N = 3; Min: 7.78 / Avg: 7.8 / Max: 7.82; MIN: 7.74 / MAX: 7.91)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Linux 5.11-rc1: 13.21 (SE +/- 0.01, N = 3; Min: 13.18 / Avg: 13.21 / Max: 13.23; MIN: 12.85)
  Linux 5.10.4:   13.24 (SE +/- 0.02, N = 3; Min: 13.21 / Avg: 13.24 / Max: 13.26; MIN: 12.87)
  Default Kernel: 13.34 (SE +/- 0.00, N = 3; Min: 13.34 / Avg: 13.34 / Max: 13.34; MIN: 12.96)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds, Fewer Is Better)
  Linux 5.10.4:   535.01
  Linux 5.11-rc1: 536.84
  Default Kernel: 539.96

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
  Linux 5.11-rc1: 7906.8 (SE +/- 16.93, N = 3; Min: 7883.5 / Avg: 7906.77 / Max: 7939.7)
  Linux 5.10.4:   7899.3 (SE +/- 14.26, N = 3; Min: 7874.7 / Avg: 7899.33 / Max: 7924.1)
  Default Kernel: 7835.5 (SE +/- 5.10, N = 3; Min: 7825.7 / Avg: 7835.53 / Max: 7842.8)
  1. (CC) gcc options: -O3

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
  Linux 5.10.4:   7.2702 (SE +/- 0.0051, N = 3; Min: 7.26 / Avg: 7.27 / Max: 7.28; MIN: 7.24 / MAX: 7.35)
  Linux 5.11-rc1: 7.2554 (SE +/- 0.0109, N = 3; Min: 7.23 / Avg: 7.26 / Max: 7.27; MIN: 7.21 / MAX: 7.32)
  Default Kernel: 7.2063 (SE +/- 0.0139, N = 3; Min: 7.18 / Avg: 7.21 / Max: 7.23; MIN: 7.15 / MAX: 7.3)

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code offering Cargo-like functionality. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, Fewer Is Better)
  Linux 5.10.4:   231.01 (SE +/- 1.49, N = 3; Min: 229.29 / Avg: 231.01 / Max: 233.97)
  Linux 5.11-rc1: 231.54 (SE +/- 0.84, N = 3; Min: 230.24 / Avg: 231.53 / Max: 233.1)
  Default Kernel: 232.99 (SE +/- 1.02, N = 3; Min: 231.7 / Avg: 232.99 / Max: 235.01)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 6 (Frames Per Second, More Is Better)
  Linux 5.11-rc1: 1.028 (SE +/- 0.003, N = 3; Min: 1.02 / Avg: 1.03 / Max: 1.03)
  Default Kernel: 1.022 (SE +/- 0.001, N = 3; Min: 1.02 / Avg: 1.02 / Max: 1.02)
  Linux 5.10.4:   1.020 (SE +/- 0.004, N = 3; Min: 1.02 / Avg: 1.02 / Max: 1.03)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: capacita (Seconds, Fewer Is Better)
  Default Kernel: 16.18
  Linux 5.11-rc1: 16.20
  Linux 5.10.4:   16.24

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, More Is Better)
  Linux 5.11-rc1: 0.46 (SE +/- 0.00, N = 3; Min: 0.46 / Avg: 0.46 / Max: 0.46)
  Linux 5.10.4:   0.46 (SE +/- 0.00, N = 3; Min: 0.46 / Avg: 0.46 / Max: 0.46)
  Default Kernel: 0.46 (SE +/- 0.00, N = 3; Min: 0.46 / Avg: 0.46 / Max: 0.46)
  1. (CXX) g++ options: -O3 -pthread

simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, More Is Better)
  Linux 5.11-rc1: 0.45 (SE +/- 0.00, N = 3; Min: 0.45 / Avg: 0.45 / Max: 0.46)
  Linux 5.10.4:   0.45 (SE +/- 0.00, N = 3; Min: 0.45 / Avg: 0.45 / Max: 0.45)
  Default Kernel: 0.45 (SE +/- 0.00, N = 3; Min: 0.45 / Avg: 0.45 / Max: 0.45)
  1. (CXX) g++ options: -O3 -pthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better)
  Linux 5.11-rc1: 41.05 (SE +/- 1.73, N = 3; Min: 37.59 / Avg: 41.05 / Max: 42.8; MIN: 36.21 / MAX: 43.53)
  Linux 5.10.4:   42.46 (SE +/- 0.51, N = 3; Min: 41.49 / Avg: 42.46 / Max: 43.22; MIN: 36.31 / MAX: 45.38)
  Default Kernel: 50.06 (SE +/- 0.25, N = 3; Min: 49.75 / Avg: 50.06 / Max: 50.56; MIN: 40.36 / MAX: 77.16)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: resnet18 (ms, Fewer Is Better)
  Linux 5.11-rc1: 19.90 (SE +/- 0.86, N = 3; Min: 19.04 / Avg: 19.9 / Max: 21.63; MIN: 18.96 / MAX: 22.28)
  Linux 5.10.4:   20.92 (SE +/- 0.83, N = 3; Min: 19.26 / Avg: 20.92 / Max: 21.86; MIN: 19.02 / MAX: 53.8)
  Default Kernel: 26.05 (SE +/- 0.42, N = 3; Min: 25.3 / Avg: 26.05 / Max: 26.75; MIN: 20.5 / MAX: 71.2)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  Linux 5.10.4:   3.38 (SE +/- 0.03, N = 15; Min: 3.19 / Avg: 3.38 / Max: 3.65; MIN: 3.16 / MAX: 4.51)
  Linux 5.11-rc1: 3.47 (SE +/- 0.04, N = 3; Min: 3.4 / Avg: 3.47 / Max: 3.55; MIN: 3.37 / MAX: 4.01)
  Default Kernel: 3.61 (SE +/- 0.14, N = 3; Min: 3.41 / Avg: 3.61 / Max: 3.87; MIN: 3.17 / MAX: 16.8)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Linux 5.10.4:   14.31 (SE +/- 0.30, N = 15; Min: 13.24 / Avg: 14.31 / Max: 16.1; MIN: 13)
  Linux 5.11-rc1: 14.45 (SE +/- 0.33, N = 15; Min: 13.16 / Avg: 14.45 / Max: 16.42; MIN: 12.95)
  Default Kernel: 14.78 (SE +/- 0.41, N = 15; Min: 13.46 / Avg: 14.78 / Max: 17.08; MIN: 12.9)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code; its name is an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, more is better)
Linux 5.11-rc1: 4.762 (SE +/- 0.012, N = 3) Min: 4.74 / Avg: 4.76 / Max: 4.78
Linux 5.10.4: 4.720 (SE +/- 0.014, N = 3) Min: 4.69 / Avg: 4.72 / Max: 4.74
Default Kernel: 4.311 (SE +/- 0.098, N = 15) Min: 3.33 / Avg: 4.31 / Max: 4.65
(CXX) g++ options: -O3 -pthread -lm
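LAMMPS reports throughput as nanoseconds of simulated time per day of wall clock, derived from timesteps per second and the timestep size. A minimal conversion sketch, assuming the Rhodopsin benchmark's standard 2 fs timestep; the steps-per-second figure here is hypothetical, picked for illustration only:

```python
def ns_per_day(steps_per_second, timestep_fs):
    """Convert simulation throughput to the ns/day figure LAMMPS reports."""
    # fs simulated per wall-clock second, scaled to one day, then fs -> ns
    return steps_per_second * timestep_fs * 86400 / 1e6

# Hypothetical throughput with the Rhodopsin benchmark's 2 fs timestep.
print(f"{ns_per_day(27.56, 2.0):.3f} ns/day")
```

Because ns/day scales linearly with both quantities, a kernel change that alters steps/second by some percentage shifts the reported ns/day by the same percentage.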

210 Results Shown

Redis
PyPerformance
oneDNN:
  IP Shapes 3D - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
NCNN:
  CPU - resnet50
  Vulkan GPU - resnet50
  Vulkan GPU - googlenet
  CPU - googlenet
  CPU - resnet18
Timed HMMer Search
NCNN:
  CPU - mobilenet
  CPU - yolov4-tiny
  Vulkan GPU - mobilenet
CLOMP
NCNN:
  CPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v2-v2 - mobilenet-v2
  CPU-v2-v2 - mobilenet-v2
  Vulkan GPU - squeezenet_ssd
  CPU - shufflenet-v2
  CPU - mnasnet
  CPU - efficientnet-b0
  CPU - squeezenet_ssd
DeepSpeech
NCNN:
  Vulkan GPU - mnasnet
  Vulkan GPU - blazeface
Polyhedron Fortran Benchmarks
NCNN:
  Vulkan GPU - efficientnet-b0
  CPU - vgg16
Redis
NCNN
TensorFlow Lite
oneDNN:
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
NCNN:
  CPU - regnety_400m
  Vulkan GPU-v3-v3 - mobilenet-v3
WebP Image Encode
NCNN
TensorFlow Lite:
  Inception ResNet V2
  Inception V4
NCNN
Cryptsetup:
  Twofish-XTS 256b Decryption
  Serpent-XTS 256b Decryption
TensorFlow Lite
Cryptsetup
oneDNN
Cryptsetup
TensorFlow Lite
Cryptsetup
WebP Image Encode
Mobile Neural Network
WebP Image Encode
Cryptsetup
PyPerformance
GIMP
Timed Eigen Compilation
Cryptsetup:
  Twofish-XTS 256b Encryption
  AES-XTS 512b Encryption
  PBKDF2-whirlpool
  Serpent-XTS 256b Encryption
Redis
Cryptsetup:
  AES-XTS 256b Decryption
  AES-XTS 512b Decryption
RNNoise
Mobile Neural Network
Opus Codec Encoding
PyPerformance
Cryptsetup
BYTE Unix Benchmark
LAME MP3 Encoding
LZ4 Compression
WavPack Audio Encoding
GLmark2
Polyhedron Fortran Benchmarks
TNN
Polyhedron Fortran Benchmarks:
  ac
  fatigue2
PyPerformance
PHPBench
Mobile Neural Network
Monkey Audio Encoding
Polyhedron Fortran Benchmarks
oneDNN
Polyhedron Fortran Benchmarks:
  protein
  doduc
WebP Image Encode
Polyhedron Fortran Benchmarks
PyPerformance:
  raytrace
  float
Polyhedron Fortran Benchmarks
PyPerformance:
  crypto_pyaes
  regex_compile
LZ4 Compression
Polyhedron Fortran Benchmarks
simdjson
PyPerformance
SQLite Speedtest
FLAC Audio Encoding
WebP Image Encode
oneDNN
Hierarchical INTegration
Polyhedron Fortran Benchmarks
NCNN
Cryptsetup
PyPerformance
Polyhedron Fortran Benchmarks
Redis
PyPerformance
Basis Universal
NCNN
Darktable
GLmark2
PyPerformance
Mobile Neural Network
Numpy Benchmark
FFTE
oneDNN
TensorFlow Lite
libavif avifenc
TNN
simdjson
oneDNN
Node.js V8 Web Tooling Benchmark
Basis Universal
GIMP
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
LZ4 Compression
oneDNN
Darktable:
  Masskrug - CPU-only
  Server Room - CPU-only
libavif avifenc:
  10
  0
Mobile Neural Network
LZ4 Compression
ASTC Encoder
PyPerformance
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  IP Shapes 1D - f32 - CPU
eSpeak-NG Speech Engine
Timed MAFFT Alignment
Polyhedron Fortran Benchmarks
Stockfish
VKMark
librsvg
Unpacking The Linux Kernel
yquake2
Appleseed
VKMark
AI Benchmark Alpha
XZ Compression
Darktable
VkFFT
Sunflow Rendering System
GROMACS
Unpacking Firefox
asmFish
WireGuard + Linux Networking Stack Stress Test
libavif avifenc
yquake2
oneDNN
Embree
oneDNN
AI Benchmark Alpha
GIMP
Timed FFmpeg Compilation
Polyhedron Fortran Benchmarks
Zstd Compression
Blender
Basis Universal
RawTherapee
GIMP
rav1e
Polyhedron Fortran Benchmarks
rav1e
IndigoBench
yquake2
ASTC Encoder
Embree
ASTC Encoder
Coremark
Redis
ASTC Encoder
Basis Universal
rav1e
IndigoBench
Appleseed
Blender
Timed Linux Kernel Compilation
Timed LLVM Compilation
Embree
AI Benchmark Alpha
LZ4 Compression
Zstd Compression
Embree
Basis Universal
Embree
oneDNN
Appleseed
LZ4 Compression
Embree
Build2
rav1e
Polyhedron Fortran Benchmarks
simdjson:
  DistinctUserID
  PartialTweets
NCNN:
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - resnet18
  CPU - blazeface
oneDNN
LAMMPS Molecular Dynamics Simulator