Ryzen 7 1700 + RX 480

AMD Ryzen 7 1700 Eight-Core testing with an MSI B350 TOMAHAWK (MS-7A34) v1.0 (1.H0 BIOS) and AMD Radeon RX 470/480/570/570X/580/580X/590 8GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012315-HA-RYZEN717043
This comparison includes tests from the following categories:

Audio Encoding (5 tests)
AV1 (2 tests)
Bioinformatics (2 tests)
Chess Test Suite (2 tests)
Timed Code Compilation (5 tests)
C/C++ Compiler Tests (14 tests)
Compression Tests (3 tests)
CPU Massive (20 tests)
Creator Workloads (22 tests)
Database Test Suite (2 tests)
Encoding (7 tests)
Fortran Tests (3 tests)
Game Development (3 tests)
HPC - High Performance Computing (14 tests)
Imaging (6 tests)
Common Kernel Benchmarks (2 tests)
Machine Learning (9 tests)
Molecular Dynamics (2 tests)
MPI Benchmarks (2 tests)
Multi-Core (18 tests)
NVIDIA GPU Compute (5 tests)
Intel oneAPI (2 tests)
OpenMPI Tests (2 tests)
Productivity (2 tests)
Programmer / Developer System Benchmarks (11 tests)
Python (2 tests)
Renderers (3 tests)
Scientific Computing (5 tests)
Server (5 tests)
Server CPU Tests (12 tests)
Single-Threaded (10 tests)
Speech (3 tests)
Telephony (3 tests)
Texture Compression (2 tests)
Video Encoding (2 tests)
Vulkan Compute (2 tests)


Run Management

  Result Identifier   Date                Test Duration
  Default Kernel      December 29 2020    19 Hours
  Linux 5.10.4        December 30 2020    23 Hours, 5 Minutes
  Linux 5.11-rc1      December 31 2020    17 Hours, 14 Minutes
  Average                                 19 Hours, 46 Minutes



Ryzen 7 1700 + RX 480 Benchmarks (OpenBenchmarking.org / Phoronix Test Suite)

  Processor:          AMD Ryzen 7 1700 Eight-Core @ 3.00GHz (8 Cores / 16 Threads)
  Motherboard:        MSI B350 TOMAHAWK (MS-7A34) v1.0 (1.H0 BIOS)
  Chipset:            AMD 17h
  Memory:             16GB
  Disk:               120GB Samsung SSD 840
  Graphics:           AMD Radeon RX 470/480/570/570X/580/580X/590 8GB (1266/2000MHz)
  Audio:              AMD Ellesmere HDMI Audio
  Monitor:            VA2431
  Network:            Realtek RTL8111/8168/8411
  OS:                 Ubuntu 20.10
  Kernels:            5.8.0-33-generic (x86_64), 5.10.4-051004-generic (x86_64), 5.11.0-rc1-phx (x86_64) 20201228
  Desktop:            GNOME Shell 3.38.1
  Display Server:     X Server 1.20.9
  Display Driver:     amdgpu 19.1.0
  OpenGL:             4.6 Mesa 20.2.1 (LLVM 11.0.0)
  Vulkan:             1.2.131
  Compiler:           GCC 10.2.0
  File-System:        ext4
  Screen Resolution:  1920x1080

System Logs

  Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Disk Notes: MQ-DEADLINE / errors=remount-ro,relatime,rw / Block Size: 4096
  Processor Notes:
    Default Kernel: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x8001137
    Linux 5.10.4: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8001137
    Linux 5.11-rc1: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8001137
  Graphics Notes: GLAMOR
  Java Notes: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.10)
  Python Notes: Python 3.8.6
  Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Default Kernel / Linux 5.10.4 / Linux 5.11-rc1; per-test geometric means, normalized; chart scale 100% to 121%). Tests covered:

Redis, Timed HMMer Search, CLOMP, DeepSpeech, NCNN, TensorFlow Lite, LAMMPS Molecular Dynamics Simulator, WebP Image Encode, Timed Eigen Compilation, Cryptsetup, RNNoise, Opus Codec Encoding, BYTE Unix Benchmark, LAME MP3 Encoding, WavPack Audio Encoding, PHPBench, Monkey Audio Encoding, SQLite Speedtest, FLAC Audio Encoding, GLmark2, Hierarchical INTegration, Mobile Neural Network, TNN, oneDNN, Numpy Benchmark, FFTE, Node.js V8 Web Tooling Benchmark, Polyhedron Fortran Benchmarks, LZ4 Compression, eSpeak-NG Speech Engine, libavif avifenc, Timed MAFFT Alignment, Stockfish, GIMP, librsvg, Unpacking The Linux Kernel, VKMark, simdjson, Basis Universal, PyPerformance, XZ Compression, VkFFT, Sunflow Rendering System, GROMACS, Unpacking Firefox, WireGuard + Linux Networking Stack Stress Test, asmFish, AI Benchmark Alpha, Timed FFmpeg Compilation, yquake2, RawTherapee, ASTC Encoder, Darktable, Blender, Coremark, Zstd Compression, IndigoBench, Timed Linux Kernel Compilation, Appleseed, Embree, Timed LLVM Compilation, Build2, rav1e
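The overview chart condenses each test into a single relative score per kernel and then averages the scores. As a minimal sketch of that kind of normalization and overall geometric mean (the sample numbers below are hypothetical, not taken from this result file):

```python
from math import prod

def normalize(baseline, value, higher_is_better=True):
    """Express a result relative to its baseline as a percentage (baseline = 100%)."""
    ratio = value / baseline if higher_is_better else baseline / value
    return 100.0 * ratio

def geometric_mean(values):
    """Geometric mean, the averaging typically used for cross-test overviews."""
    return prod(values) ** (1.0 / len(values))

# Hypothetical per-test scores for one kernel, as percent of a baseline run:
scores = [
    normalize(100.0, 121.0),                       # higher-is-better test (e.g. requests/sec)
    normalize(10.0, 9.0, higher_is_better=False),  # lower-is-better test (e.g. ms)
]
overall = geometric_mean(scores)  # one overview figure for this kernel
```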

[Condensed results table: raw values for every test across all three kernel runs; the same data is broken out test-by-test with error statistics below.]

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: LPOP (Requests Per Second, More Is Better)

  Default Kernel   2014169.29   (SE +/- 30740.73, N = 12; Min: 1876172.62 / Avg: 2014169.29 / Max: 2156207)
  Linux 5.10.4     1075736.66   (SE +/- 2684.98, N = 3; Min: 1070766.62 / Avg: 1075736.66 / Max: 1079982.75)
  Linux 5.11-rc1   1114626.25   (SE +/- 5120.03, N = 3; Min: 1105007.75 / Avg: 1114626.25 / Max: 1122478.12)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
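Each result above is the mean of N runs together with its standard error. As a minimal sketch, an "SE +/-" figure of this kind is the sample standard deviation divided by the square root of N (the readings below are hypothetical, not from this result file):

```python
from math import sqrt
from statistics import mean, stdev

def standard_error(samples):
    """Standard error of the mean: sample standard deviation over sqrt(N)."""
    return stdev(samples) / sqrt(len(samples))

# Hypothetical readings from three runs of one test:
runs = [100.0, 104.0, 108.0]
avg = mean(runs)            # the reported bar value: 104.0
se = standard_error(runs)   # the "SE +/-" figure: 4 / sqrt(3), about 2.31
```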

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: python_startup (Milliseconds, Fewer Is Better)

  Default Kernel   12.2   (SE +/- 0.00, N = 3; Min: 12.2 / Avg: 12.2 / Max: 12.2)
  Linux 5.10.4     20.7   (SE +/- 0.07, N = 3; Min: 20.6 / Avg: 20.67 / Max: 20.8)
  Linux 5.11-rc1   20.7   (SE +/- 0.07, N = 3; Min: 20.6 / Avg: 20.73 / Max: 20.8)
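python_startup measures how quickly a fresh interpreter can start and exit. A rough sketch of the same idea (not PyPerformance's actual harness) is to time launching `python -c pass` a few times:

```python
import subprocess
import sys
import time

def time_startup_ms(repeats=3):
    """Average wall-clock time, in milliseconds, to start `python -c pass` and exit."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run([sys.executable, "-c", "pass"], check=True)
        timings.append((time.perf_counter() - start) * 1000.0)
    return sum(timings) / len(timings)

startup_ms = time_startup_ms()
```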

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)

  Default Kernel   11.56730   (SE +/- 0.03621, N = 3; Min: 11.5 / Avg: 11.57 / Max: 11.61; MIN: 10.87)
  Linux 5.10.4     8.81062    (SE +/- 0.02920, N = 3; Min: 8.77 / Avg: 8.81 / Max: 8.87; MIN: 8.51)
  Linux 5.11-rc1   8.61372    (SE +/- 0.01700, N = 3; Min: 8.58 / Avg: 8.61 / Max: 8.64; MIN: 8.27)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)

  Default Kernel   5.33146   (SE +/- 0.00649, N = 3; Min: 5.32 / Avg: 5.33 / Max: 5.34; MIN: 4.84)
  Linux 5.10.4     4.24151   (SE +/- 0.00670, N = 3; Min: 4.23 / Avg: 4.24 / Max: 4.25; MIN: 4.14)
  Linux 5.11-rc1   4.16855   (SE +/- 0.00397, N = 3; Min: 4.16 / Avg: 4.17 / Max: 4.17; MIN: 4.06)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: resnet50 (ms, Fewer Is Better)

  Default Kernel   58.38   (SE +/- 0.59, N = 3; Min: 57.77 / Avg: 58.38 / Max: 59.56; MIN: 48.11 / MAX: 109.51)
  Linux 5.10.4     46.37   (SE +/- 0.32, N = 15; Min: 45.25 / Avg: 46.37 / Max: 49.23; MIN: 45.13 / MAX: 83.86)
  Linux 5.11-rc1   46.40   (SE +/- 0.09, N = 3; Min: 46.23 / Avg: 46.4 / Max: 46.49; MIN: 46.13 / MAX: 46.99)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: resnet50 (ms, Fewer Is Better)

  Default Kernel   59.31   (SE +/- 0.58, N = 3; Min: 58.63 / Avg: 59.31 / Max: 60.47; MIN: 48.12 / MAX: 109.05)
  Linux 5.10.4     47.21   (SE +/- 1.01, N = 3; Min: 45.51 / Avg: 47.21 / Max: 49.01; MIN: 45.3 / MAX: 81.34)
  Linux 5.11-rc1   47.65   (SE +/- 1.19, N = 3; Min: 46.32 / Avg: 47.65 / Max: 50.02; MIN: 46.22 / MAX: 82.46)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better)

  Default Kernel   31.60   (SE +/- 0.32, N = 3; Min: 30.97 / Avg: 31.6 / Max: 31.97; MIN: 24.42 / MAX: 69.85)
  Linux 5.10.4     26.28   (SE +/- 0.11, N = 3; Min: 26.1 / Avg: 26.28 / Max: 26.49; MIN: 25.18 / MAX: 37.2)
  Linux 5.11-rc1   25.27   (SE +/- 0.76, N = 3; Min: 23.85 / Avg: 25.27 / Max: 26.47; MIN: 22.73 / MAX: 45.56)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: googlenet (ms, Fewer Is Better)

  Default Kernel   31.77   (SE +/- 0.18, N = 3; Min: 31.44 / Avg: 31.77 / Max: 32.06; MIN: 23.25 / MAX: 69.93)
  Linux 5.10.4     25.83   (SE +/- 0.32, N = 15; Min: 22.84 / Avg: 25.83 / Max: 26.74; MIN: 22.76 / MAX: 58.24)
  Linux 5.11-rc1   26.29   (SE +/- 0.11, N = 3; Min: 26.08 / Avg: 26.29 / Max: 26.45; MIN: 23.04 / MAX: 29.98)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: resnet18 (ms, Fewer Is Better)

  Default Kernel   25.46   (SE +/- 0.22, N = 3; Min: 25.22 / Avg: 25.46 / Max: 25.89; MIN: 19.81 / MAX: 59.82)
  Linux 5.10.4     20.97   (SE +/- 0.28, N = 15; Min: 19 / Avg: 20.97 / Max: 21.72; MIN: 18.91 / MAX: 67.27)
  Linux 5.11-rc1   21.96   (SE +/- 0.20, N = 3; Min: 21.57 / Avg: 21.96 / Max: 22.24; MIN: 19.15 / MAX: 54.43)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1, Pfam Database Search (Seconds, Fewer Is Better)

  Default Kernel   151.89   (SE +/- 0.27, N = 3; Min: 151.43 / Avg: 151.89 / Max: 152.38)
  Linux 5.10.4     128.73   (SE +/- 0.02, N = 3; Min: 128.71 / Avg: 128.73 / Max: 128.77)
  Linux 5.11-rc1   127.07   (SE +/- 0.04, N = 3; Min: 127 / Avg: 127.07 / Max: 127.14)
  1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

NCNN


NCNN 20201218, Target: CPU - Model: mobilenet (ms, Fewer Is Better)

  Default Kernel   38.47   (SE +/- 0.44, N = 3; Min: 37.72 / Avg: 38.47 / Max: 39.23; MIN: 33.17 / MAX: 75.32)
  Linux 5.10.4     32.21   (SE +/- 0.29, N = 15; Min: 30.4 / Avg: 32.21 / Max: 34.12; MIN: 30.06 / MAX: 44.1)
  Linux 5.11-rc1   32.63   (SE +/- 0.27, N = 3; Min: 32.32 / Avg: 32.63 / Max: 33.17; MIN: 31.52 / MAX: 114.19)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)

  Default Kernel   49.92   (SE +/- 0.08, N = 3; Min: 49.77 / Avg: 49.92 / Max: 50.05; MIN: 41.04 / MAX: 67.32)
  Linux 5.10.4     41.80   (SE +/- 0.49, N = 15; Min: 36.87 / Avg: 41.8 / Max: 43.7; MIN: 36.06 / MAX: 63.79)
  Linux 5.11-rc1   43.19   (SE +/- 0.01, N = 3; Min: 43.17 / Avg: 43.19 / Max: 43.22; MIN: 41.97 / MAX: 48.03)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: mobilenet (ms, Fewer Is Better)

  Default Kernel   38.45   (SE +/- 0.03, N = 3; Min: 38.41 / Avg: 38.45 / Max: 38.5; MIN: 33.4 / MAX: 77.43)
  Linux 5.10.4     32.79   (SE +/- 0.19, N = 3; Min: 32.42 / Avg: 32.79 / Max: 33.02; MIN: 30.27 / MAX: 45.97)
  Linux 5.11-rc1   32.20   (SE +/- 0.20, N = 3; Min: 32 / Avg: 32.2 / Max: 32.61; MIN: 30.63 / MAX: 39)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark, developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This test profile configuration measures the OpenMP static-schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2, Static OMP Speedup (Speedup, More Is Better)

  Default Kernel   7.8   (SE +/- 0.09, N = 4; Min: 7.6 / Avg: 7.83 / Max: 8)
  Linux 5.10.4     9.3   (SE +/- 0.11, N = 15; Min: 8.7 / Avg: 9.35 / Max: 10)
  Linux 5.11-rc1   9.1   (SE +/- 0.12, N = 3; Min: 8.9 / Avg: 9.1 / Max: 9.3)
  1. (CC) gcc options: -fopenmp -O3 -lm
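CLOMP's metric is parallel speedup: the time for the serial loop divided by the time for the threaded loop. A toy illustration of the same metric, using a Python process pool in place of OpenMP (the workload and sizes are made up; observed numbers depend entirely on the machine):

```python
import time
from concurrent.futures import ProcessPoolExecutor

def busy(n):
    """A small CPU-bound loop standing in for the benchmark's parallel loop body."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def speedup(serial_seconds, parallel_seconds):
    """Static speedup as CLOMP reports it: serial time divided by parallel time."""
    return serial_seconds / parallel_seconds

if __name__ == "__main__":
    work = [200_000] * 8

    start = time.perf_counter()
    for n in work:
        busy(n)
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:
        list(pool.map(busy, work))
    parallel = time.perf_counter() - start

    print(f"observed speedup: {speedup(serial, parallel):.2f}x")
```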

NCNN


NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)

  Default Kernel   10.09   (SE +/- 0.20, N = 3; Min: 9.8 / Avg: 10.09 / Max: 10.48; MIN: 8.08 / MAX: 57.72)
  Linux 5.10.4     8.50    (SE +/- 0.03, N = 15; Min: 8.28 / Avg: 8.5 / Max: 8.73; MIN: 8.19 / MAX: 20.56)
  Linux 5.11-rc1   8.60    (SE +/- 0.09, N = 3; Min: 8.42 / Avg: 8.6 / Max: 8.73; MIN: 8.28 / MAX: 22.71)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)

  Default Kernel   11.62   (SE +/- 0.19, N = 3; Min: 11.25 / Avg: 11.62 / Max: 11.91; MIN: 9.25 / MAX: 43.5)
  Linux 5.10.4     10.14   (SE +/- 0.21, N = 3; Min: 9.91 / Avg: 10.14 / Max: 10.56; MIN: 9.38 / MAX: 12.94)
  Linux 5.11-rc1   9.81    (SE +/- 0.04, N = 3; Min: 9.75 / Avg: 9.81 / Max: 9.89; MIN: 9.63 / MAX: 13.77)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)

  Default Kernel   11.91   (SE +/- 0.15, N = 3; Min: 11.63 / Avg: 11.91 / Max: 12.14; MIN: 9.52 / MAX: 44.02)
  Linux 5.10.4     10.09   (SE +/- 0.11, N = 15; Min: 9.65 / Avg: 10.09 / Max: 10.87; MIN: 9.47 / MAX: 59.9)
  Linux 5.11-rc1   10.31   (SE +/- 0.21, N = 3; Min: 10.04 / Avg: 10.31 / Max: 10.72; MIN: 9.57 / MAX: 22.62)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU - Model: squeezenet_ssd (ms, Fewer Is Better)

  Default Kernel   37.31   (SE +/- 0.06, N = 3; Min: 37.21 / Avg: 37.31 / Max: 37.41; MIN: 32.73 / MAX: 94.67)
  Linux 5.10.4     31.61   (SE +/- 0.02, N = 3; Min: 31.58 / Avg: 31.61 / Max: 31.65; MIN: 30.89 / MAX: 38.84)
  Linux 5.11-rc1   31.76   (SE +/- 0.03, N = 3; Min: 31.7 / Avg: 31.76 / Max: 31.82; MIN: 30.84 / MAX: 39.08)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)

  Default Kernel   11.59   (SE +/- 0.14, N = 3; Min: 11.4 / Avg: 11.59 / Max: 11.86; MIN: 9.62 / MAX: 57.85)
  Linux 5.10.4     9.82    (SE +/- 0.03, N = 15; Min: 9.65 / Avg: 9.82 / Max: 9.99; MIN: 9.61 / MAX: 11.52)
  Linux 5.11-rc1   9.86    (SE +/- 0.05, N = 3; Min: 9.77 / Avg: 9.86 / Max: 9.93; MIN: 9.73 / MAX: 10.13)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: mnasnet (ms, Fewer Is Better)

  Default Kernel   10.85   (SE +/- 0.27, N = 3; Min: 10.51 / Avg: 10.85 / Max: 11.38; MIN: 8.7 / MAX: 50.23)
  Linux 5.10.4     9.22    (SE +/- 0.07, N = 15; Min: 8.79 / Avg: 9.22 / Max: 9.67; MIN: 8.76 / MAX: 13.65)
  Linux 5.11-rc1   9.35    (SE +/- 0.16, N = 3; Min: 9.16 / Avg: 9.35 / Max: 9.68; MIN: 9.12 / MAX: 10.89)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)

  Default Kernel   15.55   (SE +/- 0.25, N = 3; Min: 15.12 / Avg: 15.55 / Max: 15.98; MIN: 12.82 / MAX: 44.91)
  Linux 5.10.4     13.24   (SE +/- 0.11, N = 15; Min: 12.72 / Avg: 13.24 / Max: 14.02; MIN: 12.68 / MAX: 56.97)
  Linux 5.11-rc1   13.51   (SE +/- 0.29, N = 3; Min: 13.2 / Avg: 13.51 / Max: 14.08; MIN: 13.14 / MAX: 14.47)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)

  Default Kernel   37.15   (SE +/- 0.35, N = 3; Min: 36.52 / Avg: 37.15 / Max: 37.73; MIN: 32.74 / MAX: 88.45)
  Linux 5.10.4     31.80   (SE +/- 0.14, N = 15; Min: 31.49 / Avg: 31.8 / Max: 33.66; MIN: 30.81 / MAX: 42.83)
  Linux 5.11-rc1   31.73   (SE +/- 0.04, N = 3; Min: 31.65 / Avg: 31.73 / Max: 31.79; MIN: 30.96 / MAX: 37.38)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6, Acceleration: CPU (Seconds, Fewer Is Better)

  Default Kernel   94.00    (SE +/- 0.16, N = 3; Min: 93.73 / Avg: 94 / Max: 94.27)
  Linux 5.10.4     109.73   (SE +/- 0.78, N = 15; Min: 103.53 / Avg: 109.73 / Max: 112.6)
  Linux 5.11-rc1   108.33   (SE +/- 0.94, N = 8; Min: 103.35 / Avg: 108.33 / Max: 110.59)

NCNN


NCNN 20201218, Target: Vulkan GPU - Model: mnasnet (ms, Fewer Is Better)

  Default Kernel   10.56   (SE +/- 0.25, N = 3; Min: 10.06 / Avg: 10.56 / Max: 10.82; MIN: 8.6 / MAX: 49.35)
  Linux 5.10.4     9.06    (SE +/- 0.12, N = 3; Min: 8.84 / Avg: 9.06 / Max: 9.26; MIN: 8.8 / MAX: 9.55)
  Linux 5.11-rc1   9.05    (SE +/- 0.13, N = 3; Min: 8.81 / Avg: 9.05 / Max: 9.23; MIN: 8.77 / MAX: 32.06)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
    Default Kernel: 3.85 (SE +/- 0.07, N = 3; run avgs 3.75 to 3.99; MIN: 3.16 / MAX: 16.49)
    Linux 5.10.4:   3.46 (SE +/- 0.02, N = 3; run avgs 3.42 to 3.5; MIN: 3.33 / MAX: 7.35)
    Linux 5.11-rc1: 3.30 (SE +/- 0.06, N = 3; run avgs 3.23 to 3.42; MIN: 3.22 / MAX: 3.77)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are a suite for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: tfft2 (Seconds, fewer is better)
    Default Kernel: 25.09
    Linux 5.10.4:   29.02
    Linux 5.11-rc1: 29.22

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
    Default Kernel: 15.23 (SE +/- 0.14, N = 3; run avgs 14.99 to 15.47; MIN: 12.91 / MAX: 49.97)
    Linux 5.10.4:   13.28 (SE +/- 0.03, N = 3; run avgs 13.22 to 13.31; MIN: 13.12 / MAX: 14.99)
    Linux 5.11-rc1: 13.09 (SE +/- 0.10, N = 3; run avgs 12.98 to 13.28; MIN: 12.94 / MAX: 14.95)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, fewer is better)
    Default Kernel: 85.70 (SE +/- 0.20, N = 3; run avgs 85.3 to 85.98; MIN: 76.58 / MAX: 109.13)
    Linux 5.10.4:   74.43 (SE +/- 0.16, N = 15; run avgs 73.66 to 75.5; MIN: 73.09 / MAX: 107.82)
    Linux 5.11-rc1: 74.00 (SE +/- 0.12, N = 3; run avgs 73.85 to 74.23; MIN: 73.3 / MAX: 82.28)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: GET (Requests Per Second, more is better)
    Default Kernel: 1863679.11 (SE +/- 22737.24, N = 15; run avgs 1689405.38 to 1996071.88)
    Linux 5.10.4:   1611434.08 (SE +/- 11369.12, N = 3; run avgs 1597648.62 to 1633987)
    Linux 5.11-rc1: 1727854.75 (SE +/- 16571.48, N = 3; run avgs 1709894.12 to 1760957.75)
    1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
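For "more is better" throughput results, a per-kernel ratio against a baseline is often easier to read than raw requests per second. A sketch using the Redis GET averages above, with the Default Kernel as the baseline:

```python
# Relative Redis GET throughput of each kernel vs. the Default Kernel,
# using the averages reported above (requests/s; higher is better).
results = {
    "Default Kernel": 1863679.11,
    "Linux 5.10.4": 1611434.08,
    "Linux 5.11-rc1": 1727854.75,
}
baseline = results["Default Kernel"]
for name, rps in results.items():
    print(f"{name}: {rps / baseline:.3f}x")
```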

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better)
    Default Kernel: 85.39 (SE +/- 0.33, N = 3; run avgs 84.86 to 86; MIN: 76.96 / MAX: 108.88)
    Linux 5.10.4:   74.53 (SE +/- 0.36, N = 3; run avgs 73.8 to 74.95; MIN: 73.33 / MAX: 84.41)
    Linux 5.11-rc1: 74.67 (SE +/- 0.33, N = 3; run avgs 74.18 to 75.29; MIN: 73.34 / MAX: 83.99)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, fewer is better)
    Default Kernel: 223426 (SE +/- 1532.37, N = 3; run avgs 221478 to 226449)
    Linux 5.10.4:   208952 (SE +/- 207.90, N = 3; run avgs 208537 to 209177)
    Linux 5.11-rc1: 239203 (SE +/- 1205.42, N = 3; run avgs 236841 to 240803)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
    Default Kernel: 6.60753 (SE +/- 0.07065, N = 5; run avgs 6.33 to 6.71; MIN: 5.69)
    Linux 5.10.4:   5.78248 (SE +/- 0.05897, N = 15; run avgs 5.32 to 5.95; MIN: 5.23)
    Linux 5.11-rc1: 5.87713 (SE +/- 0.01520, N = 3; run avgs 5.85 to 5.9; MIN: 5.59)
    1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
    Default Kernel: 23.83 (SE +/- 0.01, N = 3; run avgs 23.8 to 23.84; MIN: 22.68)
    Linux 5.10.4:   20.99 (SE +/- 0.03, N = 3; run avgs 20.94 to 21.03; MIN: 20.72)
    Linux 5.11-rc1: 20.87 (SE +/- 0.04, N = 3; run avgs 20.8 to 20.93; MIN: 20.57)
    1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, fewer is better)
    Default Kernel: 33.49 (SE +/- 0.53, N = 3; run avgs 32.69 to 34.49; MIN: 30.17 / MAX: 68.43)
    Linux 5.10.4:   29.37 (SE +/- 0.09, N = 15; run avgs 28.55 to 29.75; MIN: 28.47 / MAX: 73.28)
    Linux 5.11-rc1: 29.60 (SE +/- 0.06, N = 3; run avgs 29.52 to 29.73; MIN: 29.39 / MAX: 40.11)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
    Default Kernel: 9.59 (SE +/- 0.10, N = 3; run avgs 9.42 to 9.75; MIN: 8.1 / MAX: 72.56)
    Linux 5.10.4:   8.45 (SE +/- 0.14, N = 3; run avgs 8.17 to 8.62; MIN: 8.09 / MAX: 10.09)
    Linux 5.11-rc1: 8.69 (SE +/- 0.12, N = 3; run avgs 8.55 to 8.93; MIN: 8.29 / MAX: 26.28)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better)
    Default Kernel: 50.89 (SE +/- 0.27, N = 3; run avgs 50.38 to 51.28)
    Linux 5.10.4:   44.85 (SE +/- 0.02, N = 3; run avgs 44.83 to 44.89)
    Linux 5.11-rc1: 45.16 (SE +/- 0.15, N = 3; run avgs 44.9 to 45.42)
    1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
    Default Kernel: 11.00 (SE +/- 0.30, N = 3; run avgs 10.58 to 11.59; MIN: 9.52 / MAX: 47.8)
    Linux 5.10.4:   9.71 (SE +/- 0.09, N = 3; run avgs 9.55 to 9.87; MIN: 9.51 / MAX: 10.13)
    Linux 5.11-rc1: 10.04 (SE +/- 0.10, N = 3; run avgs 9.85 to 10.15; MIN: 9.81 / MAX: 10.2)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, fewer is better)
    Default Kernel: 3299807 (SE +/- 1161.41, N = 3; run avgs 3297750 to 3301770)
    Linux 5.10.4:   3174873 (SE +/- 713.33, N = 3; run avgs 3174160 to 3176300)
    Linux 5.11-rc1: 3580783 (SE +/- 2970.22, N = 3; run avgs 3577470 to 3586710)

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, fewer is better)
    Default Kernel: 3649700 (SE +/- 2124.36, N = 3; run avgs 3646130 to 3653480)
    Linux 5.10.4:   3512430 (SE +/- 2619.34, N = 3; run avgs 3508490 to 3517390)
    Linux 5.11-rc1: 3947703 (SE +/- 912.82, N = 3; run avgs 3946370 to 3949450)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
    Default Kernel: 32.86 (SE +/- 0.30, N = 3; run avgs 32.32 to 33.34; MIN: 30.37 / MAX: 71.09)
    Linux 5.10.4:   29.49 (SE +/- 0.25, N = 3; run avgs 28.99 to 29.76; MIN: 28.88 / MAX: 31.63)
    Linux 5.11-rc1: 29.35 (SE +/- 0.32, N = 3; run avgs 29.03 to 29.98; MIN: 28.95 / MAX: 100.31)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 256b Decryption (MiB/s, more is better)
    Default Kernel: 331.7 (SE +/- 1.40, N = 3; run avgs 329 to 333.6)
    Linux 5.10.4:   367.1 (SE +/- 1.64, N = 3; run avgs 365.2 to 370.4)
    Linux 5.11-rc1: 369.8 (SE +/- 0.24, N = 3; run avgs 369.5 to 370.3)

Cryptsetup - Serpent-XTS 256b Decryption (MiB/s, more is better)
    Default Kernel: 320.6 (SE +/- 2.23, N = 3; run avgs 318 to 325)
    Linux 5.10.4:   354.9 (SE +/- 1.70, N = 3; run avgs 352.9 to 358.3)
    Linux 5.11-rc1: 357.3 (SE +/- 1.32, N = 3; run avgs 354.7 to 358.7)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, fewer is better)
    Default Kernel: 177091 (SE +/- 75.51, N = 3; run avgs 176947 to 177202)
    Linux 5.10.4:   170472 (SE +/- 30.12, N = 3; run avgs 170413 to 170511)
    Linux 5.11-rc1: 189637 (SE +/- 424.39, N = 3; run avgs 188790 to 190104)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Encryption (MiB/s, more is better)
    Default Kernel: 333.3 (SE +/- 1.79, N = 3; run avgs 330.6 to 336.7)
    Linux 5.10.4:   369.9 (SE +/- 1.00, N = 3; run avgs 367.9 to 371)
    Linux 5.11-rc1: 370.6 (SE +/- 0.25, N = 3; run avgs 370.1 to 370.9)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
    Default Kernel: 11.38 (SE +/- 0.05, N = 3; run avgs 11.31 to 11.49; MIN: 10.79)
    Linux 5.10.4:   12.64 (SE +/- 0.12, N = 15; run avgs 11.65 to 13.4; MIN: 11.31)
    Linux 5.11-rc1: 12.42 (SE +/- 0.16, N = 15; run avgs 11.37 to 13.52; MIN: 11.07)
    1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 512b Decryption (MiB/s, more is better)
    Default Kernel: 321.6 (SE +/- 2.22, N = 3; run avgs 317.2 to 324.3)
    Linux 5.10.4:   356.7 (SE +/- 0.91, N = 3; run avgs 355.6 to 358.5)
    Linux 5.11-rc1: 355.3 (SE +/- 1.71, N = 3; run avgs 352.4 to 358.3)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, fewer is better)
    Default Kernel: 169941 (SE +/- 82.73, N = 3; run avgs 169803 to 170089)
    Linux 5.10.4:   163162 (SE +/- 89.09, N = 3; run avgs 162985 to 163266)
    Linux 5.11-rc1: 180798 (SE +/- 51.97, N = 3; run avgs 180695 to 180863)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Decryption (MiB/s, more is better)
    Default Kernel: 334.4 (SE +/- 1.69, N = 3; run avgs 332.1 to 337.7)
    Linux 5.10.4:   368.2 (SE +/- 0.35, N = 3; run avgs 367.7 to 368.9)
    Linux 5.11-rc1: 369.3 (SE +/- 0.61, N = 3; run avgs 368.3 to 370.4)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, fewer is better)
    Default Kernel: 24.46 (SE +/- 0.26, N = 3; run avgs 23.98 to 24.86)
    Linux 5.10.4:   22.50 (SE +/- 0.10, N = 3; run avgs 22.32 to 22.65)
    Linux 5.11-rc1: 22.16 (SE +/- 0.19, N = 3; run avgs 21.91 to 22.53)
    1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, fewer is better)
    Default Kernel: 77.14 (SE +/- 0.46, N = 3; run avgs 76.51 to 78.04; MIN: 69.93 / MAX: 103.83)
    Linux 5.10.4:   69.87 (SE +/- 0.35, N = 3; run avgs 69.17 to 70.25; MIN: 68.99 / MAX: 81.83)
    Linux 5.11-rc1: 69.90 (SE +/- 0.07, N = 3; run avgs 69.77 to 69.98; MIN: 69.61 / MAX: 94.45)
    1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better)
    Default Kernel: 2.831 (SE +/- 0.034, N = 3; run avgs 2.77 to 2.89)
    Linux 5.10.4:   2.575 (SE +/- 0.009, N = 3; run avgs 2.56 to 2.59)
    Linux 5.11-rc1: 2.568 (SE +/- 0.006, N = 3; run avgs 2.56 to 2.58)
    1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 512b Encryption (MiB/s, more is better)
    Default Kernel: 326.1 (SE +/- 2.94, N = 3; run avgs 321 to 331.2)
    Linux 5.10.4:   359.3 (SE +/- 1.57, N = 3; run avgs 356.2 to 361.1)
    Linux 5.11-rc1: 358.0 (SE +/- 2.00, N = 2; run avgs 356 to 360)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, fewer is better)
    Default Kernel: 25.0 (SE +/- 0.06, N = 3; run avgs 24.9 to 25.1)
    Linux 5.10.4:   22.8 (SE +/- 0.03, N = 3; run avgs 22.7 to 22.8)
    Linux 5.11-rc1: 22.7 (SE +/- 0.00, N = 3; run avgs 22.7 to 22.7)
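PyPerformance's pathlib benchmark times pure-Python path manipulation in the standard library's pathlib module. An illustrative micro-benchmark of the same kind of operations; this is not pyperformance's exact workload or iteration count, only a sketch of what is being measured:

```python
# Illustrative timing of pathlib operations, in the spirit of the
# PyPerformance pathlib benchmark (NOT its exact workload).
import timeit
from pathlib import Path

def workload():
    p = Path("/tmp") / "example" / "file.txt"
    p.suffix                 # extension lookup
    p.with_suffix(".log")    # derive a sibling path
    p.parent.name            # parent directory component

elapsed = timeit.timeit(workload, number=10_000)
print(f"10,000 iterations took {elapsed:.3f} s")
```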

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: resize (Seconds, fewer is better)
    Default Kernel: 10.75 (SE +/- 0.09, N = 3; run avgs 10.65 to 10.94)
    Linux 5.10.4:   11.83 (SE +/- 0.09, N = 3; run avgs 11.65 to 11.97)
    Linux 5.11-rc1: 11.83 (SE +/- 0.04, N = 3; run avgs 11.76 to 11.89)

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, fewer is better)
    Default Kernel: 117.96 (SE +/- 0.40, N = 3; run avgs 117.29 to 118.67)
    Linux 5.10.4:   109.47 (SE +/- 0.03, N = 3; run avgs 109.42 to 109.51)
    Linux 5.11-rc1: 107.16 (SE +/- 0.13, N = 3; run avgs 107 to 107.43)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 256b Encryption (MiB/s, more is better)
    Default Kernel: 333.2 (SE +/- 1.62, N = 3; run avgs 330.1 to 335.6)
    Linux 5.10.4:   364.8 (SE +/- 5.33, N = 3; run avgs 354.3 to 371.6)
    Linux 5.11-rc1: 366.2 (SE +/- 4.26, N = 3; run avgs 357.9 to 372)

Cryptsetup - AES-XTS 512b Encryption (MiB/s, more is better)
    Default Kernel: 1356.0 (SE +/- 11.60, N = 3; run avgs 1333.9 to 1373.2)
    Linux 5.10.4:   1485.4 (SE +/- 3.09, N = 3; run avgs 1479.3 to 1489.2)
    Linux 5.11-rc1: 1488.7 (SE +/- 0.68, N = 3; run avgs 1487.4 to 1489.7)

Cryptsetup - PBKDF2-whirlpool (Iterations Per Second, more is better)
    Default Kernel: 550533 (SE +/- 1020.69, N = 3; run avgs 548992 to 552463)
    Linux 5.10.4:   603098 (SE +/- 1406.03, N = 3; run avgs 600558 to 605413)
    Linux 5.11-rc1: 603787 (SE +/- 613.12, N = 3; run avgs 602629 to 604715)
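A PBKDF2 iterations-per-second rate translates directly into the wall-clock time a key derivation with a fixed iteration count would take. A sketch using the PBKDF2-whirlpool rates above; the 2,000,000-iteration figure is an arbitrary example, not a cryptsetup default:

```python
# Time for a hypothetical 2,000,000-iteration PBKDF2-whirlpool key
# derivation at each kernel's measured rate (iterations/s above).
rates = {
    "Default Kernel": 550533,
    "Linux 5.10.4": 603098,
    "Linux 5.11-rc1": 603787,
}
iterations = 2_000_000
for name, ips in rates.items():
    print(f"{name}: {iterations / ips:.2f} s")
```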

Cryptsetup - Serpent-XTS 256b Encryption (MiB/s, more is better)
    Default Kernel: 325.1 (SE +/- 1.68, N = 3; run avgs 321.8 to 327.1)
    Linux 5.10.4:   356.4 (SE +/- 2.62, N = 3; run avgs 351.5 to 360.5)
    Linux 5.11-rc1: 356.5 (SE +/- 4.86, N = 3; run avgs 346.8 to 362)

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET (Requests Per Second, more is better)
    Default Kernel: 1363650.75 (SE +/- 16871.12, N = 4; run avgs 1317776 to 1389066.62)
    Linux 5.10.4:   1243969.70 (SE +/- 8535.72, N = 15; run avgs 1169628 to 1285593.88)
    Linux 5.11-rc1: 1297244.29 (SE +/- 2028.99, N = 3; run avgs 1293826.62 to 1300847.88)
    1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Decryption (MiB/s, more is better)
    Default Kernel: 1531.1 (SE +/- 7.30, N = 3; run avgs 1517.9 to 1543.1)
    Linux 5.10.4:   1668.2 (SE +/- 1.07, N = 3; run avgs 1666.6 to 1670.2)
    Linux 5.11-rc1: 1678.3 (SE +/- 1.79, N = 3; run avgs 1675.1 to 1681.3)

Cryptsetup - AES-XTS 512b Decryption (MiB/s, more is better)
    Default Kernel: 1359.7 (SE +/- 9.75, N = 3; run avgs 1342.3 to 1376)
    Linux 5.10.4:   1488.7 (SE +/- 2.77, N = 3; run avgs 1483.3 to 1492.4)
    Linux 5.11-rc1: 1485.4 (SE +/- 3.76, N = 3; run avgs 1479.4 to 1492.3)

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, fewer is better)
    Default Kernel: 24.18 (SE +/- 0.17, N = 3; run avgs 23.91 to 24.5)
    Linux 5.10.4:   22.11 (SE +/- 0.07, N = 3; run avgs 21.98 to 22.22)
    Linux 5.11-rc1: 22.09 (SE +/- 0.07, N = 3; run avgs 21.95 to 22.18)
    1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, fewer is better)
    Default Kernel: 12.69 (SE +/- 0.03, N = 3; run avgs 12.64 to 12.74; MIN: 11.22 / MAX: 25.7)
    Linux 5.10.4:   11.66 (SE +/- 0.08, N = 3; run avgs 11.54 to 11.8; MIN: 11.43 / MAX: 23.12)
    Linux 5.11-rc1: 11.60 (SE +/- 0.03, N = 3; run avgs 11.55 to 11.65; MIN: 11.41 / MAX: 23.49)
    1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, fewer is better)
    Default Kernel: 9.542 (SE +/- 0.037, N = 5; run avgs 9.45 to 9.64)
    Linux 5.10.4:   8.721 (SE +/- 0.015, N = 5; run avgs 8.7 to 8.78)
    Linux 5.11-rc1: 8.827 (SE +/- 0.063, N = 12; run avgs 8.71 to 9.29)
    1. (CXX) g++ options: -fvisibility=hidden -logg -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, fewer is better)
    Default Kernel: 37.4 (SE +/- 0.03, N = 3; run avgs 37.3 to 37.4)
    Linux 5.10.4:   34.3 (SE +/- 0.07, N = 3; run avgs 34.2 to 34.4)
    Linux 5.11-rc1: 34.2 (SE +/- 0.03, N = 3; run avgs 34.1 to 34.2)
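The json_loads benchmark exercises the standard library's json.loads parser. An illustrative timing of that operation; the document and loop count here are invented for the sketch and are not pyperformance's actual workload:

```python
# Illustrative timing of json.loads, the operation PyPerformance's
# json_loads benchmark measures (NOT its exact document or loop count).
import json
import timeit

doc = json.dumps({"key_%d" % i: list(range(10)) for i in range(100)})
elapsed = timeit.timeit(lambda: json.loads(doc), number=1_000)
print(f"1,000 parses took {elapsed * 1000:.1f} ms")
```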

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Encryption (MiB/s, more is better)
    Default Kernel: 1521.6 (SE +/- 2.80, N = 3; run avgs 1517.2 to 1526.8)
    Linux 5.10.4:   1658.9 (SE +/- 12.75, N = 3; run avgs 1638.3 to 1682.2)
    Linux 5.11-rc1: 1663.4 (SE +/- 9.69, N = 3; run avgs 1644.5 to 1676.6)
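The result viewer's "Show Overall Geometric Mean" option aggregates many results into a single per-configuration figure. A sketch of that aggregation over the four AES-XTS cryptsetup results in this file (256b/512b encryption and decryption, MiB/s averages as reported); this is an illustration of the geometric-mean technique, not Phoronix's exact overall computation:

```python
# Geometric mean of the four AES-XTS cryptsetup results per kernel,
# illustrating the kind of aggregation behind "Show Overall Geometric
# Mean". Values are the MiB/s averages reported in this file.
from statistics import geometric_mean

aes_xts = {
    "Default Kernel": [1521.6, 1531.1, 1356.0, 1359.7],
    "Linux 5.10.4":   [1658.9, 1668.2, 1485.4, 1488.7],
    "Linux 5.11-rc1": [1663.4, 1678.3, 1488.7, 1485.4],
}
for name, values in aes_xts.items():
    print(f"{name}: {geometric_mean(values):.1f} MiB/s")
```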

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, more is better)
    Default Kernel: 33310090.4 (SE +/- 91702.31, N = 3; run avgs 33126709.9 to 33404356.4)
    Linux 5.10.4:   35909522.3 (SE +/- 207274.15, N = 3; run avgs 35639526.9 to 36316943.5)
    Linux 5.11-rc1: 36408424.2 (SE +/- 77245.63, N = 3; run avgs 36313928.7 to 36561519.1)

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, fewer is better)
    Default Kernel: 10.370 (SE +/- 0.054, N = 3; run avgs 10.29 to 10.47)
    Linux 5.10.4:   9.490 (SE +/- 0.005, N = 3; run avgs 9.48 to 9.5)
    Linux 5.11-rc1: 9.496 (SE +/- 0.014, N = 3; run avgs 9.47 to 9.51)
    1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s; more is better)
  Default Kernel:  36.03  (SE +/- 0.41, N = 3; Min 35.29 / Avg 36.03 / Max 36.7)
  Linux 5.10.4:    39.19  (SE +/- 0.02, N = 3; Min 39.15 / Avg 39.19 / Max 39.22)
  Linux 5.11-rc1:  39.36  (SE +/- 0.08, N = 3; Min 39.27 / Avg 39.36 / Max 39.51)
  Compiler notes: (CC) gcc options: -O3

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds; fewer is better)
  Default Kernel:  16.21  (SE +/- 0.10, N = 5; Min 15.99 / Avg 16.21 / Max 16.44)
  Linux 5.10.4:    14.87  (SE +/- 0.01, N = 5; Min 14.85 / Avg 14.87 / Max 14.91)
  Linux 5.11-rc1:  14.85  (SE +/- 0.01, N = 5; Min 14.83 / Avg 14.85 / Max 14.9)
  Compiler notes: (CXX) g++ options: -rdynamic

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 - Resolution: 1280 x 1024 (Score; more is better)
  Default Kernel:  6038
  Linux 5.10.4:    6542
  Linux 5.11-rc1:  6587

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: mdbx (Seconds; fewer is better)
  Default Kernel:  5.88
  Linux 5.10.4:    5.42
  Linux 5.11-rc1:  5.39

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms; fewer is better)
  Default Kernel:  308.50  (SE +/- 1.02, N = 3; Min 307.02 / Avg 308.5 / Max 310.46; per-run MIN 297.18 / MAX 319.3)
  Linux 5.10.4:    282.84  (SE +/- 0.18, N = 3; Min 282.49 / Avg 282.84 / Max 283.11; per-run MIN 281.31 / MAX 287.35)
  Linux 5.11-rc1:  282.82  (SE +/- 0.09, N = 3; Min 282.64 / Avg 282.81 / Max 282.92; per-run MIN 281.29 / MAX 287.16)
  Compiler notes: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: ac (Seconds; fewer is better)
  Default Kernel:  7.74
  Linux 5.10.4:    7.10
  Linux 5.11-rc1:  7.10

Polyhedron Fortran Benchmarks - Benchmark: fatigue2 (Seconds; fewer is better)
  Default Kernel:  63.32
  Linux 5.10.4:    58.15
  Linux 5.11-rc1:  58.15

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds; fewer is better)
  Default Kernel:  160  (SE +/- 0.67, N = 3; Min 159 / Avg 160.33 / Max 161)
  Linux 5.10.4:    151
  Linux 5.11-rc1:  147  (SE +/- 0.33, N = 3; Min 146 / Avg 146.67 / Max 147)
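The PyPerformance numbers are wall-clock times for small pure-Python kernels. A minimal sketch of the same measurement style, using only the stdlib `timeit` module and a toy float loop (not the real chaos benchmark code):

```python
import timeit

def kernel(n=10_000):
    # Small float-heavy loop, in the spirit of pyperformance's
    # chaos/float benchmarks (invented here for illustration).
    x, y = 0.0, 1.0
    for i in range(n):
        x, y = x + y / (i + 1), y * 0.999
    return x

# Best-of-5 timing, reported in milliseconds like the tables above.
best = min(timeit.repeat(kernel, number=1, repeat=5))
print(f"{best * 1000:.2f} ms")
```

Taking the best (or mean) of several repeats is what damps the run-to-run noise the SE columns above quantify.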

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score; more is better)
  Default Kernel:  475137  (SE +/- 1118.13, N = 3; Min 472917 / Avg 475136.67 / Max 476482)
  Linux 5.10.4:    517024  (SE +/- 700.71, N = 3; Min 515722 / Avg 517024 / Max 518124)
  Linux 5.11-rc1:  513740  (SE +/- 1896.86, N = 3; Min 511165 / Avg 513739.67 / Max 517440)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms; fewer is better)
  Default Kernel:  6.965  (SE +/- 0.024, N = 3; Min 6.92 / Avg 6.97 / Max 7; per-run MIN 6.27 / MAX 22.38)
  Linux 5.10.4:    6.402  (SE +/- 0.054, N = 3; Min 6.3 / Avg 6.4 / Max 6.48; per-run MIN 6.26 / MAX 15.65)
  Linux 5.11-rc1:  6.429  (SE +/- 0.017, N = 3; Min 6.4 / Avg 6.43 / Max 6.46; per-run MIN 6.35 / MAX 8.72)
  Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6 - WAV To APE (Seconds; fewer is better)
  Default Kernel:  17.21  (SE +/- 0.07, N = 5; Min 17.05 / Avg 17.21 / Max 17.43)
  Linux 5.10.4:    15.84  (SE +/- 0.04, N = 5; Min 15.78 / Avg 15.84 / Max 15.96)
  Linux 5.11-rc1:  15.82  (SE +/- 0.02, N = 5; Min 15.78 / Avg 15.82 / Max 15.91)
  Compiler notes: (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: induct2 (Seconds; fewer is better)
  Default Kernel:  25.16
  Linux 5.10.4:    23.15
  Linux 5.11-rc1:  23.13

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
  Default Kernel:  2.94203  (SE +/- 0.01399, N = 3; Min 2.92 / Avg 2.94 / Max 2.96; per-run MIN 2.51)
  Linux 5.10.4:    2.87865  (SE +/- 0.02712, N = 3; Min 2.83 / Avg 2.88 / Max 2.93; per-run MIN 2.53)
  Linux 5.11-rc1:  2.70498  (SE +/- 0.03607, N = 3; Min 2.65 / Avg 2.7 / Max 2.77; per-run MIN 2.43)
  Compiler notes: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: protein (Seconds; fewer is better)
  Default Kernel:  17.02
  Linux 5.10.4:    15.65
  Linux 5.11-rc1:  15.65

Polyhedron Fortran Benchmarks - Benchmark: doduc (Seconds; fewer is better)
  Default Kernel:  9.34
  Linux 5.10.4:    8.61
  Linux 5.11-rc1:  8.59

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds; fewer is better)
  Default Kernel:  1.774  (SE +/- 0.008, N = 3; Min 1.76 / Avg 1.77 / Max 1.79)
  Linux 5.10.4:    1.639  (SE +/- 0.011, N = 3; Min 1.63 / Avg 1.64 / Max 1.66)
  Linux 5.11-rc1:  1.632  (SE +/- 0.001, N = 3; Min 1.63 / Avg 1.63 / Max 1.63)
  Compiler notes: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: mp_prop_design (Seconds; fewer is better)
  Default Kernel:  74.87
  Linux 5.10.4:    68.88
  Linux 5.11-rc1:  68.93

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds; fewer is better)
  Default Kernel:  714  (SE +/- 1.20, N = 3; Min 712 / Avg 713.67 / Max 716)
  Linux 5.10.4:    661  (SE +/- 1.73, N = 3; Min 658 / Avg 661 / Max 664)
  Linux 5.11-rc1:  657

PyPerformance 1.0.0 - Benchmark: float (Milliseconds; fewer is better)
  Default Kernel:  164
  Linux 5.10.4:    151
  Linux 5.11-rc1:  151

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: air (Seconds; fewer is better)
  Default Kernel:  1.78
  Linux 5.10.4:    1.66
  Linux 5.11-rc1:  1.64

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds; fewer is better)
  Default Kernel:  154
  Linux 5.10.4:    143  (SE +/- 0.33, N = 3; Min 142 / Avg 142.67 / Max 143)
  Linux 5.11-rc1:  142

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds; fewer is better)
  Default Kernel:  233  (SE +/- 0.33, N = 3; Min 232 / Avg 232.67 / Max 233)
  Linux 5.10.4:    215
  Linux 5.11-rc1:  215

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s; more is better)
  Default Kernel:  36.94  (SE +/- 0.26, N = 3; Min 36.43 / Avg 36.94 / Max 37.27)
  Linux 5.10.4:    40.01  (SE +/- 0.15, N = 3; Min 39.8 / Avg 40.01 / Max 40.29)
  Linux 5.11-rc1:  40.03  (SE +/- 0.01, N = 3; Min 40.01 / Avg 40.03 / Max 40.05)
  Compiler notes: (CC) gcc options: -O3
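The compression-level parameter trades speed for ratio: level 3 compresses faster than level 9 but yields larger output. Python's stdlib has no LZ4 binding, so this sketch uses `zlib` purely to illustrate the same level-versus-speed trade-off being measured above:

```python
import time
import zlib

# Highly repetitive sample data so compression levels visibly differ.
data = b"the quick brown fox jumps over the lazy dog " * 20000

# zlib stands in for LZ4 here (no LZ4 module in the stdlib): higher
# levels generally compress smaller but take longer.
for level in (1, 6, 9):
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = time.perf_counter() - t0
    print(f"level {level}: {len(out)} bytes in {dt * 1000:.1f} ms")
    assert zlib.decompress(out) == data  # round-trip check
```

The benchmark reports the other side of this trade-off: throughput (MB/s) at a fixed level.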

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: test_fpu2 (Seconds; fewer is better)
  Default Kernel:  37.10
  Linux 5.10.4:    34.24
  Linux 5.11-rc1:  34.31

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: Kostya (GB/s; more is better)
  Default Kernel:  0.36  (SE +/- 0.00, N = 3; Min 0.36 / Avg 0.36 / Max 0.36)
  Linux 5.10.4:    0.39  (SE +/- 0.00, N = 3; Min 0.39 / Avg 0.39 / Max 0.39)
  Linux 5.11-rc1:  0.39  (SE +/- 0.00, N = 3; Min 0.39 / Avg 0.39 / Max 0.39)
  Compiler notes: (CXX) g++ options: -O3 -pthread
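Parser throughput here is simply bytes parsed divided by wall time. A stdlib sketch of that measurement with Python's `json` module (far slower than simdjson, and using a synthetic document rather than the actual Kostya test file):

```python
import json
import time

# Synthetic JSON document; simdjson's "Kostya" input is a specific
# test file, this stand-in only exercises the same metric.
doc = json.dumps([{"x": i * 0.5, "y": i * 0.25, "z": i} for i in range(20000)])
payload = doc.encode()

t0 = time.perf_counter()
parsed = json.loads(doc)
dt = time.perf_counter() - t0

# Throughput = bytes parsed / seconds, expressed in GB/s.
gb_per_s = len(payload) / dt / 1e9
print(f"parsed {len(payload)} bytes: {gb_per_s:.3f} GB/s")
```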

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds; fewer is better)
  Default Kernel:  170  (SE +/- 0.67, N = 3; Min 169 / Avg 170.33 / Max 171)
  Linux 5.10.4:    159
  Linux 5.11-rc1:  157  (SE +/- 0.67, N = 3; Min 156 / Avg 157.33 / Max 158)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds; fewer is better)
  Default Kernel:  83.93  (SE +/- 0.66, N = 3; Min 83.13 / Avg 83.93 / Max 85.23)
  Linux 5.10.4:    77.52  (SE +/- 0.20, N = 3; Min 77.12 / Avg 77.52 / Max 77.73)
  Linux 5.11-rc1:  78.39  (SE +/- 0.35, N = 3; Min 77.8 / Avg 78.39 / Max 79.02)
  Compiler notes: (CC) gcc options: -O2 -ldl -lz -lpthread
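speedtest1 times a large mix of SQLite statements. A much smaller stdlib sketch in the same spirit, timing a bulk insert into an in-memory database (the table name and row count are arbitrary example values):

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (i INTEGER PRIMARY KEY, v TEXT)")

# Bulk insert inside a single transaction, the usual SQLite fast path;
# speedtest1 runs a far larger statement mix than this.
t0 = time.perf_counter()
with con:
    con.executemany(
        "INSERT INTO t (v) VALUES (?)",
        ((f"row-{i}",) for i in range(50000)),
    )
rows = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(f"{rows} rows in {time.perf_counter() - t0:.3f}s")
```

Because SQLite fsync/journaling behavior is kernel-sensitive, this kind of workload is a reasonable probe of scheduler and I/O changes between kernel versions.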

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds; fewer is better)
  Default Kernel:  11.59  (SE +/- 0.06, N = 5; Min 11.44 / Avg 11.59 / Max 11.81)
  Linux 5.10.4:    10.71  (SE +/- 0.03, N = 5; Min 10.6 / Avg 10.71 / Max 10.79)
  Linux 5.11-rc1:  10.93  (SE +/- 0.08, N = 5; Min 10.77 / Avg 10.93 / Max 11.22)
  Compiler notes: (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds; fewer is better)
  Default Kernel:  9.478  (SE +/- 0.033, N = 3; Min 9.42 / Avg 9.48 / Max 9.54)
  Linux 5.10.4:    8.872  (SE +/- 0.094, N = 5; Min 8.74 / Avg 8.87 / Max 9.24)
  Linux 5.11-rc1:  8.756  (SE +/- 0.008, N = 3; Min 8.74 / Avg 8.76 / Max 8.77)
  Compiler notes: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Default Kernel:  22.79  (SE +/- 0.02, N = 3; Min 22.75 / Avg 22.79 / Max 22.82; per-run MIN 21.5)
  Linux 5.10.4:    21.05  (SE +/- 0.10, N = 3; Min 20.86 / Avg 21.05 / Max 21.18; per-run MIN 20.7)
  Linux 5.11-rc1:  21.09  (SE +/- 0.13, N = 3; Min 20.83 / Avg 21.09 / Max 21.22; per-run MIN 20.58)
  Compiler notes: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs; more is better)
  Default Kernel:  282568835.30  (SE +/- 1313862.73, N = 3; Min 279963270.52 / Avg 282568835.3 / Max 284166541.35)
  Linux 5.10.4:    305670769.36  (SE +/- 80446.54, N = 3; Min 305546362.9 / Avg 305670769.36 / Max 305821331.03)
  Linux 5.11-rc1:  305438957.42  (SE +/- 262122.57, N = 3; Min 305139599.04 / Avg 305438957.42 / Max 305961347.15)
  Compiler notes: (CC) gcc options: -O3 -march=native -lm

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: aermod (Seconds; fewer is better)
  Default Kernel:  7.58
  Linux 5.10.4:    7.09
  Linux 5.11-rc1:  7.01

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: alexnet (ms; fewer is better)
  Default Kernel:  17.09  (SE +/- 0.06, N = 3; Min 16.97 / Avg 17.09 / Max 17.16; per-run MIN 15.85 / MAX 45.86)
  Linux 5.10.4:    15.81  (SE +/- 0.01, N = 3; Min 15.78 / Avg 15.81 / Max 15.82; per-run MIN 15.64 / MAX 24.91)
  Linux 5.11-rc1:  15.86  (SE +/- 0.03, N = 3; Min 15.83 / Avg 15.86 / Max 15.92; per-run MIN 15.71 / MAX 16.75)
  Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - PBKDF2-sha512 (Iterations Per Second; more is better)
  Default Kernel:  1300115  (SE +/- 12175.94, N = 3; Min 1280312 / Avg 1300114.67 / Max 1322290)
  Linux 5.10.4:    1403727  (SE +/- 2865.42, N = 3; Min 1398101 / Avg 1403727.33 / Max 1407484)
  Linux 5.11-rc1:  1404973  (SE +/- 1659.14, N = 3; Min 1401839 / Avg 1404973.33 / Max 1407484)
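The PBKDF2 figures are iterations of PBKDF2-HMAC-SHA512 per second, the rate cryptsetup measures when choosing a key-derivation iteration count. Python's stdlib can reproduce the same measurement directly (the passphrase, salt, and iteration count below are arbitrary example values):

```python
import hashlib
import time

def pbkdf2_sha512_rate(iterations=50_000):
    """Time PBKDF2-HMAC-SHA512 and return iterations per second,
    the unit cryptsetup's PBKDF2 benchmark reports."""
    t0 = time.perf_counter()
    hashlib.pbkdf2_hmac("sha512", b"passphrase", b"0123456789abcdef", iterations)
    return iterations / (time.perf_counter() - t0)

print(f"{pbkdf2_sha512_rate():.0f} iterations/s")
```

A faster rate is better for benchmarking purposes, though in practice cryptsetup uses it to scale the iteration count so unlocking takes a fixed wall-clock time.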

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds; fewer is better)
  Default Kernel:  697
  Linux 5.10.4:    651  (SE +/- 0.67, N = 3; Min 650 / Avg 650.67 / Max 652)
  Linux 5.11-rc1:  646

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: rnflow (Seconds; fewer is better)
  Default Kernel:  16.77
  Linux 5.10.4:    15.65
  Linux 5.11-rc1:  15.55

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SADD (Requests Per Second; more is better)
  Default Kernel:  1540737.67  (SE +/- 21666.11, N = 3; Min 1499298.38 / Avg 1540737.67 / Max 1572427.75)
  Linux 5.10.4:    1429878.04  (SE +/- 19698.88, N = 3; Min 1404494.38 / Avg 1429878.04 / Max 1468663.75)
  Linux 5.11-rc1:  1465965.54  (SE +/- 11716.13, N = 3; Min 1443093.75 / Avg 1465965.54 / Max 1481813.25)
  Compiler notes: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
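The SADD test drives the server with RESP-encoded commands over a socket, so the result is sensitive to kernel networking and scheduling. A small sketch of the RESP wire format for one SADD request, requiring no server (`myset` and `member1` are arbitrary example values):

```python
def resp_encode(*parts: bytes) -> bytes:
    """Encode a command as a RESP array of bulk strings, the wire
    format a Redis client sends for each request."""
    out = b"*%d\r\n" % len(parts)
    for p in parts:
        out += b"$%d\r\n%s\r\n" % (len(p), p)
    return out

msg = resp_encode(b"SADD", b"myset", b"member1")
print(msg)  # → b'*3\r\n$4\r\nSADD\r\n$5\r\nmyset\r\n$7\r\nmember1\r\n'
```

Each benchmark request is one such message plus a reply read, so per-syscall overhead differences between kernels show up directly in requests per second.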

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: go (Milliseconds; fewer is better)
  Default Kernel:  353  (SE +/- 0.33, N = 3; Min 353 / Avg 353.33 / Max 354)
  Linux 5.10.4:    336
  Linux 5.11-rc1:  328

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 0 (Seconds; fewer is better)
  Default Kernel:  10.451  (SE +/- 0.048, N = 3; Min 10.36 / Avg 10.45 / Max 10.52)
  Linux 5.10.4:    9.860   (SE +/- 0.010, N = 3; Min 9.84 / Avg 9.86 / Max 9.87)
  Linux 5.11-rc1:  9.717   (SE +/- 0.040, N = 3; Min 9.65 / Avg 9.72 / Max 9.79)
  Compiler notes: (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: alexnet (ms; fewer is better)
  Default Kernel:  16.96  (SE +/- 0.10, N = 3; Min 16.8 / Avg 16.96 / Max 17.15; per-run MIN 15.86 / MAX 33.88)
  Linux 5.10.4:    15.78  (SE +/- 0.01, N = 15; Min 15.7 / Avg 15.78 / Max 15.88; per-run MIN 15.63 / MAX 27.03)
  Linux 5.11-rc1:  15.87  (SE +/- 0.03, N = 3; Min 15.84 / Avg 15.87 / Max 15.93; per-run MIN 15.7 / MAX 18.04)
  Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Server Rack - Acceleration: CPU-only (Seconds; fewer is better)
  Default Kernel:  0.218  (SE +/- 0.002, N = 3; Min 0.22 / Avg 0.22 / Max 0.22)
  Linux 5.10.4:    0.233  (SE +/- 0.002, N = 3; Min 0.23 / Avg 0.23 / Max 0.24)
  Linux 5.11-rc1:  0.234  (SE +/- 0.001, N = 3; Min 0.23 / Avg 0.23 / Max 0.24)

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 - Resolution: 1920 x 1080 (Score; more is better)
  Default Kernel:  4432
  Linux 5.10.4:    4722
  Linux 5.11-rc1:  4755

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds; fewer is better)
  Default Kernel:  80.9  (SE +/- 0.35, N = 3; Min 80.2 / Avg 80.87 / Max 81.4)
  Linux 5.10.4:    75.9  (SE +/- 0.09, N = 3; Min 75.8 / Avg 75.93 / Max 76.1)
  Linux 5.11-rc1:  75.5  (SE +/- 0.07, N = 3; Min 75.4 / Avg 75.53 / Max 75.6)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms; fewer is better)
  Default Kernel:  71.66  (SE +/- 0.91, N = 3; Min 70.18 / Avg 71.66 / Max 73.32; per-run MIN 63.38 / MAX 108.99)
  Linux 5.10.4:    67.18  (SE +/- 0.30, N = 3; Min 66.83 / Avg 67.18 / Max 67.78; per-run MIN 66.68 / MAX 78.44)
  Linux 5.11-rc1:  66.98  (SE +/- 0.14, N = 3; Min 66.72 / Avg 66.98 / Max 67.2; per-run MIN 66.5 / MAX 78.24)
  Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score; more is better)
  Default Kernel:  246.26  (SE +/- 2.11, N = 3; Min 242.24 / Avg 246.26 / Max 249.37)
  Linux 5.10.4:    262.15  (SE +/- 0.10, N = 3; Min 261.96 / Avg 262.15 / Max 262.32)
  Linux 5.11-rc1:  262.75  (SE +/- 0.21, N = 3; Min 262.35 / Avg 262.75 / Max 263.05)

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS; more is better)
  Default Kernel:  29782.88  (SE +/- 35.57, N = 3; Min 29742.09 / Avg 29782.88 / Max 29853.75)
  Linux 5.10.4:    31684.74  (SE +/- 40.89, N = 3; Min 31613.38 / Avg 31684.74 / Max 31755.02)
  Linux 5.11-rc1:  31043.73  (SE +/- 1.43, N = 3; Min 31041 / Avg 31043.73 / Max 31045.84)
  Compiler notes: (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp
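FFTE only supports transform lengths of the form (2^p)*(3^q)*(5^r); the N=256 run above qualifies since 256 = 2^8. A small checker for that constraint:

```python
def is_ffte_length(n: int) -> bool:
    """True if n factors as (2**p)*(3**q)*(5**r), the sequence
    lengths FFTE's mixed-radix kernels support."""
    if n < 1:
        return False
    for f in (2, 3, 5):
        while n % f == 0:
            n //= f
    return n == 1

# 256 = 2**8 qualifies; 7 has a prime factor outside {2, 3, 5}.
print(is_ffte_length(256), is_ffte_length(7))  # → True False
```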

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
  Default Kernel:  4749.06  (SE +/- 2.03, N = 3; Min 4745.34 / Avg 4749.06 / Max 4752.33; per-run MIN 4699.81)
  Linux 5.10.4:    4488.10  (SE +/- 5.53, N = 3; Min 4477.08 / Avg 4488.1 / Max 4494.37; per-run MIN 4464.06)
  Linux 5.11-rc1:  4470.71  (SE +/- 9.99, N = 3; Min 4452.75 / Avg 4470.71 / Max 4487.28; per-run MIN 4441.33)
  Compiler notes: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds; fewer is better)
  Default Kernel:  251499  (SE +/- 272.99, N = 3; Min 251144 / Avg 251499.33 / Max 252036)
  Linux 5.10.4:    244504  (SE +/- 290.99, N = 3; Min 244028 / Avg 244504 / Max 245032)
  Linux 5.11-rc1:  259645  (SE +/- 354.62, N = 3; Min 259246 / Avg 259644.67 / Max 260352)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 8 (Seconds; fewer is better)
  Default Kernel:  7.048  (SE +/- 0.031, N = 3; Min 6.99 / Avg 7.05 / Max 7.1)
  Linux 5.10.4:    6.700  (SE +/- 0.018, N = 3; Min 6.67 / Avg 6.7 / Max 6.73)
  Linux 5.11-rc1:  6.639  (SE +/- 0.006, N = 3; Min 6.63 / Avg 6.64 / Max 6.65)
  Compiler notes: (CXX) g++ options: -O3 -fPIC

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms; fewer is better)
  Default Kernel:  324.27  (SE +/- 0.50, N = 3; Min 323.56 / Avg 324.27 / Max 325.23; per-run MIN 310.87 / MAX 349.12)
  Linux 5.10.4:    305.64  (SE +/- 0.20, N = 3; Min 305.39 / Avg 305.64 / Max 306.03; per-run MIN 291.26 / MAX 314.5)
  Linux 5.11-rc1:  310.42  (SE +/- 0.45, N = 3; Min 309.53 / Avg 310.42 / Max 310.87; per-run MIN 290.92 / MAX 337.97)
  Compiler notes: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s; more is better)
  Default Kernel:  0.33  (SE +/- 0.00, N = 3; Min 0.33 / Avg 0.33 / Max 0.33)
  Linux 5.10.4:    0.35  (SE +/- 0.00, N = 3; Min 0.35 / Avg 0.35 / Max 0.35)
  Linux 5.11-rc1:  0.35  (SE +/- 0.00, N = 3; Min 0.35 / Avg 0.35 / Max 0.35)
  Compiler notes: (CXX) g++ options: -O3 -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc110002000300040005000SE +/- 8.86, N = 3SE +/- 5.62, N = 3SE +/- 13.33, N = 34737.794482.274468.08MIN: 4680.34MIN: 4460.82MIN: 4427.821. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc18001600240032004000Min: 4720.16 / Avg: 4737.79 / Max: 4748.22Min: 4473.52 / Avg: 4482.27 / Max: 4492.76Min: 4444.05 / Avg: 4468.08 / Max: 4490.11. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgruns/s, More Is BetterNode.js V8 Web Tooling BenchmarkDefault KernelLinux 5.10.4Linux 5.11-rc1246810SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.07, N = 37.247.677.621. Nodejs v12.18.2
OpenBenchmarking.orgruns/s, More Is BetterNode.js V8 Web Tooling BenchmarkDefault KernelLinux 5.10.4Linux 5.11-rc13691215Min: 7.23 / Avg: 7.24 / Max: 7.26Min: 7.61 / Avg: 7.67 / Max: 7.72Min: 7.49 / Avg: 7.62 / Max: 7.711. Nodejs v12.18.2

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: ETC1SDefault KernelLinux 5.10.4Linux 5.11-rc11428425670SE +/- 0.35, N = 3SE +/- 0.24, N = 3SE +/- 0.22, N = 364.5661.0060.951. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: ETC1SDefault KernelLinux 5.10.4Linux 5.11-rc11326395265Min: 64 / Avg: 64.56 / Max: 65.19Min: 60.69 / Avg: 61 / Max: 61.48Min: 60.52 / Avg: 60.95 / Max: 61.261. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.18Test: auto-levelsDefault KernelLinux 5.10.4Linux 5.11-rc148121620SE +/- 0.01, N = 3SE +/- 0.07, N = 3SE +/- 0.12, N = 316.0316.9016.98
OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.18Test: auto-levelsDefault KernelLinux 5.10.4Linux 5.11-rc148121620Min: 16.01 / Avg: 16.03 / Max: 16.05Min: 16.78 / Avg: 16.9 / Max: 17.01Min: 16.79 / Avg: 16.98 / Max: 17.2

oneDNN


OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc12K4K6K8K10KSE +/- 9.60, N = 3SE +/- 14.45, N = 3SE +/- 17.59, N = 39027.638564.548540.48MIN: 8975.45MIN: 8529.89MIN: 8490.771. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc116003200480064008000Min: 9009.65 / Avg: 9027.63 / Max: 9042.44Min: 8544.13 / Avg: 8564.54 / Max: 8592.46Min: 8518.57 / Avg: 8540.48 / Max: 8575.281. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc110002000300040005000SE +/- 20.61, N = 3SE +/- 7.62, N = 3SE +/- 8.44, N = 34742.114497.064486.34MIN: 4659.09MIN: 4465.18MIN: 4457.531. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc18001600240032004000Min: 4702.32 / Avg: 4742.11 / Max: 4771.32Min: 4482.06 / Avg: 4497.06 / Max: 4506.9Min: 4469.46 / Avg: 4486.34 / Max: 4494.951. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
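The LZ4 results below are throughput in MB/s: bytes processed divided by elapsed wall-clock time. A minimal sketch of that metric, using Python's standard-library zlib as a stand-in codec (LZ4 bindings are a separate third-party package, and the real test compresses an Ubuntu ISO, not this synthetic payload):

```python
import time
import zlib

def throughput_mb_s(data, func):
    """Run func over data once and return (MB/s, output)."""
    start = time.perf_counter()
    out = func(data)
    elapsed = time.perf_counter() - start
    return len(data) / elapsed / 1e6, out

# Compressible stand-in payload (~2 MB of repeating bytes).
payload = b"abcd" * 500_000
comp_speed, compressed = throughput_mb_s(payload, lambda d: zlib.compress(d, 1))
decomp_speed, restored = throughput_mb_s(compressed, zlib.decompress)
print(f"compress {comp_speed:.1f} MB/s, decompress {decomp_speed:.1f} MB/s")
```

Level 1 here mirrors the "Compression Level: 1" configuration of the LZ4 results, trading ratio for speed.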

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression SpeedDefault KernelLinux 5.10.4Linux 5.11-rc12K4K6K8K10KSE +/- 55.92, N = 3SE +/- 9.29, N = 3SE +/- 40.00, N = 38129.28560.18592.31. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression SpeedDefault KernelLinux 5.10.4Linux 5.11-rc115003000450060007500Min: 8028.9 / Avg: 8129.2 / Max: 8222.2Min: 8550.1 / Avg: 8560.13 / Max: 8578.7Min: 8528.5 / Avg: 8592.3 / Max: 86661. (CC) gcc options: -O3

oneDNN


OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc12K4K6K8K10KSE +/- 23.24, N = 3SE +/- 15.87, N = 3SE +/- 17.36, N = 39028.508575.958545.33MIN: 8965.57MIN: 8526.04MIN: 8489.411. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc116003200480064008000Min: 8998.4 / Avg: 9028.5 / Max: 9074.22Min: 8544.37 / Avg: 8575.95 / Max: 8594.51Min: 8519.22 / Avg: 8545.33 / Max: 8578.211. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Masskrug - Acceleration: CPU-onlyDefault KernelLinux 5.10.4Linux 5.11-rc1246810SE +/- 0.017, N = 3SE +/- 0.004, N = 3SE +/- 0.006, N = 36.7456.3926.397
OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Masskrug - Acceleration: CPU-onlyDefault KernelLinux 5.10.4Linux 5.11-rc13691215Min: 6.72 / Avg: 6.75 / Max: 6.78Min: 6.38 / Avg: 6.39 / Max: 6.4Min: 6.39 / Avg: 6.4 / Max: 6.41

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Server Room - Acceleration: CPU-onlyDefault KernelLinux 5.10.4Linux 5.11-rc11.17232.34463.51694.68925.8615SE +/- 0.017, N = 3SE +/- 0.006, N = 3SE +/- 0.006, N = 35.2104.9404.959
OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Server Room - Acceleration: CPU-onlyDefault KernelLinux 5.10.4Linux 5.11-rc1246810Min: 5.19 / Avg: 5.21 / Max: 5.24Min: 4.93 / Avg: 4.94 / Max: 4.95Min: 4.95 / Avg: 4.96 / Max: 4.97

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.7.3Encoder Speed: 10Default KernelLinux 5.10.4Linux 5.11-rc1246810SE +/- 0.026, N = 3SE +/- 0.017, N = 3SE +/- 0.005, N = 36.6226.3126.2841. (CXX) g++ options: -O3 -fPIC
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.7.3Encoder Speed: 10Default KernelLinux 5.10.4Linux 5.11-rc13691215Min: 6.6 / Avg: 6.62 / Max: 6.67Min: 6.28 / Avg: 6.31 / Max: 6.34Min: 6.28 / Avg: 6.28 / Max: 6.291. (CXX) g++ options: -O3 -fPIC

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.7.3Encoder Speed: 0Default KernelLinux 5.10.4Linux 5.11-rc1306090120150SE +/- 1.72, N = 3SE +/- 0.38, N = 3SE +/- 0.15, N = 3131.14126.03124.541. (CXX) g++ options: -O3 -fPIC
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.7.3Encoder Speed: 0Default KernelLinux 5.10.4Linux 5.11-rc120406080100Min: 128.97 / Avg: 131.14 / Max: 134.54Min: 125.48 / Avg: 126.03 / Max: 126.75Min: 124.29 / Avg: 124.54 / Max: 124.821. (CXX) g++ options: -O3 -fPIC

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: mobilenet-v1-1.0Default KernelLinux 5.10.4Linux 5.11-rc148121620SE +/- 0.14, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 313.6212.9612.94MIN: 12.87 / MAX: 26.35MIN: 12.85 / MAX: 22.3MIN: 12.76 / MAX: 24.471. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: mobilenet-v1-1.0Default KernelLinux 5.10.4Linux 5.11-rc148121620Min: 13.35 / Avg: 13.62 / Max: 13.77Min: 12.89 / Avg: 12.96 / Max: 13.04Min: 12.89 / Avg: 12.94 / Max: 12.971. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

LZ4 Compression


OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression SpeedDefault KernelLinux 5.10.4Linux 5.11-rc116003200480064008000SE +/- 73.87, N = 3SE +/- 92.18, N = 3SE +/- 46.06, N = 37192.297525.637572.231. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression SpeedDefault KernelLinux 5.10.4Linux 5.11-rc113002600390052006500Min: 7087.4 / Avg: 7192.29 / Max: 7334.85Min: 7363.7 / Avg: 7525.63 / Max: 7682.93Min: 7480.69 / Avg: 7572.23 / Max: 7626.851. (CC) gcc options: -O3

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile times both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: FastDefault KernelLinux 5.10.4Linux 5.11-rc1246810SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.06, N = 87.727.357.421. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: FastDefault KernelLinux 5.10.4Linux 5.11-rc13691215Min: 7.71 / Avg: 7.72 / Max: 7.73Min: 7.34 / Avg: 7.35 / Max: 7.35Min: 7.33 / Avg: 7.42 / Max: 7.871. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
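PyPerformance reports mean wall-clock times in milliseconds per workload, as in the 2to3 result below. A minimal sketch of the same measurement idea with the standard-library timeit module; the workload function here is a hypothetical stand-in, not the actual 2to3 benchmark:

```python
import timeit

def workload():
    # Stand-in workload; the real 2to3 benchmark translates a Python 2 source tree.
    return sum(i * i for i in range(10_000))

# Time several repeats and report the best mean per-call time in milliseconds,
# similar in spirit to how PyPerformance reports milliseconds per run.
repeats = timeit.repeat(workload, number=100, repeat=3)
best_ms = min(repeats) / 100 * 1000
print(f"{best_ms:.3f} ms")
```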

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to3Default KernelLinux 5.10.4Linux 5.11-rc1100200300400500SE +/- 0.33, N = 3SE +/- 0.67, N = 3461444439
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to3Default KernelLinux 5.10.4Linux 5.11-rc180160240320400Min: 460 / Avg: 460.67 / Max: 461Min: 438 / Avg: 439.33 / Max: 440

oneDNN


OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc12K4K6K8K10KSE +/- 4.57, N = 3SE +/- 9.21, N = 3SE +/- 10.93, N = 39002.538588.648583.12MIN: 8960.78MIN: 8557.28MIN: 8546.551. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc116003200480064008000Min: 8993.4 / Avg: 9002.53 / Max: 9007.34Min: 8576.93 / Avg: 8588.64 / Max: 8606.82Min: 8570.25 / Avg: 8583.12 / Max: 8604.851. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc13691215SE +/- 0.06535, N = 3SE +/- 0.12964, N = 3SE +/- 0.04049, N = 39.349279.804609.49354MIN: 8.51MIN: 9.16MIN: 91. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc13691215Min: 9.24 / Avg: 9.35 / Max: 9.46Min: 9.65 / Avg: 9.8 / Max: 10.06Min: 9.42 / Avg: 9.49 / Max: 9.561. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech SynthesisDefault KernelLinux 5.10.4Linux 5.11-rc1918273645SE +/- 0.24, N = 16SE +/- 0.54, N = 20SE +/- 0.38, N = 438.9140.2738.421. (CC) gcc options: -O2 -std=c99
OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech SynthesisDefault KernelLinux 5.10.4Linux 5.11-rc1816243240Min: 37.3 / Avg: 38.91 / Max: 40.49Min: 37.72 / Avg: 40.27 / Max: 44.46Min: 37.66 / Avg: 38.42 / Max: 39.181. (CC) gcc options: -O2 -std=c99

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MAFFT Alignment 7.471Multiple Sequence Alignment - LSU RNADefault KernelLinux 5.10.4Linux 5.11-rc148121620SE +/- 0.06, N = 3SE +/- 0.10, N = 3SE +/- 0.20, N = 314.5614.6015.251. (CC) gcc options: -std=c99 -O3 -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MAFFT Alignment 7.471Multiple Sequence Alignment - LSU RNADefault KernelLinux 5.10.4Linux 5.11-rc148121620Min: 14.47 / Avg: 14.56 / Max: 14.69Min: 14.47 / Avg: 14.6 / Max: 14.79Min: 15.04 / Avg: 15.25 / Max: 15.661. (CC) gcc options: -std=c99 -O3 -lm -lpthread

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: gas_dyn2Default KernelLinux 5.10.4Linux 5.11-rc1142842567061.7359.1559.08

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterStockfish 12Total TimeDefault KernelLinux 5.10.4Linux 5.11-rc13M6M9M12M15MSE +/- 170465.83, N = 3SE +/- 168497.64, N = 3SE +/- 38144.94, N = 31336145713559816138977591. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver
OpenBenchmarking.orgNodes Per Second, More Is BetterStockfish 12Total TimeDefault KernelLinux 5.10.4Linux 5.11-rc12M4M6M8M10MMin: 13101996 / Avg: 13361457.33 / Max: 13682723Min: 13230986 / Avg: 13559815.67 / Max: 13788087Min: 13853639 / Avg: 13897759.33 / Max: 139737191. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

VKMark

VKMark is a collection of Vulkan tests/benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgVKMark Score, More Is BetterVKMark 2020-05-21Resolution: 1280 x 1024Default KernelLinux 5.10.4Linux 5.11-rc114002800420056007000SE +/- 3.28, N = 3SE +/- 13.48, N = 3SE +/- 8.41, N = 36184635064261. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF
OpenBenchmarking.orgVKMark Score, More Is BetterVKMark 2020-05-21Resolution: 1280 x 1024Default KernelLinux 5.10.4Linux 5.11-rc111002200330044005500Min: 6179 / Avg: 6183.67 / Max: 6190Min: 6329 / Avg: 6349.67 / Max: 6375Min: 6416 / Avg: 6426.33 / Max: 64431. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterlibrsvgOperation: SVG Files To PNGDefault KernelLinux 5.10.4Linux 5.11-rc1816243240SE +/- 0.15, N = 3SE +/- 0.31, N = 3SE +/- 0.03, N = 334.3235.6535.001. rsvg-convert version 2.50.1
OpenBenchmarking.orgSeconds, Fewer Is BetterlibrsvgOperation: SVG Files To PNGDefault KernelLinux 5.10.4Linux 5.11-rc1816243240Min: 34.06 / Avg: 34.32 / Max: 34.57Min: 35.2 / Avg: 35.65 / Max: 36.25Min: 34.97 / Avg: 35 / Max: 35.051. rsvg-convert version 2.50.1

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel package. Learn more via the OpenBenchmarking.org test page.
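The test below simply times archive extraction of an .xz-compressed tarball. A minimal self-contained sketch of the same operation with the standard-library tarfile module, building a tiny in-memory .tar.xz as a stand-in for the actual linux-4.15.tar.xz package:

```python
import io
import tarfile
import time

# Build a small in-memory .tar.xz archive as a stand-in payload.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:xz") as tar:
    data = b"x" * 65536
    info = tarfile.TarInfo(name="kernel/file.c")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Time the extraction, as the test does for the kernel tarball.
buf.seek(0)
start = time.perf_counter()
with tarfile.open(fileobj=buf, mode="r:xz") as tar:
    members = tar.getmembers()
    extracted = tar.extractfile(members[0]).read()
elapsed = time.perf_counter() - start
print(f"extracted {len(extracted)} bytes in {elapsed:.4f}s")
```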

OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking The Linux Kernellinux-4.15.tar.xzDefault KernelLinux 5.10.4Linux 5.11-rc1246810SE +/- 0.047, N = 4SE +/- 0.040, N = 20SE +/- 0.037, N = 47.4037.1317.160
OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking The Linux Kernellinux-4.15.tar.xzDefault KernelLinux 5.10.4Linux 5.11-rc13691215Min: 7.28 / Avg: 7.4 / Max: 7.51Min: 6.96 / Avg: 7.13 / Max: 7.81Min: 7.1 / Avg: 7.16 / Max: 7.27

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betteryquake2 7.45Renderer: Software CPU - Resolution: 1920 x 1080Default KernelLinux 5.10.4Linux 5.11-rc120406080100SE +/- 0.52, N = 3SE +/- 0.27, N = 3SE +/- 0.29, N = 385.488.288.61. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC
OpenBenchmarking.orgFrames Per Second, More Is Betteryquake2 7.45Renderer: Software CPU - Resolution: 1920 x 1080Default KernelLinux 5.10.4Linux 5.11-rc120406080100Min: 84.4 / Avg: 85.43 / Max: 86.1Min: 87.7 / Avg: 88.23 / Max: 88.6Min: 88.1 / Avg: 88.6 / Max: 89.11. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

Appleseed

Appleseed is an open-source production renderer with a physically-based global illumination rendering engine, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Disney MaterialDefault KernelLinux 5.10.4Linux 5.11-rc170140210280350322.77316.76311.49

VKMark


OpenBenchmarking.orgVKMark Score, More Is BetterVKMark 2020-05-21Resolution: 1920 x 1080Default KernelLinux 5.10.4Linux 5.11-rc110002000300040005000SE +/- 4.93, N = 3SE +/- 3.71, N = 3SE +/- 3.61, N = 34664473248271. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF
OpenBenchmarking.orgVKMark Score, More Is BetterVKMark 2020-05-21Resolution: 1920 x 1080Default KernelLinux 5.10.4Linux 5.11-rc18001600240032004000Min: 4656 / Avg: 4664 / Max: 4673Min: 4727 / Avg: 4731.67 / Max: 4739Min: 4820 / Avg: 4827 / Max: 48321. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device Training ScoreDefault KernelLinux 5.10.4Linux 5.11-rc1150300450600750667689685

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
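The XZ test below times compression of a disk image at level 9. A minimal sketch of the same operation with Python's standard-library lzma module (preset 9 corresponds to xz -9); the payload here is a small synthetic stand-in for the Ubuntu server image:

```python
import lzma
import time

def xz_compress_time(data, preset=9):
    """Time XZ compression at the given preset; returns (seconds, compressed)."""
    start = time.perf_counter()
    compressed = lzma.compress(data, preset=preset)
    return time.perf_counter() - start, compressed

# Highly compressible synthetic payload (~880 KB).
payload = b"the quick brown fox jumps over the lazy dog\n" * 20000
elapsed, compressed = xz_compress_time(payload)
print(f"{len(payload)} -> {len(compressed)} bytes in {elapsed:.3f}s")
```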

OpenBenchmarking.orgSeconds, Fewer Is BetterXZ Compression 5.2.4Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9Default KernelLinux 5.10.4Linux 5.11-rc11122334455SE +/- 0.30, N = 3SE +/- 0.08, N = 3SE +/- 0.12, N = 347.5746.2146.091. (CC) gcc options: -pthread -fvisibility=hidden -O2
OpenBenchmarking.orgSeconds, Fewer Is BetterXZ Compression 5.2.4Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9Default KernelLinux 5.10.4Linux 5.11-rc11020304050Min: 47.15 / Avg: 47.57 / Max: 48.16Min: 46.08 / Avg: 46.21 / Max: 46.34Min: 45.91 / Avg: 46.08 / Max: 46.321. (CC) gcc options: -pthread -fvisibility=hidden -O2

Darktable


OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Boat - Acceleration: CPU-onlyDefault KernelLinux 5.10.4Linux 5.11-rc148121620SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 313.7513.3213.34
OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Boat - Acceleration: CPU-onlyDefault KernelLinux 5.10.4Linux 5.11-rc148121620Min: 13.71 / Avg: 13.75 / Max: 13.79Min: 13.3 / Avg: 13.32 / Max: 13.35Min: 13.3 / Avg: 13.34 / Max: 13.39

VkFFT

VkFFT is a Fast Fourier Transform (FFT) library that is GPU-accelerated by means of the Vulkan API. The VkFFT benchmark measures FFT performance across many different transform sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBenchmark Score, More Is BetterVkFFT 1.1.1Default KernelLinux 5.10.4Linux 5.11-rc12K4K6K8K10KSE +/- 23.03, N = 3SE +/- 67.59, N = 3SE +/- 46.59, N = 310274997299671. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgBenchmark Score, More Is BetterVkFFT 1.1.1Default KernelLinux 5.10.4Linux 5.11-rc12K4K6K8K10KMin: 10228 / Avg: 10274 / Max: 10299Min: 9844 / Avg: 9971.67 / Max: 10074Min: 9907 / Avg: 9967.33 / Max: 100591. (CXX) g++ options: -O3 -pthread

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System. The Sunflow Rendering System is an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterSunflow Rendering System 0.07.2Global Illumination + Image SynthesisDefault KernelLinux 5.10.4Linux 5.11-rc10.3560.7121.0681.4241.78SE +/- 0.014, N = 8SE +/- 0.013, N = 3SE +/- 0.022, N = 31.5811.5371.582MIN: 1.38 / MAX: 2.33MIN: 1.4 / MAX: 2.36MIN: 1.4 / MAX: 2.3
OpenBenchmarking.orgSeconds, Fewer Is BetterSunflow Rendering System 0.07.2Global Illumination + Image SynthesisDefault KernelLinux 5.10.4Linux 5.11-rc1246810Min: 1.53 / Avg: 1.58 / Max: 1.64Min: 1.52 / Avg: 1.54 / Max: 1.56Min: 1.55 / Avg: 1.58 / Max: 1.62

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package, run on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2020.3Water BenchmarkDefault KernelLinux 5.10.4Linux 5.11-rc10.12130.24260.36390.48520.6065SE +/- 0.005, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 30.5240.5390.5351. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm
OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2020.3Water BenchmarkDefault KernelLinux 5.10.4Linux 5.11-rc1246810Min: 0.52 / Avg: 0.52 / Max: 0.53Min: 0.54 / Avg: 0.54 / Max: 0.54Min: 0.53 / Avg: 0.53 / Max: 0.541. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking Firefox 84.0Extracting: firefox-84.0.source.tar.xzDefault KernelLinux 5.10.4Linux 5.11-rc1612182430SE +/- 0.22, N = 8SE +/- 0.14, N = 20SE +/- 0.16, N = 2026.4725.7425.99
OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking Firefox 84.0Extracting: firefox-84.0.source.tar.xzDefault KernelLinux 5.10.4Linux 5.11-rc1612182430Min: 26.16 / Avg: 26.47 / Max: 28Min: 25.28 / Avg: 25.74 / Max: 28.2Min: 25.45 / Avg: 25.99 / Max: 28.75

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthDefault KernelLinux 5.10.4Linux 5.11-rc14M8M12M16M20MSE +/- 237774.54, N = 3SE +/- 76230.74, N = 3SE +/- 97740.42, N = 3193068331984074319661267
OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthDefault KernelLinux 5.10.4Linux 5.11-rc13M6M9M12M15MMin: 18877888 / Avg: 19306833.33 / Max: 19699111Min: 19733463 / Avg: 19840743 / Max: 19988200Min: 19553676 / Avg: 19661267 / Max: 19856405

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and a Linux networking stack stress test. The test runs on the local host but does require root permissions to run. It works by creating three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices that send their traffic through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time -- a fairly CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWireGuard + Linux Networking Stack Stress TestDefault KernelLinux 5.10.4Linux 5.11-rc170140210280350SE +/- 1.15, N = 3SE +/- 3.52, N = 3SE +/- 0.88, N = 3300.19308.47308.38
OpenBenchmarking.orgSeconds, Fewer Is BetterWireGuard + Linux Networking Stack Stress TestDefault KernelLinux 5.10.4Linux 5.11-rc160120180240300Min: 298.6 / Avg: 300.19 / Max: 302.43Min: 302.31 / Avg: 308.47 / Max: 314.5Min: 306.74 / Avg: 308.38 / Max: 309.78

libavif avifenc


OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.7.3Encoder Speed: 2Default KernelLinux 5.10.4Linux 5.11-rc120406080100SE +/- 0.08, N = 3SE +/- 0.50, N = 3SE +/- 0.18, N = 376.6574.6675.061. (CXX) g++ options: -O3 -fPIC
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.7.3Encoder Speed: 2Default KernelLinux 5.10.4Linux 5.11-rc11530456075Min: 76.52 / Avg: 76.65 / Max: 76.79Min: 73.84 / Avg: 74.66 / Max: 75.57Min: 74.75 / Avg: 75.06 / Max: 75.371. (CXX) g++ options: -O3 -fPIC

yquake2


OpenBenchmarking.orgFrames Per Second, More Is Betteryquake2 7.45Renderer: OpenGL 1.x - Resolution: 1920 x 1080Default KernelLinux 5.10.4Linux 5.11-rc1140280420560700SE +/- 3.40, N = 3SE +/- 8.16, N = 3SE +/- 7.06, N = 3631.8641.4625.31. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC
OpenBenchmarking.orgFrames Per Second, More Is Betteryquake2 7.45Renderer: OpenGL 1.x - Resolution: 1920 x 1080Default KernelLinux 5.10.4Linux 5.11-rc1110220330440550Min: 627 / Avg: 631.83 / Max: 638.4Min: 625.1 / Avg: 641.4 / Max: 650.2Min: 614.2 / Avg: 625.3 / Max: 638.41. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

oneDNN


OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc1246810SE +/- 0.00187, N = 3SE +/- 0.03024, N = 3SE +/- 0.04602, N = 37.031976.859756.92207MIN: 6.48MIN: 6.65MIN: 6.71. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUDefault KernelLinux 5.10.4Linux 5.11-rc13691215Min: 7.03 / Avg: 7.03 / Max: 7.04Min: 6.83 / Avg: 6.86 / Max: 6.92Min: 6.85 / Avg: 6.92 / Max: 7.011. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer ISPC - Model: Asian DragonDefault KernelLinux 5.10.4Linux 5.11-rc1246810SE +/- 0.0448, N = 3SE +/- 0.0788, N = 3SE +/- 0.0259, N = 37.97598.17398.1022MIN: 7.86 / MAX: 8.15MIN: 8.02 / MAX: 8.44MIN: 8.03 / MAX: 8.26
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer ISPC - Model: Asian DragonDefault KernelLinux 5.10.4Linux 5.11-rc13691215Min: 7.89 / Avg: 7.98 / Max: 8.05Min: 8.06 / Avg: 8.17 / Max: 8.33Min: 8.06 / Avg: 8.1 / Max: 8.15

oneDNN


oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Default Kernel: 17.00 (SE +/- 0.00, N = 3; runs min/avg/max: 17 / 17 / 17; MIN: 16.89)
  Linux 5.10.4:   17.38 (SE +/- 0.07, N = 3; runs min/avg/max: 17.3 / 17.38 / 17.51; MIN: 17.2)
  Linux 5.11-rc1: 17.41 (SE +/- 0.01, N = 3; runs min/avg/max: 17.39 / 17.41 / 17.42; MIN: 17.33)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, more is better)
  Default Kernel: 1248
  Linux 5.10.4:   1277
  Linux 5.11-rc1: 1269

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: unsharp-mask (Seconds, fewer is better)
  Default Kernel: 19.02 (SE +/- 0.01, N = 3; runs min/avg/max: 19.01 / 19.02 / 19.03)
  Linux 5.10.4:   19.19 (SE +/- 0.08, N = 3; runs min/avg/max: 19.07 / 19.19 / 19.33)
  Linux 5.11-rc1: 19.45 (SE +/- 0.03, N = 3; runs min/avg/max: 19.4 / 19.45 / 19.5)

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, fewer is better)
  Default Kernel: 85.71 (SE +/- 0.07, N = 3; runs min/avg/max: 85.61 / 85.71 / 85.86)
  Linux 5.10.4:   83.81 (SE +/- 0.12, N = 3; runs min/avg/max: 83.61 / 83.81 / 84.03)
  Linux 5.11-rc1: 83.88 (SE +/- 0.17, N = 3; runs min/avg/max: 83.71 / 83.88 / 84.21)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: channel2 (Seconds, fewer is better)
  Default Kernel: 59.51
  Linux 5.10.4:   58.21
  Linux 5.11-rc1: 58.21

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, more is better)
  Default Kernel: 22.5 (SE +/- 0.00, N = 3; runs min/avg/max: 22.5 / 22.5 / 22.5)
  Linux 5.10.4:   22.9 (SE +/- 0.00, N = 3; runs min/avg/max: 22.9 / 22.9 / 22.9)
  Linux 5.11-rc1: 23.0 (SE +/- 0.03, N = 3; runs min/avg/max: 22.9 / 22.97 / 23)
  (CC) gcc options: -O3 -pthread -lz

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better)
  Default Kernel: 237.53 (SE +/- 2.78, N = 4; runs min/avg/max: 234.75 / 237.53 / 245.86)
  Linux 5.10.4:   232.46 (SE +/- 0.81, N = 3; runs min/avg/max: 231.5 / 232.46 / 234.07)
  Linux 5.11-rc1: 236.57 (SE +/- 1.16, N = 3; runs min/avg/max: 234.42 / 236.57 / 238.39)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 (Seconds, fewer is better)
  Default Kernel: 43.98 (SE +/- 0.11, N = 3; runs min/avg/max: 43.78 / 43.98 / 44.15)
  Linux 5.10.4:   43.09 (SE +/- 0.01, N = 3; runs min/avg/max: 43.08 / 43.09 / 43.12)
  Linux 5.11-rc1: 43.28 (SE +/- 0.14, N = 3; runs min/avg/max: 43.02 / 43.28 / 43.51)
  (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, fewer is better)
  Default Kernel: 78.65 (SE +/- 0.07, N = 3; runs min/avg/max: 78.51 / 78.65 / 78.76)
  Linux 5.10.4:   77.08 (SE +/- 0.04, N = 3; runs min/avg/max: 77.02 / 77.08 / 77.15)
  Linux 5.11-rc1: 78.29 (SE +/- 0.08, N = 3; runs min/avg/max: 78.18 / 78.29 / 78.46)
  RawTherapee, version 5.8, command line.

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: rotate (Seconds, fewer is better)
  Default Kernel: 15.41 (SE +/- 0.05, N = 3; runs min/avg/max: 15.3 / 15.41 / 15.47)
  Linux 5.10.4:   15.28 (SE +/- 0.08, N = 3; runs min/avg/max: 15.12 / 15.28 / 15.39)
  Linux 5.11-rc1: 15.10 (SE +/- 0.06, N = 3; runs min/avg/max: 14.99 / 15.1 / 15.18)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 1 (Frames Per Second, more is better)
  Default Kernel: 0.256 (SE +/- 0.001, N = 3; runs min/avg/max: 0.26 / 0.26 / 0.26)
  Linux 5.10.4:   0.261 (SE +/- 0.001, N = 3; runs min/avg/max: 0.26 / 0.26 / 0.26)
  Linux 5.11-rc1: 0.259 (SE +/- 0.001, N = 3; runs min/avg/max: 0.26 / 0.26 / 0.26)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: linpk (Seconds, fewer is better)
  Default Kernel: 4.81
  Linux 5.10.4:   4.73
  Linux 5.11-rc1: 4.72

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 5 (Frames Per Second, more is better)
  Default Kernel: 0.765 (SE +/- 0.001, N = 3; runs min/avg/max: 0.76 / 0.77 / 0.77)
  Linux 5.10.4:   0.779 (SE +/- 0.005, N = 3; runs min/avg/max: 0.77 / 0.78 / 0.79)
  Linux 5.11-rc1: 0.774 (SE +/- 0.001, N = 3; runs min/avg/max: 0.77 / 0.77 / 0.78)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, more is better)
  Default Kernel: 1.335 (SE +/- 0.001, N = 3; runs min/avg/max: 1.33 / 1.33 / 1.34)
  Linux 5.10.4:   1.354 (SE +/- 0.003, N = 3; runs min/avg/max: 1.35 / 1.35 / 1.36)
  Linux 5.11-rc1: 1.359 (SE +/- 0.001, N = 3; runs min/avg/max: 1.36 / 1.36 / 1.36)

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with a focus on offline and co-op gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: OpenGL 3.x - Resolution: 1920 x 1080 (Frames Per Second, more is better)
  Default Kernel: 944.3 (SE +/- 7.08, N = 3; runs min/avg/max: 930.8 / 944.33 / 954.7)
  Linux 5.10.4:   960.5 (SE +/- 2.23, N = 3; runs min/avg/max: 956.1 / 960.47 / 963.4)
  Linux 5.11-rc1: 947.3 (SE +/- 9.60, N = 3; runs min/avg/max: 936.3 / 947.27 / 966.4)
  (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Thorough (Seconds, fewer is better)
  Default Kernel: 36.95 (SE +/- 0.01, N = 3; runs min/avg/max: 36.93 / 36.95 / 36.97)
  Linux 5.10.4:   36.58 (SE +/- 0.01, N = 3; runs min/avg/max: 36.56 / 36.58 / 36.6)
  Linux 5.11-rc1: 37.20 (SE +/- 0.00, N = 3; runs min/avg/max: 37.2 / 37.2 / 37.2)
  (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better)
  Default Kernel: 8.3970 (SE +/- 0.0725, N = 3; runs min/avg/max: 8.29 / 8.4 / 8.53; MIN: 8.23 / MAX: 8.61)
  Linux 5.10.4:   8.5188 (SE +/- 0.0500, N = 3; runs min/avg/max: 8.43 / 8.52 / 8.6; MIN: 8.4 / MAX: 8.7)
  Linux 5.11-rc1: 8.3775 (SE +/- 0.0404, N = 3; runs min/avg/max: 8.33 / 8.38 / 8.46; MIN: 8.3 / MAX: 8.55)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, fewer is better)
  Default Kernel: 303.53 (SE +/- 0.08, N = 3; runs min/avg/max: 303.43 / 303.53 / 303.68)
  Linux 5.10.4:   300.62 (SE +/- 0.05, N = 3; runs min/avg/max: 300.54 / 300.62 / 300.71)
  Linux 5.11-rc1: 305.64 (SE +/- 0.10, N = 3; runs min/avg/max: 305.48 / 305.64 / 305.81)
  (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Coremark

This is a test of EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better)
  Default Kernel: 267555.23 (SE +/- 1434.22, N = 3; runs min/avg/max: 265273.98 / 267555.23 / 270201.81)
  Linux 5.10.4:   270279.69 (SE +/- 872.17, N = 3; runs min/avg/max: 269269.61 / 270279.69 / 272016.32)
  Linux 5.11-rc1: 271886.25 (SE +/- 802.31, N = 3; runs min/avg/max: 270281.68 / 271886.25 / 272700.16)
  (CC) gcc options: -O2 -lrt" -lrt
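The spread between kernels is easiest to read as a relative change. A small sketch using the Coremark averages above:

```python
# Average Iterations/Sec from the Coremark result above
default_kernel = 267555.23
linux_511_rc1 = 271886.25

# Percentage improvement of Linux 5.11-rc1 over the default kernel
pct = (linux_511_rc1 / default_kernel - 1) * 100
print(f"{pct:.2f}% faster")  # prints "1.62% faster"
```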

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: LPUSH (Requests Per Second, more is better)
  Default Kernel: 1065654.10 (SE +/- 9875.40, N = 15; runs min/avg/max: 1006229.38 / 1065654.1 / 1117390)
  Linux 5.10.4:   1049901.52 (SE +/- 14750.43, N = 3; runs min/avg/max: 1022560.31 / 1049901.52 / 1073167.38)
  Linux 5.11-rc1: 1053853.02 (SE +/- 11649.30, N = 3; runs min/avg/max: 1037775.94 / 1053853.02 / 1076495.12)
  (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Medium (Seconds, fewer is better)
  Default Kernel: 6.11 (SE +/- 0.01, N = 3; runs min/avg/max: 6.1 / 6.11 / 6.12)
  Linux 5.10.4:   6.05 (SE +/- 0.01, N = 3; runs min/avg/max: 6.04 / 6.05 / 6.06)
  Linux 5.11-rc1: 6.14 (SE +/- 0.00, N = 3; runs min/avg/max: 6.13 / 6.14 / 6.14)
  (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing (Seconds, fewer is better)
  Default Kernel: 867.81 (SE +/- 0.27, N = 3; runs min/avg/max: 867.3 / 867.81 / 868.2)
  Linux 5.10.4:   855.36 (SE +/- 0.82, N = 3; runs min/avg/max: 854.44 / 855.36 / 856.98)
  Linux 5.11-rc1: 857.23 (SE +/- 1.15, N = 3; runs min/avg/max: 855.34 / 857.23 / 859.31)
  (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 10 (Frames Per Second, more is better)
  Default Kernel: 2.349 (SE +/- 0.007, N = 3; runs min/avg/max: 2.34 / 2.35 / 2.36)
  Linux 5.10.4:   2.316 (SE +/- 0.002, N = 3; runs min/avg/max: 2.31 / 2.32 / 2.32)
  Linux 5.11-rc1: 2.347 (SE +/- 0.019, N = 3; runs min/avg/max: 2.32 / 2.35 / 2.38)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, more is better)
  Default Kernel: 2.828 (SE +/- 0.004, N = 3; runs min/avg/max: 2.82 / 2.83 / 2.84)
  Linux 5.10.4:   2.867 (SE +/- 0.003, N = 3; runs min/avg/max: 2.86 / 2.87 / 2.87)
  Linux 5.11-rc1: 2.843 (SE +/- 0.028, N = 3; runs min/avg/max: 2.79 / 2.84 / 2.87)

Appleseed

Appleseed is an open-source, physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, fewer is better)
  Default Kernel: 287.58
  Linux 5.10.4:   291.47
  Linux 5.11-rc1: 289.58

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better)
  Default Kernel: 329.39 (SE +/- 0.28, N = 3; runs min/avg/max: 328.84 / 329.39 / 329.78)
  Linux 5.10.4:   325.24 (SE +/- 0.38, N = 3; runs min/avg/max: 324.49 / 325.24 / 325.77)
  Linux 5.11-rc1: 326.50 (SE +/- 0.76, N = 3; runs min/avg/max: 325.43 / 326.5 / 327.96)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, fewer is better)
  Default Kernel: 131.32 (SE +/- 0.91, N = 3; runs min/avg/max: 130.25 / 131.32 / 133.13)
  Linux 5.10.4:   129.83 (SE +/- 0.92, N = 3; runs min/avg/max: 128.85 / 129.82 / 131.67)
  Linux 5.11-rc1: 129.68 (SE +/- 0.86, N = 3; runs min/avg/max: 128.61 / 129.68 / 131.38)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 10.0 - Time To Compile (Seconds, fewer is better)
  Default Kernel: 1057.63 (SE +/- 0.68, N = 3; runs min/avg/max: 1056.3 / 1057.63 / 1058.59)
  Linux 5.10.4:   1044.46 (SE +/- 0.83, N = 3; runs min/avg/max: 1043.16 / 1044.46 / 1045.99)
  Linux 5.11-rc1: 1050.45 (SE +/- 3.21, N = 3; runs min/avg/max: 1044.74 / 1050.45 / 1055.84)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better)
  Default Kernel: 7.3312 (SE +/- 0.0587, N = 3; runs min/avg/max: 7.22 / 7.33 / 7.42; MIN: 7.18 / MAX: 7.52)
  Linux 5.10.4:   7.3497 (SE +/- 0.0471, N = 3; runs min/avg/max: 7.26 / 7.35 / 7.43; MIN: 7.23 / MAX: 7.53)
  Linux 5.11-rc1: 7.4224 (SE +/- 0.0326, N = 3; runs min/avg/max: 7.38 / 7.42 / 7.49; MIN: 7.34 / MAX: 7.57)

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, more is better)
  Default Kernel: 581
  Linux 5.10.4:   588
  Linux 5.11-rc1: 584

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better)
  Default Kernel: 7828.5 (SE +/- 8.31, N = 3; runs min/avg/max: 7814.1 / 7828.53 / 7842.9)
  Linux 5.10.4:   7895.5 (SE +/- 8.68, N = 3; runs min/avg/max: 7884.5 / 7895.47 / 7912.6)
  Linux 5.11-rc1: 7920.9 (SE +/- 13.60, N = 3; runs min/avg/max: 7907.2 / 7920.9 / 7948.1)
  (CC) gcc options: -O3

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, more is better)
  Default Kernel: 2849.0 (SE +/- 14.17, N = 3; runs min/avg/max: 2825.7 / 2848.97 / 2874.6)
  Linux 5.10.4:   2838.6 (SE +/- 7.47, N = 3; runs min/avg/max: 2828.9 / 2838.6 / 2853.3)
  Linux 5.11-rc1: 2872.0 (SE +/- 6.18, N = 3; runs min/avg/max: 2859.7 / 2872.03 / 2878.9)
  (CC) gcc options: -O3 -pthread -lz

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better)
  Default Kernel: 6.7689 (SE +/- 0.0323, N = 3; runs min/avg/max: 6.71 / 6.77 / 6.81; MIN: 6.67 / MAX: 6.9)
  Linux 5.10.4:   6.7934 (SE +/- 0.0390, N = 3; runs min/avg/max: 6.72 / 6.79 / 6.83; MIN: 6.68 / MAX: 6.95)
  Linux 5.11-rc1: 6.8458 (SE +/- 0.0114, N = 3; runs min/avg/max: 6.82 / 6.85 / 6.86; MIN: 6.79 / MAX: 6.96)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds, fewer is better)
  Default Kernel: 82.39 (SE +/- 0.08, N = 3; runs min/avg/max: 82.27 / 82.39 / 82.53)
  Linux 5.10.4:   81.48 (SE +/- 0.01, N = 3; runs min/avg/max: 81.46 / 81.47 / 81.49)
  Linux 5.11-rc1: 81.62 (SE +/- 0.01, N = 3; runs min/avg/max: 81.6 / 81.62 / 81.64)
  (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better)
  Default Kernel: 7.8014 (SE +/- 0.0127, N = 3; runs min/avg/max: 7.78 / 7.8 / 7.82; MIN: 7.74 / MAX: 7.91)
  Linux 5.10.4:   7.8722 (SE +/- 0.0234, N = 3; runs min/avg/max: 7.83 / 7.87 / 7.91; MIN: 7.8 / MAX: 7.98)
  Linux 5.11-rc1: 7.8824 (SE +/- 0.0090, N = 3; runs min/avg/max: 7.87 / 7.88 / 7.9; MIN: 7.84 / MAX: 7.97)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Default Kernel: 13.34 (SE +/- 0.00, N = 3; runs min/avg/max: 13.34 / 13.34 / 13.34; MIN: 12.96)
  Linux 5.10.4:   13.24 (SE +/- 0.02, N = 3; runs min/avg/max: 13.21 / 13.24 / 13.26; MIN: 12.87)
  Linux 5.11-rc1: 13.21 (SE +/- 0.01, N = 3; runs min/avg/max: 13.18 / 13.21 / 13.23; MIN: 12.85)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Appleseed

Appleseed is an open-source, physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds, fewer is better)
  Default Kernel: 539.96
  Linux 5.10.4:   535.01
  Linux 5.11-rc1: 536.84

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, more is better)
  Default Kernel: 7835.5 (SE +/- 5.10, N = 3; runs min/avg/max: 7825.7 / 7835.53 / 7842.8)
  Linux 5.10.4:   7899.3 (SE +/- 14.26, N = 3; runs min/avg/max: 7874.7 / 7899.33 / 7924.1)
  Linux 5.11-rc1: 7906.8 (SE +/- 16.93, N = 3; runs min/avg/max: 7883.5 / 7906.77 / 7939.7)
  (CC) gcc options: -O3

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better)
  Default Kernel: 7.2063 (SE +/- 0.0139, N = 3; runs min/avg/max: 7.18 / 7.21 / 7.23; MIN: 7.15 / MAX: 7.3)
  Linux 5.10.4:   7.2702 (SE +/- 0.0051, N = 3; runs min/avg/max: 7.26 / 7.27 / 7.28; MIN: 7.24 / MAX: 7.35)
  Linux 5.11-rc1: 7.2554 (SE +/- 0.0109, N = 3; runs min/avg/max: 7.23 / 7.26 / 7.27; MIN: 7.21 / MAX: 7.32)

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like capabilities. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, fewer is better)
  Default Kernel: 232.99 (SE +/- 1.02, N = 3; runs min/avg/max: 231.7 / 232.99 / 235.01)
  Linux 5.10.4:   231.01 (SE +/- 1.49, N = 3; runs min/avg/max: 229.29 / 231.01 / 233.97)
  Linux 5.11-rc1: 231.54 (SE +/- 0.84, N = 3; runs min/avg/max: 230.24 / 231.53 / 233.1)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 6 (Frames Per Second, more is better)
  Default Kernel: 1.022 (SE +/- 0.001, N = 3; runs min/avg/max: 1.02 / 1.02 / 1.02)
  Linux 5.10.4:   1.020 (SE +/- 0.004, N = 3; runs min/avg/max: 1.02 / 1.02 / 1.03)
  Linux 5.11-rc1: 1.028 (SE +/- 0.003, N = 3; runs min/avg/max: 1.02 / 1.03 / 1.03)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: capacita (Seconds, fewer is better)
  Default Kernel: 16.18
  Linux 5.10.4:   16.24
  Linux 5.11-rc1: 16.20

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, more is better)
  Default Kernel: 0.46 (SE +/- 0.00, N = 3; runs min/avg/max: 0.46 / 0.46 / 0.46)
  Linux 5.10.4:   0.46 (SE +/- 0.00, N = 3; runs min/avg/max: 0.46 / 0.46 / 0.46)
  Linux 5.11-rc1: 0.46 (SE +/- 0.00, N = 3; runs min/avg/max: 0.46 / 0.46 / 0.46)
  (CXX) g++ options: -O3 -pthread

simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, more is better)
  Default Kernel: 0.45 (SE +/- 0.00, N = 3; runs min/avg/max: 0.45 / 0.45 / 0.45)
  Linux 5.10.4:   0.45 (SE +/- 0.00, N = 3; runs min/avg/max: 0.45 / 0.45 / 0.45)
  Linux 5.11-rc1: 0.45 (SE +/- 0.00, N = 3; runs min/avg/max: 0.45 / 0.45 / 0.46)
  (CXX) g++ options: -O3 -pthread
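The GB/s figures reported here are simply bytes parsed divided by wall time. A minimal illustrative sketch using Python's stdlib json on a synthetic document (not simdjson, and not the benchmark's actual input files, so the absolute numbers are not comparable):

```python
import json
import time

# Synthetic JSON payload; a hypothetical stand-in for the benchmark's input
payload = json.dumps([{"id": i, "active": i % 2 == 0} for i in range(50_000)]).encode()

t0 = time.perf_counter()
json.loads(payload)          # parse once; simdjson would do this far faster
elapsed = time.perf_counter() - t0

gb_per_s = len(payload) / elapsed / 1e9   # throughput in GB/s, as reported above
print(f"{gb_per_s:.3f} GB/s")
```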

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better)
  Default Kernel: 50.06 (SE +/- 0.25, N = 3; runs min/avg/max: 49.75 / 50.06 / 50.56; MIN: 40.36 / MAX: 77.16)
  Linux 5.10.4:   42.46 (SE +/- 0.51, N = 3; runs min/avg/max: 41.49 / 42.46 / 43.22; MIN: 36.31 / MAX: 45.38)
  Linux 5.11-rc1: 41.05 (SE +/- 1.73, N = 3; runs min/avg/max: 37.59 / 41.05 / 42.8; MIN: 36.21 / MAX: 43.53)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
  Default Kernel: 26.05 (SE +/- 0.42, N = 3; runs min/avg/max: 25.3 / 26.05 / 26.75; MIN: 20.5 / MAX: 71.2)
  Linux 5.10.4:   20.92 (SE +/- 0.83, N = 3; runs min/avg/max: 19.26 / 20.92 / 21.86; MIN: 19.02 / MAX: 53.8)
  Linux 5.11-rc1: 19.90 (SE +/- 0.86, N = 3; runs min/avg/max: 19.04 / 19.9 / 21.63; MIN: 18.96 / MAX: 22.28)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: blazeface (ms, fewer is better)
  Default Kernel: 3.61 (SE +/- 0.14, N = 3; runs min/avg/max: 3.41 / 3.61 / 3.87; MIN: 3.17 / MAX: 16.8)
  Linux 5.10.4:   3.38 (SE +/- 0.03, N = 15; runs min/avg/max: 3.19 / 3.38 / 3.65; MIN: 3.16 / MAX: 4.51)
  Linux 5.11-rc1: 3.47 (SE +/- 0.04, N = 3; runs min/avg/max: 3.4 / 3.47 / 3.55; MIN: 3.37 / MAX: 4.01)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Default Kernel: 14.78 (SE +/- 0.41, N = 15; runs min/avg/max: 13.46 / 14.78 / 17.08; MIN: 12.9)
  Linux 5.10.4:   14.31 (SE +/- 0.30, N = 15; runs min/avg/max: 13.24 / 14.31 / 16.1; MIN: 13)
  Linux 5.11-rc1: 14.45 (SE +/- 0.33, N = 15; runs min/avg/max: 13.16 / 14.45 / 16.42; MIN: 12.95)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code; its name is an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, more is better):
  Default Kernel: 4.311 (SE +/- 0.098, N = 15; Min: 3.33 / Max: 4.65)
  Linux 5.10.4:   4.720 (SE +/- 0.014, N = 3; Min: 4.69 / Max: 4.74)
  Linux 5.11-rc1: 4.762 (SE +/- 0.012, N = 3; Min: 4.74 / Max: 4.78)
1. (CXX) g++ options: -O3 -pthread -lm
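LAMMPS reports throughput as simulated nanoseconds per day. As a rough sketch of the conversion, assuming the Rhodopsin benchmark's customary 2.0 fs timestep (an assumption about the input deck, not a value taken from this result file):

```python
def ns_per_day(steps_per_second, timestep_fs=2.0):
    """Convert MD throughput in timesteps/second to simulated ns/day.

    timestep_fs=2.0 reflects the common Rhodopsin benchmark setting (assumption).
    """
    fs_per_day = steps_per_second * timestep_fs * 86400  # 86400 seconds per day
    return fs_per_day / 1e6  # 1 ns = 1e6 fs

# ~27.5 timesteps/s corresponds to roughly the ns/day figures above
print(round(ns_per_day(27.5), 3))  # -> 4.752
```

At this rate the roughly 0.45 ns/day gap between the Default Kernel and Linux 5.11-rc1 corresponds to a few extra timesteps per second.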

210 Results Shown

Redis
PyPerformance
oneDNN:
  IP Shapes 3D - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
NCNN:
  CPU - resnet50
  Vulkan GPU - resnet50
  Vulkan GPU - googlenet
  CPU - googlenet
  CPU - resnet18
Timed HMMer Search
NCNN:
  CPU - mobilenet
  CPU - yolov4-tiny
  Vulkan GPU - mobilenet
CLOMP
NCNN:
  CPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v2-v2 - mobilenet-v2
  CPU-v2-v2 - mobilenet-v2
  Vulkan GPU - squeezenet_ssd
  CPU - shufflenet-v2
  CPU - mnasnet
  CPU - efficientnet-b0
  CPU - squeezenet_ssd
DeepSpeech
NCNN:
  Vulkan GPU - mnasnet
  Vulkan GPU - blazeface
Polyhedron Fortran Benchmarks
NCNN:
  Vulkan GPU - efficientnet-b0
  CPU - vgg16
Redis
NCNN
TensorFlow Lite
oneDNN:
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
NCNN:
  CPU - regnety_400m
  Vulkan GPU-v3-v3 - mobilenet-v3
WebP Image Encode
NCNN
TensorFlow Lite:
  Inception ResNet V2
  Inception V4
NCNN
Cryptsetup:
  Twofish-XTS 256b Decryption
  Serpent-XTS 256b Decryption
TensorFlow Lite
Cryptsetup
oneDNN
Cryptsetup
TensorFlow Lite
Cryptsetup
WebP Image Encode
Mobile Neural Network
WebP Image Encode
Cryptsetup
PyPerformance
GIMP
Timed Eigen Compilation
Cryptsetup:
  Twofish-XTS 256b Encryption
  AES-XTS 512b Encryption
  PBKDF2-whirlpool
  Serpent-XTS 256b Encryption
Redis
Cryptsetup:
  AES-XTS 256b Decryption
  AES-XTS 512b Decryption
RNNoise
Mobile Neural Network
Opus Codec Encoding
PyPerformance
Cryptsetup
BYTE Unix Benchmark
LAME MP3 Encoding
LZ4 Compression
WavPack Audio Encoding
GLmark2
Polyhedron Fortran Benchmarks
TNN
Polyhedron Fortran Benchmarks:
  ac
  fatigue2
PyPerformance
PHPBench
Mobile Neural Network
Monkey Audio Encoding
Polyhedron Fortran Benchmarks
oneDNN
Polyhedron Fortran Benchmarks:
  protein
  doduc
WebP Image Encode
Polyhedron Fortran Benchmarks
PyPerformance:
  raytrace
  float
Polyhedron Fortran Benchmarks
PyPerformance:
  crypto_pyaes
  regex_compile
LZ4 Compression
Polyhedron Fortran Benchmarks
simdjson
PyPerformance
SQLite Speedtest
FLAC Audio Encoding
WebP Image Encode
oneDNN
Hierarchical INTegration
Polyhedron Fortran Benchmarks
NCNN
Cryptsetup
PyPerformance
Polyhedron Fortran Benchmarks
Redis
PyPerformance
Basis Universal
NCNN
Darktable
GLmark2
PyPerformance
Mobile Neural Network
Numpy Benchmark
FFTE
oneDNN
TensorFlow Lite
libavif avifenc
TNN
simdjson
oneDNN
Node.js V8 Web Tooling Benchmark
Basis Universal
GIMP
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
LZ4 Compression
oneDNN
Darktable:
  Masskrug - CPU-only
  Server Room - CPU-only
libavif avifenc:
  10
  0
Mobile Neural Network
LZ4 Compression
ASTC Encoder
PyPerformance
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  IP Shapes 1D - f32 - CPU
eSpeak-NG Speech Engine
Timed MAFFT Alignment
Polyhedron Fortran Benchmarks
Stockfish
VKMark
librsvg
Unpacking The Linux Kernel
yquake2
Appleseed
VKMark
AI Benchmark Alpha
XZ Compression
Darktable
VkFFT
Sunflow Rendering System
GROMACS
Unpacking Firefox
asmFish
WireGuard + Linux Networking Stack Stress Test
libavif avifenc
yquake2
oneDNN
Embree
oneDNN
AI Benchmark Alpha
GIMP
Timed FFmpeg Compilation
Polyhedron Fortran Benchmarks
Zstd Compression
Blender
Basis Universal
RawTherapee
GIMP
rav1e
Polyhedron Fortran Benchmarks
rav1e
IndigoBench
yquake2
ASTC Encoder
Embree
ASTC Encoder
Coremark
Redis
ASTC Encoder
Basis Universal
rav1e
IndigoBench
Appleseed
Blender
Timed Linux Kernel Compilation
Timed LLVM Compilation
Embree
AI Benchmark Alpha
LZ4 Compression
Zstd Compression
Embree
Basis Universal
Embree
oneDNN
Appleseed
LZ4 Compression
Embree
Build2
rav1e
Polyhedron Fortran Benchmarks
simdjson:
  DistinctUserID
  PartialTweets
NCNN:
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - resnet18
  CPU - blazeface
oneDNN
LAMMPS Molecular Dynamics Simulator