AMD FX-8370 2021

AMD FX-8370 Eight-Core testing with a MSI 970 GAMING (MS-7693) v4.0 (V22.3 BIOS) and AMD Radeon HD 5770 1GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101033-HA-AMDFX837036
This result file includes tests within the following categories:

Audio Encoding: 5 tests
AV1: 2 tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 2 tests
C++ Boost Tests: 2 tests
Chess Test Suite: 4 tests
Timed Code Compilation: 7 tests
C/C++ Compiler Tests: 19 tests
Compression Tests: 3 tests
CPU Massive: 23 tests
Creator Workloads: 29 tests
Cryptography: 2 tests
Database Test Suite: 3 tests
Encoding: 10 tests
Game Development: 3 tests
HPC - High Performance Computing: 13 tests
Imaging: 8 tests
Common Kernel Benchmarks: 2 tests
Machine Learning: 10 tests
Molecular Dynamics: 2 tests
MPI Benchmarks: 2 tests
Multi-Core: 27 tests
NVIDIA GPU Compute: 6 tests
Intel oneAPI: 3 tests
OpenCV Tests: 2 tests
OpenMPI Tests: 2 tests
Productivity: 2 tests
Programmer / Developer System Benchmarks: 12 tests
Python Tests: 6 tests
Raytracing: 2 tests
Renderers: 4 tests
Scientific Computing: 3 tests
Server: 6 tests
Server CPU Tests: 14 tests
Single-Threaded: 7 tests
Speech: 3 tests
Telephony: 3 tests
Video Encoding: 5 tests

Test Runs

Vet 1 (January 01 2021): 1 Day, 14 Hours, 13 Minutes
Vet 2 (January 02 2021): 1 Day, 12 Hours, 50 Minutes
Average run duration: 1 Day, 13 Hours, 32 Minutes


System Details

Processor: AMD FX-8370 Eight-Core @ 4.00GHz (4 Cores / 8 Threads)
Motherboard: MSI 970 GAMING (MS-7693) v4.0 (V22.3 BIOS)
Chipset: AMD RD9x0/RX980
Memory: 8GB
Disk: 120GB TOSHIBA TR150
Graphics: AMD Radeon HD 5770 1GB
Audio: Realtek ALC1150
Monitor: G237HL
Network: Qualcomm Atheros Killer E220x
OS: Ubuntu 20.10
Kernel: 5.8.0-33-generic (x86_64)
Desktop: GNOME Shell 3.38.1
Display Server: X Server 1.20.9
Display Driver: modesetting 1.20.9
OpenGL: 3.3 Mesa 20.2.1 (LLVM 11.0.0)
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- GCC configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled); CPU Microcode: 0x6000852
- GLAMOR
- Python 3.8.6
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Vet 1 vs. Vet 2 comparison chart. The largest relative differences between the two runs include Redis LPOP (80%), Stress-NG MMAP (22.7%), Unpacking Firefox firefox-84.0.source.tar.xz (15.5%), and Redis GET (10%); the remaining tests differ by roughly 7% or less.]
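The percentages in the comparison above are relative deltas between the two runs. As a minimal sketch (using the Redis LPOP values reported later in this file, and assuming the chart expresses the faster run's advantage over the slower one):

```python
# Relative delta between two runs of the same test.
# Values are the Redis LPOP results (requests per second) from this file.
vet1 = 1483179.58
vet2 = 823801.38

# Percentage advantage of the faster run over the slower one.
delta_pct = (max(vet1, vet2) / min(vet1, vet2) - 1) * 100
print(f"{delta_pct:.0f}%")  # prints 80%
```

This reproduces the 80% figure shown for LPOP in the comparison chart.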

[Condensed side-by-side results table listing every test identifier with the raw Vet 1 and Vet 2 values omitted; the individual test results are presented below.]
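The result viewer can also show an overall geometric mean across all tests; a geometric rather than arithmetic mean is used so that no single large-magnitude result dominates the composite score. A minimal sketch with made-up normalized scores (not values from this file):

```python
import statistics

# Hypothetical per-test scores, each normalized against a baseline run.
scores = [1.0, 0.8, 1.25, 1.1]

# Geometric mean: the n-th root of the product of the scores.
gmean = statistics.geometric_mean(scores)
print(round(gmean, 4))  # the 4th root of 1.1, about 1.0241
```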

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: LPOP (Requests Per Second; more is better)
Vet 1: 1483179.58 (SE +/- 9551.20, N = 3; Min/Avg/Max: 1466369.5 / 1483179.58 / 1499442.25)
Vet 2: 823801.38 (SE +/- 5581.05, N = 3; Min/Avg/Max: 813190.25 / 823801.38 / 832106.5)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
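Each result in this file reports an average with a standard error over N runs. For N = 3 the middle run can be inferred from the listed Min/Avg/Max, and the standard error re-derived; a sketch using the Vet 1 LPOP numbers above (the middle run value is an inference, not a logged figure):

```python
import math
import statistics

# Vet 1, Redis LPOP: N = 3 with the reported Min / Avg / Max.
n, lo, avg, hi = 3, 1466369.5, 1483179.58, 1499442.25

# With three runs, the middle one follows from the mean.
mid = n * avg - lo - hi
runs = [lo, mid, hi]

# Standard error of the mean: sample standard deviation / sqrt(N).
se = statistics.stdev(runs) / math.sqrt(n)
print(round(se, 2))  # close to the reported SE +/- 9551.20
```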

Redis 6.0.9, Test: GET (Requests Per Second; more is better)
Vet 1: 1420635.00 (SE +/- 7264.81, N = 3; Min/Avg/Max: 1406514.75 / 1420635 / 1430661)
Vet 2: 1291856.21 (SE +/- 16057.72, N = 3; Min/Avg/Max: 1269035.5 / 1291856.21 / 1322836)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2, Static OMP Speedup (Speedup; more is better)
Vet 1: 2.8 (SE +/- 0.03, N = 3; Min/Avg/Max: 2.8 / 2.83 / 2.9)
Vet 2: 3.0 (SE +/- 0.03, N = 12; Min/Avg/Max: 2.8 / 2.97 / 3.1)
1. (CC) gcc options: -fopenmp -O3 -lm

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3, Scene: DLSC (M samples/sec; more is better)
Vet 1: 0.49 (SE +/- 0.01, N = 15; Min/Avg/Max: 0.45 / 0.49 / 0.52; MIN: 0.44 / MAX: 0.52)
Vet 2: 0.52 (SE +/- 0.00, N = 3; Min/Avg/Max: 0.52 / 0.52 / 0.52; MIN: 0.51)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: UASTC Level 3 (Seconds; fewer is better)
Vet 1: 178.49 (SE +/- 3.41, N = 9; Min/Avg/Max: 169.45 / 178.49 / 192.29)
Vet 2: 169.52 (SE +/- 0.10, N = 3; Min/Avg/Max: 169.36 / 169.52 / 169.69)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: Context Switching (Bogo Ops/s; more is better)
Vet 1: 958188.67 (SE +/- 9262.61, N = 3; Min/Avg/Max: 947811.21 / 958188.67 / 976667.21)
Vet 2: 917908.08 (SE +/- 6076.39, N = 3; Min/Avg/Max: 907460.11 / 917908.08 / 928507.71)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1, Throughput Test: Kostya (GB/s; more is better)
Vet 1: 0.29 (SE +/- 0.00, N = 3; Min/Avg/Max: 0.29 / 0.29 / 0.3)
Vet 2: 0.30 (SE +/- 0.00, N = 3; Min/Avg/Max: 0.3 / 0.3 / 0.3)
1. (CXX) g++ options: -O3 -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms; fewer is better)
Vet 1: 48.15 (SE +/- 0.07, N = 3; Min/Avg/Max: 48.08 / 48.15 / 48.29; MIN: 46.47)
Vet 2: 49.78 (SE +/- 0.06, N = 3; Min/Avg/Max: 49.7 / 49.78 / 49.89; MIN: 47.68)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Vet 1: 5.74220 (SE +/- 0.00798, N = 3; Min/Avg/Max: 5.73 / 5.74 / 5.75; MIN: 5.23)
Vet 2: 5.92877 (SE +/- 0.01645, N = 3; Min/Avg/Max: 5.9 / 5.93 / 5.95; MIN: 5.41)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s; more is better)
Vet 1: 5.31 (SE +/- 0.02, N = 3; Min/Avg/Max: 5.28 / 5.31 / 5.34)
Vet 2: 5.15 (SE +/- 0.07, N = 3; Min/Avg/Max: 5.04 / 5.15 / 5.27)
1. Nodejs v12.18.2

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: blazeface (ms; fewer is better)
Vet 1: 4.55 (SE +/- 0.03, N = 3; Min/Avg/Max: 4.51 / 4.55 / 4.6; MIN: 3.95 / MAX: 18.03)
Vet 2: 4.43 (SE +/- 0.05, N = 3; Min/Avg/Max: 4.35 / 4.43 / 4.53; MIN: 3.96 / MAX: 22.46)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: SET (Requests Per Second; more is better)
Vet 1: 1077335.42 (SE +/- 6967.52, N = 3; Min/Avg/Max: 1063829.75 / 1077335.42 / 1087060.88)
Vet 2: 1106164.79 (SE +/- 11513.44, N = 3; Min/Avg/Max: 1091738 / 1106164.79 / 1128921)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1, Test: Boat - Acceleration: CPU-only (Seconds; fewer is better)
Vet 1: 35.37 (SE +/- 0.03, N = 3; Min/Avg/Max: 35.32 / 35.37 / 35.42)
Vet 2: 36.31 (SE +/- 0.04, N = 3; Min/Avg/Max: 36.23 / 36.31 / 36.35)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms; fewer is better)
Vet 1: 18.80 (SE +/- 0.10, N = 3; Min/Avg/Max: 18.61 / 18.8 / 18.97; MIN: 16.63 / MAX: 34.89)
Vet 2: 18.33 (SE +/- 0.15, N = 3; Min/Avg/Max: 18.18 / 18.33 / 18.64; MIN: 16.43 / MAX: 32.5)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2, Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec; more is better)
Vet 1: 594226.4 (SE +/- 4437.95, N = 3; Min/Avg/Max: 586130 / 594226.4 / 601424.6)
Vet 2: 579620.2 (SE +/- 8990.08, N = 12; Min/Avg/Max: 521129.2 / 579620.24 / 649880.6)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms; fewer is better)
Vet 1: 24.07 (SE +/- 0.13, N = 3; Min/Avg/Max: 23.8 / 24.07 / 24.24; MIN: 23.42)
Vet 2: 24.68 (SE +/- 0.08, N = 3; Min/Avg/Max: 24.59 / 24.68 / 24.83; MIN: 23.97)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1, Test: Server Rack - Acceleration: CPU-only (Seconds; fewer is better)
Vet 1: 0.491 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.49 / 0.49 / 0.49)
Vet 2: 0.503 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.5 / 0.5 / 0.51)

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12, Total Time (Nodes Per Second; more is better)
Vet 1: 6971484 (SE +/- 92057.35, N = 3; Min/Avg/Max: 6834715 / 6971484.33 / 7146613)
Vet 2: 6814594 (SE +/- 53835.86, N = 13; Min/Avg/Max: 6383562 / 6814593.92 / 7156152)
1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time - Seconds; fewer is better)
Vet 1: 30.65 (SE +/- 0.04, N = 3; Min/Avg/Max: 30.58 / 30.65 / 30.71)
Vet 2: 30.03 (SE +/- 0.18, N = 3; Min/Avg/Max: 29.66 / 30.03 / 30.25)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3, Scene: Rainbow Colors and Prism (M samples/sec; more is better)
Vet 1: 0.52 (SE +/- 0.01, N = 3; Min/Avg/Max: 0.51 / 0.52 / 0.52; MIN: 0.49 / MAX: 0.58)
Vet 2: 0.51 (SE +/- 0.01, N = 3; Min/Avg/Max: 0.5 / 0.51 / 0.52; MIN: 0.49 / MAX: 0.58)

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2, Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec; more is better)
Vet 1: 721348.2 (SE +/- 7598.50, N = 5; Min/Avg/Max: 699306.9 / 721348.22 / 740913.4)
Vet 2: 707886.5 (SE +/- 7227.41, N = 5; Min/Avg/Max: 684286.5 / 707886.52 / 726298.8)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Vet 1: 38.91 (SE +/- 0.11, N = 3; Min/Avg/Max: 38.69 / 38.91 / 39.04; MIN: 36.73)
Vet 2: 38.23 (SE +/- 0.14, N = 3; Min/Avg/Max: 37.94 / 38.23 / 38.37; MIN: 35.84)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: inception-v3 (ms; fewer is better)
Vet 1: 1344.89 (SE +/- 16.11, N = 3; Min/Avg/Max: 1328.64 / 1344.89 / 1377.11; MIN: 1285.6 / MAX: 1579.75)
Vet 2: 1325.20 (SE +/- 2.29, N = 3; Min/Avg/Max: 1321.68 / 1325.2 / 1329.5; MIN: 1283.55 / MAX: 1801.49)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: MEMFD (Bogo Ops/s; more is better)
Vet 1: 127.99 (SE +/- 0.95, N = 3; Min/Avg/Max: 126.14 / 127.99 / 129.27)
Vet 2: 126.33 (SE +/- 0.92, N = 3; Min/Avg/Max: 124.73 / 126.33 / 127.92)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: LPUSH (Requests Per Second; more is better)
Vet 1: 806438.96 (SE +/- 8135.45, N = 3; Min/Avg/Max: 794687.88 / 806438.96 / 822060.81)
Vet 2: 796182.64 (SE +/- 9548.09, N = 4; Min/Avg/Max: 772522.06 / 796182.64 / 819210.5)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: BLAS (Nodes Per Second; more is better)
Vet 1: 160 (SE +/- 0.58, N = 3; Min/Avg/Max: 159 / 160 / 161)
Vet 2: 158 (SE +/- 0.67, N = 3; Min/Avg/Max: 157 / 157.67 / 159)
1. (CXX) g++ options: -flto -pthread

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta, Resolution: 1920 x 1080 (Frames Per Second; more is better)
Vet 1: 143.7 (SE +/- 1.92, N = 3; Min/Avg/Max: 139.9 / 143.73 / 145.8)
Vet 2: 145.5 (SE +/- 0.23, N = 3; Min/Avg/Max: 145 / 145.47 / 145.7)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: SqueezeNetV1.0 (ms; fewer is better)
Vet 1: 159.81 (SE +/- 2.02, N = 3; Min/Avg/Max: 156.09 / 159.81 / 163.01; MIN: 150.79 / MAX: 263.93)
Vet 2: 157.85 (SE +/- 0.77, N = 3; Min/Avg/Max: 156.48 / 157.85 / 159.14; MIN: 151.35 / MAX: 262.21)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmark (Items / Sec; more is better)
Vet 1: 32.27 (SE +/- 0.10, N = 3; Min: 32.09 / Avg: 32.27 / Max: 32.45; observed MIN: 1 / MAX: 76)
Vet 2: 32.67 (SE +/- 0.08, N = 3; Min: 32.55 / Avg: 32.67 / Max: 32.82; observed MIN: 1 / MAX: 77)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 2 (Seconds; fewer is better)
Vet 1: 168.71 (SE +/- 1.23, N = 3; Min: 166.33 / Avg: 168.71 / Max: 170.45)
Vet 2: 166.70 (SE +/- 1.16, N = 3; Min: 165.05 / Avg: 166.7 / Max: 168.93)
(CXX) g++ options: -O3 -fPIC

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds; fewer is better)
Vet 1: 134.66 (SE +/- 0.17, N = 3; Min: 134.38 / Avg: 134.65 / Max: 134.96)
Vet 2: 136.25 (SE +/- 1.80, N = 3; Min: 134.42 / Avg: 136.25 / Max: 139.86)

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score; more is better)
Vet 1: 415846 (SE +/- 4041.48, N = 12; Min: 372118 / Avg: 415846.08 / Max: 425210)
Vet 2: 420714 (SE +/- 1456.68, N = 3; Min: 418182 / Avg: 420714.33 / Max: 423228)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
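The figures in these sections come from cryptsetup's built-in benchmark mode (`cryptsetup benchmark`), which prints per-cipher encryption and decryption throughput in MiB/s. As a rough illustration (not the Phoronix Test Suite's actual parser), here is a sketch of extracting those numbers from one row of that output; the sample line and exact column layout are assumptions for demonstration, not captured from this run:

```python
# Hypothetical sketch: parse one data row of `cryptsetup benchmark` output.
# The sample line below is illustrative only; column layout on real
# cryptsetup versions may differ slightly.
import re

def parse_benchmark_line(line):
    """Return (algorithm, key_bits, enc_mib_s, dec_mib_s) from a benchmark row."""
    m = re.match(
        r"\s*(\S+)\s+(\d+)b\s+([\d.]+)\s*MiB/s\s+([\d.]+)\s*MiB/s", line
    )
    if m is None:
        raise ValueError(f"unrecognized line: {line!r}")
    algo, key, enc, dec = m.groups()
    return algo, int(key), float(enc), float(dec)

sample = "    aes-xts        256b      1081.7 MiB/s      1093.6 MiB/s"
print(parse_benchmark_line(sample))  # -> ('aes-xts', 256, 1081.7, 1093.6)
```

The regex deliberately tolerates variable whitespace, since the tool aligns its columns with padding.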

Cryptsetup - Twofish-XTS 256b Decryption (MiB/s; more is better)
Vet 1: 242.7 (SE +/- 4.37, N = 3; Min: 234 / Avg: 242.73 / Max: 247.5)
Vet 2: 245.5 (SE +/- 1.82, N = 3; Min: 242.5 / Avg: 245.5 / Max: 248.8)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
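The compression results that follow report throughput as bytes processed per unit time, averaged over repeated runs. A minimal sketch of that methodology, using stdlib zlib as a stand-in since Python ships no LZ4 binding (the real test drives liblz4 at the listed compression levels against an Ubuntu ISO):

```python
# Throughput-measurement sketch using zlib as a stand-in for LZ4
# (Python's standard library has no LZ4 module; the methodology is the same).
import time
import zlib

def measure_mb_per_s(data, level=1):
    """Compress then decompress once, returning (comp_MB/s, decomp_MB/s)."""
    t0 = time.perf_counter()
    compressed = zlib.compress(data, level)
    t1 = time.perf_counter()
    restored = zlib.decompress(compressed)
    t2 = time.perf_counter()
    assert restored == data  # round-trip correctness check
    mb = len(data) / 1e6
    return mb / (t1 - t0), mb / (t2 - t1)

payload = b"phoronix " * 1_000_000  # stand-in for the sample ISO
comp_speed, decomp_speed = measure_mb_per_s(payload, level=1)
print(f"compress: {comp_speed:.1f} MB/s, decompress: {decomp_speed:.1f} MB/s")
```

A real benchmark would repeat the measurement (here N = 3) and report the standard error alongside the mean, as the result lines below do.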

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s; more is better)
Vet 1: 3697.1 (SE +/- 4.86, N = 3; Min: 3689 / Avg: 3697.1 / Max: 3705.8)
Vet 2: 3657.2 (SE +/- 9.10, N = 3; Min: 3641.8 / Avg: 3657.23 / Max: 3673.3)
(CC) gcc options: -O3

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second; more is better)
Vet 1: 19.52 (SE +/- 0.04, N = 3; Min: 19.44 / Avg: 19.52 / Max: 19.57)
Vet 2: 19.32 (SE +/- 0.08, N = 3; Min: 19.17 / Avg: 19.32 / Max: 19.44)
(CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s; more is better)
Vet 1: 0.490 (SE +/- 0.002, N = 3; Min: 0.49 / Avg: 0.49 / Max: 0.49)
Vet 2: 0.485 (SE +/- 0.003, N = 3; Min: 0.48 / Avg: 0.49 / Max: 0.49)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second; more is better)
Vet 1: 2.95 (SE +/- 0.04, N = 4; Min: 2.84 / Avg: 2.95 / Max: 2.99)
Vet 2: 2.98 (SE +/- 0.00, N = 3; Min: 2.98 / Avg: 2.98 / Max: 2.99)
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better)
Vet 1: 25.05 (SE +/- 0.09, N = 3; Min: 24.86 / Avg: 25.05 / Max: 25.17; observed MIN: 22.78 / MAX: 36.55)
Vet 2: 24.80 (SE +/- 0.09, N = 3; Min: 24.63 / Avg: 24.8 / Max: 24.91; observed MIN: 23.05 / MAX: 35.19)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms; fewer is better)
Vet 1: 32.62 (SE +/- 0.06, N = 3; Min: 32.5 / Avg: 32.62 / Max: 32.69; observed MIN: 31.6)
Vet 2: 32.93 (SE +/- 0.08, N = 3; Min: 32.77 / Avg: 32.93 / Max: 33.04; observed MIN: 31.78)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
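Stress-NG reports "bogo ops/s": how many iterations of a synthetic workload complete per second of wall-clock time. A toy re-creation of the idea for a malloc-style stressor (this assumes nothing about Stress-NG's internals; it only illustrates the counting scheme):

```python
# Toy bogo-ops counter in the spirit of a malloc stressor: repeatedly
# allocate and drop buffers for a fixed wall-clock budget, then report
# the iteration rate.
import time

def bogo_ops_per_sec(duration=0.2, size=4096):
    ops = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        buf = bytearray(size)  # allocate ...
        del buf                # ... and release
        ops += 1
    return ops / duration

rate = bogo_ops_per_sec()
print(f"~{rate:,.0f} bogo ops/s")
```

The "bogo" prefix is a reminder that the number is only meaningful when comparing runs of the same stressor on different systems, as this result file does.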

Stress-NG 0.11.07 - Test: Malloc (Bogo Ops/s; more is better)
Vet 1: 22154552.21 (SE +/- 161586.44, N = 3; Min: 21880964.91 / Avg: 22154552.21 / Max: 22440319.1)
Vet 2: 21949359.53 (SE +/- 274937.61, N = 3; Min: 21489115.28 / Avg: 21949359.53 / Max: 22440064.63)
(CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds; fewer is better)
Vet 1: 156.34 (SE +/- 0.84, N = 3; Min: 154.74 / Avg: 156.34 / Max: 157.6)
Vet 2: 157.77 (SE +/- 2.09, N = 12; Min: 153.95 / Avg: 157.77 / Max: 180.56)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Encryption (MiB/s; more is better)
Vet 1: 1081.7 (SE +/- 10.94, N = 3; Min: 1061 / Avg: 1081.7 / Max: 1098.2)
Vet 2: 1091.1 (SE +/- 3.78, N = 3; Min: 1083.9 / Avg: 1091.1 / Max: 1096.7)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s; more is better)
Vet 1: 3653.9 (SE +/- 1.08, N = 3; Min: 3651.9 / Avg: 3653.93 / Max: 3655.6)
Vet 2: 3622.7 (SE +/- 1.53, N = 3; Min: 3620.6 / Avg: 3622.73 / Max: 3625.7)
(CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s; more is better)
Vet 1: 3656.0 (SE +/- 5.46, N = 3; Min: 3646.6 / Avg: 3656 / Max: 3665.5)
Vet 2: 3625.3 (SE +/- 6.33, N = 3; Min: 3615.1 / Avg: 3625.33 / Max: 3636.9)
(CC) gcc options: -O3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Glibc Qsort Data Sorting (Bogo Ops/s; more is better)
Vet 1: 55.09 (SE +/- 0.40, N = 3; Min: 54.33 / Avg: 55.09 / Max: 55.67)
Vet 2: 55.53 (SE +/- 0.11, N = 3; Min: 55.37 / Avg: 55.53 / Max: 55.73)
(CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Encryption (MiB/s; more is better)
Vet 1: 247.2 (SE +/- 0.96, N = 3; Min: 245.3 / Avg: 247.17 / Max: 248.5)
Vet 2: 245.3 (SE +/- 1.24, N = 3; Min: 242.9 / Avg: 245.27 / Max: 247.1)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s; more is better)
Vet 1: 3387.29 (SE +/- 4.52, N = 3; Min: 3381.87 / Avg: 3387.29 / Max: 3396.27)
Vet 2: 3362.32 (SE +/- 5.68, N = 3; Min: 3351.85 / Avg: 3362.32 / Max: 3371.35)
(CC) gcc options: -O3

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms; fewer is better)
Vet 1: 84.92 (SE +/- 0.30, N = 3; Min: 84.33 / Avg: 84.92 / Max: 85.32; observed MIN: 81.76 / MAX: 138.82)
Vet 2: 85.55 (SE +/- 0.84, N = 3; Min: 84.52 / Avg: 85.55 / Max: 87.22; observed MIN: 81.09 / MAX: 142.92)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: unsharp-mask (Seconds; fewer is better)
Vet 1: 28.99 (SE +/- 0.05, N = 3; Min: 28.92 / Avg: 28.99 / Max: 29.08)
Vet 2: 29.17 (SE +/- 0.24, N = 3; Min: 28.89 / Avg: 29.17 / Max: 29.65)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms; fewer is better)
Vet 1: 40.24 (SE +/- 0.31, N = 3; Min: 39.68 / Avg: 40.24 / Max: 40.75; observed MIN: 36.63 / MAX: 98.56)
Vet 2: 39.99 (SE +/- 0.12, N = 3; Min: 39.77 / Avg: 39.99 / Max: 40.19; observed MIN: 36.36 / MAX: 56.98)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Decryption (MiB/s; more is better)
Vet 1: 247.9 (SE +/- 1.33, N = 3; Min: 245.2 / Avg: 247.87 / Max: 249.2)
Vet 2: 246.4 (SE +/- 1.39, N = 3; Min: 244.1 / Avg: 246.43 / Max: 248.9)

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second; more is better)
Vet 1: 10590695 (SE +/- 22277.77, N = 3; Min: 10546170 / Avg: 10590695 / Max: 10614386)
Vet 2: 10527008 (SE +/- 136672.28, N = 3; Min: 10282427 / Avg: 10527008.33 / Max: 10755001)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s; more is better)
Vet 1: 16.9 (SE +/- 0.16, N = 12; Min: 16.1 / Avg: 16.94 / Max: 17.8)
Vet 2: 16.8 (SE +/- 0.21, N = 3; Min: 16.4 / Avg: 16.8 / Max: 17.1)
(CC) gcc options: -O3 -pthread -lz -llzma

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Decryption (MiB/s; more is better)
Vet 1: 944.8 (SE +/- 5.40, N = 3; Min: 935.6 / Avg: 944.8 / Max: 954.3)
Vet 2: 950.1 (SE +/- 0.07, N = 3; Min: 950 / Avg: 950.07 / Max: 950.2)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: alexnet (ms; fewer is better)
Vet 1: 49.72 (SE +/- 0.08, N = 3; Min: 49.62 / Avg: 49.72 / Max: 49.89; observed MIN: 47.15 / MAX: 75.12)
Vet 2: 49.99 (SE +/- 0.09, N = 3; Min: 49.81 / Avg: 49.99 / Max: 50.12; observed MIN: 47.21 / MAX: 77.98)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
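The cwebp encode settings under test map onto command-line flags. The sketch below assembles plausible invocations for the three settings benchmarked here; `-q`, `-lossless`, and `-m` are standard cwebp options, but the exact flag combinations used by the test profile are my assumption, not copied from it:

```python
# Hypothetical cwebp invocations for the encode settings benchmarked here.
# -q (quality), -lossless, and -m (compression method/effort, 0-6) are
# documented cwebp flags; the precise flags the test profile passes are
# assumed for illustration.
def cwebp_cmd(src, dst, quality=None, lossless=False, method=None):
    cmd = ["cwebp"]
    if quality is not None:
        cmd += ["-q", str(quality)]
    if lossless:
        cmd += ["-lossless"]
    if method is not None:
        cmd += ["-m", str(method)]
    cmd += [src, "-o", dst]
    return cmd

print(cwebp_cmd("sample.jpg", "out.webp"))               # Default
print(cwebp_cmd("sample.jpg", "out.webp", quality=100))  # Quality 100
print(cwebp_cmd("sample.jpg", "out.webp", quality=100,
                lossless=True, method=6))                # Quality 100, Lossless, Highest Compression
```

Building the argument vector as a list (rather than a shell string) is the safe way to hand it to `subprocess.run` without quoting issues.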

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds; fewer is better)
Vet 1: 3.573 (SE +/- 0.010, N = 3; Min: 3.56 / Avg: 3.57 / Max: 3.59)
Vet 2: 3.554 (SE +/- 0.008, N = 3; Min: 3.54 / Avg: 3.55 / Max: 3.57)
(CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
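The SADD figure below comes from driving the server with Redis's bundled redis-benchmark utility. A sketch of assembling a comparable run follows; `-t`, `-n`, and `-q` are documented redis-benchmark flags, but the exact parameters this test profile passes are an assumption:

```python
# Hypothetical redis-benchmark invocation for a SADD-style run.
# -t selects the test, -n the request count, -q prints only the
# requests/sec summary; PTS's actual parameters are assumed here.
def redis_benchmark_cmd(test="sadd", requests=1_000_000, quiet=True):
    cmd = ["redis-benchmark", "-t", test, "-n", str(requests)]
    if quiet:
        cmd.append("-q")
    return cmd

print(" ".join(redis_benchmark_cmd()))
```

Running it of course requires a redis-server instance listening on the default port.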

Redis 6.0.9 - Test: SADD (Requests Per Second; more is better)
Vet 1: 1212043.04 (SE +/- 10820.63, N = 3; Min: 1197988 / Avg: 1212043.04 / Max: 1233321.88)
Vet 2: 1205601.54 (SE +/- 10488.62, N = 3; Min: 1187838.5 / Avg: 1205601.54 / Max: 1224146.88)
(CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mobilenet (ms; fewer is better)
Vet 1: 98.29 (SE +/- 0.24, N = 3; Min: 97.96 / Avg: 98.29 / Max: 98.76; observed MIN: 93.92 / MAX: 120.36)
Vet 2: 97.78 (SE +/- 0.21, N = 3; Min: 97.38 / Avg: 97.78 / Max: 98.08; observed MIN: 94.04 / MAX: 112.57)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: auto-levels (Seconds; fewer is better)
Vet 1: 25.47 (SE +/- 0.05, N = 3; Min: 25.4 / Avg: 25.47 / Max: 25.56)
Vet 2: 25.34 (SE +/- 0.02, N = 3; Min: 25.31 / Avg: 25.34 / Max: 25.39)

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds; fewer is better)
Vet 1: 47.77 (SE +/- 0.65, N = 3; Min: 46.83 / Avg: 47.77 / Max: 49.02)
Vet 2: 47.54 (SE +/- 0.62, N = 3; Min: 46.62 / Avg: 47.54 / Max: 48.71)
(CC) gcc options: -O2 -pedantic -fvisibility=hidden

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds; fewer is better)
Vet 1: 2.444 (SE +/- 0.005, N = 3; Min: 2.44 / Avg: 2.44 / Max: 2.45)
Vet 2: 2.433 (SE +/- 0.006, N = 3; Min: 2.42 / Avg: 2.43 / Max: 2.44)
(CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: CPU Stress (Bogo Ops/s; more is better)
Vet 1: 1823.75 (SE +/- 6.19, N = 3; Min: 1811.39 / Avg: 1823.75 / Max: 1830.43)
Vet 2: 1831.98 (SE +/- 16.57, N = 3; Min: 1805.04 / Avg: 1831.98 / Max: 1862.17)
(CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Forking (Bogo Ops/s; more is better)
Vet 1: 18452.92 (SE +/- 48.99, N = 3; Min: 18360.08 / Avg: 18452.92 / Max: 18526.45)
Vet 2: 18535.23 (SE +/- 143.47, N = 3; Min: 18353.9 / Avg: 18535.23 / Max: 18818.48)
(CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better)
Vet 1: 23.00 (SE +/- 0.06, N = 3; Min: 22.92 / Avg: 23 / Max: 23.12; observed MIN: 21.42 / MAX: 41.88)
Vet 2: 23.10 (SE +/- 0.04, N = 3; Min: 23.04 / Avg: 23.1 / Max: 23.19; observed MIN: 21.25 / MAX: 39.1)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
Vet 1: 21.63 (SE +/- 0.07, N = 3; Min: 21.51 / Avg: 21.63 / Max: 21.73; observed MIN: 19.96)
Vet 2: 21.53 (SE +/- 0.03, N = 3; Min: 21.5 / Avg: 21.53 / Max: 21.6; observed MIN: 20.01)
(CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Crypto (Bogo Ops/s; more is better)
Vet 1: 872.89 (SE +/- 1.19, N = 3; Min: 870.52 / Avg: 872.89 / Max: 874.19)
Vet 2: 869.13 (SE +/- 0.06, N = 3; Min: 869.02 / Avg: 869.13 / Max: 869.21)
(CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: rotate (Seconds; fewer is better)
Vet 1: 24.24 (SE +/- 0.05, N = 3; Min: 24.16 / Avg: 24.24 / Max: 24.33)
Vet 2: 24.14 (SE +/- 0.05, N = 3; Min: 24.04 / Avg: 24.14 / Max: 24.2)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Socket Activity (Bogo Ops/s; more is better)
Vet 1: 1776.16 (SE +/- 7.16, N = 3; Min: 1762.4 / Avg: 1776.16 / Max: 1786.48)
Vet 2: 1783.24 (SE +/- 6.82, N = 3; Min: 1770.07 / Avg: 1783.24 / Max: 1792.91)
(CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Darktable

Darktable is an open-source photography / workflow application. This test uses any system-installed Darktable program, or on Windows automatically downloads the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Masskrug - Acceleration: CPU-only (Seconds; fewer is better)
Vet 1: 28.64 (SE +/- 0.01, N = 3; Min: 28.62 / Avg: 28.64 / Max: 28.66)
Vet 2: 28.75 (SE +/- 0.01, N = 3; Min: 28.74 / Avg: 28.75 / Max: 28.77)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 256b Decryption (MiB/s; more is better)
Vet 1: 320.2 (SE +/- 0.50, N = 3; Min: 319.2 / Avg: 320.2 / Max: 320.8)
Vet 2: 319.0 (SE +/- 2.43, N = 3; Min: 315.2 / Avg: 318.97 / Max: 323.5)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 8 (Seconds; fewer is better)
Vet 1: 12.86 (SE +/- 0.11, N = 3; Min: 12.69 / Avg: 12.86 / Max: 13.07)
Vet 2: 12.81 (SE +/- 0.10, N = 3; Min: 12.69 / Avg: 12.81 / Max: 13.02)
(CXX) g++ options: -O3 -fPIC

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Decryption (MiB/s; more is better)
Vet 1: 1093.6 (SE +/- 5.61, N = 3; Min: 1082.6 / Avg: 1093.63 / Max: 1100.9)
Vet 2: 1089.7 (SE +/- 3.99, N = 3; Min: 1082.3 / Avg: 1089.67 / Max: 1096)

Cryptsetup - Serpent-XTS 256b Encryption (MiB/s; more is better)
Vet 1: 312.0 (SE +/- 1.25, N = 3; Min: 309.5 / Avg: 311.97 / Max: 313.5)
Vet 2: 310.9 (SE +/- 3.18, N = 3; Min: 307.3 / Avg: 310.87 / Max: 317.2)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Atomic (Bogo Ops/s; more is better)
Vet 1: 54519.87 (SE +/- 32.93, N = 3; Min: 54481.15 / Avg: 54519.87 / Max: 54585.36)
Vet 2: 54711.61 (SE +/- 114.37, N = 3; Min: 54501.82 / Avg: 54711.61 / Max: 54895.46)
(CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

YafaRay

YafaRay is an open-source physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1 - Total Time For Sample Scene (Seconds; fewer is better)
Vet 1: 439.47 (SE +/- 0.95, N = 3; Min: 437.61 / Avg: 439.47 / Max: 440.71)
Vet 2: 437.99 (SE +/- 1.55, N = 3; Min: 435.82 / Avg: 437.99 / Max: 440.98)
(CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds; fewer is better)
Vet 1: 66.56 (SE +/- 0.20, N = 3; Min: 66.17 / Avg: 66.56 / Max: 66.81)
Vet 2: 66.78 (SE +/- 0.06, N = 3; Min: 66.66 / Avg: 66.78 / Max: 66.85)
(CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 10 (Seconds; fewer is better)
Vet 1: 11.90 (SE +/- 0.07, N = 3; Min: 11.77 / Avg: 11.9 / Max: 11.97)
Vet 2: 11.86 (SE +/- 0.04, N = 3; Min: 11.79 / Avg: 11.86 / Max: 11.93)
(CXX) g++ options: -O3 -fPIC

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 5 (Frames Per Second; more is better)
Vet 1: 0.322 (SE +/- 0.000, N = 3; Min: 0.32 / Avg: 0.32 / Max: 0.32)
Vet 2: 0.321 (SE +/- 0.001, N = 3; Min: 0.32 / Avg: 0.32 / Max: 0.32)

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogLeNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds; fewer is better)
Vet 1: 71021 (SE +/- 204.80, N = 3; Min: 70725 / Avg: 71020.67 / Max: 71414)
Vet 2: 70801 (SE +/- 73.54, N = 3; Min: 70704 / Avg: 70800.67 / Max: 70945)
(CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, Fewer Is Better)
Vet 1: 287.01 (SE +/- 1.20, N = 3; Min: 285.46 / Avg: 287.01 / Max: 289.38)
Vet 2: 286.13 (SE +/- 1.12, N = 3; Min: 284.9 / Avg: 286.13 / Max: 288.37)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, Fewer Is Better)
Vet 1: 1131.68 (SE +/- 2.09, N = 3; Min: 1127.56 / Avg: 1131.68 / Max: 1134.37; MIN: 1108.54 / MAX: 1212.22)
Vet 2: 1128.31 (SE +/- 2.75, N = 3; Min: 1122.81 / Avg: 1128.31 / Max: 1131.29; MIN: 1107.01 / MAX: 1197.55)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: System V Message Passing (Bogo Ops/s, More Is Better)
Vet 1: 2212153.79 (SE +/- 5483.77, N = 3; Min: 2202423.38 / Avg: 2212153.79 / Max: 2221401.3)
Vet 2: 2206047.98 (SE +/- 6160.98, N = 3; Min: 2197517.28 / Avg: 2206047.98 / Max: 2218013.55)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc
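Stress-NG's "Bogo Ops/s" metric is simply bogus-operations completed per second of wall time. Python's standard library does not expose System V message queues, so the sketch below uses a thread-to-thread queue purely as an illustrative stand-in for the same style of measurement: count message hand-offs, divide by elapsed time.

```python
import queue
import threading
import time

def sender(q, n):
    # Push n small messages, then a sentinel marking the end of the run
    for i in range(n):
        q.put(i)
    q.put(None)

q = queue.Queue()
n = 50000
start = time.perf_counter()
threading.Thread(target=sender, args=(q, n)).start()

received = 0
while q.get() is not None:  # drain until the sentinel arrives
    received += 1

elapsed = time.perf_counter() - start
print(f"{received} messages in {elapsed:.3f}s -> {received / elapsed:,.0f} ops/s")
```

The real stressor exercises kernel msgsnd/msgrcv paths, so absolute numbers are not comparable; only the bookkeeping pattern is.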

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Vet 1: 113.86 (SE +/- 0.07, N = 3; Min: 113.72 / Avg: 113.86 / Max: 113.94; MIN: 111.71)
Vet 2: 114.16 (SE +/- 0.24, N = 3; Min: 113.7 / Avg: 114.16 / Max: 114.51; MIN: 111.81)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, More Is Better)
Vet 1: 1.514 (SE +/- 0.015, N = 15; Min: 1.37 / Avg: 1.51 / Max: 1.57)
Vet 2: 1.510 (SE +/- 0.007, N = 3; Min: 1.5 / Avg: 1.51 / Max: 1.52)
1. (CXX) g++ options: -O3 -pthread -lm

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, More Is Better)
Vet 1: 30.29 (SE +/- 0.22, N = 11; Min: 28.67 / Avg: 30.29 / Max: 30.7)
Vet 2: 30.21 (SE +/- 0.27, N = 7; Min: 28.67 / Avg: 30.21 / Max: 30.73)
1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Vet 1: 10.22 (SE +/- 0.00, N = 3; Min: 10.22 / Avg: 10.22 / Max: 10.23; MIN: 9.53)
Vet 2: 10.20 (SE +/- 0.01, N = 3; Min: 10.19 / Avg: 10.2 / Max: 10.21; MIN: 9.52)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
Vet 1: 201.70 (SE +/- 0.34, N = 3; Min: 201.09 / Avg: 201.7 / Max: 202.28; MIN: 197.09 / MAX: 225.22)
Vet 2: 202.21 (SE +/- 0.27, N = 3; Min: 201.74 / Avg: 202.21 / Max: 202.67; MIN: 197.31 / MAX: 217.34)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 512b Decryption (MiB/s, More Is Better)
Vet 1: 316.4 (SE +/- 6.70, N = 2; Min: 309.7 / Avg: 316.4 / Max: 323.1)
Vet 2: 317.2 (SE +/- 2.67, N = 3; Min: 313.3 / Avg: 317.2 / Max: 322.3)

Cryptsetup - PBKDF2-whirlpool (Iterations Per Second, More Is Better)
Vet 1: 395398 (SE +/- 1238.75, N = 3; Min: 393019 / Avg: 395397.67 / Max: 397187)
Vet 2: 394414 (SE +/- 1763.06, N = 3; Min: 391844 / Avg: 394414.33 / Max: 397790)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better)
Vet 1: 1.210 (SE +/- 0.003, N = 3; Min: 1.2 / Avg: 1.21 / Max: 1.21)
Vet 2: 1.207 (SE +/- 0.007, N = 3; Min: 1.19 / Avg: 1.21 / Max: 1.22)

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
Vet 1: 122656.51 (SE +/- 563.65, N = 3; Min: 121562.07 / Avg: 122656.51 / Max: 123437.74)
Vet 2: 122356.31 (SE +/- 243.76, N = 3; Min: 121895.47 / Avg: 122356.31 / Max: 122724.48)
1. (CC) gcc options: -O2 -lrt
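With two result identifiers this close, the relative gap matters more than the absolute one. A quick way to express the difference between the two CoreMark averages above:

```python
vet1 = 122656.51  # Iterations/Sec, Vet 1
vet2 = 122356.31  # Iterations/Sec, Vet 2

# Percentage advantage of Vet 1 over Vet 2 for a more-is-better metric
pct = (vet1 - vet2) / vet2 * 100
print(f"Vet 1 leads by {pct:.2f}%")
```

A gap of roughly a quarter of a percent is smaller than the run-to-run standard error here, so the two runs are effectively tied on this test.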

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Glibc C String Functions (Bogo Ops/s, More Is Better)
Vet 1: 267246.20 (SE +/- 202.00, N = 3; Min: 266946.35 / Avg: 267246.2 / Max: 267630.59)
Vet 2: 266600.95 (SE +/- 439.83, N = 3; Min: 265725.34 / Avg: 266600.95 / Max: 267111.8)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 6 (Frames Per Second, More Is Better)
Vet 1: 0.416 (SE +/- 0.000, N = 3; Min: 0.42 / Avg: 0.42 / Max: 0.42)
Vet 2: 0.415 (SE +/- 0.001, N = 3; Min: 0.41 / Avg: 0.41 / Max: 0.42)

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
Vet 1: 177755 (SE +/- 126.85, N = 3; Min: 177504 / Avg: 177755.33 / Max: 177911)
Vet 2: 178137 (SE +/- 98.17, N = 3; Min: 178029 / Avg: 178137 / Max: 178333)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
Vet 1: 252.49 (SE +/- 0.89, N = 3; Min: 250.78 / Avg: 252.49 / Max: 253.74)
Vet 2: 251.96 (SE +/- 1.18, N = 3; Min: 249.67 / Avg: 251.96 / Max: 253.61)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: SENDFILE (Bogo Ops/s, More Is Better)
Vet 1: 50924.75 (SE +/- 40.91, N = 3; Min: 50850.62 / Avg: 50924.75 / Max: 50991.82)
Vet 2: 50818.29 (SE +/- 19.61, N = 3; Min: 50789.37 / Avg: 50818.29 / Max: 50855.7)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing (Seconds, Fewer Is Better)
Vet 1: 1121.06 (SE +/- 9.83, N = 9; Min: 1106.83 / Avg: 1121.06 / Max: 1199.53)
Vet 2: 1118.93 (SE +/- 7.10, N = 3; Min: 1111.58 / Avg: 1118.93 / Max: 1133.12)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Basis Universal 1.12 - Settings: UASTC Level 2 (Seconds, Fewer Is Better)
Vet 1: 88.18 (SE +/- 0.13, N = 3; Min: 87.98 / Avg: 88.18 / Max: 88.43)
Vet 2: 88.01 (SE +/- 0.04, N = 3; Min: 87.98 / Avg: 88.01 / Max: 88.09)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: resize (Seconds, Fewer Is Better)
Vet 1: 18.96 (SE +/- 0.16, N = 3; Min: 18.75 / Avg: 18.96 / Max: 19.27)
Vet 2: 18.99 (SE +/- 0.17, N = 3; Min: 18.8 / Avg: 18.99 / Max: 19.33)

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)
Vet 1: 15.71 (SE +/- 0.03, N = 5; Min: 15.65 / Avg: 15.71 / Max: 15.82)
Vet 2: 15.74 (SE +/- 0.03, N = 5; Min: 15.67 / Avg: 15.74 / Max: 15.84)
1. (CXX) g++ options: -fvisibility=hidden -logg -lm

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02 - Compress Speed Test (MIPS, More Is Better)
Vet 1: 21231 (SE +/- 14.62, N = 3; Min: 21208 / Avg: 21230.67 / Max: 21258)
Vet 2: 21195 (SE +/- 34.22, N = 3; Min: 21142 / Avg: 21195 / Max: 21259)
1. (CXX) g++ options: -pipe -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, More Is Better)
Vet 1: 29.81 (SE +/- 0.06, N = 3; Min: 29.73 / Avg: 29.81 / Max: 29.92)
Vet 2: 29.86 (SE +/- 0.04, N = 3; Min: 29.79 / Avg: 29.86 / Max: 29.93)
1. (CC) gcc options: -O3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Memory Copying (Bogo Ops/s, More Is Better)
Vet 1: 1002.06 (SE +/- 2.90, N = 3; Min: 996.4 / Avg: 1002.06 / Max: 1005.94)
Vet 2: 1003.73 (SE +/- 2.16, N = 3; Min: 999.91 / Avg: 1003.73 / Max: 1007.37)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better)
Vet 1: 105.81 (SE +/- 0.45, N = 3; Min: 105 / Avg: 105.81 / Max: 106.53)
Vet 2: 105.65 (SE +/- 1.14, N = 3; Min: 104.3 / Avg: 105.65 / Max: 107.92)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
Vet 1: 122.13 (SE +/- 0.33, N = 3; Min: 121.56 / Avg: 122.12 / Max: 122.69)
Vet 2: 122.30 (SE +/- 0.40, N = 3; Min: 121.73 / Avg: 122.3 / Max: 123.08)
1. (CC) gcc options: -O2 -ldl -lz -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: ETC1S (Seconds, Fewer Is Better)
Vet 1: 104.18 (SE +/- 0.16, N = 3; Min: 103.9 / Avg: 104.18 / Max: 104.43)
Vet 2: 104.33 (SE +/- 0.39, N = 3; Min: 103.66 / Avg: 104.33 / Max: 104.99)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: NUMA (Bogo Ops/s, More Is Better)
Vet 1: 84.19 (SE +/- 0.28, N = 3; Min: 83.64 / Avg: 84.19 / Max: 84.55)
Vet 2: 84.08 (SE +/- 0.62, N = 3; Min: 83.3 / Avg: 84.08 / Max: 85.31)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, Fewer Is Better)
Vet 1: 231.26 (SE +/- 0.19, N = 3; Min: 230.92 / Avg: 231.26 / Max: 231.57)
Vet 2: 230.97 (SE +/- 0.15, N = 3; Min: 230.67 / Avg: 230.97 / Max: 231.14)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
Vet 1: 39.88 (SE +/- 0.14, N = 3; Min: 39.63 / Avg: 39.88 / Max: 40.11; MIN: 37.27 / MAX: 56.04)
Vet 2: 39.93 (SE +/- 0.20, N = 3; Min: 39.73 / Avg: 39.93 / Max: 40.33; MIN: 37.28 / MAX: 54.83)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, Fewer Is Better)
Vet 1: 16.93 (SE +/- 0.04, N = 5; Min: 16.82 / Avg: 16.93 / Max: 17.05)
Vet 2: 16.91 (SE +/- 0.06, N = 5; Min: 16.69 / Avg: 16.91 / Max: 17.04)
1. (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
Vet 1: 44728.9 (SE +/- 13.80, N = 3; Min: 44701.8 / Avg: 44728.9 / Max: 44747; MIN: 44551.4)
Vet 2: 44784.2 (SE +/- 2.68, N = 3; Min: 44780.2 / Avg: 44784.2 / Max: 44789.3; MIN: 44656.6)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 0 (Seconds, Fewer Is Better)
Vet 1: 14.90 (SE +/- 0.03, N = 3; Min: 14.85 / Avg: 14.9 / Max: 14.95)
Vet 2: 14.89 (SE +/- 0.03, N = 3; Min: 14.85 / Avg: 14.89 / Max: 14.94)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.

librsvg - Operation: SVG Files To PNG (Seconds, Fewer Is Better)
Vet 1: 43.32 (SE +/- 0.10, N = 3; Min: 43.17 / Avg: 43.32 / Max: 43.52)
Vet 2: 43.38 (SE +/- 0.13, N = 3; Min: 43.18 / Avg: 43.38 / Max: 43.62)
1. rsvg-convert version 2.50.1

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 0 (Seconds, Fewer Is Better)
Vet 1: 275.52 (SE +/- 0.68, N = 3; Min: 274.56 / Avg: 275.52 / Max: 276.84)
Vet 2: 275.21 (SE +/- 0.50, N = 3; Min: 274.34 / Avg: 275.21 / Max: 276.08)
1. (CXX) g++ options: -O3 -fPIC

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - PBKDF2-sha512 (Iterations Per Second, More Is Better)
Vet 1: 1273595 (SE +/- 3710.07, N = 3; Min: 1266396 / Avg: 1273595.33 / Max: 1278751)
Vet 2: 1272166 (SE +/- 9354.11, N = 3; Min: 1255779 / Avg: 1272166 / Max: 1288176)

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 10.3.1+dfsg - Processing 60 Page PDF Document (Seconds, Fewer Is Better)
Vet 1: 62.37 (SE +/- 0.15, N = 3; Min: 62.08 / Avg: 62.37 / Max: 62.55)
Vet 2: 62.44 (SE +/- 0.34, N = 3; Min: 61.97 / Avg: 62.44 / Max: 63.1)

rays1bench

This is a test of rays1bench, a simple path tracer / ray tracer that supports SSE and AVX instructions, multi-threading, and other features. This test profile measures the performance of the "large scene" in rays1bench. Learn more via the OpenBenchmarking.org test page.

rays1bench 2020-01-09 - Large Scene (mrays/s, More Is Better)
Vet 1: 9.06 (SE +/- 0.00, N = 3; Min: 9.05 / Avg: 9.06 / Max: 9.06)
Vet 2: 9.05 (SE +/- 0.01, N = 3; Min: 9.03 / Avg: 9.05 / Max: 9.06)
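The "Show Overall Geometric Mean" view condenses results like these into one composite figure. A minimal sketch of how such a composite works, using three of the more-is-better results from this file expressed as Vet 1 / Vet 2 ratios (the choice of these three tests is purely illustrative):

```python
import statistics

# Vet 1 / Vet 2 ratios for three more-is-better results from this file:
# x264 FPS, CoreMark Iterations/Sec, rays1bench mrays/s
ratios = [30.29 / 30.21, 122656.51 / 122356.31, 9.06 / 9.05]

# The geometric mean is the conventional way to average normalized ratios,
# since it treats a 2x win and a 2x loss symmetrically.
overall = statistics.geometric_mean(ratios)
print(f"Overall Vet 1 / Vet 2 ratio: {overall:.4f}")
```

An overall ratio this close to 1.0 is exactly what one would expect from two runs on the same hardware.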

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better)
Vet 1: 48.37 (SE +/- 0.49, N = 3; Min: 47.74 / Avg: 48.37 / Max: 49.34)
Vet 2: 48.32 (SE +/- 0.22, N = 3; Min: 48.04 / Avg: 48.32 / Max: 48.76)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Vet 1: 44739.8 (SE +/- 11.37, N = 3; Min: 44717.1 / Avg: 44739.83 / Max: 44751.8; MIN: 44603)
Vet 2: 44788.0 (SE +/- 6.26, N = 3; Min: 44775.6 / Avg: 44788.03 / Max: 44795.5; MIN: 44662.9)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Vet 1: 13.59 (SE +/- 0.00, N = 3; Min: 13.58 / Avg: 13.59 / Max: 13.59; MIN: 12.65)
Vet 2: 13.60 (SE +/- 0.01, N = 3; Min: 13.58 / Avg: 13.6 / Max: 13.62; MIN: 12.65)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
Vet 1: 11.41 (SE +/- 0.05, N = 3; Min: 11.33 / Avg: 11.41 / Max: 11.5)
Vet 2: 11.40 (SE +/- 0.04, N = 3; Min: 11.33 / Avg: 11.4 / Max: 11.46)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Vet 1: 71.20 (SE +/- 0.12, N = 3; Min: 71.03 / Avg: 71.2 / Max: 71.44; MIN: 68.92)
Vet 2: 71.13 (SE +/- 0.09, N = 3; Min: 71.04 / Avg: 71.13 / Max: 71.31; MIN: 68.74)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better)
Vet 1: 512325 (SE +/- 78.87, N = 3; Min: 512174 / Avg: 512325 / Max: 512440)
Vet 2: 512833 (SE +/- 469.08, N = 3; Min: 511901 / Avg: 512833 / Max: 513392)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 512b Encryption (MiB/s, More Is Better)
Vet 1: 315.2 (SE +/- 0.07, N = 3; Min: 315.1 / Avg: 315.23 / Max: 315.3)
Vet 2: 314.9 (SE +/- 1.47, N = 3; Min: 312 / Avg: 314.93 / Max: 316.5)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Vet 1: 22769.0 (SE +/- 7.47, N = 3; Min: 22758.7 / Avg: 22768.97 / Max: 22783.5; MIN: 22649.1)
Vet 2: 22789.7 (SE +/- 15.06, N = 3; Min: 22761.1 / Avg: 22789.67 / Max: 22812.2; MIN: 22662.3)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Vet 1: 44745.0 (SE +/- 4.04, N = 3; Min: 44740.2 / Avg: 44744.97 / Max: 44753; MIN: 44611.3)
Vet 2: 44784.5 (SE +/- 15.69, N = 3; Min: 44765.3 / Avg: 44784.5 / Max: 44815.6; MIN: 44650.3)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, Fewer Is Better)
Vet 1: 13.63 (SE +/- 0.03, N = 3; Min: 13.6 / Avg: 13.63 / Max: 13.7)
Vet 2: 13.62 (SE +/- 0.03, N = 3; Min: 13.57 / Avg: 13.62 / Max: 13.67)
1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Matrix Math (Bogo Ops/s, more is better)
Vet 1: 15073.34 | SE +/- 4.41, N = 3 | Min: 15064.65 / Avg: 15073.34 / Max: 15079.01
Vet 2: 15086.29 | SE +/- 2.24, N = 3 | Min: 15081.94 / Avg: 15086.29 / Max: 15089.4
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 10 (Frames Per Second, more is better)
Vet 1: 1.170 | SE +/- 0.003, N = 3 | Min: 1.16 / Avg: 1.17 / Max: 1.18
Vet 2: 1.171 | SE +/- 0.002, N = 3 | Min: 1.17 / Avg: 1.17 / Max: 1.17

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, more is better)
Vet 1: 11.84 | SE +/- 0.02, N = 3 | Min: 11.81 / Avg: 11.84 / Max: 11.87
Vet 2: 11.85 | SE +/- 0.01, N = 3 | Min: 11.84 / Avg: 11.85 / Max: 11.86
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 256b Encryption (MiB/s, more is better)
Vet 1: 244.1 | SE +/- 3.08, N = 3 | Min: 237.9 / Avg: 244.07 / Max: 247.2
Vet 2: 243.9 | SE +/- 1.80, N = 3 | Min: 240.3 / Avg: 243.9 / Max: 245.9

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, fewer is better)
Vet 1: 24.69 | SE +/- 0.04, N = 3 | Min: 24.62 / Avg: 24.69 / Max: 24.76 | MIN: 22.88 / MAX: 37.47
Vet 2: 24.71 | SE +/- 0.08, N = 3 | Min: 24.56 / Avg: 24.71 / Max: 24.84 | MIN: 22.88 / MAX: 34
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.4 - Time To Compile (Seconds, fewer is better)
Vet 1: 113.04 | SE +/- 0.12, N = 3 | Min: 112.8 / Avg: 113.04 / Max: 113.18
Vet 2: 112.95 | SE +/- 0.06, N = 3 | Min: 112.87 / Avg: 112.95 / Max: 113.06

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Vector Math (Bogo Ops/s, more is better)
Vet 1: 23898.11 | SE +/- 9.23, N = 3 | Min: 23888.22 / Avg: 23898.11 / Max: 23916.55
Vet 2: 23915.82 | SE +/- 6.13, N = 3 | Min: 23909.1 / Avg: 23915.82 / Max: 23928.07
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
Vet 1: 22758.6 | SE +/- 11.99, N = 3 | Min: 22739.5 / Avg: 22758.6 / Max: 22780.7 | MIN: 22625.2
Vet 2: 22775.3 | SE +/- 1.32, N = 3 | Min: 22772.7 / Avg: 22775.33 / Max: 22776.9 | MIN: 22676.4
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, fewer is better)
Vet 1: 351171 | SE +/- 124.25, N = 3 | Min: 350998 / Avg: 351171 / Max: 351412
Vet 2: 351424 | SE +/- 235.83, N = 3 | Min: 351052 / Avg: 351423.67 / Max: 351861

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
Vet 1: 22771.8 | SE +/- 10.92, N = 3 | Min: 22750 / Avg: 22771.8 / Max: 22783.9 | MIN: 22638.3
Vet 2: 22787.9 | SE +/- 1.49, N = 3 | Min: 22784.9 / Avg: 22787.87 / Max: 22789.5 | MIN: 22661.6
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
Vet 1: 11.75 | SE +/- 0.05, N = 3 | Min: 11.67 / Avg: 11.75 / Max: 11.84 | MIN: 10.39
Vet 2: 11.76 | SE +/- 0.03, N = 3 | Min: 11.71 / Avg: 11.76 / Max: 11.78 | MIN: 10.37
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, fewer is better)
Vet 1: 7343657 | SE +/- 5425.43, N = 3 | Min: 7335760 / Avg: 7343656.67 / Max: 7354050
Vet 2: 7348723 | SE +/- 1824.37, N = 3 | Min: 7345090 / Avg: 7348723.33 / Max: 7350830

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, fewer is better)
Vet 1: 169.00 | SE +/- 0.45, N = 3 | Min: 168.49 / Avg: 169 / Max: 169.91 | MIN: 165.04 / MAX: 249.99
Vet 2: 168.89 | SE +/- 0.50, N = 3 | Min: 168.01 / Avg: 168.89 / Max: 169.74 | MIN: 164.85 / MAX: 289.14
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, more is better)
Vet 1: 150.23 | SE +/- 0.06, N = 3 | Min: 150.16 / Avg: 150.23 / Max: 150.34
Vet 2: 150.33 | SE +/- 0.33, N = 3 | Min: 149.73 / Avg: 150.33 / Max: 150.87

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, more is better)
Vet 1: 5195271 | SE +/- 522.74, N = 3 | Min: 5194267 / Avg: 5195270.67 / Max: 5196026
Vet 2: 5198696 | SE +/- 11611.33, N = 3 | Min: 5184967 / Avg: 5198696 / Max: 5221781
1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, more is better)
Vet 1: 30.96 | SE +/- 0.02, N = 3 | Min: 30.92 / Avg: 30.96 / Max: 30.98
Vet 2: 30.94 | SE +/- 0.07, N = 3 | Min: 30.82 / Avg: 30.94 / Max: 31.06
1. (CC) gcc options: -O3

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Encryption (MiB/s, more is better)
Vet 1: 943.8 | SE +/- 2.47, N = 3 | Min: 938.9 / Avg: 943.83 / Max: 946.5
Vet 2: 944.4 | SE +/- 5.29, N = 3 | Min: 933.9 / Avg: 944.43 / Max: 950.5

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: MD5 (Real C/S, more is better)
Vet 1: 256221 | SE +/- 72.37, N = 3 | Min: 256145 / Avg: 256221.33 / Max: 256366
Vet 2: 256381 | SE +/- 146.86, N = 3 | Min: 256221 / Avg: 256380.67 / Max: 256674
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -pthread -lm -lz -ldl -lcrypt
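The Real C/S metric counts password candidates tested per second. As a rough, hypothetical analogue (John's MD5 test actually benchmarks the iterated md5crypt scheme, not a single MD5 digest, so absolute numbers are not comparable), the same candidates-per-second arithmetic can be sketched with Python's hashlib:

```python
import hashlib
import time

# Hypothetical analogue only: John's "MD5" test is the iterated md5crypt
# scheme -- this just shows the candidates-per-second (c/s) arithmetic.
candidates = [f"password{i}".encode() for i in range(200_000)]

start = time.perf_counter()
for c in candidates:
    hashlib.md5(c).digest()
elapsed = time.perf_counter() - start

print(f"{len(candidates) / elapsed:,.0f} candidate hashes per second")
```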

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, fewer is better)
Vet 1: 392.48 | SE +/- 2.77, N = 3 | Min: 388.58 / Avg: 392.48 / Max: 397.83
Vet 2: 392.28 | SE +/- 2.02, N = 3 | Min: 388.53 / Avg: 392.28 / Max: 395.45

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, fewer is better)
Vet 1: 211.76 | SE +/- 0.15, N = 3 | Min: 211.48 / Avg: 211.76 / Max: 211.99 | MIN: 205.66 / MAX: 235.93
Vet 2: 211.65 | SE +/- 0.29, N = 3 | Min: 211.18 / Avg: 211.65 / Max: 212.18 | MIN: 206.15 / MAX: 234.2
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, fewer is better)
Vet 1: 383504 | SE +/- 325.92, N = 3 | Min: 383104 / Avg: 383504.33 / Max: 384150
Vet 2: 383701 | SE +/- 57.94, N = 3 | Min: 383626 / Avg: 383701 / Max: 383815

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: googlenet (ms, fewer is better)
Vet 1: 99.74 | SE +/- 0.10, N = 3 | Min: 99.56 / Avg: 99.74 / Max: 99.91 | MIN: 95.23 / MAX: 114.59
Vet 2: 99.69 | SE +/- 0.16, N = 3 | Min: 99.38 / Avg: 99.69 / Max: 99.88 | MIN: 95.68 / MAX: 117.36
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
Vet 1: 103.79 | SE +/- 0.20, N = 3 | Min: 103.39 / Avg: 103.79 / Max: 104.05 | MIN: 97.27 / MAX: 120.35
Vet 2: 103.84 | SE +/- 0.21, N = 3 | Min: 103.53 / Avg: 103.84 / Max: 104.23 | MIN: 96.54 / MAX: 140.32
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Semaphores (Bogo Ops/s, more is better)
Vet 1: 505610.19 | SE +/- 6.77, N = 3 | Min: 505600.96 / Avg: 505610.19 / Max: 505623.39
Vet 2: 505851.94 | SE +/- 151.49, N = 3 | Min: 505551.61 / Avg: 505851.94 / Max: 506036.8
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, fewer is better)
Vet 1: 357144 | SE +/- 293.40, N = 3 | Min: 356567 / Avg: 357144 / Max: 357525
Vet 2: 357312 | SE +/- 50.50, N = 3 | Min: 357255 / Avg: 357312.33 / Max: 357413

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
Vet 1: 513.82 | SE +/- 0.29, N = 3 | Min: 513.32 / Avg: 513.82 / Max: 514.34 | MIN: 510.88 / MAX: 517.71
Vet 2: 513.59 | SE +/- 0.39, N = 3 | Min: 513.18 / Avg: 513.59 / Max: 514.38 | MIN: 509.34 / MAX: 530.2
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, fewer is better)
Vet 1: 91.23 | SE +/- 0.16, N = 3 | Min: 90.91 / Avg: 91.23 / Max: 91.43 | MIN: 88.35 / MAX: 106.84
Vet 2: 91.27 | SE +/- 0.03, N = 3 | Min: 91.22 / Avg: 91.27 / Max: 91.31 | MIN: 88.86 / MAX: 108.88
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6 - WAV To APE (Seconds, fewer is better)
Vet 1: 26.40 | SE +/- 0.08, N = 5 | Min: 26.28 / Avg: 26.4 / Max: 26.72
Vet 2: 26.39 | SE +/- 0.06, N = 5 | Min: 26.29 / Avg: 26.39 / Max: 26.63
1. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, fewer is better)
Vet 1: 22.97 | SE +/- 0.05, N = 5 | Min: 22.89 / Avg: 22.97 / Max: 23.14
Vet 2: 22.97 | SE +/- 0.08, N = 5 | Min: 22.87 / Avg: 22.97 / Max: 23.3
1. (CXX) g++ options: -rdynamic

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better)
Vet 1: 528.35 | SE +/- 0.75, N = 3 | Min: 526.93 / Avg: 528.35 / Max: 529.46 | MIN: 519.59 / MAX: 546.48
Vet 2: 528.55 | SE +/- 0.14, N = 3 | Min: 528.27 / Avg: 528.55 / Max: 528.72 | MIN: 518.27 / MAX: 548.78
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
Vet 1: 22.21 | SE +/- 0.02, N = 3 | Min: 22.18 / Avg: 22.21 / Max: 22.26 | MIN: 21.47
Vet 2: 22.20 | SE +/- 0.07, N = 3 | Min: 22.12 / Avg: 22.2 / Max: 22.34 | MIN: 21.45
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: Blowfish (Real C/S, more is better)
Vet 1: 6411 | SE +/- 3.48, N = 3 | Min: 6405 / Avg: 6410.67 / Max: 6417
Vet 2: 6409 | SE +/- 1.33, N = 3 | Min: 6408 / Avg: 6409.33 / Max: 6412
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -pthread -lm -lz -ldl -lcrypt

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, fewer is better)
Vet 1: 191.72 | SE +/- 0.11, N = 3 | Min: 191.53 / Avg: 191.72 / Max: 191.9
Vet 2: 191.67 | SE +/- 0.44, N = 3 | Min: 190.8 / Avg: 191.67 / Max: 192.19
1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, fewer is better)
Vet 1: 138.97 | SE +/- 0.27, N = 3 | Min: 138.49 / Avg: 138.97 / Max: 139.44
Vet 2: 139.01 | SE +/- 0.13, N = 3 | Min: 138.83 / Avg: 139.01 / Max: 139.27
1. RawTherapee, version 5.8, command line.

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, fewer is better)
Vet 1: 767.38 | SE +/- 0.31, N = 3 | Min: 766.78 / Avg: 767.38 / Max: 767.81 | MIN: 748.84 / MAX: 812.65
Vet 2: 767.18 | SE +/- 0.31, N = 3 | Min: 766.64 / Avg: 767.18 / Max: 767.73 | MIN: 749.76 / MAX: 809.52
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, fewer is better)
Vet 1: 6618533 | SE +/- 401.43, N = 3 | Min: 6617850 / Avg: 6618533.33 / Max: 6619240
Vet 2: 6619900 | SE +/- 1881.66, N = 3 | Min: 6616830 / Avg: 6619900 / Max: 6623320

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Server Room - Acceleration: CPU-only (Seconds, fewer is better)
Vet 1: 26.73 | SE +/- 0.03, N = 3 | Min: 26.67 / Avg: 26.73 / Max: 26.78
Vet 2: 26.73 | SE +/- 0.02, N = 3 | Min: 26.69 / Avg: 26.73 / Max: 26.77

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, more is better)
Vet 1: 1745.5 | SE +/- 1.21, N = 3 | Min: 1743.4 / Avg: 1745.47 / Max: 1747.6
Vet 2: 1745.6 | SE +/- 0.65, N = 3 | Min: 1744.4 / Avg: 1745.63 / Max: 1746.6
1. (CC) gcc options: -O3 -pthread -lz -llzma
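The MB/s figure is simply input bytes divided by compression time. Python's standard library has no zstd binding, so as a stand-in sketch the same measurement pattern is shown here with zlib (the payload and level are hypothetical, not the Ubuntu ISO the real test uses):

```python
import time
import zlib

# Stand-in for the zstd level-3 test: measure bytes-in per second of
# compression wall time. zlib substitutes for zstd, which is not in the
# Python standard library.
data = b"benchmark payload " * 500_000   # ~9 MB of compressible input

start = time.perf_counter()
compressed = zlib.compress(data, level=3)
elapsed = time.perf_counter() - start

print(f"{len(data)} -> {len(compressed)} bytes, "
      f"{len(data) / 1e6 / elapsed:.1f} MB/s")
```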

GROMACS

This tests the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, more is better)
Vet 1: 0.074 | SE +/- 0.000, N = 3 | Min: 0.07 / Avg: 0.07 / Max: 0.07
Vet 2: 0.074 | SE +/- 0.000, N = 3 | Min: 0.07 / Avg: 0.07 / Max: 0.08
1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.2.0 - Scene: Memorial (Images / Sec, more is better)
Vet 1: 0.87 | SE +/- 0.00, N = 3 | Min: 0.87 / Avg: 0.87 / Max: 0.87
Vet 2: 0.87 | SE +/- 0.00, N = 3 | Min: 0.87 / Avg: 0.87 / Max: 0.87

x265

This is a simple test of the x265 encoder run on the CPU, using 1080p and 4K inputs to gauge H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better)
Vet 1: 4.55 | SE +/- 0.01, N = 3 | Min: 4.53 / Avg: 4.55 / Max: 4.58
Vet 2: 4.55 | SE +/- 0.00, N = 3 | Min: 4.55 / Avg: 4.55 / Max: 4.56
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 1 (Frames Per Second, more is better)
Vet 1: 0.115 | SE +/- 0.000, N = 3 | Min: 0.12 / Avg: 0.12 / Max: 0.12
Vet 2: 0.115 | SE +/- 0.000, N = 3 | Min: 0.12 / Avg: 0.12 / Max: 0.12

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, more is better)
Vet 1: 5.55 | SE +/- 0.00, N = 3 | Min: 5.55 / Avg: 5.55 / Max: 5.56
Vet 2: 5.55 | SE +/- 0.00, N = 3 | Min: 5.55 / Avg: 5.55 / Max: 5.56
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, more is better)
Vet 1: 1.35 | SE +/- 0.00, N = 3 | Min: 1.35 / Avg: 1.35 / Max: 1.35
Vet 2: 1.35 | SE +/- 0.00, N = 3 | Min: 1.35 / Avg: 1.35 / Max: 1.35
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, more is better)
Vet 1: 2.09 | SE +/- 0.00, N = 3 | Min: 2.09 / Avg: 2.09 / Max: 2.09
Vet 2: 2.09 | SE +/- 0.00, N = 3 | Min: 2.09 / Avg: 2.09 / Max: 2.1
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, more is better)
Vet 1: 0.45 | SE +/- 0.00, N = 3 | Min: 0.45 / Avg: 0.45 / Max: 0.45
Vet 2: 0.45 | SE +/- 0.00, N = 3 | Min: 0.45 / Avg: 0.45 / Max: 0.45
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better)
Vet 1: 17.99 | SE +/- 0.02, N = 3 | Min: 17.95 / Avg: 17.99 / Max: 18.02
Vet 2: 17.99 | SE +/- 0.03, N = 3 | Min: 17.94 / Avg: 17.99 / Max: 18.02
1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, more is better)
Vet 1: 0.38 | SE +/- 0.00, N = 3 | Min: 0.38 / Avg: 0.38 / Max: 0.38
Vet 2: 0.38 | SE +/- 0.00, N = 3 | Min: 0.38 / Avg: 0.38 / Max: 0.38
1. (CXX) g++ options: -O3 -pthread

simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, more is better)
Vet 1: 0.37 | SE +/- 0.00, N = 3 | Min: 0.37 / Avg: 0.37 / Max: 0.37
Vet 2: 0.37 | SE +/- 0.00, N = 3 | Min: 0.37 / Avg: 0.37 / Max: 0.37
1. (CXX) g++ options: -O3 -pthread

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, more is better)
Vet 1: 0.24 | SE +/- 0.00, N = 3 | Min: 0.24 / Avg: 0.24 / Max: 0.24
Vet 2: 0.24 | SE +/- 0.00, N = 3 | Min: 0.24 / Avg: 0.24 / Max: 0.24
1. (CXX) g++ options: -O3 -pthread
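The GB/s throughput is input size divided by parse time. A stand-in sketch with Python's stdlib json parser (simdjson itself is a C++ library; the document here is synthetic) illustrates the metric:

```python
import json
import time

# Synthetic document; stdlib json stands in for simdjson and just
# illustrates the bytes-parsed-per-second metric.
doc = json.dumps([{"id": i, "name": f"user{i}", "active": i % 2 == 0}
                  for i in range(50_000)]).encode()

start = time.perf_counter()
parsed = json.loads(doc)
elapsed = time.perf_counter() - start

print(f"parsed {len(doc) / 1e6:.1f} MB in {elapsed * 1e3:.1f} ms "
      f"({len(doc) / 1e9 / elapsed:.3f} GB/s)")
```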

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second, more is better)
Vet 1: 138 | SE +/- 1.20, N = 3 | Min: 136 / Avg: 138.33 / Max: 140
Vet 2: 138
1. (CXX) g++ options: -flto -pthread

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04, Resolution: 1920 x 1080 (Score, more is better):
  Vet 1: 1322
  Vet 2: 1322

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
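The measurement is essentially a timed tar extraction. A minimal sketch of the same idea in Python, using a tiny in-memory .tar.xz as a stand-in for the Firefox source tarball (which is several hundred MB), looks like this:

```python
import io
import os
import tarfile
import tempfile
import time

# Build a tiny .tar.xz archive in memory -- a stand-in for
# firefox-84.0.source.tar.xz, which the real test downloads.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:xz") as tar:
    payload = io.BytesIO(b"x" * 4096)
    info = tarfile.TarInfo("file.txt")
    info.size = 4096
    tar.addfile(info, payload)
buf.seek(0)

# Time the extraction, as the benchmark does for the real tarball.
with tempfile.TemporaryDirectory() as dest:
    start = time.perf_counter()
    with tarfile.open(fileobj=buf, mode="r:xz") as tar:
        tar.extractall(dest)
    elapsed = time.perf_counter() - start
    size = os.path.getsize(os.path.join(dest, "file.txt"))

print(f"extracted {size} bytes in {elapsed:.4f}s")
```

Extraction time here is dominated by single-threaded xz decompression plus filesystem writes, which is why the result is sensitive to both CPU and disk.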

Unpacking Firefox 84.0, Extracting: firefox-84.0.source.tar.xz (Seconds, fewer is better):
  Vet 1: 32.99 (SE +/- 1.07, N = 20; Min: 28.94 / Avg: 32.99 / Max: 45.78)
  Vet 2: 38.11 (SE +/- 2.01, N = 16; Min: 28.61 / Avg: 38.11 / Max: 54.7)
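The SE values reported alongside each average are standard errors of the mean over N runs, i.e. the sample standard deviation divided by the square root of N. A minimal sketch with hypothetical run times (the result file does not list the individual samples):

```python
import math
import statistics

# Hypothetical per-run times in seconds -- illustrative only,
# not the actual samples behind the figures above.
runs = [28.9, 31.2, 33.5, 34.1, 36.8]

avg = statistics.mean(runs)
# Standard error of the mean: sample stdev / sqrt(N).
se = statistics.stdev(runs) / math.sqrt(len(runs))

print(f"Avg: {avg:.2f}, SE +/- {se:.2f}, N = {len(runs)}")
```

A large SE relative to the average (as in the MMAP results further down) signals a noisy test where the spread between runs rivals the difference between systems.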

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: CPU Cache (Bogo Ops/s, more is better):
  Vet 1: 14.76 (SE +/- 0.30, N = 15; Min: 12.37 / Avg: 14.76 / Max: 16.23)
  Vet 2: 15.31 (SE +/- 0.17, N = 15; Min: 14.27 / Avg: 15.31 / Max: 16.16)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07, Test: MMAP (Bogo Ops/s, more is better):
  Vet 1: 12.10 (SE +/- 1.07, N = 15; Min: 7.36 / Avg: 12.1 / Max: 18.3)
  Vet 2: 9.86 (SE +/- 0.68, N = 12; Min: 3.17 / Avg: 9.86 / Max: 11.93)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis (Seconds, fewer is better):
  Vet 1: 57.49 (SE +/- 0.68, N = 4; Min: 55.68 / Avg: 57.49 / Max: 58.9)
  Vet 2: 60.33 (SE +/- 0.91, N = 16; Min: 56.17 / Avg: 60.33 / Max: 69.23)
  1. (CC) gcc options: -O2 -std=c99

183 Results Shown

Redis:
  LPOP
  GET
CLOMP
LuxCoreRender
Basis Universal
Stress-NG
simdjson
oneDNN:
  Convolution Batch Shapes Auto - f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
Node.js V8 Web Tooling Benchmark
NCNN
Redis
Darktable
NCNN
InfluxDB
oneDNN
Darktable
Stockfish
WebP Image Encode
LuxCoreRender
InfluxDB
oneDNN
Mobile Neural Network
Stress-NG
Redis
LeelaChessZero
Warsow
Mobile Neural Network
OpenVKL
libavif avifenc
Timed Eigen Compilation
PHPBench
Cryptsetup
LZ4 Compression
x265
IndigoBench
Kvazaar
NCNN
oneDNN
Stress-NG
Timed FFmpeg Compilation
Cryptsetup
LZ4 Compression:
  9 - Decompression Speed
  3 - Decompression Speed
Stress-NG
Cryptsetup
LZ4 Compression
Mobile Neural Network
GIMP
NCNN
Cryptsetup
asmFish
Zstd Compression
Cryptsetup
NCNN
WebP Image Encode
Redis
NCNN
GIMP
RNNoise
WebP Image Encode
Stress-NG:
  CPU Stress
  Forking
NCNN
oneDNN
Stress-NG
GIMP
Stress-NG
Darktable
Cryptsetup
libavif avifenc
Cryptsetup:
  AES-XTS 256b Decryption
  Serpent-XTS 256b Encryption
Stress-NG
YafaRay
WebP Image Encode
libavif avifenc
rav1e
Caffe
DeepSpeech
Mobile Neural Network
Stress-NG
oneDNN
LAMMPS Molecular Dynamics Simulator
x264
oneDNN
NCNN
Cryptsetup:
  Serpent-XTS 512b Decryption
  PBKDF2-whirlpool
IndigoBench
Coremark
Stress-NG
rav1e
Caffe
Timed Linux Kernel Compilation
Stress-NG
Basis Universal:
  UASTC Level 2 + RDO Post-Processing
  UASTC Level 2
GIMP
Opus Codec Encoding
7-Zip Compression
LZ4 Compression
Stress-NG
Hugin
SQLite Speedtest
Basis Universal
Stress-NG
Timed GDB GNU Debugger Compilation
NCNN
FLAC Audio Encoding
oneDNN
Basis Universal
librsvg
libavif avifenc
Cryptsetup
OCRMyPDF
rays1bench
Timed Apache Compilation
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
WebP Image Encode
oneDNN
TensorFlow Lite
Cryptsetup
oneDNN:
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
LAME MP3 Encoding
Stress-NG
rav1e
Kvazaar
Cryptsetup
NCNN
Timed MPlayer Compilation
Stress-NG
oneDNN
TensorFlow Lite
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
TensorFlow Lite
Mobile Neural Network
Numpy Benchmark
Crafty
LZ4 Compression
Cryptsetup
John The Ripper
Build2
NCNN
TensorFlow Lite
NCNN:
  CPU - googlenet
  CPU - squeezenet_ssd
Stress-NG
TensorFlow Lite
TNN
NCNN
Monkey Audio Encoding
WavPack Audio Encoding
TNN
oneDNN
John The Ripper
Timed HMMer Search
RawTherapee
NCNN
TensorFlow Lite
Darktable
Zstd Compression
GROMACS
Intel Open Image Denoise
x265
rav1e
Kvazaar:
  Bosphorus 1080p - Very Fast
  Bosphorus 4K - Very Fast
  Bosphorus 1080p - Medium
  Bosphorus 4K - Medium
LibRaw
simdjson:
  DistinctUserID
  PartialTweets
  LargeRandom
LeelaChessZero
GLmark2
Unpacking Firefox
Stress-NG:
  CPU Cache
  MMAP
eSpeak-NG Speech Engine