POWER9 44c 176t 2021

POWER9 testing with a PowerNV T2P9D01 REV 1.01 and ASPEED on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101051-HA-POWER944C01
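As a rough sketch of the comparison workflow (not part of the original result upload), the result ID above can be handed straight to the benchmark sub-command; the suite then fetches the matching test profiles and offers to merge your numbers into this comparison:

  # Optional: confirm what hardware/software the suite detects locally
  phoronix-test-suite system-info

  # Run the same tests as this result file and compare interactively
  phoronix-test-suite benchmark 2101051-HA-POWER944C01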

Test Runs:

Result Identifier    Date Run           Test Duration
Run 1                January 03 2021    1 Day, 14 Hours, 53 Minutes
Run 2                January 05 2021    9 Hours, 39 Minutes
Run 3                January 05 2021    9 Hours, 40 Minutes
Run 4                January 05 2021    21 Minutes
Average                                 14 Hours, 38 Minutes


POWER9 44c 176t 2021 Benchmarks - OpenBenchmarking.org / Phoronix Test Suite

Processor: POWER9 @ 3.80GHz (44 Cores / 176 Threads)
Motherboard: PowerNV T2P9D01 REV 1.01
Memory: 64GB
Disk: 500GB Samsung SSD 860
Graphics: ASPEED
Monitor: VE228
Network: 2 x Broadcom NetXtreme BCM5719 PCIe
OS: Ubuntu 20.10
Kernel: 5.9.10-050910-generic (ppc64le)
Display Server: X Server
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Compiler configuration: --build=powerpc64le-linux-gnu --disable-multilib --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-xyKMTo/gcc-10-10.2.0/debian/tmp-nvptx/usr --enable-plugin --enable-secureplt --enable-shared --enable-targets=powerpcle-linux --enable-threads=posix --host=powerpc64le-linux-gnu --program-prefix=powerpc64le-linux-gnu- --target=powerpc64le-linux-gnu --with-cpu=power8 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-long-double-128 --with-target-system-zlib=auto --without-cuda-driver -v
- SMT (threads per core): 4
- Python 3.8.6
- Security: itlb_multihit: Not affected + l1tf: Mitigation of RFI Flush, L1D private per thread + mds: Not affected + meltdown: Mitigation of RFI Flush, L1D private per thread + spec_store_bypass: Mitigation of Kernel entry/exit barrier (eieio) + spectre_v1: Mitigation of __user pointer sanitization, ori31 speculation barrier enabled + spectre_v2: Mitigation of Indirect branch cache disabled, Software link stack flush + srbds: Not affected + tsx_async_abort: Not affected
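The hardware/software table above can be cross-checked on a similar PowerNV host with standard tools; this is only an illustrative sketch, not output captured from the test system:

  lscpu        # core/thread count and SMT mode (SMT=4 on this system)
  uname -r     # kernel version, here 5.9.10-050910-generic on ppc64le
  gcc -v       # prints the GCC 10.2.0 configure flags recorded in the system logs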

POWER9 44c 176t 2021 - condensed results overview (every benchmark with its Run 1 through Run 4 values). The same figures are presented per test in the sections that follow.

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
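As a hedged illustration of how Caffe benchmarking is typically driven (the model path is hypothetical and the exact arguments used by this test profile are an assumption), Caffe ships a time command that measures forward/backward passes:

  # Time forward/backward passes of a network definition on the CPU
  caffe time -model models/bvlc_googlenet/deploy.prototxt -iterations 100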

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 1000 (Milli-Seconds, Fewer Is Better). Run 1: 1968783 (SE +/- 73205.05, N = 3). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 1000 (Milli-Seconds, Fewer Is Better). Run 1: 654412 (SE +/- 16049.04, N = 9). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 2D Function Plotting, 1000 Times (Seconds, Fewer Is Better). Run 1: 2959.77, Run 2: 2942.76, Run 3: 2975.57 (SE +/- 2.49, N = 3; Run 1 Min/Avg/Max: 2955.86 / 2959.77 / 2964.4). 1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better). Run 1: 364395 (SE +/- 13183.90, N = 9). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by ALibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, Fewer Is Better). Run 1: 471.08, Run 2: 470.66, Run 3: 474.54 (SE +/- 0.27, N = 3; per-run Min/Max: 467.86/482.63, 467.4/665.69, 471.94/484.37; Run 1 Min/Avg/Max: 470.56 / 471.08 / 471.48). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better). Run 1: 116.43, Run 2: 114.13, Run 3: 116.29 (SE +/- 1.28, N = 3; per-run Min/Max: 113.57/129.96, 113.44/115.23, 114.3/119.01; Run 1 Min/Avg/Max: 114.23 / 116.43 / 118.66). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, Fewer Is Better). Run 1: 56.51, Run 2: 56.15, Run 3: 56.96 (SE +/- 0.09, N = 3; per-run Min/Max: 55.43/58.08, 55.46/57.21, 56.01/58.65; Run 1 Min/Avg/Max: 56.32 / 56.5 / 56.62). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, Fewer Is Better). Run 1: 654.28, Run 2: 654.39, Run 3: 663.97 (SE +/- 2.29, N = 3; per-run Min/Max: 647.79/673.25, 648.81/680.43, 661.55/673.82; Run 1 Min/Avg/Max: 650.3 / 654.28 / 658.21). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better). Run 1: 91.85, Run 2: 93.45, Run 3: 91.88 (SE +/- 0.42, N = 3; per-run Min/Max: 90.17/94.5, 92.15/95.92, 90.67/93.21; Run 1 Min/Avg/Max: 91.33 / 91.85 / 92.69). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC). Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 9.3.0 - Time To Compile (Seconds, Fewer Is Better). Run 1: 1431.91, Run 2: 1434.53, Run 3: 1434.12 (SE +/- 1.43, N = 3; Run 1 Min/Avg/Max: 1430.02 / 1431.91 / 1434.7).

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: EXPoSE (Seconds, Fewer Is Better). Run 1: 1409.87, Run 2: 1439.88, Run 3: 1437.63 (SE +/- 19.77, N = 3; Run 1 Min/Avg/Max: 1370.77 / 1409.87 / 1434.55).

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better). Run 1: 186059 (SE +/- 6785.75, N = 12). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms (ns/day, More Is Better). Run 1: 15.33, Run 2: 14.44, Run 3: 15.89 (SE +/- 0.28, N = 9; Run 1 Min/Avg/Max: 13.5 / 15.33 / 15.88). 1. (CXX) g++ options: -O3 -pthread -lm

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 10.0 - Time To Compile (Seconds, Fewer Is Better). Run 1: 536.17, Run 2: 568.01, Run 3: 562.21 (SE +/- 21.50, N = 9; Run 1 Min/Avg/Max: 421.94 / 536.17 / 610.83).

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing (Seconds, Fewer Is Better). Run 1: 1081.06, Run 2: 1078.21, Run 3: 1077.83 (SE +/- 1.96, N = 3; Run 1 Min/Avg/Max: 1079.09 / 1081.06 / 1084.97). 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.
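For orientation, a roughly equivalent standalone encode with vpxenc at speed 0 would look something like the following; the input filename is hypothetical and the exact flags used by the test profile may differ:

  # VP9 encode, good-quality deadline, speed 0 (slowest)
  vpxenc --codec=vp9 --good --cpu-used=0 -o output.webm input_1080p.y4m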

VP9 libvpx Encoding 1.8.2 - Speed: Speed 0 (Frames Per Second, More Is Better). Run 1: 0.58, Run 2: 0.58, Run 3: 0.58 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better). Run 1: 125007 (SE +/- 4397.47, N = 12). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0 Beta 5 - Compressor: blosclz (MB/s, More Is Better). Run 1: 2982.6, Run 2: 2968.5, Run 3: 2980.4, Run 4: 3013.5 (SE +/- 2.78, N = 3; Run 1 Min/Avg/Max: 2977.4 / 2982.6 / 2986.9). 1. (CXX) g++ options: -rdynamic

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: BLAS (Nodes Per Second, More Is Better). Run 1: 946, Run 2: 942, Run 3: 984 (SE +/- 11.95, N = 9; Run 1 Min/Avg/Max: 883 / 945.67 / 992). 1. (CXX) g++ options: -flto -pthread

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 2019-10-05 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better). Run 1: 13.76, Run 2: 13.42, Run 3: 13.78 (SE +/- 0.02, N = 3; Run 1 Min/Avg/Max: 13.73 / 13.76 / 13.8). 1. (CXX) g++ options: -O3 -lpthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 0 Two-Pass (Frames Per Second, More Is Better). Run 1: 0.02, Run 2: 0.02, Run 3: 0.02 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better). Run 1: 70400 (SE +/- 2232.82, N = 15). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, More Is Better). Run 1: 167.76, Run 2: 168.59, Run 3: 168.52 (SE +/- 0.26, N = 3; Run 1 Min/Avg/Max: 167.49 / 167.76 / 168.27).

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.
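For reference, a Cycles render of one of these benchmark scenes can be reproduced from the command line roughly as follows (the scene filename is hypothetical and the test profile may pass additional options):

  # Render frame 1 of a .blend scene in background mode
  blender -b bmw27_cpu.blend -f 1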

Blender 2.83.5 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better). Run 1: 424.42, Run 2: 426.94, Run 3: 427.26 (SE +/- 0.60, N = 3; Run 1 Min/Avg/Max: 423.26 / 424.42 / 425.27).

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a new scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better). Run 1: 15.05, Run 2: 19.29, Run 3: 19.28, Run 4: 19.24 (SE +/- 0.14, N = 10; Run 1 Min/Avg/Max: 14.76 / 15.05 / 16.29). 1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better). Run 1: 0.25, Run 2: 0.25, Run 3: 0.25 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender 2.83.5 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better). Run 1: 356.47, Run 2: 358.10, Run 3: 356.07 (SE +/- 0.50, N = 3; Run 1 Min/Avg/Max: 355.6 / 356.47 / 357.32).

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.
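The integrated benchmark this test wraps can also be run by hand with p7zip installed; a minimal sketch:

  # p7zip's built-in compression/decompression benchmark (reports MIPS)
  7z b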

7-Zip Compression 16.02 - Compress Speed Test (MIPS, More Is Better). Run 1: 168456, Run 2: 170626, Run 3: 170105 (SE +/- 1543.67, N = 12; Run 1 Min/Avg/Max: 152099 / 168456.42 / 171845). 1. (CXX) g++ options: -pipe -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 5 (Frames Per Second, More Is Better). Run 1: 0.179, Run 2: 0.179, Run 3: 0.179 (SE +/- 0.000, N = 3).

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 2019-10-05 - Video Input: Chimera 1080p (FPS, More Is Better). Run 1: 27.70, Run 2: 27.63, Run 3: 27.94 (SE +/- 0.16, N = 3; Run 1 Min/Avg/Max: 27.39 / 27.7 / 27.91). 1. (CXX) g++ options: -O3 -lpthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
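As a hedged example of a standalone Kvazaar run (the input filename and resolution are placeholders, and the test profile's exact options are an assumption):

  # HEVC encode of raw YUV input with the medium preset
  kvazaar -i Bosphorus_3840x2160.yuv --input-res 3840x2160 --preset medium -o out.hevc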

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better). Run 1: 1.87, Run 2: 1.87, Run 3: 1.87 (SE +/- 0.01, N = 3; Run 1 Min/Avg/Max: 1.86 / 1.87 / 1.88). 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, More Is Better). Run 1: 1.89, Run 2: 1.89, Run 3: 1.89 (SE +/- 0.00, N = 3; Run 1 Min/Avg/Max: 1.88 / 1.89 / 1.89). 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 6 (Frames Per Second, More Is Better). Run 1: 0.209, Run 2: 0.208, Run 3: 0.209 (SE +/- 0.000, N = 3).

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 4 Two-Pass (Frames Per Second, More Is Better). Run 1: 0.17, Run 2: 0.17, Run 3: 0.17 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 2019-10-05 - Video Input: Summer Nature 4K (FPS, More Is Better). Run 1: 13.56, Run 2: 13.42, Run 3: 13.72 (SE +/- 0.06, N = 3; Run 1 Min/Avg/Max: 13.46 / 13.56 / 13.66). 1. (CXX) g++ options: -O3 -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender 2.83.5 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better). Run 1: 264.29, Run 2: 264.31, Run 3: 264.49 (SE +/- 0.35, N = 3; Run 1 Min/Avg/Max: 263.74 / 264.29 / 264.94).

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_qda (Seconds, Fewer Is Better). Run 1: 45.63, Run 2: 46.12, Run 3: 43.96 (SE +/- 0.55, N = 6; Run 1 Min/Avg/Max: 44.74 / 45.63 / 48.33).

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, More Is Better). Run 1: 2.32, Run 2: 2.33, Run 3: 2.31 (SE +/- 0.00, N = 3; Run 1 Min/Avg/Max: 2.32 / 2.32 / 2.33). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 1 (Frames Per Second, More Is Better). Run 1: 0.084, Run 2: 0.084, Run 3: 0.084 (SE +/- 0.000, N = 3; Run 1 Min/Avg/Max: 0.08 / 0.08 / 0.09).

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender 2.83.5 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better). Run 1: 233.85, Run 2: 229.33, Run 3: 225.99 (SE +/- 1.21, N = 3; Run 1 Min/Avg/Max: 231.44 / 233.85 / 235.24).

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, Fewer Is Better). Run 1: 1.39, Run 2: 1.38, Run 3: 1.39 (SE +/- 0.00, N = 3; Run 1 Min/Avg/Max: 1.39 / 1.39 / 1.39).

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds, Fewer Is Better). Run 1: 228.37, Run 2: 228.69, Run 3: 231.40 (SE +/- 1.04, N = 3; Run 1 Min/Avg/Max: 226.45 / 228.37 / 230.04).

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 10 (Frames Per Second, More Is Better). Run 1: 0.401, Run 2: 0.402, Run 3: 0.401 (SE +/- 0.001, N = 3; Run 1 Min/Avg/Max: 0.4 / 0.4 / 0.4).

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.8.2 - Speed: Speed 5 (Frames Per Second, More Is Better). Run 1: 2.71, Run 2: 2.69, Run 3: 2.71 (SE +/- 0.01, N = 3; Run 1 Min/Avg/Max: 2.69 / 2.71 / 2.73). 1. (CXX) g++ options: -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better). Run 1: 4.01, Run 2: 4.05, Run 3: 4.06 (SE +/- 0.03, N = 3; Run 1 Min/Avg/Max: 3.98 / 4.01 / 4.06). 1. Nodejs v12.18.2

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Cartoon (Seconds, Fewer Is Better). Run 1: 205.82, Run 2: 208.29, Run 3: 207.47 (SE +/- 0.28, N = 3; Run 1 Min/Avg/Max: 205.27 / 205.81 / 206.21).

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute, More Is Better). Run 1: 1423, Run 2: 1479, Run 3: 1489 (SE +/- 32.53, N = 15; Run 1 Min/Avg/Max: 1245 / 1423.47 / 1535). 1. (CC) gcc options: -fopenmp -O2 -pthread -ljpeg -lxml2 -lz -lm -lpthread

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, Fewer Is Better). Run 1: 128.76, Run 2: 129.06, Run 3: 129.19 (SE +/- 0.14, N = 5; Run 1 Min/Avg/Max: 128.52 / 128.76 / 129.11). 1. (CXX) g++ options: -rdynamic

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
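The equivalent standalone compression run is straightforward to sketch (the ISO filename here is hypothetical):

  # Level-19 compression, keeping the input file; -T0 uses all available threads
  zstd -19 -T0 -k ubuntu.iso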

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better). Run 1: 77.6, Run 2: 79.5, Run 3: 79.6 (SE +/- 1.18, N = 15; Run 1 Min/Avg/Max: 72 / 77.6 / 87.5). 1. (CC) gcc options: -O3 -pthread -lz

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better). Run 1: 170.62, Run 2: 170.85, Run 3: 171.61 (SE +/- 0.17, N = 3; Run 1 Min/Avg/Max: 170.4 / 170.62 / 170.96). 1. (CC) gcc options: -O2 -ldl -lz -lpthread

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.
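The underlying workload can be approximated with Stockfish's built-in bench command (whether the test profile passes extra arguments for hash size or thread count is an assumption):

  # Built-in benchmark; reports total nodes and nodes/second at the end
  stockfish bench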

Stockfish 12 - Total Time (Nodes Per Second, More Is Better). Run 1: 31660332, Run 2: 31552898, Run 3: 29938062 (SE +/- 231184.54, N = 3; Run 1 Min/Avg/Max: 31219943 / 31660332.33 / 32002518). 1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -flto -flto=jobserver

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better). Run 1: 798, Run 2: 798, Run 3: 798.

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, Fewer Is Better). Run 1: 135.40, Run 2: 135.60, Run 3: 136.68 (SE +/- 0.32, N = 3; Run 1 Min/Avg/Max: 134.8 / 135.4 / 135.89).

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better). Run 1: 13161.3, Run 2: 13188.2, Run 3: 13157.2 (SE +/- 21.48, N = 3; per-run Min: 13042.7, 13093.8, 13071.4; Run 1 Min/Avg/Max: 13121.3 / 13161.27 / 13194.9). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better). Run 1: 13142.0, Run 2: 13133.1, Run 3: 13254.5 (SE +/- 13.23, N = 3; per-run Min: 13063.1, 13074.3, 13172.7; Run 1 Min/Avg/Max: 13119.5 / 13141.97 / 13165.3). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better). Run 1: 13101.1, Run 2: 13011.5, Run 3: 13147.2 (SE +/- 46.19, N = 3; per-run Min: 12981.5, 12949.8, 13080.1; Run 1 Min/Avg/Max: 13037.8 / 13101.07 / 13191). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, Fewer Is Better). Run 1: 129.90, Run 2: 129.73, Run 3: 129.66 (SE +/- 0.09, N = 3; Run 1 Min/Avg/Max: 129.81 / 129.9 / 130.09).

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender 2.83.5 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better). Run 1: 123.91, Run 2: 125.57, Run 3: 122.66 (SE +/- 0.50, N = 3; Run 1 Min/Avg/Max: 122.99 / 123.91 / 124.69).

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better). Run 1: 714, Run 2: 714, Run 3: 715 (SE +/- 0.88, N = 3; Run 1 Min/Avg/Max: 713 / 714.33 / 716).

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_linearridgeregression (Seconds, Fewer Is Better). Run 1: 3.75, Run 2: 3.85, Run 3: 3.75 (SE +/- 0.02, N = 3; Run 1 Min/Avg/Max: 3.73 / 3.75 / 3.78).

BYTE Unix Benchmark

This is a test of BYTE. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, More Is Better). Run 1: 27089795.2, Run 2: 27261721.5, Run 3: 27275695.0 (SE +/- 95417.01, N = 3; Run 1 Min/Avg/Max: 26989589.7 / 27089795.17 / 27280547.7).

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 8 Realtime (Frames Per Second, More Is Better). Run 1: 5.10, Run 2: 5.10, Run 3: 5.10 (SE +/- 0.00, N = 3; Run 1 Min/Avg/Max: 5.09 / 5.1 / 5.1). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

dcraw

This test times how long it takes to convert several high-resolution RAW NEF image files to PPM image format using dcraw. Learn more via the OpenBenchmarking.org test page.
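A single conversion of the kind timed here looks roughly like this (the NEF filename is hypothetical):

  # Decode a RAW file; dcraw writes a .ppm next to the input by default
  dcraw -v sample.NEF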

dcraw - RAW To PPM Image Conversion (Seconds, Fewer Is Better). Run 1: 113.31, Run 2: 113.31, Run 3: 113.59 (SE +/- 0.05, N = 3; Run 1 Min/Avg/Max: 113.24 / 113.31 / 113.39). 1. (CC) gcc options: -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better). Run 1: 109.32, Run 2: 109.23, Run 3: 108.70 (SE +/- 1.08, N = 3; per-run Min/Max: 78.72/166.47, 79.84/157.85, 78.6/162.6; Run 1 Min/Avg/Max: 107.41 / 109.32 / 111.13). 1. (CC) gcc options: -pthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Color Enhance (Seconds, Fewer Is Better). Run 1: 106.51, Run 2: 106.83, Run 3: 106.81 (SE +/- 0.09, N = 3; Run 1 Min/Avg/Max: 106.36 / 106.51 / 106.66).

GEGL - Operation: Wavelet Blur (Seconds, Fewer Is Better). Run 1: 106.16, Run 2: 106.64, Run 3: 106.70 (SE +/- 0.17, N = 3; Run 1 Min/Avg/Max: 105.85 / 106.16 / 106.43).

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better). Run 1: 5.65, Run 2: 5.66, Run 3: 5.66 (SE +/- 0.01, N = 3; Run 1 Min/Avg/Max: 5.63 / 5.65 / 5.67). 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better). Run 1: 104.45, Run 2: 107.33, Run 3: 108.53 (SE +/- 1.65, N = 3; Run 1 Min/Avg/Max: 101.58 / 104.45 / 107.3).

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.
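A comparable standalone 4K encode can be sketched as follows (the input filename is hypothetical and the preset actually used by the test profile is an assumption):

  # H.265/HEVC encode of a 4K Y4M clip with the medium preset
  x265 --input Bosphorus_3840x2160.y4m --preset medium --output bosphorus_4k.hevc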

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better). Run 1: 6.24, Run 2: 6.17, Run 3: 6.23 (SE +/- 0.02, N = 3; Run 1 Min/Avg/Max: 6.21 / 6.24 / 6.28). 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_ica (Seconds, Fewer Is Better). Run 1: 89.89, Run 2: 89.38, Run 3: 88.72 (SE +/- 0.06, N = 3; Run 1 Min/Avg/Max: 89.77 / 89.89 / 89.97).

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better). Run 1: 7566.40, Run 2: 7667.00, Run 3: 7693.28 (SE +/- 31.01, N = 3; per-run Min: 7480.61, 7622.49, 7650.53; Run 1 Min/Avg/Max: 7525.78 / 7566.4 / 7627.3). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better). Run 1: 7532.05, Run 2: 7670.66, Run 3: 7679.57 (SE +/- 13.00, N = 3; per-run Min: 7462.27, 7637.04, 7649.72; Run 1 Min/Avg/Max: 7514.45 / 7532.05 / 7557.42). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better). Run 1: 453, Run 2: 453, Run 3: 453.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better). Run 1: 7568.89, Run 2: 7676.79, Run 3: 7534.57 (SE +/- 57.46, N = 3; per-run Min: 7444.77, 7632.13, 7477.9; Run 1 Min/Avg/Max: 7509.07 / 7568.89 / 7683.78). 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
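A minimal Python sketch of the same operation, using only the standard-library lzma module rather than the xz binary; "sample.img" is a placeholder for the Ubuntu filesystem image the profile actually downloads.

import lzma
import time

with open("sample.img", "rb") as f:   # placeholder input file
    data = f.read()

start = time.perf_counter()
compressed = lzma.compress(data, preset=9)   # level 9, as in the test profile
elapsed = time.perf_counter() - start
print(f"{len(data)} -> {len(compressed)} bytes in {elapsed:.2f} s")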

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, Fewer Is Better)
Run 1: 27.96 | Run 2: 25.04 | Run 3: 27.65 | SE +/- 0.44, N = 15 | Min: 25.69 / Avg: 27.96 / Max: 31.77
1. (CC) gcc options: -pthread -fvisibility=hidden -O2

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup (Milliseconds, Fewer Is Better)
Run 1: 14.5 | Run 2: 14.9 | Run 3: 14.5 | SE +/- 0.03, N = 3 | Min: 14.4 / Avg: 14.47 / Max: 14.5

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

Git - Time To Complete Common Git Commands (Seconds, Fewer Is Better)
Run 1: 92.52 | Run 2: 92.17 | Run 3: 92.25 | SE +/- 0.19, N = 3 | Min: 92.22 / Avg: 92.52 / Max: 92.88
1. git version 2.27.0

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.4 - Test: DNN - Deep Neural Network (ms, Fewer Is Better)
Run 1: 25792 | Run 2: 25951 | Run 3: 26397 | SE +/- 291.88, N = 15 | Min: 23099 / Avg: 25791.73 / Max: 27022
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -mcpu=power8 -fvisibility=hidden -O3 -shared

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 2019-10-05 - Video Input: Summer Nature 1080p (FPS, More Is Better)
Run 1: 42.27 | Run 2: 42.66 | Run 3: 42.43 | SE +/- 0.06, N = 3 | Min: 42.21 / Avg: 42.27 / Max: 42.4
1. (CXX) g++ options: -O3 -lpthread

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 3D Elevated Function In Random Colors, 100 Times (Seconds, Fewer Is Better)
Run 1: 86.90 | Run 2: 87.39 | Run 3: 87.16 | SE +/- 0.11, N = 3 | Min: 86.7 / Avg: 86.9 / Max: 87.05
1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 7.4.2 - Time To Compile (Seconds, Fewer Is Better)
Run 1: 86.39 | Run 2: 86.05 | Run 3: 86.35 | SE +/- 0.14, N = 3 | Min: 86.19 / Avg: 86.39 / Max: 86.66

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better)
Run 1: 476 | Run 2: 475 | Run 3: 476 | SE +/- 0.33, N = 3 | Min: 475 / Avg: 475.67 / Max: 476

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, More Is Better)
Run 1: 7.05 | Run 2: 6.97 | Run 3: 7.04 | SE +/- 0.03, N = 3 | Min: 6.99 / Avg: 7.05 / Max: 7.09
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better)
Run 1: 7.07 | Run 2: 7.03 | Run 3: 7.05 | SE +/- 0.02, N = 3 | Min: 7.04 / Avg: 7.07 / Max: 7.1
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better)
Run 1: 83.60 | Run 2: 83.68 | Run 3: 83.36 | SE +/- 0.11, N = 3 | Min: 83.39 / Avg: 83.6 / Max: 83.76
1. RawTherapee, version 5.8, command line.

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
Run 1: 3600 | Run 2: 3605 | Run 3: 3597 | SE +/- 0.67, N = 3 | Min: 3599 / Avg: 3599.67 / Max: 3601

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
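As a rough analogue, the sketch below measures compression and decompression throughput with the third-party python-lz4 bindings (pip install lz4); the input path is a placeholder and the figures will not match the reference C benchmark the profile builds.

import time
import lz4.frame  # third-party bindings, pip install lz4

with open("sample.iso", "rb") as f:   # placeholder input file
    data = f.read()

t0 = time.perf_counter()
compressed = lz4.frame.compress(data, compression_level=3)
t1 = time.perf_counter()
lz4.frame.decompress(compressed)
t2 = time.perf_counter()

mb = len(data) / 1e6
print(f"compress:   {mb / (t1 - t0):.1f} MB/s")
print(f"decompress: {mb / (t2 - t1):.1f} MB/s")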

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, More Is Better)
Run 1: 9250.9 | Run 2: 9243.0 | Run 3: 9224.7 | SE +/- 0.94, N = 3 | Min: 9249.5 / Avg: 9250.93 / Max: 9252.7
1. (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, More Is Better)
Run 1: 38.06 | Run 2: 38.05 | Run 3: 38.03 | SE +/- 0.03, N = 3 | Min: 38.03 / Avg: 38.06 / Max: 38.11
1. (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
Run 1: 9240.5 | Run 2: 9299.5 | Run 3: 9183.2 | SE +/- 1.14, N = 3 | Min: 9238.7 / Avg: 9240.5 / Max: 9242.6
1. (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
Run 1: 39.71 | Run 2: 39.66 | Run 3: 39.66 | SE +/- 0.00, N = 3 | Min: 39.71 / Avg: 39.71 / Max: 39.72
1. (CC) gcc options: -O3

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: ETC1S (Seconds, Fewer Is Better)
Run 1: 78.75 | Run 2: 79.29 | Run 3: 78.98 | SE +/- 0.19, N = 3 | Min: 78.36 / Avg: 78.75 / Max: 78.95
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Rotate 90 Degrees (Seconds, Fewer Is Better)
Run 1: 77.41 | Run 2: 77.59 | Run 3: 77.83 | SE +/- 0.20, N = 3 | Min: 77.11 / Avg: 77.41 / Max: 77.78

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better)
Run 1: 414 | Run 2: 421 | Run 3: 396 | SE +/- 5.70, N = 4 | Min: 397 / Avg: 414 / Max: 421
1. (CC) gcc options: -fopenmp -O2 -pthread -ljpeg -lxml2 -lz -lm -lpthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Antialias (Seconds, Fewer Is Better)
Run 1: 70.75 | Run 2: 70.98 | Run 3: 70.84 | SE +/- 0.03, N = 3 | Min: 70.69 / Avg: 70.75 / Max: 70.79

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: resize (Seconds, Fewer Is Better)
Run 1: 20.62 | Run 2: 22.24 | Run 3: 21.84 | SE +/- 0.34, N = 15 | Min: 17.73 / Avg: 20.62 / Max: 22.29

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds, Fewer Is Better)
Run 1: 70.33 | Run 2: 69.87 | Run 3: 71.75 | SE +/- 1.09, N = 3 | Min: 68.14 / Avg: 70.33 / Max: 71.43

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better)
Run 1: 384 | Run 2: 384 | Run 3: 383

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p (FPS, More Is Better)
Run 1: 173.02 (MIN: 130.46 / MAX: 248.83) | Run 2: 172.06 (MIN: 128.85 / MAX: 241.06) | Run 3: 174.53 (MIN: 131.48 / MAX: 245.71) | SE +/- 0.51, N = 3 | Min: 172.02 / Avg: 173.02 / Max: 173.7
1. (CC) gcc options: -pthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
Run 1: 67.86 | Run 2: 69.43 | Run 3: 69.14 | SE +/- 0.56, N = 3 | Min: 67.02 / Avg: 67.86 / Max: 68.91

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
Run 1: 54.41 | Run 2: 54.92 | Run 3: 54.79 | SE +/- 0.23, N = 4 | Min: 53.75 / Avg: 54.41 / Max: 54.81
1. (CC) gcc options: -O2 -std=c99

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
Run 1: 18.82 | Run 2: 18.76 | Run 3: 18.91 | SE +/- 0.04, N = 3 | Min: 18.75 / Avg: 18.82 / Max: 18.89
1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

AOBench

AOBench is a lightweight ambient occlusion renderer, written in C. The test profile is using a size of 2048 x 2048. Learn more via the OpenBenchmarking.org test page.

AOBench - Size: 2048 x 2048 - Total Time (Seconds, Fewer Is Better)
Run 1: 63.88 | Run 2: 62.83 | Run 3: 62.59 | SE +/- 0.88, N = 3 | Min: 62.73 / Avg: 63.88 / Max: 65.62
1. (CC) gcc options: -lm -O3

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better)
Run 1: 62.11 | Run 2: 62.10 | Run 3: 62.07 | SE +/- 0.02, N = 3 | Min: 62.07 / Avg: 62.11 / Max: 62.15
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better)
Run 1: 314 | Run 2: 314 | Run 3: 314

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)
Run 1: 327 | Run 2: 326 | Run 3: 328

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)
Run 1: 43.18 | Run 2: 43.20 | Run 3: 43.21 | SE +/- 0.01, N = 5 | Min: 43.15 / Avg: 43.18 / Max: 43.21
1. (CXX) g++ options: -fvisibility=hidden -logg -lm

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
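The profile itself benchmarks the C++ simdjson library; the sketch below only illustrates how a GB/s-style throughput figure is derived, using Python's built-in json module and a placeholder input document, so the absolute numbers are not comparable.

import json
import time

with open("twitter.json", "rb") as f:   # placeholder JSON document
    raw = f.read()

iterations = 100
start = time.perf_counter()
for _ in range(iterations):
    json.loads(raw)              # parse the same document repeatedly
elapsed = time.perf_counter() - start

gigabytes = len(raw) * iterations / 1e9
print(f"{gigabytes / elapsed:.2f} GB/s")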

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, More Is Better)
Run 1: 1.06 | Run 2: 1.06 | Run 3: 1.06 | SE +/- 0.00, N = 3 | Min: 1.06 / Avg: 1.06 / Max: 1.06
1. (CXX) g++ options: -O3 -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute, More Is Better)
Run 1: 534 | Run 2: 534 | Run 3: 535 | SE +/- 1.45, N = 3 | Min: 531 / Avg: 533.67 / Max: 536
1. (CC) gcc options: -fopenmp -O2 -pthread -ljpeg -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, More Is Better)
Run 1: 417 | Run 2: 417 | Run 3: 414 | SE +/- 0.58, N = 3 | Min: 416 / Avg: 417 / Max: 418
1. (CC) gcc options: -fopenmp -O2 -pthread -ljpeg -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, More Is Better)
Run 1: 638 | Run 2: 640 | Run 3: 633 | SE +/- 2.89, N = 3 | Min: 633 / Avg: 638 / Max: 643
1. (CC) gcc options: -fopenmp -O2 -pthread -ljpeg -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute, More Is Better)
Run 1: 1203 | Run 2: 1205 | Run 3: 1201 | SE +/- 5.70, N = 3 | Min: 1192 / Avg: 1203.33 / Max: 1210
1. (CC) gcc options: -fopenmp -O2 -pthread -ljpeg -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute, More Is Better)
Run 1: 1821 | Run 2: 1818 | Run 3: 1820
1. (CC) gcc options: -fopenmp -O2 -pthread -ljpeg -lxml2 -lz -lm -lpthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Tile Glass (Seconds, Fewer Is Better)
Run 1: 59.86 | Run 2: 59.87 | Run 3: 60.17 | SE +/- 0.04, N = 3 | Min: 59.8 / Avg: 59.86 / Max: 59.93

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
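Since the profile is a thin wrapper around the stock cryptsetup benchmark command, a minimal reproduction is simply invoking that command and reading the table it prints; cryptsetup must be installed, and its figures are in-memory cipher estimates rather than disk throughput.

import subprocess

result = subprocess.run(
    ["cryptsetup", "benchmark"],       # same command the test profile drives
    capture_output=True, text=True, check=True,
)
print(result.stdout)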

Cryptsetup - Twofish-XTS 512b Encryption (MiB/s, More Is Better)
Run 1: 133.9

Cryptsetup - Serpent-XTS 512b Encryption (MiB/s, More Is Better)
Run 1: 62.7

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, Fewer Is Better)
Run 1: 40.18 | Run 2: 40.18 | Run 3: 40.28 | SE +/- 0.02, N = 5 | Min: 40.14 / Avg: 40.18 / Max: 40.26
1. (CXX) g++ options: -O2 -fvisibility=hidden -lm

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, More Is Better)
Run 1: 0.48 | Run 2: 0.48 | Run 3: 0.48 | SE +/- 0.00, N = 3 | Min: 0.48 / Avg: 0.48 / Max: 0.48
1. (CXX) g++ options: -O3 -pthread

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, Fewer Is Better)
Run 1: 55.54 | Run 2: 55.27 | Run 3: 55.67 | SE +/- 0.15, N = 3 | Min: 55.24 / Avg: 55.54 / Max: 55.75

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, Fewer Is Better)
Run 1: 44.76 | Run 2: 44.72 | Run 3: 44.68 | SE +/- 0.03, N = 3 | Min: 44.71 / Avg: 44.76 / Max: 44.79

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Reflect (Seconds, Fewer Is Better)
Run 1: 51.94 | Run 2: 52.04 | Run 3: 52.11 | SE +/- 0.02, N = 3 | Min: 51.9 / Avg: 51.94 / Max: 51.97

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: unsharp-mask (Seconds, Fewer Is Better)
Run 1: 51.21 | Run 2: 51.90 | Run 3: 51.23 | SE +/- 0.73, N = 3 | Min: 49.98 / Avg: 51.21 / Max: 52.5

Gzip Compression

This test measures the time needed to archive/compress two copies of the Linux 4.13 kernel source tree using Gzip compression. Learn more via the OpenBenchmarking.org test page.
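A simple approximation with the standard library is shown below: archive a source tree into a .tar.gz and time it. The directory name is a placeholder, and unlike the profile this sketch packs only a single copy of the tree.

import tarfile
import time

start = time.perf_counter()
with tarfile.open("linux-src.tar.gz", "w:gz") as tar:   # gzip-compressed tarball
    tar.add("linux-4.13")                               # placeholder source tree
print(f"archived in {time.perf_counter() - start:.2f} s")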

Gzip Compression - Linux Source Tree Archiving To .tar.gz (Seconds, Fewer Is Better)
Run 1: 49.93 | Run 2: 50.47 | Run 3: 50.23 | SE +/- 0.21, N = 3 | Min: 49.7 / Avg: 49.93 / Max: 50.36

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, More Is Better)
Run 1: 1.19 | Run 2: 1.19 | Run 3: 1.19 | SE +/- 0.00, N = 3 | Min: 1.19 / Avg: 1.19 / Max: 1.2
1. (CXX) g++ options: -O3 -pthread

simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, More Is Better)
Run 1: 1.23 | Run 2: 1.24 | Run 3: 1.24 | SE +/- 0.00, N = 3 | Min: 1.23 / Avg: 1.23 / Max: 1.24
1. (CXX) g++ options: -O3 -pthread

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Run 1: 12.90 | Run 2: 12.70 | Run 3: 12.32 | SE +/- 0.13, N = 3 | Min: 12.69 / Avg: 12.9 / Max: 13.13
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

Tesseract OCR 4.1.1 - Time To OCR 7 Images (Seconds, Fewer Is Better)
Run 1: 46.53 | Run 2: 46.64 | Run 3: 46.33 | SE +/- 0.43, N = 3 | Min: 46.04 / Avg: 46.53 / Max: 47.39

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: Plotting Isosurface Of A 3D Volume, 1000 Times (Seconds, Fewer Is Better)
Run 1: 45.12 | Run 2: 45.13 | Run 3: 45.10 | SE +/- 0.02, N = 3 | Min: 45.09 / Avg: 45.12 / Max: 45.14
1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better)
Run 1: 44.89 | Run 2: 44.72 | Run 3: 44.97 | SE +/- 0.04, N = 3 | Min: 44.83 / Avg: 44.89 / Max: 44.95

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
Run 1: 637.13 (MIN: 632.77 / MAX: 642.36) | Run 2: 637.09 (MIN: 633.86 / MAX: 640.2) | Run 3: 637.08 (MIN: 634.04 / MAX: 639.95) | SE +/- 0.25, N = 3 | Min: 636.69 / Avg: 637.13 / Max: 637.57
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6 - Total Time (Seconds, Fewer Is Better)
Run 1: 43.43 | Run 2: 43.24 | Run 3: 42.97 | SE +/- 0.09, N = 3 | Min: 43.3 / Avg: 43.43 / Max: 43.59
1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
Run 1: 611.71 (MIN: 611.53 / MAX: 611.9) | Run 2: 612.03 (MIN: 611.9 / MAX: 612.31) | Run 3: 611.77 (MIN: 611.64 / MAX: 613.27) | SE +/- 0.04, N = 3 | Min: 611.63 / Avg: 611.71 / Max: 611.75
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
Run 1: 14.04 | Run 2: 14.04 | Run 3: 14.00 | SE +/- 0.01, N = 3 | Min: 14.03 / Avg: 14.04 / Max: 14.05
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s, More Is Better)
Run 1: 10342.4 | Run 2: 10337.9 | Run 3: 10333.6 | SE +/- 47.39, N = 3 | Min: 10294.3 / Avg: 10342.43 / Max: 10437.2
1. (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, More Is Better)
Run 1: 9087.53 | Run 2: 9061.86 | Run 3: 9062.33 | SE +/- 39.26, N = 3 | Min: 9047.57 / Avg: 9087.53 / Max: 9166.05
1. (CC) gcc options: -O3

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
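The headline figure is simply serial time divided by OpenMP time, so on a 44-core / 176-thread machine a static-schedule speedup of roughly 6 points at substantial threading overhead for this loop. A trivial illustration with hypothetical timings:

# Speedup as CLOMP reports it: serial time over threaded time (numbers are made up).
t_serial = 11.6   # hypothetical single-threaded seconds
t_omp = 2.0       # hypothetical OpenMP static-schedule seconds
print(f"Static OMP speedup: {t_serial / t_omp:.1f}x")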

CLOMP 1.2 - Static OMP Speedup (Speedup, More Is Better)
Run 1: 5.8 | Run 2: 6.0 | Run 3: 6.0 | SE +/- 0.06, N = 3 | Min: 5.7 / Avg: 5.8 / Max: 5.9
1. (CC) gcc options: -fopenmp -O3 -lm

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, More Is Better)
Run 1: 53.85 | Run 2: 36.66 | Run 3: 35.53 | SE +/- 1.62, N = 15 | Min: 33.46 / Avg: 53.85 / Max: 60.77
1. (CC) gcc options: -ldl -lm -lpthread -O3 -ffast-math -maltivec -mabi=altivec -mvsx -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
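An approximate level-3 throughput check is sketched below with the third-party zstandard bindings (pip install zstandard); the input path is a placeholder, and the reference C tool the profile builds will report different figures.

import time
import zstandard  # third-party bindings, pip install zstandard

with open("sample.iso", "rb") as f:   # placeholder input file
    data = f.read()

cctx = zstandard.ZstdCompressor(level=3)   # compression level 3, as in the profile
start = time.perf_counter()
compressed = cctx.compress(data)
elapsed = time.perf_counter() - start
print(f"{len(data) / 1e6 / elapsed:.0f} MB/s, ratio {len(data) / len(compressed):.2f}")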

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better)
Run 1: 4022.5 | Run 2: 4657.7 | Run 3: 4228.7 | SE +/- 36.98, N = 3 | Min: 3958.3 / Avg: 4022.53 / Max: 4086.4
1. (CC) gcc options: -O3 -pthread -lz

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: auto-levels (Seconds, Fewer Is Better)
Run 1: 39.53 | Run 2: 39.38 | Run 3: 40.54 | SE +/- 0.48, N = 3 | Min: 39 / Avg: 39.53 / Max: 40.49

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
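A bare-bones way to approximate the SADD requests-per-second figure from Python is shown below with redis-py against a local server; the actual profile drives redis-benchmark with many concurrent connections, so this single-connection loop will be far slower.

import time
import redis  # third-party client, pip install redis

r = redis.Redis(host="localhost", port=6379)   # assumes a local Redis server is running
n = 100_000
start = time.perf_counter()
for i in range(n):
    r.sadd("bench:set", i)                     # one SADD per round trip
elapsed = time.perf_counter() - start
print(f"{n / elapsed:.0f} requests/sec")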

Redis 6.0.9 - Test: SADD (Requests Per Second, More Is Better)
Run 1: 682251.35 | Run 2: 689655.12 | Run 3: 693377.69 | SE +/- 6623.24, N = 15 | Min: 628555.62 / Avg: 682251.35 / Max: 703954.94
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, Fewer Is Better)
Run 1: 150 | Run 2: 150 | Run 3: 150

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better)
Run 1: 38.31 | Run 2: 38.11 | Run 3: 38.08 | SE +/- 0.12, N = 3 | Min: 38.08 / Avg: 38.31 / Max: 38.47
1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 10.3.1+dfsg - Processing 60 Page PDF Document (Seconds, Fewer Is Better)
Run 1: 37.70 | Run 2: 38.99 | Run 3: 38.53 | SE +/- 0.47, N = 3 | Min: 36.76 / Avg: 37.7 / Max: 38.22

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code of modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the bundled computational fluid dynamics demos that are bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, Fewer Is Better)
Run 1: 38.07 | Run 2: 38.09 | Run 3: 38.15 | SE +/- 0.05, N = 3 | Min: 37.99 / Avg: 38.07 / Max: 38.16

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
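The equivalent operation from Python with the standard library, assuming the tarball is present in the working directory:

import tarfile
import time

start = time.perf_counter()
with tarfile.open("firefox-84.0.source.tar.xz", "r:xz") as tar:
    tar.extractall("firefox-src")   # extraction target directory is a placeholder
print(f"extracted in {time.perf_counter() - start:.2f} s")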

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds, Fewer Is Better)
Run 1: 30.91 | Run 2: 30.01 | Run 3: 30.82 | SE +/- 0.31, N = 4 | Min: 30.23 / Avg: 30.91 / Max: 31.65

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 4K (FPS, More Is Better)
Run 1: 89.15 (MIN: 32.47 / MAX: 102.49) | Run 2: 89.47 (MIN: 32.82 / MAX: 101.78) | Run 3: 89.69 (MIN: 33.55 / MAX: 102.51) | SE +/- 0.17, N = 3 | Min: 88.83 / Avg: 89.15 / Max: 89.42
1. (CC) gcc options: -pthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Decryption (MiB/s, More Is Better)
Run 1: 132.7 | Run 2: 132.9 | Run 3: 132.9 | SE +/- 0.23, N = 3 | Min: 132.2 / Avg: 132.67 / Max: 132.9

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better)
Run 1: 65.1 | Run 2: 65.2 | Run 3: 65.1 | SE +/- 0.06, N = 3 | Min: 65 / Avg: 65.1 / Max: 65.2

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy (Seconds, Fewer Is Better)
Run 1: 34.69 | Run 2: 34.54 | Run 3: 34.06 | SE +/- 0.18, N = 3 | Min: 34.34 / Avg: 34.69 / Max: 34.88

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
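The PBKDF2 figures reported a little further below are iterations per second; the sketch here measures the same kind of quantity with Python's hashlib instead of cryptsetup's internal benchmark, using an arbitrary passphrase, salt, and iteration count.

import hashlib
import time

iterations = 500_000
start = time.perf_counter()
hashlib.pbkdf2_hmac("sha512", b"passphrase", b"salt1234", iterations)   # placeholder inputs
elapsed = time.perf_counter() - start
print(f"{iterations / elapsed:.0f} PBKDF2-sha512 iterations/sec")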

Cryptsetup - Serpent-XTS 256b Decryption (MiB/s, More Is Better)
Run 1: 75.1 | Run 2: 75.1 | Run 3: 75.1 | SE +/- 0.00, N = 3 | Min: 75.1 / Avg: 75.1 / Max: 75.1

Cryptsetup - Serpent-XTS 256b Encryption (MiB/s, More Is Better)
Run 1: 62.2 | Run 2: 62.7 | Run 3: 62.7 | SE +/- 0.53, N = 3 | Min: 61.1 / Avg: 62.17 / Max: 62.7

Cryptsetup - PBKDF2-whirlpool (Iterations Per Second, More Is Better)
Run 1: 343870 | Run 2: 344020 | Run 3: 344020 | SE +/- 150.00, N = 3 | Min: 343570 / Avg: 343870 / Max: 344020

Cryptsetup - PBKDF2-sha512 (Iterations Per Second, More Is Better)
Run 1: 1091132 | Run 2: 1091130 | Run 3: 1092266 | SE +/- 1134.33, N = 3 | Min: 1088863 / Avg: 1091131.67 / Max: 1092266

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better)
Run 1: 51.9 | Run 2: 51.6 | Run 3: 51.8 | SE +/- 0.00, N = 3 | Min: 51.9 / Avg: 51.9 / Max: 51.9

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better)
Run 1: 18.41 | Run 2: 18.27 | Run 3: 18.37 | SE +/- 0.09, N = 3 | Min: 18.23 / Avg: 18.41 / Max: 18.55
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.

librsvg - Operation: SVG Files To PNG (Seconds, Fewer Is Better)
Run 1: 32.53 | Run 2: 31.95 | Run 3: 32.50 | SE +/- 0.26, N = 3 | Min: 32.03 / Avg: 32.53 / Max: 32.87
1. rsvg-convert version 2.50.1

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time (Seconds, Fewer Is Better)
Run 1: 25.97 | Run 2: 26.05 | Run 3: 25.29 | SE +/- 0.07, N = 3 | Min: 25.83 / Avg: 25.97 / Max: 26.04
1. (CXX) g++ options: -pipe -O3 -ffast-math -pthread -R/usr/lib -lSDL -lX11 -lIlmImf -lIlmImf-2_5 -lImath-2_5 -lHalf-2_5 -lIex-2_5 -lIexMath-2_5 -lIlmThread-2_5 -lIlmThread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6 - WAV To APE (Seconds, Fewer Is Better)
Run 1: 21.67 | Run 2: 21.80 | Run 3: 21.83 | SE +/- 0.05, N = 5 | Min: 21.51 / Avg: 21.67 / Max: 21.85
1. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better)
Run 1: 29.97 | Run 2: 29.93 | Run 3: 29.99 | SE +/- 0.02, N = 3 | Min: 29.93 / Avg: 29.97 / Max: 29.99
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Timed ImageMagick Compilation

This test times how long it takes to build ImageMagick. Learn more via the OpenBenchmarking.org test page.

Timed ImageMagick Compilation 6.9.0 - Time To Compile (Seconds, Fewer Is Better)
Run 1: 29.04 | Run 2: 28.95 | Run 3: 29.10 | SE +/- 0.01, N = 3 | Min: 29.01 / Avg: 29.04 / Max: 29.06

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 256b Decryption (MiB/s, More Is Better)
Run 1: 133.0 | Run 2: 132.9 | Run 3: 132.9

Cryptsetup - Twofish-XTS 256b Encryption (MiB/s, More Is Better)
Run 1: 132.6 | Run 2: 133.8 | Run 3: 133.8 | SE +/- 1.27, N = 3 | Min: 130.1 / Avg: 132.63 / Max: 133.9

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: rotate (Seconds, Fewer Is Better)
Run 1: 28.24 | Run 2: 28.27 | Run 3: 28.33 | SE +/- 0.04, N = 3 | Min: 28.19 / Avg: 28.24 / Max: 28.31

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds, Fewer Is Better)
Run 1: 26.55 | Run 2: 26.55 | Run 3: 26.57 | SE +/- 0.10, N = 3 | Min: 26.4 / Avg: 26.55 / Max: 26.73
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Decryption (MiB/s, More Is Better)
Run 1: 2100.1 | Run 2: 2098.4 | Run 3: 2089.5 | SE +/- 1.32, N = 3 | Min: 2097.6 / Avg: 2100.07 / Max: 2102.1

Cryptsetup - AES-XTS 512b Encryption (MiB/s, More Is Better)
Run 1: 2098.1 | Run 2: 2096.9 | Run 3: 2087.8 | SE +/- 0.60, N = 3 | Min: 2096.9 / Avg: 2098.1 / Max: 2098.7

Cryptsetup - AES-XTS 256b Decryption (MiB/s, More Is Better)
Run 1: 2496.2 | Run 2: 2494.0 | Run 3: 2486.0 | SE +/- 2.11, N = 3 | Min: 2492.6 / Avg: 2496.23 / Max: 2499.9

Cryptsetup - AES-XTS 256b Encryption (MiB/s, More Is Better)
Run 1: 2473.0 | Run 2: 2488.7 | Run 3: 2483.2 | SE +/- 21.68, N = 3 | Min: 2429.7 / Avg: 2472.97 / Max: 2497

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better)
Run 1: 1.34 | Run 2: 1.33 | Run 3: 1.33 | SE +/- 0.00, N = 3 | Min: 1.33 / Avg: 1.34 / Max: 1.34

TTSIOD 3D Renderer

A portable GPL 3D software renderer that supports OpenMP and Intel Threading Building Blocks with many different rendering modes. This version does not use OpenGL but is entirely CPU/software based. Learn more via the OpenBenchmarking.org test page.

TTSIOD 3D Renderer 2.3b - Phong Rendering With Soft-Shadow Mapping (FPS, More Is Better)
Run 1: 782.07 | Run 2: 776.94 | Run 3: 777.73 | SE +/- 1.17, N = 3 | Min: 780.58 / Avg: 782.07 / Max: 784.38
1. (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -lSDL -fopenmp -fwhole-program -lstdc++

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 42.83 (MIN: 40.13) | Run 2: 42.82 (MIN: 40.08) | Run 3: 42.94 (MIN: 40.11) | SE +/- 0.01, N = 3 | Min: 42.8 / Avg: 42.83 / Max: 42.84
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 103.62 (MIN: 98.18) | Run 2: 103.39 (MIN: 98.31) | Run 3: 103.75 (MIN: 99.91) | SE +/- 0.31, N = 3 | Min: 103.02 / Avg: 103.62 / Max: 104.02
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - PBKDF2-sha512 (Iterations Per Second, More Is Better)
Run 2: 1092266 | Run 3: 1092266

Cryptsetup - PBKDF2-whirlpool (Iterations Per Second, More Is Better)
Run 2: 343570 | Run 3: 343570

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, Fewer Is Better)
Run 1: 18.53 | Run 2: 18.64 | Run 3: 18.73 | SE +/- 0.02, N = 3 | Min: 18.49 / Avg: 18.53 / Max: 18.57
1. (CC) gcc options: -lm -lpthread -O3

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 (Seconds, Fewer Is Better)
Run 1: 18.54 | Run 2: 18.44 | Run 3: 18.54 | SE +/- 0.09, N = 3 | Min: 18.45 / Avg: 18.54 / Max: 18.72
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
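As a loose illustration of what a detector such as the Windowed Gaussian below is doing, this NumPy sketch scores each point by its deviation from a rolling window's mean and standard deviation; the real detector implementations live in the NAB repository and differ in their details.

import numpy as np

# Synthetic series with one injected spike; the real benchmark uses NAB's labeled timeseries.
values = np.random.default_rng(0).normal(size=500)
values[400] += 8.0

window = 50
scores = np.zeros_like(values)
for i in range(window, len(values)):
    w = values[i - window:i]
    scores[i] = abs(values[i] - w.mean()) / (w.std() + 1e-9)   # z-score against the window
print("max anomaly score at index", int(scores.argmax()))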

Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (Seconds, Fewer Is Better)
Run 1: 18.09 | Run 2: 18.17 | Run 3: 17.90 | SE +/- 0.08, N = 3 | Min: 17.94 / Avg: 18.09 / Max: 18.22

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
Run 1: 17.66 | Run 2: 17.66 | Run 3: 17.66 | SE +/- 0.00, N = 3 | Min: 17.66 / Avg: 17.66 / Max: 17.67
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Scikit-Learn

Scikit-learn is a Python module for machine learning. Learn more via the OpenBenchmarking.org test page.
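A minimal sketch of timing a scikit-learn workload is shown below; the test profile runs scikit-learn's own benchmark scripts, so the synthetic data and model here are only stand-ins.

import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic classification problem as a stand-in for the profile's workload
X, y = make_classification(n_samples=20_000, n_features=50, random_state=0)

start = time.perf_counter()
LogisticRegression(max_iter=200).fit(X, y)
print(f"fit in {time.perf_counter() - start:.2f} s")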

Scikit-Learn 0.22.1 (Seconds, Fewer Is Better)
Run 1: 17.53 | Run 2: 17.50 | Run 3: 17.50 | SE +/- 0.01, N = 3 | Min: 17.52 / Avg: 17.53 / Max: 17.54

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Crop (Seconds, Fewer Is Better)
Run 1: 15.93 | Run 2: 16.24 | Run 3: 16.25 | SE +/- 0.12, N = 3 | Min: 15.81 / Avg: 15.93 / Max: 16.17

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.

System GZIP Decompression (Seconds, Fewer Is Better)
Run 1: 4.935 | Run 2: 5.580 | Run 3: 5.587 | SE +/- 0.051, N = 14 | Min: 4.88 / Avg: 4.93 / Max: 5.59

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 40.83 (MIN: 36.3) | Run 2: 40.65 (MIN: 36.16) | Run 3: 41.08 (MIN: 36.91) | SE +/- 0.04, N = 3 | Min: 40.75 / Avg: 40.83 / Max: 40.89
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 29.77 | Run 2: 30.26 | Run 3: 29.82
Reported MIN per run: 26.11 / 27.23 / 26.76
Run 1 trials: SE +/- 0.15, N = 3; Min: 29.57 / Avg: 29.77 / Max: 30.07
(CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, Fewer Is Better)
Run 1: 14.40 | Run 2: 14.40 | Run 3: 14.40
Run 1 trials: SE +/- 0.00, N = 3; Min: 14.39 / Avg: 14.4 / Max: 14.4
(CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm
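A minimal sketch of the operation being timed, invoking the lame command-line encoder on a WAV file; the filenames are placeholders, and the test's actual input file and encoder options may differ.

    import subprocess, time

    start = time.perf_counter()
    subprocess.run(["lame", "input.wav", "output.mp3"], check=True)  # default settings
    print(f"WAV to MP3 encode time: {time.perf_counter() - start:.2f} s")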

Kvazaar

This is a test of Kvazaar, a CPU-based H.265 video encoder written in the C programming language and optimized in assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
Run 1: 43.25 | Run 2: 42.96 | Run 3: 43.18
Run 1 trials: SE +/- 0.05, N = 3; Min: 43.17 / Avg: 43.25 / Max: 43.34
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Scale (Seconds, Fewer Is Better)
Run 1: 12.64 | Run 2: 12.54 | Run 3: 12.52
Run 1 trials: SE +/- 0.01, N = 3; Min: 12.63 / Avg: 12.64 / Max: 12.65

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 1080p (FPS, More Is Better)
Run 1: 261.55 | Run 2: 262.22 | Run 3: 253.46
Reported MIN / MAX per run: 91.05 / 314.59, 94.27 / 310.07, 95.85 / 301.92
Run 1 trials: SE +/- 3.54, N = 3; Min: 254.8 / Avg: 261.55 / Max: 266.77
(CC) gcc options: -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 9.16733 | Run 2: 9.07736 | Run 3: 9.16306
Reported MIN per run: 8.32 / 8.26 / 8.36
Run 1 trials: SE +/- 0.00387, N = 3; Min: 9.16 / Avg: 9.17 / Max: 9.17
(CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 4.77018 | Run 2: 4.77039 | Run 3: 4.76997
Reported MIN per run: 4.24 / 4.28 / 4.25
Run 1 trials: SE +/- 0.00269, N = 3; Min: 4.77 / Avg: 4.77 / Max: 4.77
(CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread
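These harnesses exercise oneDNN primitives through its benchdnn driver. As a loose, library-agnostic illustration of what timing a batched matrix multiply looks like, here is a NumPy sketch; it is not benchdnn, and the shapes and iteration count are arbitrary assumptions for the example.

    import time
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((64, 256, 256), dtype=np.float32)  # arbitrary batch of matrices
    b = rng.standard_normal((64, 256, 256), dtype=np.float32)

    start = time.perf_counter()
    for _ in range(100):
        np.matmul(a, b)
    elapsed_ms = (time.perf_counter() - start) / 100 * 1e3
    print(f"average batched matmul time: {elapsed_ms:.2f} ms")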

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: LPOP (Requests Per Second, More Is Better)
Run 1: 519606.57 | Run 2: 500314.16 | Run 3: 521050.00
Run 1 trials: SE +/- 2892.27, N = 3; Min: 513874.62 / Avg: 519606.57 / Max: 523146.47
(CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 6.0.9 - Test: LPUSH (Requests Per Second, More Is Better)
Run 1: 509296.11 | Run 2: 501002.00 | Run 3: 510285.72
Run 1 trials: SE +/- 4078.83, N = 3; Min: 503448.41 / Avg: 509296.11 / Max: 517145.81
(CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 6.0.9 - Test: SET (Requests Per Second, More Is Better)
Run 1: 584774.65 | Run 2: 562275.44 | Run 3: 531519.69
Run 1 trials: SE +/- 3402.58, N = 3; Min: 579840 / Avg: 584774.65 / Max: 591300.25
(CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
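The LPOP, LPUSH, SET, and GET (below) results map directly onto basic Redis commands. As an illustration of the operations rather than of the benchmark's own load generator, here is a minimal sketch using the redis-py client against a server assumed to be running on localhost:6379.

    import time
    import redis  # redis-py client, assumed installed

    r = redis.Redis(host="localhost", port=6379)
    n = 100_000

    start = time.perf_counter()
    pipe = r.pipeline(transaction=False)
    for i in range(n):
        pipe.set(f"key:{i}", "value")
        pipe.lpush("mylist", i)
    pipe.execute()
    elapsed = time.perf_counter() - start
    print(f"{2 * n / elapsed:,.0f} pipelined requests/sec (SET + LPUSH)")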

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0 (Seconds, Fewer Is Better)
Run 1: 8.337 | Run 2: 8.496 | Run 3: 8.496
Run 1 trials: SE +/- 0.046, N = 5; Min: 8.26 / Avg: 8.34 / Max: 8.52

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: GET (Requests Per Second, More Is Better)
Run 1: 800110.61 | Run 2: 811922.06 | Run 3: 791993.69
Run 1 trials: SE +/- 4540.22, N = 3; Min: 795544.94 / Avg: 800110.61 / Max: 809191
(CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better)
Run 1: 10.96 | Run 2: 10.96 | Run 3: 10.96
Run 1 trials: SE +/- 0.00, N = 3; Min: 10.96 / Avg: 10.96 / Max: 10.97
(CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 0 (Seconds, Fewer Is Better)
Run 1: 11.27 | Run 2: 11.26 | Run 3: 11.24
Run 1 trials: SE +/- 0.01, N = 3; Min: 11.26 / Avg: 11.27 / Max: 11.28
(CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, Fewer Is Better)
Run 1: 11.27 | Run 2: 11.36 | Run 3: 11.21
Run 1 trials: SE +/- 0.03, N = 3; Min: 11.21 / Avg: 11.27 / Max: 11.3
(CC) gcc options: -std=c99 -O3 -lm -lpthread
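A minimal sketch of running MAFFT on a FASTA file and timing the alignment; the filenames are placeholders for the test's pyruvate decarboxylase / LSU RNA inputs.

    import subprocess, time

    start = time.perf_counter()
    with open("aligned.fasta", "w") as out:  # placeholder output path
        subprocess.run(["mafft", "sequences.fasta"], stdout=out, check=True)
    print(f"alignment time: {time.perf_counter() - start:.2f} s")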

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 33.20 | Run 2: 33.42 | Run 3: 33.11
Reported MIN per run: 30.98 / 31.15 / 31.11
Run 1 trials: SE +/- 0.21, N = 3; Min: 32.78 / Avg: 33.2 / Max: 33.45
(CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 13.56 | Run 2: 13.41 | Run 3: 13.68
Reported MIN per run: 12 / 11.94 / 12.16
Run 1 trials: SE +/- 0.06, N = 3; Min: 13.46 / Avg: 13.56 / Max: 13.66
(CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, Fewer Is Better)
Run 1: 7.868 | Run 2: 7.869 | Run 3: 7.865
Run 1 trials: SE +/- 0.003, N = 3; Min: 7.86 / Avg: 7.87 / Max: 7.87
(CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, More Is Better)
Run 1: 13.86 | Run 2: 14.85 | Run 3: 15.07
Run 1 trials: SE +/- 0.38, N = 15; Min: 11.65 / Avg: 13.86 / Max: 16.08
(CXX) g++ options: -O3 -pthread -lm
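The rhodopsin protein model is one of the standard LAMMPS bench inputs. A minimal sketch of launching it and timing the run follows; the binary name and input path are assumptions that depend on how LAMMPS was built and installed locally.

    import subprocess, time

    # "lmp" and "bench/in.rhodo" are assumptions; adjust to the local LAMMPS install.
    start = time.perf_counter()
    subprocess.run(["lmp", "-in", "bench/in.rhodo"], check=True)
    print(f"wall time: {time.perf_counter() - start:.1f} s")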

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 55.30 | Run 2: 55.24 | Run 3: 55.80
Reported MIN per run: 53.16 / 53.33 / 53.72
Run 1 trials: SE +/- 0.03, N = 3; Min: 55.24 / Avg: 55.29 / Max: 55.35
(CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

System ZLIB Decompression

This test measures the time to decompress a Linux kernel tarball using ZLIB. Learn more via the OpenBenchmarking.org test page.

System ZLIB Decompression 1.2.7 (ms, Fewer Is Better)
Run 1: 2723.85 | Run 2: 2886.06 | Run 3: 2874.76
Run 1 trials: SE +/- 18.03, N = 10; Min: 2703.27 / Avg: 2723.85 / Max: 2886.04

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.0.2 (Megapixels/sec, More Is Better)
Run 1: 101.75 | Run 2: 101.81 | Run 3: 101.81
Run 1 trials: SE +/- 0.02, N = 3; Min: 101.73 / Avg: 101.75 / Max: 101.79
(CC) gcc options: -O3 -rdynamic

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 25.61 | Run 2: 25.52 | Run 3: 25.58
Reported MIN per run: 24.97 / 24.94 / 25.08
Run 1 trials: SE +/- 0.10, N = 3; Min: 25.45 / Avg: 25.61 / Max: 25.78
(CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark (Figure Of Merit, More Is Better)
Run 1: 3472681 | Run 2: 3486278 | Run 3: 3504642
Run 1 trials: SE +/- 14205.58, N = 3; Min: 3455214 / Avg: 3472681 / Max: 3500820
(CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

System XZ Decompression

This test measures the time to decompress a Linux kernel tarball using XZ. Learn more via the OpenBenchmarking.org test page.

System XZ Decompression (Seconds, Fewer Is Better)
Run 1: 5.555 | Run 2: 5.561 | Run 3: 5.563
Run 1 trials: SE +/- 0.004, N = 3; Min: 5.55 / Avg: 5.55 / Max: 5.56
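A minimal Python sketch of the equivalent operation using the lzma module; the filename is a placeholder for the kernel tarball the test decompresses.

    import lzma, time

    start = time.perf_counter()
    with lzma.open("linux.tar.xz", "rb") as f:  # placeholder filename
        while f.read(1 << 20):  # stream through the whole archive
            pass
    print(f"XZ decompression time: {time.perf_counter() - start:.3f} s")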

Smallpt

Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

Smallpt 1.0 - Global Illumination Renderer; 128 Samples (Seconds, Fewer Is Better)
Run 1: 4.417 | Run 2: 4.421 | Run 3: 4.451
Run 1 trials: SE +/- 0.051, N = 3; Min: 4.32 / Avg: 4.42 / Max: 4.49
(CXX) g++ options: -fopenmp -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 26.80 | Run 2: 26.05 | Run 3: 26.81
Reported MIN per run: 24.26 / 24.26 / 25.27
Run 1 trials: SE +/- 0.11, N = 3; Min: 26.59 / Avg: 26.8 / Max: 26.94
(CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 10.35 | Run 2: 10.32 | Run 3: 10.39
Reported MIN per run: 9.81 / 9.88 / 9.93
Run 1 trials: SE +/- 0.01, N = 3; Min: 10.32 / Avg: 10.35 / Max: 10.37
(CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

Parallel BZIP2 Compression

This test measures the time needed to compress a file (a .tar package of the Linux kernel source code) using BZIP2 compression. Learn more via the OpenBenchmarking.org test page.

Parallel BZIP2 Compression 1.1.12 - 256MB File Compression (Seconds, Fewer Is Better)
Run 1: 2.078 | Run 2: 2.051 | Run 3: 2.026
(CXX) g++ options: -O2 -pthread -lbz2 -lpthread
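A minimal sketch of invoking pbzip2 on a tarball and timing the compression; the input filename and the -p thread count are placeholders (pbzip2's -p option sets the number of processors and -k keeps the input file).

    import subprocess, time

    # Placeholder input; the test compresses a 256MB .tar of the Linux kernel source code.
    start = time.perf_counter()
    subprocess.run(["pbzip2", "-k", "-p176", "linux-src.tar"], check=True)
    print(f"compression time: {time.perf_counter() - start:.2f} s")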

210 Results Shown

Caffe:
  GoogleNet - CPU - 1000
  AlexNet - CPU - 1000
G'MIC
Caffe
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  resnet-v2-50
  SqueezeNetV1.0
Timed GCC Compilation
Numenta Anomaly Benchmark
Caffe
LAMMPS Molecular Dynamics Simulator
Timed LLVM Compilation
Basis Universal
VP9 libvpx Encoding
Caffe
C-Blosc
LeelaChessZero
libgav1
AOM AV1
Caffe
Numpy Benchmark
Blender
High Performance Conjugate Gradient
AOM AV1
Blender
7-Zip Compression
rav1e
libgav1
Kvazaar:
  Bosphorus 4K - Medium
  Bosphorus 4K - Slow
rav1e
AOM AV1
libgav1
Blender
Mlpack Benchmark
AOM AV1
rav1e
Blender
PyPerformance
Numenta Anomaly Benchmark
rav1e
VP9 libvpx Encoding
Node.js V8 Web Tooling Benchmark
GEGL
GraphicsMagick
WavPack Audio Encoding
Zstd Compression
SQLite Speedtest
Stockfish
PyPerformance
Build2
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
Timed Eigen Compilation
Blender
PyPerformance
Mlpack Benchmark
BYTE Unix Benchmark
AOM AV1
dcraw
dav1d
GEGL:
  Color Enhance
  Wavelet Blur
Kvazaar
Hugin
x265
Mlpack Benchmark
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - f32 - CPU
PyPerformance
oneDNN
XZ Compression
PyPerformance
Git
OpenCV
libgav1
G'MIC
Timed PHP Compilation
PyPerformance
Kvazaar:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Medium
RawTherapee
PyBench
LZ4 Compression:
  9 - Decompression Speed
  9 - Compression Speed
  3 - Decompression Speed
  3 - Compression Speed
Basis Universal
GEGL
GraphicsMagick
GEGL
GIMP
Numenta Anomaly Benchmark
PyPerformance
dav1d
Timed Linux Kernel Compilation
eSpeak-NG Speech Engine
LibRaw
AOBench
WebP Image Encode
PyPerformance:
  chaos
  crypto_pyaes
Opus Codec Encoding
simdjson
GraphicsMagick:
  Enhanced
  Sharpen
  Rotate
  Swirl
  Resizing
GEGL
Cryptsetup:
  Twofish-XTS 512b Encryption
  Serpent-XTS 512b Encryption
FLAC Audio Encoding
simdjson
Timed FFmpeg Compilation
Mlpack Benchmark
GEGL
GIMP
Gzip Compression
simdjson:
  PartialTweets
  DistinctUserID
x265
Tesseract OCR
G'MIC
Timed Apache Compilation
TNN
Tachyon
TNN
Kvazaar
LZ4 Compression:
  1 - Decompression Speed
  1 - Compression Speed
CLOMP
x264
Zstd Compression
GIMP
Redis
PyPerformance
RNNoise
OCRMyPDF
Dolfyn
Unpacking Firefox
dav1d
Cryptsetup
PyPerformance
Numenta Anomaly Benchmark
Cryptsetup:
  Serpent-XTS 256b Decryption
  Serpent-XTS 256b Encryption
  PBKDF2-whirlpool
  PBKDF2-sha512
PyPerformance
Kvazaar
librsvg
POV-Ray
Monkey Audio Encoding
WebP Image Encode
Timed ImageMagick Compilation
Cryptsetup:
  Twofish-XTS 256b Decryption
  Twofish-XTS 256b Encryption
GIMP
Basis Universal
Cryptsetup:
  AES-XTS 512b Decryption
  AES-XTS 512b Encryption
  AES-XTS 256b Decryption
  AES-XTS 256b Encryption
PyPerformance
TTSIOD 3D Renderer
oneDNN:
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
Cryptsetup:
  PBKDF2-sha512
  PBKDF2-whirlpool
C-Ray
Basis Universal
Numenta Anomaly Benchmark
WebP Image Encode
Scikit-Learn
GEGL
System GZIP Decompression
oneDNN:
  IP Shapes 1D - u8s8f32 - CPU
  IP Shapes 1D - f32 - CPU
LAME MP3 Encoding
Kvazaar
GEGL
dav1d
oneDNN:
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
Redis:
  LPOP
  LPUSH
  SET
GNU Octave Benchmark
Redis
WebP Image Encode
Basis Universal
Timed MAFFT Alignment
oneDNN:
  IP Shapes 3D - u8s8f32 - CPU
  IP Shapes 3D - f32 - CPU
WebP Image Encode
LAMMPS Molecular Dynamics Simulator
oneDNN
System ZLIB Decompression
libjpeg-turbo tjbench
oneDNN
Algebraic Multi-Grid Benchmark
System XZ Decompression
Smallpt
oneDNN:
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
Parallel BZIP2 Compression