POWER9 44c 176t 2021

POWER9 testing with a PowerNV T2P9D01 REV 1.01 and ASPEED on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101051-HA-POWER944C01

This result file includes tests within the following categories:

Audio Encoding: 5 tests
AV1: 4 tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 2 tests
C++ Boost Tests: 2 tests
Chess Test Suite: 2 tests
Timed Code Compilation: 9 tests
C/C++ Compiler Tests: 27 tests
Compression Tests: 10 tests
CPU Massive: 36 tests
Creator Workloads: 36 tests
Database Test Suite: 2 tests
Encoding: 13 tests
Fortran Tests: 3 tests
HPC - High Performance Computing: 17 tests
Imaging: 11 tests
Machine Learning: 11 tests
Molecular Dynamics: 2 tests
MPI Benchmarks: 2 tests
Multi-Core: 31 tests
NVIDIA GPU Compute: 2 tests
OCR: 2 tests
OpenMPI Tests: 3 tests
Productivity: 4 tests
Programmer / Developer System Benchmarks: 19 tests
Python: 6 tests
Raytracing: 3 tests
Renderers: 6 tests
Scientific Computing: 5 tests
Server: 4 tests
Server CPU Tests: 21 tests
Single-Threaded: 12 tests
Speech: 2 tests
Telephony: 2 tests
Video Encoding: 8 tests
Common Workstation Benchmarks: 2 tests

Run    Date             Test Duration
Run 1  January 03 2021  1 Day, 14 Hours, 53 Minutes
Run 2  January 05 2021  9 Hours, 39 Minutes
Run 3  January 05 2021  9 Hours, 40 Minutes
Run 4  January 05 2021  21 Minutes

POWER9 44c 176t 2021 (OpenBenchmarking.org / Phoronix Test Suite)

Processor: POWER9 @ 3.80GHz (44 Cores / 176 Threads)
Motherboard: PowerNV T2P9D01 REV 1.01
Memory: 64GB
Disk: 500GB Samsung SSD 860
Graphics: ASPEED
Monitor: VE228
Network: 2 x Broadcom NetXtreme BCM5719 PCIe
OS: Ubuntu 20.10
Kernel: 5.9.10-050910-generic (ppc64le)
Display Server: X Server
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Compiler configuration: --build=powerpc64le-linux-gnu --disable-multilib --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-xyKMTo/gcc-10-10.2.0/debian/tmp-nvptx/usr --enable-plugin --enable-secureplt --enable-shared --enable-targets=powerpcle-linux --enable-threads=posix --host=powerpc64le-linux-gnu --program-prefix=powerpc64le-linux-gnu- --target=powerpc64le-linux-gnu --with-cpu=power8 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-long-double-128 --with-target-system-zlib=auto --without-cuda-driver -v
- SMT (threads per core): 4
- Python 3.8.6
- Security mitigations: itlb_multihit: Not affected + l1tf: Mitigation of RFI Flush, L1D private per thread + mds: Not affected + meltdown: Mitigation of RFI Flush, L1D private per thread + spec_store_bypass: Mitigation of Kernel entry/exit barrier (eieio) + spectre_v1: Mitigation of __user pointer sanitization, ori31 speculation barrier enabled + spectre_v2: Mitigation of Indirect branch cache disabled, Software link stack flush + srbds: Not affected + tsx_async_abort: Not affected

[Condensed side-by-side result table for Runs 1 through 4 across all tests; see the per-test results below.]

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads, as compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 - GFLOP/s, More Is Better
Run 1: 15.05 (SE +/- 0.14, N = 10; Min: 14.76 / Avg: 15.05 / Max: 16.29)
Run 2: 19.29
Run 3: 19.28
Run 4: 19.24
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi
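
Each result above is the mean of N samples, reported together with the standard error and the sample Min/Avg/Max. As a rough illustration (with hypothetical sample values, not the suite's actual data), a run's summary line can be reproduced like this:

```python
import statistics

# Hypothetical per-sample results (GFLOP/s) for one run; the Phoronix Test
# Suite records N samples per run and reports Avg plus SE and Min/Max.
samples = [14.76, 14.9, 15.0, 15.05, 15.1, 14.95, 15.2, 14.8, 16.29, 14.45]

avg = statistics.mean(samples)
# Standard error of the mean: sample standard deviation / sqrt(N).
se = statistics.stdev(samples) / len(samples) ** 0.5

print(f"SE +/- {se:.2f}, N = {len(samples)}")
print(f"Min: {min(samples):.2f} / Avg: {avg:.2f} / Max: {max(samples):.2f}")
```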

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 - MB/s, More Is Better
Run 1: 4022.5 (SE +/- 36.98, N = 3; Min: 3958.3 / Avg: 4022.53 / Max: 4086.4)
Run 2: 4657.7
Run 3: 4228.7
1. (CC) gcc options: -O3 -pthread -lz

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.

System GZIP Decompression - Seconds, Fewer Is Better
Run 1: 4.935 (SE +/- 0.051, N = 14; Min: 4.88 / Avg: 4.93 / Max: 5.59)
Run 2: 5.580
Run 3: 5.587
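
The operation being timed is plain gzip decompression of a tarball. A minimal standard-library sketch of the same operation, using an in-memory stand-in payload rather than the Qt5 source package the real test uses:

```python
import gzip
import time

# Stand-in payload; the actual test decompresses the Qt5 toolkit source tarball.
payload = b"example source tree data " * 100_000
compressed = gzip.compress(payload, compresslevel=6)

start = time.perf_counter()
restored = gzip.decompress(compressed)
elapsed = time.perf_counter() - start

assert restored == payload  # round trip succeeded
print(f"Decompressed {len(compressed)} -> {len(payload)} bytes in {elapsed:.4f} s")
```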

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms - ns/day, More Is Better
Run 1: 15.33 (SE +/- 0.28, N = 9; Min: 13.5 / Avg: 15.33 / Max: 15.88)
Run 2: 14.44
Run 3: 15.89
1. (CXX) g++ options: -O3 -pthread -lm

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET - Requests Per Second, More Is Better
Run 1: 584774.65 (SE +/- 3402.58, N = 3; Min: 579840 / Avg: 584774.65 / Max: 591300.25)
Run 2: 562275.44
Run 3: 531519.69
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian - Iterations Per Minute, More Is Better
Run 1: 414 (SE +/- 5.70, N = 4; Min: 397 / Avg: 414 / Max: 421)
Run 2: 421
Run 3: 396
1. (CC) gcc options: -fopenmp -O2 -pthread -ljpeg -lxml2 -lz -lm -lpthread

System ZLIB Decompression

This test measures the time to decompress a Linux kernel tarball using ZLIB. Learn more via the OpenBenchmarking.org test page.

System ZLIB Decompression 1.2.7 - ms, Fewer Is Better
Run 1: 2723.85 (SE +/- 18.03, N = 10; Min: 2703.27 / Avg: 2723.85 / Max: 2886.04)
Run 2: 2886.06
Run 3: 2874.76

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12 - Total Time - Nodes Per Second, More Is Better
Run 1: 31660332 (SE +/- 231184.54, N = 3; Min: 31219943 / Avg: 31660332.33 / Max: 32002518)
Run 2: 31552898
Run 3: 29938062
1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -flto -flto=jobserver

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_qda - Seconds, Fewer Is Better
Run 1: 45.63 (SE +/- 0.55, N = 6; Min: 44.74 / Avg: 45.63 / Max: 48.33)
Run 2: 46.12
Run 3: 43.96

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p - Frames Per Second, More Is Better
Run 1: 12.90 (SE +/- 0.13, N = 3; Min: 12.69 / Avg: 12.9 / Max: 13.13)
Run 2: 12.70
Run 3: 12.32
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: BLAS - Nodes Per Second, More Is Better
Run 1: 946 (SE +/- 11.95, N = 9; Min: 883 / Avg: 945.67 / Max: 992)
Run 2: 942
Run 3: 984
1. (CXX) g++ options: -flto -pthread

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: LPOP - Requests Per Second, More Is Better
Run 1: 519606.57 (SE +/- 2892.27, N = 3; Min: 513874.62 / Avg: 519606.57 / Max: 523146.47)
Run 2: 500314.16
Run 3: 521050.00
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time - Seconds, Fewer Is Better
Run 1: 104.45 (SE +/- 1.65, N = 3; Min: 101.58 / Avg: 104.45 / Max: 107.3)
Run 2: 107.33
Run 3: 108.53

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender 2.83.5 - Blend File: Fishy Cat - Compute: CPU-Only - Seconds, Fewer Is Better
Run 1: 233.85 (SE +/- 1.21, N = 3; Min: 231.44 / Avg: 233.85 / Max: 235.24)
Run 2: 229.33
Run 3: 225.99

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 1080p - FPS, More Is Better
Run 1: 261.55 (MIN: 91.05 / MAX: 314.59; SE +/- 3.54, N = 3; Min: 254.8 / Avg: 261.55 / Max: 266.77)
Run 2: 262.22 (MIN: 94.27 / MAX: 310.07)
Run 3: 253.46 (MIN: 95.85 / MAX: 301.92)
1. (CC) gcc options: -pthread

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2 - Static OMP Speedup - Speedup, More Is Better
Run 1: 5.8 (SE +/- 0.06, N = 3; Min: 5.7 / Avg: 5.8 / Max: 5.9)
Run 2: 6.0
Run 3: 6.0
1. (CC) gcc options: -fopenmp -O3 -lm

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 10.3.1+dfsg - Processing 60 Page PDF Document - Seconds, Fewer Is Better
Run 1: 37.70 (SE +/- 0.47, N = 3; Min: 36.76 / Avg: 37.7 / Max: 38.22)
Run 2: 38.99
Run 3: 38.53

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz - Seconds, Fewer Is Better
Run 1: 30.91 (SE +/- 0.31, N = 4; Min: 30.23 / Avg: 30.91 / Max: 31.65)
Run 2: 30.01
Run 3: 30.82
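
Extracting a .tar.xz package combines LZMA decompression with tar unpacking, both of which are in the Python standard library. A small self-contained sketch, where a tiny in-memory archive stands in for firefox-84.0.source.tar.xz:

```python
import io
import tarfile
import time

# Build a tiny .tar.xz archive in memory as a stand-in for the Firefox source package.
data = b"placeholder source file\n" * 1_000
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:xz") as tar:
    info = tarfile.TarInfo(name="src/main.c")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Time the extraction, the operation this test profile measures.
buf.seek(0)
start = time.perf_counter()
with tarfile.open(fileobj=buf, mode="r:xz") as tar:
    names = tar.getnames()
    extracted = tar.extractfile("src/main.c").read()
elapsed = time.perf_counter() - start

assert extracted == data
print(f"Extracted {names} in {elapsed:.4f} s")
```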

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time - Seconds, Fewer Is Better
Run 1: 25.97 (SE +/- 0.07, N = 3; Min: 25.83 / Avg: 25.97 / Max: 26.04)
Run 2: 26.05
Run 3: 25.29
1. (CXX) g++ options: -pipe -O3 -ffast-math -pthread -R/usr/lib -lSDL -lX11 -lIlmImf -lIlmImf-2_5 -lImath-2_5 -lHalf-2_5 -lIex-2_5 -lIexMath-2_5 -lIlmThread-2_5 -lIlmThread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better
Run 1: 26.80 (MIN: 24.26; SE +/- 0.11, N = 3; Min: 26.59 / Avg: 26.8 / Max: 26.94)
Run 2: 26.05 (MIN: 24.26)
Run 3: 26.81 (MIN: 25.27)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it otherwise relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: auto-levels - Seconds, Fewer Is Better
Run 1: 39.53 (SE +/- 0.48, N = 3; Min: 39 / Avg: 39.53 / Max: 40.49)
Run 2: 39.38
Run 3: 40.54

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup - Milliseconds, Fewer Is Better
Run 1: 14.5 (SE +/- 0.03, N = 3; Min: 14.4 / Avg: 14.47 / Max: 14.5)
Run 2: 14.9
Run 3: 14.5

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint - Seconds, Fewer Is Better
Run 1: 70.33 (SE +/- 1.09, N = 3; Min: 68.14 / Avg: 70.33 / Max: 71.43)
Run 2: 69.87
Run 3: 71.75

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 2019-10-05 - Video Input: Chimera 1080p 10-bit - FPS, More Is Better
Run 1: 13.76 (SE +/- 0.02, N = 3; Min: 13.73 / Avg: 13.76 / Max: 13.8)
Run 2: 13.42
Run 3: 13.78
1. (CXX) g++ options: -O3 -lpthread

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_linearridgeregression - Seconds, Fewer Is Better
Run 1: 3.75 (SE +/- 0.02, N = 3; Min: 3.73 / Avg: 3.75 / Max: 3.78)
Run 2: 3.85
Run 3: 3.75

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 - MB/s, More Is Better
Run 1: 77.6 (SE +/- 1.18, N = 15; Min: 72 / Avg: 77.6 / Max: 87.5)
Run 2: 79.5
Run 3: 79.6
1. (CC) gcc options: -O3 -pthread -lz
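
Level 19 trades a large drop in throughput (roughly 4000 MB/s at level 3 in this result file versus under 80 MB/s here) for denser output. Python has no zstd bindings in the standard library, so this sketch uses stdlib zlib at levels 1 and 9 purely to illustrate the same speed-versus-ratio trade-off:

```python
import time
import zlib

payload = b"the quick brown fox jumps over the lazy dog " * 50_000

sizes = {}
for level in (1, 9):  # stands in for zstd's fast level 3 vs thorough level 19
    start = time.perf_counter()
    sizes[level] = len(zlib.compress(payload, level))
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(payload)} -> {sizes[level]} bytes in {elapsed:.4f} s")

# The higher level compresses no worse, at the cost of more CPU time.
assert sizes[9] <= sizes[1]
```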

Parallel BZIP2 Compression

This test measures the time needed to compress a file (a .tar package of the Linux kernel source code) using BZIP2 compression. Learn more via the OpenBenchmarking.org test page.

Parallel BZIP2 Compression 1.1.12 - 256MB File Compression - Seconds, Fewer Is Better
Run 1: 2.078
Run 2: 2.051
Run 3: 2.026
1. (CXX) g++ options: -O2 -pthread -lbz2 -lpthread
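
pbzip2 parallelizes bzip2 by splitting the input into independent blocks, compressing them concurrently, and concatenating the resulting streams; concatenated bz2 streams decompress back into the original data. A minimal sketch of that idea (threads are used here because the stdlib compressor releases the GIL while compressing):

```python
import bz2
from concurrent.futures import ThreadPoolExecutor

def parallel_bz2(data: bytes, chunk_size: int = 1_000_000) -> bytes:
    # Split into independent chunks, compress each concurrently, and
    # concatenate the per-chunk bz2 streams.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        return b"".join(pool.map(bz2.compress, chunks))

payload = b"kernel source bytes " * 200_000  # stand-in for the Linux source .tar
packed = parallel_bz2(payload)

# bz2.decompress handles concatenated streams, so the round trip succeeds.
assert bz2.decompress(packed) == payload
print(f"{len(payload)} -> {len(packed)} bytes")
```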

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: GET - Requests Per Second, More Is Better
Run 1: 800110.61 (SE +/- 4540.22, N = 3; Min: 795544.94 / Avg: 800110.61 / Max: 809191)
Run 2: 811922.06
Run 3: 791993.69
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender 2.83.5 - Blend File: BMW27 - Compute: CPU-Only - Seconds, Fewer Is Better
Run 1: 123.91 (SE +/- 0.50, N = 3; Min: 122.99 / Avg: 123.91 / Max: 124.69)
Run 2: 125.57
Run 3: 122.66

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.4 - Test: DNN - Deep Neural Network - ms, Fewer Is Better
Run 1: 25792 (SE +/- 291.88, N = 15; Min: 23099 / Avg: 25791.73 / Max: 27022)
Run 2: 25951
Run 3: 26397
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -mcpu=power8 -fvisibility=hidden -O3 -shared

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile - Seconds, Fewer Is Better
Run 1: 67.86 (SE +/- 0.56, N = 3; Min: 67.02 / Avg: 67.86 / Max: 68.91)
Run 2: 69.43
Run 3: 69.14

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 2019-10-05, Video Input: Summer Nature 4K. FPS, More Is Better.
Run 1: 13.56 | Run 2: 13.42 | Run 3: 13.72 (SE +/- 0.06, N = 3)
Run 1 samples: Min 13.46 / Avg 13.56 / Max 13.66
1. (CXX) g++ options: -O3 -lpthread

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1, Detector: EXPoSE. Seconds, Fewer Is Better.
Run 1: 1409.87 | Run 2: 1439.88 | Run 3: 1437.63 (SE +/- 19.77, N = 3)
Run 1 samples: Min 1370.77 / Avg 1409.87 / Max 1434.55

AOBench

AOBench is a lightweight ambient occlusion renderer, written in C. The test profile is using a size of 2048 x 2048. Learn more via the OpenBenchmarking.org test page.

AOBench, Size: 2048 x 2048 - Total Time. Seconds, Fewer Is Better.
Run 1: 63.88 | Run 2: 62.83 | Run 3: 62.59 (SE +/- 0.88, N = 3)
Run 1 samples: Min 62.73 / Avg 63.88 / Max 65.62
1. (CC) gcc options: -lm -O3

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Crop. Seconds, Fewer Is Better.
Run 1: 15.93 | Run 2: 16.24 | Run 3: 16.25 (SE +/- 0.12, N = 3)
Run 1 samples: Min 15.81 / Avg 15.93 / Max 16.17

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: mobilenet-v1-1.0. ms, Fewer Is Better.
Run 1: 116.43 (MIN 113.57 / MAX 129.96) | Run 2: 114.13 (MIN 113.44 / MAX 115.23) | Run 3: 116.29 (MIN 114.3 / MAX 119.01); SE +/- 1.28, N = 3
Run 1 samples: Min 114.23 / Avg 116.43 / Max 118.66
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
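MobileNet-style networks stay light by replacing standard convolutions with depthwise-separable ones (a depthwise pass followed by a 1x1 pointwise pass). A back-of-the-envelope parameter count for a hypothetical 3x3 layer with 64 input and 128 output channels shows why:

```python
def conv_params(k, c_in, c_out):
    # Standard KxK convolution: one KxKxC_in kernel per output channel
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    # Depthwise KxK (one KxK kernel per input channel) + 1x1 pointwise mix
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)       # 9 * 64 * 128 = 73728
sep = separable_params(3, 64, 128)  # 576 + 8192  = 8768
print(std, sep, round(std / sep, 1))  # → 73728 8768 8.4
```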

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU. ms, Fewer Is Better.
Run 1: 13.56 (MIN 12) | Run 2: 13.41 (MIN 11.94) | Run 3: 13.68 (MIN 12.16); SE +/- 0.06, N = 3
Run 1 samples: Min 13.46 / Avg 13.56 / Max 13.66
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU. ms, Fewer Is Better.
Run 1: 7532.05 (MIN 7462.27) | Run 2: 7670.66 (MIN 7637.04) | Run 3: 7679.57 (MIN 7649.72); SE +/- 13.00, N = 3
Run 1 samples: Min 7514.45 / Avg 7532.05 / Max 7557.42
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0. Seconds, Fewer Is Better.
Run 1: 8.337 | Run 2: 8.496 | Run 3: 8.496 (SE +/- 0.046, N = 5)
Run 1 samples: Min 8.26 / Avg 8.34 / Max 8.52

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU. ms, Fewer Is Better.
Run 1: 7568.89 (MIN 7444.77) | Run 2: 7676.79 (MIN 7632.13) | Run 3: 7534.57 (MIN 7477.9); SE +/- 57.46, N = 3
Run 1 samples: Min 7509.07 / Avg 7568.89 / Max 7683.78
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1, Detector: Relative Entropy. Seconds, Fewer Is Better.
Run 1: 34.69 | Run 2: 34.54 | Run 3: 34.06 (SE +/- 0.18, N = 3)
Run 1 samples: Min 34.34 / Avg 34.69 / Max 34.88

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: LPUSH. Requests Per Second, More Is Better.
Run 1: 509296.11 | Run 2: 501002.00 | Run 3: 510285.72 (SE +/- 4078.83, N = 3)
Run 1 samples: Min 503448.41 / Avg 509296.11 / Max 517145.81
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
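These numbers come from driving the server with many concurrent clients; on the wire, every command such as LPUSH travels as a RESP array of bulk strings. A minimal encoder sketch of that framing (not the benchmark's own code):

```python
def resp_encode(*args):
    """Encode a Redis command as a RESP array of bulk strings."""
    out = [f"*{len(args)}\r\n"]           # array header: element count
    for a in args:
        out.append(f"${len(a)}\r\n{a}\r\n")  # bulk string: length, then payload
    return "".join(out).encode()

print(resp_encode("LPUSH", "mylist", "x"))
```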

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.

librsvg, Operation: SVG Files To PNG. Seconds, Fewer Is Better.
Run 1: 32.53 | Run 2: 31.95 | Run 3: 32.50 (SE +/- 0.26, N = 3)
Run 1 samples: Min 32.03 / Avg 32.53 / Max 32.87
1. rsvg-convert version 2.50.1

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: SqueezeNetV1.0. ms, Fewer Is Better.
Run 1: 91.85 (MIN 90.17 / MAX 94.5) | Run 2: 93.45 (MIN 92.15 / MAX 95.92) | Run 3: 91.88 (MIN 90.67 / MAX 93.21); SE +/- 0.42, N = 3
Run 1 samples: Min 91.33 / Avg 91.85 / Max 92.69
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU. ms, Fewer Is Better.
Run 1: 7566.40 (MIN 7480.61) | Run 2: 7667.00 (MIN 7622.49) | Run 3: 7693.28 (MIN 7650.53); SE +/- 31.01, N = 3
Run 1 samples: Min 7525.78 / Avg 7566.4 / Max 7627.3
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU. ms, Fewer Is Better.
Run 1: 29.77 (MIN 26.11) | Run 2: 30.26 (MIN 27.23) | Run 3: 29.82 (MIN 26.76); SE +/- 0.15, N = 3
Run 1 samples: Min 29.57 / Avg 29.77 / Max 30.07
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: SADD. Requests Per Second, More Is Better.
Run 1: 682251.35 | Run 2: 689655.12 | Run 3: 693377.69 (SE +/- 6623.24, N = 15)
Run 1 samples: Min 628555.62 / Avg 682251.35 / Max 703954.94
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1, Detector: Windowed Gaussian. Seconds, Fewer Is Better.
Run 1: 18.09 | Run 2: 18.17 | Run 3: 17.90 (SE +/- 0.08, N = 3)
Run 1 samples: Min 17.94 / Avg 18.09 / Max 18.22
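The idea behind a windowed Gaussian detector is to score each point against a Gaussian fitted to a sliding window of recent values. A rough stdlib sketch of that idea (not NAB's exact scoring), with hypothetical data:

```python
import statistics

def windowed_gaussian_scores(series, window=4, threshold=3.0):
    """Flag indices more than `threshold` standard deviations from the
    mean of the preceding `window` values."""
    anomalies = []
    for i in range(window, len(series)):
        win = series[i - window:i]
        mu = statistics.mean(win)
        sigma = statistics.stdev(win) or 1e-9  # avoid division by zero
        if abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

data = [10, 10.2, 9.9, 10.1, 10.0, 25.0, 10.1]
print(windowed_gaussian_scores(data))  # → [5]
```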

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0 Beta 5, Compressor: blosclz. MB/s, More Is Better.
Run 1: 2982.6 | Run 2: 2968.5 | Run 3: 2980.4 | Run 4: 3013.5 (SE +/- 2.78, N = 3)
Run 1 samples: Min 2977.4 / Avg 2982.6 / Max 2986.9
1. (CXX) g++ options: -rdynamic
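Blosc owes much of its speed to a byte-shuffle filter that groups the k-th byte of every element together before compression, so slowly-varying high-order bytes form long compressible runs. A pure-Python sketch of that transform and its inverse:

```python
def shuffle(data: bytes, itemsize: int) -> bytes:
    """Group byte k of every item together (Blosc-style shuffle sketch)."""
    n = len(data) // itemsize
    return bytes(data[i * itemsize + k] for k in range(itemsize) for i in range(n))

def unshuffle(data: bytes, itemsize: int) -> bytes:
    """Invert shuffle(): reassemble each item's bytes in order."""
    n = len(data) // itemsize
    return bytes(data[k * n + i] for i in range(n) for k in range(itemsize))

# Two adjacent 32-bit ints differ only in their low byte after shuffling.
raw = (1000).to_bytes(4, "little") + (1001).to_bytes(4, "little")
assert unshuffle(shuffle(raw, 4), 4) == raw
print(shuffle(raw, 4))
```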

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17, Model: resnet-v2-50. ms, Fewer Is Better.
Run 1: 654.28 (MIN 647.79 / MAX 673.25) | Run 2: 654.39 (MIN 648.81 / MAX 680.43) | Run 3: 663.97 (MIN 661.55 / MAX 673.82); SE +/- 2.29, N = 3
Run 1 samples: Min 650.3 / Avg 654.28 / Max 658.21
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17, Model: MobileNetV2_224. ms, Fewer Is Better.
Run 1: 56.51 (MIN 55.43 / MAX 58.08) | Run 2: 56.15 (MIN 55.46 / MAX 57.21) | Run 3: 56.96 (MIN 56.01 / MAX 58.65); SE +/- 0.09, N = 3
Run 1 samples: Min 56.32 / Avg 56.5 / Max 56.62
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0, Video Input: Chimera 1080p. FPS, More Is Better.
Run 1: 173.02 (MIN 130.46 / MAX 248.83) | Run 2: 172.06 (MIN 128.85 / MAX 241.06) | Run 3: 174.53 (MIN 131.48 / MAX 245.71); SE +/- 0.51, N = 3
Run 1 samples: Min 172.02 / Avg 173.02 / Max 173.7
1. (CC) gcc options: -pthread

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471, Multiple Sequence Alignment - LSU RNA. Seconds, Fewer Is Better.
Run 1: 11.27 | Run 2: 11.36 | Run 3: 11.21 (SE +/- 0.03, N = 3)
Run 1 samples: Min 11.21 / Avg 11.27 / Max 11.3
1. (CC) gcc options: -std=c99 -O3 -lm -lpthread
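MAFFT itself uses FFT-accelerated progressive alignment; as a toy illustration of what a pairwise alignment score means, here is a minimal Needleman-Wunsch global-alignment scorer (the unit match/mismatch/gap costs are arbitrary choices, not MAFFT's):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via Needleman-Wunsch dynamic programming,
    keeping only the previous DP row for O(len(b)) memory."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]
        for j, cb in enumerate(b, 1):
            cur.append(max(
                prev[j - 1] + (match if ca == cb else mismatch),  # (mis)match
                prev[j] + gap,      # gap in b
                cur[j - 1] + gap))  # gap in a
        prev = cur
    return prev[-1]

print(nw_score("AB", "AB"))  # → 2
```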

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program, or on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18, Test: unsharp-mask. Seconds, Fewer Is Better.
Run 1: 51.21 | Run 2: 51.90 | Run 3: 51.23 (SE +/- 0.73, N = 3)
Run 1 samples: Min 49.98 / Avg 51.21 / Max 52.5

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1, Detector: Earthgecko Skyline. Seconds, Fewer Is Better.
Run 1: 228.37 | Run 2: 228.69 | Run 3: 231.40 (SE +/- 1.04, N = 3)
Run 1 samples: Min 226.45 / Avg 228.37 / Max 230.04

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark, Benchmark: scikit_ica. Seconds, Fewer Is Better.
Run 1: 89.89 | Run 2: 89.38 | Run 3: 88.72 (SE +/- 0.06, N = 3)
Run 1 samples: Min 89.77 / Avg 89.89 / Max 89.97

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02, Compress Speed Test. MIPS, More Is Better.
Run 1: 168456 | Run 2: 170626 | Run 3: 170105 (SE +/- 1543.67, N = 12)
Run 1 samples: Min 152099 / Avg 168456.42 / Max 171845
1. (CXX) g++ options: -pipe -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 3 - Decompression Speed. MB/s, More Is Better.
Run 1: 9240.5 | Run 2: 9299.5 | Run 3: 9183.2 (SE +/- 1.14, N = 3)
Run 1 samples: Min 9238.7 / Avg 9240.5 / Max 9242.6
1. (CC) gcc options: -O3

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark. runs/s, More Is Better.
Run 1: 4.01 | Run 2: 4.05 | Run 3: 4.06 (SE +/- 0.03, N = 3)
Run 1 samples: Min 3.98 / Avg 4.01 / Max 4.06
1. Nodejs v12.18.2

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Cartoon. Seconds, Fewer Is Better.
Run 1: 205.82 | Run 2: 208.29 | Run 3: 207.47 (SE +/- 0.28, N = 3)
Run 1 samples: Min 205.27 / Avg 205.81 / Max 206.21

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and was developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0, Video Input: Bosphorus 1080p - Video Preset: Slow. Frames Per Second, More Is Better.
Run 1: 7.05 | Run 2: 6.97 | Run 3: 7.04 (SE +/- 0.03, N = 3)
Run 1 samples: Min 6.99 / Avg 7.05 / Max 7.09
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K input options to gauge H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 4K. Frames Per Second, More Is Better.
Run 1: 6.24 | Run 2: 6.17 | Run 3: 6.23 (SE +/- 0.02, N = 3)
Run 1 samples: Min 6.21 / Avg 6.24 / Max 6.28
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 2019-10-05, Video Input: Chimera 1080p. FPS, More Is Better.
Run 1: 27.70 | Run 2: 27.63 | Run 3: 27.94 (SE +/- 0.16, N = 3)
Run 1 samples: Min 27.39 / Avg 27.7 / Max 27.91
1. (CXX) g++ options: -O3 -lpthread

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC, Test: 2D Function Plotting, 1000 Times. Seconds, Fewer Is Better.
Run 1: 2959.77 | Run 2: 2942.76 | Run 3: 2975.57 (SE +/- 2.49, N = 3)
Run 1 samples: Min 2955.86 / Avg 2959.77 / Max 2964.4
1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Rotate. Iterations Per Minute, More Is Better.
Run 1: 638 | Run 2: 640 | Run 3: 633 (SE +/- 2.89, N = 3)
Run 1 samples: Min 633 / Avg 638 / Max 643
1. (CC) gcc options: -fopenmp -O2 -pthread -ljpeg -lxml2 -lz -lm -lpthread

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1, Total Time - 4K, 16 Rays Per Pixel. Seconds, Fewer Is Better.
Run 1: 18.53 | Run 2: 18.64 | Run 3: 18.73 (SE +/- 0.02, N = 3)
Run 1 samples: Min 18.49 / Avg 18.53 / Max 18.57
1. (CC) gcc options: -lm -lpthread -O3
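The hot loop of a raytracer like C-Ray is the ray/primitive intersection test, evaluated millions of times per frame. A minimal ray-sphere version of that test (an illustrative sketch, not C-Ray's actual code):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along a normalized ray to the nearest sphere hit, or None.
    Solves the quadratic |origin + t*direction - center|^2 = radius^2."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c  # a == 1 for a normalized direction
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1))  # → 4.0
```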

Gzip Compression

This test measures the time needed to archive/compress two copies of the Linux 4.13 kernel source tree using Gzip compression. Learn more via the OpenBenchmarking.org test page.

Gzip Compression, Linux Source Tree Archiving To .tar.gz. Seconds, Fewer Is Better.
Run 1: 49.93 | Run 2: 50.47 | Run 3: 50.23 (SE +/- 0.21, N = 3)
Run 1 samples: Min 49.7 / Avg 49.93 / Max 50.36
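The same archive-then-compress pattern can be timed in miniature with Python's stdlib gzip module (a rough stand-in for compressing a kernel source tree, using a small synthetic payload):

```python
import gzip
import time

def time_gzip(payload: bytes, level: int = 6):
    """Time one gzip compression pass and report the compression ratio."""
    start = time.perf_counter()
    compressed = gzip.compress(payload, compresslevel=level)
    elapsed = time.perf_counter() - start
    return elapsed, len(payload) / len(compressed)

data = b"the quick brown fox jumps over the lazy dog\n" * 10000
secs, ratio = time_gzip(data)
print(f"{secs:.4f}s, ratio {ratio:.1f}x")
```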

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6, Total Time. Seconds, Fewer Is Better.
Run 1: 43.43 | Run 2: 43.24 | Run 3: 42.97 (SE +/- 0.09, N = 3)
Run 1 samples: Min 43.3 / Avg 43.43 / Max 43.59
1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU. ms, Fewer Is Better.
Run 1: 13101.1 (MIN 12981.5) | Run 2: 13011.5 (MIN 12949.8) | Run 3: 13147.2 (MIN 13080.1); SE +/- 46.19, N = 3
Run 1 samples: Min 13037.8 / Avg 13101.07 / Max 13191
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0, Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU. ms, Fewer Is Better.
Run 1: 40.83 (MIN 36.3) | Run 2: 40.65 (MIN 36.16) | Run 3: 41.08 (MIN 36.91); SE +/- 0.04, N = 3
Run 1 samples: Min 40.75 / Avg 40.83 / Max 40.89
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU. ms, Fewer Is Better.
Run 1: 55.30 (MIN 53.16) | Run 2: 55.24 (MIN 53.33) | Run 3: 55.80 (MIN 53.72); SE +/- 0.03, N = 3
Run 1 samples: Min 55.24 / Avg 55.29 / Max 55.35
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU. ms, Fewer Is Better.
Run 1: 9.16733 (MIN 8.32) | Run 2: 9.07736 (MIN 8.26) | Run 3: 9.16306 (MIN 8.36); SE +/- 0.00387, N = 3
Run 1 samples: Min 9.16 / Avg 9.17 / Max 9.17
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread
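benchdnn's matmul harness times heavily optimized batched matrix multiplies; stripped of all blocking, vectorization, and quantization, the operation being timed is at its core just this:

```python
def matmul(a, b):
    """Naive O(n^3) matrix multiply over row-major lists of lists."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

Libraries like oneDNN win by reordering exactly this triple loop for cache locality and SIMD, which is what the harness above measures.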

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, Twofish-XTS 256b Encryption. MiB/s, More Is Better.
Run 1: 132.6 | Run 2: 133.8 | Run 3: 133.9 (SE +/- 1.27, N = 3)
Run 1 samples: Min 130.1 / Avg 132.63 / Max 133.9

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Scale. Seconds, Fewer Is Better.
Run 1: 12.64 | Run 2: 12.54 | Run 3: 12.52 (SE +/- 0.01, N = 3)
Run 1 samples: Min 12.63 / Avg 12.64 / Max 12.65
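As a toy counterpart of GEGL's scale operation (which uses proper resampling filters rather than nearest-neighbour sampling), a minimal rescale over a row-major pixel grid:

```python
def scale_nn(img, new_w, new_h):
    """Nearest-neighbour rescale of a 2D pixel grid (list of rows)."""
    h, w = len(img), len(img[0])
    # Map each output pixel back to its source pixel by integer division.
    return [[img[y * h // new_h][x * w // new_w] for x in range(new_w)]
            for y in range(new_h)]

img = [[0, 1], [2, 3]]
print(scale_nn(img, 4, 4))  # → [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
```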

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU. ms, Fewer Is Better.
Run 1: 33.20 (MIN 30.98) | Run 2: 33.42 (MIN 31.15) | Run 3: 33.11 (MIN 31.11); SE +/- 0.21, N = 3
Run 1 samples: Min 32.78 / Avg 33.2 / Max 33.45
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13, Time To Compile. Seconds, Fewer Is Better.
Run 1: 135.40 | Run 2: 135.60 | Run 3: 136.68 (SE +/- 0.32, N = 3)
Run 1 samples: Min 134.8 / Avg 135.4 / Max 135.89

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis. Seconds, Fewer Is Better.
Run 1: 54.41 | Run 2: 54.92 | Run 3: 54.79 (SE +/- 0.23, N = 4)
Run 1 samples: Min 53.75 / Avg 54.41 / Max 54.81
1. (CC) gcc options: -O2 -std=c99

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU. ms, Fewer Is Better.
Run 1: 13142.0 (MIN 13063.1) | Run 2: 13133.1 (MIN 13074.3) | Run 3: 13254.5 (MIN 13172.7); SE +/- 13.23, N = 3
Run 1 samples: Min 13119.5 / Avg 13141.97 / Max 13165.3
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 2019-10-05, Video Input: Summer Nature 1080p. FPS, More Is Better.
Run 1: 42.27 | Run 2: 42.66 | Run 3: 42.43 (SE +/- 0.06, N = 3)
Run 1 samples: Min 42.21 / Avg 42.27 / Max 42.4
1. (CXX) g++ options: -O3 -lpthread

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark. Figure Of Merit, More Is Better.
Run 1: 3472681 | Run 2: 3486278 | Run 3: 3504642 (SE +/- 14205.58, N = 3)
Run 1 samples: Min 3455214 / Avg 3472681 / Max 3500820
1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Encryption (MiB/s; more is better)
  Run 1: 2473.0 | Run 2: 2488.7 | Run 3: 2483.2
  Run 1 detail: SE +/- 21.68, N = 3 (Min: 2429.7 / Avg: 2472.97 / Max: 2497)

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second; more is better)
  Run 1: 2.32 | Run 2: 2.33 | Run 3: 2.31
  Run 1 detail: SE +/- 0.00, N = 3 (Min: 2.32 / Avg: 2.32 / Max: 2.33)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms; fewer is better)
  Run 1: 471.08 | Run 2: 470.66 | Run 3: 474.54
  Run 1 detail: SE +/- 0.27, N = 3 (Min: 470.56 / Avg: 471.08 / Max: 471.48)
  Per-run MIN / MAX: 467.86 / 482.63 | 467.4 / 665.69 | 471.94 / 484.37
  Compiler: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s; more is better)
  Run 1: 1.23 | Run 2: 1.24 | Run 3: 1.24
  Run 1 detail: SE +/- 0.00, N = 3 (Min: 1.23 / Avg: 1.23 / Max: 1.24)
  Compiler: (CXX) g++ options: -O3 -pthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 256b Encryption (MiB/s; more is better)
  Run 1: 62.2 | Run 2: 62.7 | Run 3: 62.7
  Run 1 detail: SE +/- 0.53, N = 3 (Min: 61.1 / Avg: 62.17 / Max: 62.7)

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec; more is better)
  Run 1: 18.82 | Run 2: 18.76 | Run 3: 18.91
  Run 1 detail: SE +/- 0.04, N = 3 (Min: 18.75 / Avg: 18.82 / Max: 18.89)
  Compiler: (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Smallpt

Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

Smallpt 1.0 - Global Illumination Renderer; 128 Samples (Seconds; fewer is better)
  Run 1: 4.417 | Run 2: 4.421 | Run 3: 4.451
  Run 1 detail: SE +/- 0.051, N = 3 (Min: 4.32 / Avg: 4.42 / Max: 4.49)
  Compiler: (CXX) g++ options: -fopenmp -O3

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second; more is better)
  Run 1: 18.41 | Run 2: 18.27 | Run 3: 18.37
  Run 1 detail: SE +/- 0.09, N = 3 (Min: 18.23 / Avg: 18.41 / Max: 18.55)
  Compiler: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds; fewer is better)
  Run 1: 1.34 | Run 2: 1.33 | Run 3: 1.33
  Run 1 detail: SE +/- 0.00, N = 3 (Min: 1.33 / Avg: 1.34 / Max: 1.34)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.8.2 - Speed: Speed 5 (Frames Per Second; more is better)
  Run 1: 2.71 | Run 2: 2.69 | Run 3: 2.71
  Run 1 detail: SE +/- 0.01, N = 3 (Min: 2.69 / Avg: 2.71 / Max: 2.73)
  Compiler: (CXX) g++ options: -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6 - WAV To APE (Seconds; fewer is better)
  Run 1: 21.67 | Run 2: 21.80 | Run 3: 21.83
  Run 1 detail: SE +/- 0.05, N = 5 (Min: 21.51 / Avg: 21.67 / Max: 21.85)
  Compiler: (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds; fewer is better)
  Run 1: 1.39 | Run 2: 1.38 | Run 3: 1.39
  Run 1 detail: SE +/- 0.00, N = 3 (Min: 1.39 / Avg: 1.39 / Max: 1.39)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute; more is better)
  Run 1: 417 | Run 2: 417 | Run 3: 414
  Run 1 detail: SE +/- 0.58, N = 3 (Min: 416 / Avg: 417 / Max: 418)
  Compiler: (CC) gcc options: -fopenmp -O2 -pthread -ljpeg -lxml2 -lz -lm -lpthread

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds; fewer is better)
  Run 1: 55.54 | Run 2: 55.27 | Run 3: 55.67
  Run 1 detail: SE +/- 0.15, N = 3 (Min: 55.24 / Avg: 55.54 / Max: 55.75)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: ETC1S (Seconds; fewer is better)
  Run 1: 78.75 | Run 2: 79.29 | Run 3: 78.98
  Run 1 detail: SE +/- 0.19, N = 3 (Min: 78.36 / Avg: 78.75 / Max: 78.95)
  Compiler: (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

BYTE Unix Benchmark

This is a test of BYTE. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS; more is better)
  Run 1: 27089795.2 | Run 2: 27261721.5 | Run 3: 27275695.0
  Run 1 detail: SE +/- 95417.01, N = 3 (Min: 26989589.7 / Avg: 27089795.17 / Max: 27280547.7)
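Dhrystone results in loops per second are often normalized to DMIPS by dividing by 1757, the Dhrystone rate of the VAX 11/780 reference machine. That conversion is a common convention rather than something this report performs; applied to Run 1 it gives roughly:

```python
# Run 1 Dhrystone 2 result from this report, in loops per second (LPS).
lps = 27089795.2

VAX_11_780_LPS = 1757  # conventional 1-DMIPS reference rate
dmips = lps / VAX_11_780_LPS

print(round(dmips))  # ~15418 DMIPS for this 44-core / 176-thread POWER9
```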

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

Tesseract OCR 4.1.1 - Time To OCR 7 Images (Seconds; fewer is better)
  Run 1: 46.53 | Run 2: 46.64 | Run 3: 46.33
  Run 1 detail: SE +/- 0.43, N = 3 (Min: 46.04 / Avg: 46.53 / Max: 47.39)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second; more is better)
  Run 1: 43.25 | Run 2: 42.96 | Run 3: 43.18
  Run 1 detail: SE +/- 0.05, N = 3 (Min: 43.17 / Avg: 43.25 / Max: 43.34)
  Compiler: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender 2.83.5 - Blend File: Barbershop - Compute: CPU-Only (Seconds; fewer is better)
  Run 1: 424.42 | Run 2: 426.94 | Run 3: 427.26
  Run 1 detail: SE +/- 0.60, N = 3 (Min: 423.26 / Avg: 424.42 / Max: 425.27)

TTSIOD 3D Renderer

A portable GPL 3D software renderer that supports OpenMP and Intel Threading Building Blocks with many different rendering modes. This version does not use OpenGL but is entirely CPU/software based. Learn more via the OpenBenchmarking.org test page.

TTSIOD 3D Renderer 2.3b - Phong Rendering With Soft-Shadow Mapping (FPS; more is better)
  Run 1: 782.07 | Run 2: 776.94 | Run 3: 777.73
  Run 1 detail: SE +/- 1.17, N = 3 (Min: 780.58 / Avg: 782.07 / Max: 784.38)
  Compiler: (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -lSDL -fopenmp -fwhole-program -lstdc++

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Run 1: 10.35 | Run 2: 10.32 | Run 3: 10.39
  Run 1 detail: SE +/- 0.01, N = 3 (Min: 10.32 / Avg: 10.35 / Max: 10.37)
  Per-run MIN: 9.81 / 9.88 / 9.93
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds; fewer is better)
  Run 1: 327 | Run 2: 326 | Run 3: 328

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 4K (FPS; more is better)
  Run 1: 89.15 | Run 2: 89.47 | Run 3: 89.69
  Run 1 detail: SE +/- 0.17, N = 3 (Min: 88.83 / Avg: 89.15 / Max: 89.42)
  Per-run MIN / MAX: 32.47 / 102.49 | 32.82 / 101.78 | 33.55 / 102.51
  Compiler: (CC) gcc options: -pthread

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds; fewer is better)
  Run 1: 38.31 | Run 2: 38.11 | Run 3: 38.08
  Run 1 detail: SE +/- 0.12, N = 3 (Min: 38.08 / Avg: 38.31 / Max: 38.47)
  Compiler: (CC) gcc options: -O2 -pedantic -fvisibility=hidden

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Encryption (MiB/s; more is better)
  Run 1: 2098.1 | Run 2: 2096.9 | Run 3: 2087.8
  Run 1 detail: SE +/- 0.60, N = 3 (Min: 2096.9 / Avg: 2098.1 / Max: 2098.7)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds; fewer is better)
  Run 1: 170.62 | Run 2: 170.85 | Run 3: 171.61
  Run 1 detail: SE +/- 0.17, N = 3 (Min: 170.4 / Avg: 170.62 / Max: 170.96)
  Compiler: (CC) gcc options: -O2 -ldl -lz -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds; fewer is better)
  Run 1: 51.9 | Run 2: 51.6 | Run 3: 51.8
  Run 1 detail: SE +/- 0.00, N = 3 (Min: 51.9 / Avg: 51.9 / Max: 51.9)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit (FPS; more is better)
  Run 1: 109.32 | Run 2: 109.23 | Run 3: 108.70
  Run 1 detail: SE +/- 1.08, N = 3 (Min: 107.41 / Avg: 109.32 / Max: 111.13)
  Per-run MIN / MAX: 78.72 / 166.47 | 79.84 / 157.85 | 78.6 / 162.6
  Compiler: (CC) gcc options: -pthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is supported. This system/blender test profile makes use of the system-supplied Blender. Use pts/blender if wishing to stick to a fixed version of Blender. Learn more via the OpenBenchmarking.org test page.

Blender 2.83.5 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds; fewer is better)
  Run 1: 356.47 | Run 2: 358.10 | Run 3: 356.07
  Run 1 detail: SE +/- 0.50, N = 3 (Min: 355.6 / Avg: 356.47 / Max: 357.32)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second; more is better)
  Run 1: 7.07 | Run 2: 7.03 | Run 3: 7.05
  Run 1 detail: SE +/- 0.02, N = 3 (Min: 7.04 / Avg: 7.07 / Max: 7.1)
  Compiler: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
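The three Kvazaar 1080p presets in this report trade encode speed for compression efficiency, as encoder presets generally do. From the Run 1 figures (ultra fast 43.25 FPS, very fast 18.41 FPS, medium 7.07 FPS), ultra fast delivers about 6.1x and very fast about 2.6x the medium preset's throughput:

```python
# Kvazaar Bosphorus 1080p Run 1 results from this report (FPS).
presets = {"ultra fast": 43.25, "very fast": 18.41, "medium": 7.07}

baseline = presets["medium"]
for name, fps in presets.items():
    # Throughput relative to the slowest (medium) preset.
    print(f"{name}: {fps / baseline:.2f}x medium")
```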

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 (Seconds; fewer is better)
  Run 1: 18.54 | Run 2: 18.44 | Run 3: 18.54
  Run 1 detail: SE +/- 0.09, N = 3 (Min: 18.45 / Avg: 18.54 / Max: 18.72)
  Compiler: (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 3D Elevated Function In Random Colors, 100 Times (Seconds; fewer is better)
  Run 1: 86.90 | Run 2: 87.39 | Run 3: 87.16
  Run 1 detail: SE +/- 0.11, N = 3 (Min: 86.7 / Avg: 86.9 / Max: 87.05)
  Note: Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Decryption (MiB/s; more is better)
  Run 1: 2100.1 | Run 2: 2098.4 | Run 3: 2089.5
  Run 1 detail: SE +/- 1.32, N = 3 (Min: 2097.6 / Avg: 2100.07 / Max: 2102.1)

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds; fewer is better)
  Run 1: 44.89 | Run 2: 44.72 | Run 3: 44.97
  Run 1 detail: SE +/- 0.04, N = 3 (Min: 44.83 / Avg: 44.89 / Max: 44.95)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Rotate 90 Degrees (Seconds; fewer is better)
  Run 1: 77.41 | Run 2: 77.59 | Run 3: 77.83
  Run 1 detail: SE +/- 0.20, N = 3 (Min: 77.11 / Avg: 77.41 / Max: 77.78)

Timed ImageMagick Compilation

This test times how long it takes to build ImageMagick. Learn more via the OpenBenchmarking.org test page.

Timed ImageMagick Compilation 6.9.0 - Time To Compile (Seconds; fewer is better)
  Run 1: 29.04 | Run 2: 28.95 | Run 3: 29.10
  Run 1 detail: SE +/- 0.01, N = 3 (Min: 29.01 / Avg: 29.04 / Max: 29.06)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Tile Glass (Seconds; fewer is better)
  Run 1: 59.86 | Run 2: 59.87 | Run 3: 60.17
  Run 1 detail: SE +/- 0.04, N = 3 (Min: 59.8 / Avg: 59.86 / Max: 59.93)

GEGL - Operation: Wavelet Blur (Seconds; fewer is better)
  Run 1: 106.16 | Run 2: 106.64 | Run 3: 106.70
  Run 1 detail: SE +/- 0.17, N = 3 (Min: 105.85 / Avg: 106.16 / Max: 106.43)

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score; more is better)
  Run 1: 167.76 | Run 2: 168.59 | Run 3: 168.52
  Run 1 detail: SE +/- 0.26, N = 3 (Min: 167.49 / Avg: 167.76 / Max: 168.27)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 6 (Frames Per Second; more is better)
  Run 1: 0.209 | Run 2: 0.208 | Run 3: 0.209
  Run 1 detail: SE +/- 0.000, N = 3 (Min: 0.21 / Avg: 0.21 / Max: 0.21)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Decryption (MiB/s; more is better)
  Run 1: 2496.2 | Run 2: 2494.0 | Run 3: 2486.0
  Run 1 detail: SE +/- 2.11, N = 3 (Min: 2492.6 / Avg: 2496.23 / Max: 2499.9)

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 7.4.2 - Time To Compile (Seconds; fewer is better)
  Run 1: 86.39 | Run 2: 86.05 | Run 3: 86.35
  Run 1 detail: SE +/- 0.14, N = 3 (Min: 86.19 / Avg: 86.39 / Max: 86.66)

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds; fewer is better)
  Run 1: 83.60 | Run 2: 83.68 | Run 3: 83.36
  Run 1 detail: SE +/- 0.11, N = 3 (Min: 83.39 / Avg: 83.6 / Max: 83.76)
  Note: RawTherapee, version 5.8, command line.

Git

This test measures the time needed to carry out some sample Git operations on an example static repository, a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

Git - Time To Complete Common Git Commands (Seconds; fewer is better)
  Run 1: 92.52 | Run 2: 92.17 | Run 3: 92.25
  Run 1 detail: SE +/- 0.19, N = 3 (Min: 92.22 / Avg: 92.52 / Max: 92.88)
  Note: git version 2.27.0

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Run 1: 25.61 | Run 2: 25.52 | Run 3: 25.58
  Run 1 detail: SE +/- 0.10, N = 3 (Min: 25.45 / Avg: 25.61 / Max: 25.78)
  Per-run MIN: 24.97 / 24.94 / 25.08
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Run 1: 103.62 | Run 2: 103.39 | Run 3: 103.75
  Run 1 detail: SE +/- 0.31, N = 3 (Min: 103.02 / Avg: 103.62 / Max: 104.02)
  Per-run MIN: 98.18 / 98.31 / 99.91
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -mcpu=native -fPIC -pie -lpthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Antialias (Seconds; fewer is better)
  Run 1: 70.75 | Run 2: 70.98 | Run 3: 70.84
  Run 1 detail: SE +/- 0.03, N = 3 (Min: 70.69 / Avg: 70.75 / Max: 70.79)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute; more is better)
  Run 1: 1203 | Run 2: 1205 | Run 3: 1201
  Run 1 detail: SE +/- 5.70, N = 3 (Min: 1192 / Avg: 1203.33 / Max: 1210)
  Compiler: (CC) gcc options: -fopenmp -O2 -pthread -ljpeg -lxml2 -lz -lm -lpthread

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds; fewer is better)
  Run 1: 128.76 | Run 2: 129.06 | Run 3: 129.19
  Run 1 detail: SE +/- 0.14, N = 5
  Compiler: (CXX) g++ options: -rdynamic