Rocket Lake Core i9 11900K

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2103298-IB-ROCKETLAK21
The tests in this result file fall within the following categories:

Audio Encoding 5 Tests
AV1 4 Tests
Bioinformatics 4 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests 2 Tests
C++ Boost Tests 6 Tests
Web Browsers 1 Tests
CAD 2 Tests
Chess Test Suite 7 Tests
Timed Code Compilation 14 Tests
C/C++ Compiler Tests 34 Tests
Compression Tests 6 Tests
CPU Massive 60 Tests
Creator Workloads 63 Tests
Cryptography 6 Tests
Encoding 15 Tests
Finance 2 Tests
Fortran Tests 8 Tests
Game Development 8 Tests
HPC - High Performance Computing 31 Tests
Imaging 16 Tests
Java 2 Tests
Common Kernel Benchmarks 4 Tests
Linear Algebra 2 Tests
Machine Learning 12 Tests
Molecular Dynamics 6 Tests
MPI Benchmarks 5 Tests
Multi-Core 61 Tests
NVIDIA GPU Compute 12 Tests
OCR 2 Tests
Intel oneAPI 5 Tests
OpenCL 3 Tests
OpenCV Tests 2 Tests
OpenMPI Tests 11 Tests
Productivity 5 Tests
Programmer / Developer System Benchmarks 24 Tests
Python 4 Tests
Raytracing 6 Tests
Renderers 14 Tests
Scientific Computing 16 Tests
Software Defined Radio 4 Tests
Server 4 Tests
Server CPU Tests 36 Tests
Single-Threaded 20 Tests
Speech 4 Tests
Telephony 4 Tests
Texture Compression 3 Tests
Video Encoding 10 Tests
Common Workstation Benchmarks 8 Tests

Test Runs

Result Identifier    Date Run          Test Duration
Enabled              March 26 2021     21 Hours, 40 Minutes
Repeat               March 27 2021     21 Hours, 36 Minutes
Run                  March 28 2021     21 Hours, 29 Minutes


Rocket Lake Core i9 11900K Benchmarks - OpenBenchmarking.org / Phoronix Test Suite

Processor:         Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads)
Motherboard:       ASUS ROG MAXIMUS XIII HERO (0610 BIOS)
Chipset:           Intel Tiger Lake-H
Memory:            32GB
Disk:              1000GB Western Digital WD_BLACK SN850 1TB + 2000GB
Graphics:          AMD Radeon RX 6800/6800 XT / 6900 16GB (2575/1000MHz)
Audio:             Intel Tiger Lake-H HD Audio
Monitor:           ASUS MG28U
Network:           2 x Intel I225-V + Intel Device 2725
OS:                Ubuntu 21.04
Kernel:            5.12.0-051200rc3daily20210315-generic (x86_64) 20210314
Desktop:           GNOME Shell 3.38.3
Display Server:    X Server 1.20.10 + Wayland
OpenGL:            4.6 Mesa 21.1.0-devel (git-616720d 2021-03-16 hirsute-oibaf-ppa) (LLVM 12.0.0)
Compiler:          GCC 10.2.1 20210312
File-System:       ext4
Screen Resolution: 3840x2160

System Logs
- Kernel: Transparent Huge Pages: madvise
- Environment: DEBUGINFOD_URLS=
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-p9aljy/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-p9aljy/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Processor: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x39 - Thermald 2.4.3
- Java: OpenJDK Runtime Environment (build 11.0.11-ea+4-Ubuntu-0ubuntu2)
- Python: Python 3.9.2
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview - Enabled / Repeat / Run (Phoronix Test Suite / OpenBenchmarking.org)

Detailed per-test result table (Enabled / Repeat / Run). Individual benchmark results are presented below; the complete data set is available from the OpenBenchmarking.org result file 2103298-IB-ROCKETLAK21.

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.
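
The measured kernel is a dense double-precision matrix-matrix multiply (DGEMM) parallelized with OpenMP. As a rough sketch of what such a workload looks like in C++ (illustrative only, not the benchmark's actual source; the matrix size N is an arbitrary placeholder):

    // Naive OpenMP DGEMM sketch: C = A * B for N x N matrices (illustrative only).
    #include <cstdio>
    #include <vector>

    int main() {
        const int N = 1024;                                // placeholder problem size
        std::vector<double> A(N * N, 1.0), B(N * N, 2.0), C(N * N, 0.0);

        #pragma omp parallel for                           // spread rows across threads
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                double sum = 0.0;
                for (int k = 0; k < N; ++k)
                    sum += A[i * N + k] * B[k * N + j];
                C[i * N + j] = sum;
            }

        // A benchmark would time the loop above and report 2*N^3 / time as FLOP/s.
        std::printf("C[0] = %f\n", C[0]);
        return 0;
    }

Built with something like gcc -O3 -march=native -fopenmp, matching the compiler flags reported for this test.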

ACES DGEMM 1.0 (GFLOP/s, more is better)
  Sustained Floating-Point Rate
  Enabled: 2.482611  (SE +/- 0.032867, N = 15; min 2.27 / max 2.73)
  Repeat:  2.677930  (SE +/- 0.035851, N = 3; min 2.61 / max 2.73)
  Run:     2.294790  (SE +/- 0.028308, N = 4; min 2.21 / max 2.34)
  1. (CC) gcc options: -O3 -march=native -fopenmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
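
The harnesses below (IP Shapes, Recurrent Neural Network, Deconvolution, and so on) are benchdnn problem sets driven through oneDNN's primitives. For orientation, a minimal sketch of creating and executing one f32 inner-product ("IP") primitive with the oneDNN 2.x C++ API follows; the tensor shapes are arbitrary placeholders rather than the benchdnn problem sizes:

    // Sketch of a single f32 inner-product primitive with the oneDNN 2.x C++ API.
    // Shapes are placeholders, not the actual "IP Shapes" benchdnn problems.
    #include "dnnl.hpp"
    using namespace dnnl;

    int main() {
        engine eng(engine::kind::cpu, 0);
        stream strm(eng);

        memory::desc src_md({128, 256}, memory::data_type::f32, memory::format_tag::nc);
        memory::desc wei_md({512, 256}, memory::data_type::f32, memory::format_tag::oi);
        memory::desc dst_md({128, 512}, memory::data_type::f32, memory::format_tag::nc);
        memory src(src_md, eng), wei(wei_md, eng), dst(dst_md, eng);

        inner_product_forward::desc ip_d(prop_kind::forward_inference, src_md, wei_md, dst_md);
        inner_product_forward::primitive_desc ip_pd(ip_d, eng);

        inner_product_forward(ip_pd).execute(strm, {{DNNL_ARG_SRC, src},
                                                    {DNNL_ARG_WEIGHTS, wei},
                                                    {DNNL_ARG_DST, dst}});
        strm.wait();   // benchdnn repeats such executions and reports the total time
        return 0;
    }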

oneDNN 2.1.2 (ms, fewer is better)
  Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
  Enabled: 10.57120  (SE +/- 0.00888, N = 3; min 10.56 / max 10.59)
  Repeat:   9.49268  (SE +/- 0.00778, N = 3; min 9.48 / max 9.51)
  Run:     10.56970  (SE +/- 0.00258, N = 3; min 10.56 / max 10.57)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 (ms, fewer is better)
  Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
  Enabled: 4.60823  (SE +/- 0.03141, N = 3; min 4.56 / max 4.67)
  Repeat:  4.38277  (SE +/- 0.01779, N = 3; min 4.36 / max 4.42)
  Run:     4.64952  (SE +/- 0.03164, N = 3; min 4.61 / max 4.71)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 (Frames Per Second, more is better)
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K
  Enabled: 42.80  (SE +/- 0.06, N = 3; min 42.67 / max 42.88)
  Repeat:  43.24  (SE +/- 0.21, N = 3; min 42.82 / max 43.48)
  Run:     40.83  (SE +/- 0.43, N = 15; min 38.19 / max 42.22)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 (Items / Sec, more is better)
  Benchmark: vklBenchmarkStructuredVolume
  Enabled: 74635152  (SE +/- 576939.06, N = 15; min 70297862 / max 77516698)
  Repeat:  72560377  (SE +/- 357980.91, N = 3; min 71927877 / max 73167159)
  Run:     76029329  (SE +/- 1052274.46, N = 3; min 74023929 / max 77584861)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 (Bogo Ops/s, more is better)
  Test: Context Switching
  Enabled: 5101477.04  (SE +/- 56528.36, N = 3; min 4988478.61 / max 5161120.01)
  Repeat:  5095907.11  (SE +/- 63682.03, N = 3; min 5019146.8 / max 5222304.85)
  Run:     4881747.94  (SE +/- 58907.92, N = 4; min 4763897.23 / max 5038805.7)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lsctp -lz -ldl -lpthread -lc

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

CloverLeaf (Seconds, fewer is better)
  Lagrangian-Eulerian Hydrodynamics
  Enabled: 159.78  (SE +/- 0.09, N = 3; min 159.62 / max 159.95)
  Repeat:  159.40  (SE +/- 0.22, N = 3; min 159.16 / max 159.84)
  Run:     166.03  (SE +/- 0.07, N = 3; min 165.95 / max 166.16)
  1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.
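
The dGEMM results that follow correspond to double-precision matrix products with the operands optionally transposed (NN, NT, TN, TT). With ViennaCL's CPU backend, that operation reduces to something like the following sketch (the matrix size is a placeholder, not the benchmark's):

    // Double-precision matrix product (the dGEMM case) with ViennaCL; sizes are placeholders.
    #include "viennacl/matrix.hpp"
    #include "viennacl/linalg/prod.hpp"

    int main() {
        const std::size_t N = 1024;
        viennacl::matrix<double> A(N, N), B(N, N), C(N, N);

        C = viennacl::linalg::prod(A, B);                     // dGEMM-NN
        C = viennacl::linalg::prod(A, viennacl::trans(B));    // dGEMM-NT
        return 0;
    }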

ViennaCL 1.7.1 (GFLOPs/s, more is better)
  Test: CPU BLAS - dGEMM-NN
  Enabled: 45.2  (SE +/- 1.10, N = 3; min 43.2 / max 47)
  Repeat:  46.8  (SE +/- 0.19, N = 3; min 46.4 / max 47)
  Run:     47.0  (SE +/- 0.03, N = 3; min 46.9 / max 47)
  1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
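
Each NCNN result below is the per-inference latency of one model on the CPU. Loading and running a model through NCNN's C++ API follows this general pattern; the file names and blob names here are placeholders that vary per model:

    // General NCNN inference pattern (file names and blob names are placeholders).
    #include "net.h"   // ncnn

    int main() {
        ncnn::Net net;
        net.opt.num_threads = 16;                 // use all hardware threads
        net.load_param("model.param");            // network structure
        net.load_model("model.bin");              // weights

        ncnn::Mat in(224, 224, 3);                // dummy 224x224 3-channel input
        in.fill(0.5f);

        ncnn::Extractor ex = net.create_extractor();
        ex.input("data", in);                     // input blob name (model-specific)

        ncnn::Mat out;
        ex.extract("prob", out);                  // output blob name (model-specific)
        return 0;
    }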

NCNN 20201218 (ms, fewer is better)
  Target: CPU - Model: googlenet
  Enabled: 12.16  (SE +/- 0.36, N = 3; min 11.79 / max 12.88)
  Repeat:  12.15  (SE +/- 0.34, N = 4; min 11.79 / max 13.18)
  Run:     12.60  (SE +/- 0.39, N = 3; min 11.83 / max 13.08)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 (ms, fewer is better)
  Target: CPU - Model: blazeface
  Enabled: 1.35  (SE +/- 0.04, N = 3; min 1.29 / max 1.43)
  Repeat:  1.35  (SE +/- 0.03, N = 4; min 1.31 / max 1.45)
  Run:     1.40  (SE +/- 0.04, N = 3; min 1.32 / max 1.46)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 (ms, fewer is better)
  Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
  Enabled: 2.86126  (SE +/- 0.00634, N = 3; min 2.85 / max 2.87)
  Repeat:  2.77561  (SE +/- 0.01085, N = 3; min 2.75 / max 2.79)
  Run:     2.87200  (SE +/- 0.00257, N = 3; min 2.87 / max 2.88)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 (ms, fewer is better)
  Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
  Enabled: 3091.74  (SE +/- 2.69, N = 3; min 3086.41 / max 3095.07)
  Repeat:  2988.39  (SE +/- 4.19, N = 3; min 2980.02 / max 2992.99)
  Run:     3066.19  (SE +/- 3.51, N = 3; min 3060.92 / max 3072.84)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 (ms, fewer is better)
  Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
  Enabled: 3092.81  (SE +/- 1.99, N = 3; min 3088.96 / max 3095.58)
  Repeat:  2998.59  (SE +/- 6.46, N = 3; min 2987.52 / max 3009.9)
  Run:     3075.31  (SE +/- 17.72, N = 3; min 3054.15 / max 3110.52)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
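
The Rotate operation measured below maps onto GraphicsMagick's Magick++ C++ API roughly as follows (a sketch only; the file names are placeholders):

    // Rough Magick++ sketch of the kind of operation timed by the Rotate test.
    #include <Magick++.h>

    int main(int argc, char** argv) {
        Magick::InitializeMagick(argv[0]);

        Magick::Image image("sample_6000x4000.jpg");   // placeholder input image
        image.rotate(90);                              // rotate by 90 degrees
        image.write("rotated.jpg");
        return 0;
    }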

GraphicsMagick 1.3.33 (Iterations Per Minute, more is better)
  Operation: Rotate
  Enabled: 1073  (SE +/- 6.00, N = 3; min 1067 / max 1085)
  Repeat:  1064  (SE +/- 4.10, N = 3; min 1056 / max 1070)
  Run:     1041  (SE +/- 1.20, N = 3; min 1039 / max 1043)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 (ms, fewer is better)
  Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
  Enabled: 1816.65  (SE +/- 1.95, N = 3; min 1812.78 / max 1818.91)
  Repeat:  1762.48  (SE +/- 4.58, N = 3; min 1754.31 / max 1770.16)
  Run:     1802.00  (SE +/- 3.82, N = 3; min 1795.19 / max 1808.42)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium (Score, more is better)
  Benchmark: Jetstream 2 - Browser: Firefox
  Enabled: 102.78  (SE +/- 0.91, N = 3; min 100.99 / max 103.99)
  Repeat:   99.74  (SE +/- 0.71, N = 3; min 98.8 / max 101.13)
  Run:     100.00  (SE +/- 0.45, N = 3; min 99.22 / max 100.79)
  1. firefox 86.0

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 (ms, fewer is better)
  Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
  Enabled: 3092.80  (SE +/- 1.66, N = 3; min 3090.26 / max 3095.92)
  Repeat:  3001.60  (SE +/- 5.44, N = 3; min 2993.91 / max 3012.11)
  Run:     3066.56  (SE +/- 5.99, N = 3; min 3059.17 / max 3078.42)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Swet

Swet is a synthetic CPU/RAM benchmark that includes multi-processor test cases. Learn more via the OpenBenchmarking.org test page.

Swet 1.5.16 (Operations Per Second, more is better)
  Average
  Enabled: 961886531  (SE +/- 10585074.03, N = 3; min 950804877 / max 983048788)
  Repeat:  990578345  (SE +/- 8655985.35, N = 8; min 946845873 / max 1019976243)
  Run:     974599621  (SE +/- 7469192.19, N = 15; min 913419477 / max 1006208333)
  1. (CC) gcc options: -lm -lpthread -lcurses -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 (ms, fewer is better)
  Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
  Enabled: 1814.66  (SE +/- 2.19, N = 3; min 1810.5 / max 1817.93)
  Repeat:  1762.42  (SE +/- 1.53, N = 3; min 1759.46 / max 1764.59)
  Run:     1800.95  (SE +/- 1.75, N = 3; min 1797.99 / max 1804.05)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 (ms, fewer is better)
  Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
  Enabled: 1812.22  (SE +/- 1.39, N = 3; min 1809.46 / max 1813.93)
  Repeat:  1760.75  (SE +/- 1.82, N = 3; min 1757.58 / max 1763.9)
  Run:     1802.18  (SE +/- 3.10, N = 3; min 1799.02 / max 1808.38)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium (Runs Per Minute, more is better)
  Benchmark: Speedometer - Browser: Firefox
  Enabled: 134.0  (SE +/- 0.67, N = 3; min 133 / max 135)
  Repeat:  137.9  (SE +/- 1.09, N = 3; min 135.7 / max 139.2)
  Run:     136.7  (SE +/- 1.34, N = 3; min 134 / max 138.3)
  1. firefox 86.0

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 (Frames Per Second, more is better)
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p
  Enabled: 152.40  (SE +/- 2.05, N = 3; min 149.13 / max 156.19)
  Repeat:  156.45  (SE +/- 1.46, N = 15; min 144.79 / max 163.98)
  Run:     152.03  (SE +/- 1.88, N = 4; min 149.46 / max 157.62)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.
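
The result below is for a single-precision ("Float + SSE") 2D transform of size 4096. With FFTW's single-precision interface, that transform is planned and executed roughly like this sketch (link against -lfftw3f):

    // Sketch of a single-precision 4096x4096 complex 2D FFT with FFTW.
    #include <fftw3.h>

    int main() {
        const int N = 4096;
        fftwf_complex* in  = fftwf_alloc_complex((size_t)N * N);
        fftwf_complex* out = fftwf_alloc_complex((size_t)N * N);

        // Plan first: FFTW_MEASURE may overwrite the arrays while searching for a fast plan.
        fftwf_plan plan = fftwf_plan_dft_2d(N, N, in, out, FFTW_FORWARD, FFTW_MEASURE);

        for (size_t i = 0; i < (size_t)N * N; ++i) { in[i][0] = 1.0f; in[i][1] = 0.0f; }
        fftwf_execute(plan);                // execute the planned transform (the timed step)

        fftwf_destroy_plan(plan);
        fftwf_free(in);
        fftwf_free(out);
        return 0;
    }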

FFTW 3.3.6 (Mflops, more is better)
  Build: Float + SSE - Size: 2D FFT Size 4096
  Enabled: 26220  (SE +/- 33.51, N = 3; min 26163 / max 26279)
  Repeat:  26165  (SE +/- 336.17, N = 3; min 25763 / max 26833)
  Run:     26855  (SE +/- 300.18, N = 3; min 26517 / max 27454)
  1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 (ms, fewer is better)
  Target: CPU - Model: vgg16
  Enabled: 56.10  (SE +/- 0.10, N = 3; min 55.98 / max 56.3)
  Repeat:  56.35  (SE +/- 0.11, N = 4; min 56.02 / max 56.51)
  Run:     57.57  (SE +/- 1.26, N = 3; min 56.05 / max 60.07)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 (ms, fewer is better)
  Target: CPU - Model: resnet18
  Enabled: 12.60  (SE +/- 0.32, N = 3; min 11.96 / max 12.93)
  Repeat:  12.74  (SE +/- 0.26, N = 4; min 11.96 / max 13.02)
  Run:     12.93  (SE +/- 0.03, N = 3; min 12.87 / max 12.97)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Radiance Benchmark

This is a benchmark of NREL Radiance, a synthetic imaging system that is open-source and developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.

Radiance Benchmark 5.0 (Seconds, fewer is better)
  Test: SMP Parallel
  Enabled: 152.14
  Repeat:  148.28
  Run:     151.13

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 (ms, fewer is better)
  Target: CPU - Model: mnasnet
  Enabled: 3.50  (SE +/- 0.09, N = 3; min 3.4 / max 3.68)
  Repeat:  3.53  (SE +/- 0.11, N = 4; min 3.41 / max 3.84)
  Run:     3.59  (SE +/- 0.08, N = 3; min 3.45 / max 3.73)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 (ms, fewer is better)
  Target: CPU - Model: resnet50
  Enabled: 24.05  (SE +/- 0.50, N = 3; min 23.05 / max 24.63)
  Repeat:  24.59  (SE +/- 0.04, N = 4; min 24.49 / max 24.67)
  Run:     24.65  (SE +/- 0.05, N = 3; min 24.55 / max 24.7)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.
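
Embree itself provides only the ray-tracing kernels; the Pathtracer binaries measured below are renderers built on top of those kernels. A minimal sketch of the underlying Embree 3 API, intersecting one ray with one triangle (geometry and ray values are arbitrary):

    // Minimal Embree 3 sketch: one triangle, one ray, one intersection query.
    #include <embree3/rtcore.h>
    #include <cstdio>
    #include <limits>

    int main() {
        RTCDevice device = rtcNewDevice(nullptr);
        RTCScene scene = rtcNewScene(device);

        RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
        float* verts = (float*)rtcSetNewGeometryBuffer(
            geom, RTC_BUFFER_TYPE_VERTEX, 0, RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
        unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(
            geom, RTC_BUFFER_TYPE_INDEX, 0, RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
        const float v[9] = {0, 0, 0,  1, 0, 0,  0, 1, 0};
        for (int i = 0; i < 9; ++i) verts[i] = v[i];
        idx[0] = 0; idx[1] = 1; idx[2] = 2;
        rtcCommitGeometry(geom);
        rtcAttachGeometry(scene, geom);
        rtcReleaseGeometry(geom);
        rtcCommitScene(scene);

        RTCRayHit rh;
        rh.ray.org_x = 0.2f; rh.ray.org_y = 0.2f; rh.ray.org_z = -1.0f;
        rh.ray.dir_x = 0.0f; rh.ray.dir_y = 0.0f; rh.ray.dir_z = 1.0f;
        rh.ray.tnear = 0.0f; rh.ray.tfar = std::numeric_limits<float>::infinity();
        rh.ray.mask = 0xFFFFFFFFu; rh.ray.flags = 0; rh.ray.time = 0.0f;
        rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;

        RTCIntersectContext ctx;
        rtcInitIntersectContext(&ctx);
        rtcIntersect1(scene, &ctx, &rh);        // the hot path a renderer exercises repeatedly

        std::printf("hit: %s\n", rh.hit.geomID != RTC_INVALID_GEOMETRY_ID ? "yes" : "no");
        rtcReleaseScene(scene);
        rtcReleaseDevice(device);
        return 0;
    }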

Embree 3.9.0 (Frames Per Second, more is better)
  Binary: Pathtracer - Model: Asian Dragon Obj
  Enabled: 12.80  (SE +/- 0.01, N = 3; min 12.79 / max 12.81)
  Repeat:  12.79  (SE +/- 0.03, N = 3; min 12.74 / max 12.84)
  Run:     12.50  (SE +/- 0.07, N = 3; min 12.36 / max 12.6)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 (Bogo Ops/s, more is better)
  Test: CPU Cache
  Enabled: 22.88  (SE +/- 0.19, N = 3; min 22.5 / max 23.13)
  Repeat:  23.02  (SE +/- 0.29, N = 3; min 22.6 / max 23.57)
  Run:     23.42  (SE +/- 0.21, N = 3; min 23.17 / max 23.83)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lsctp -lz -ldl -lpthread -lc

Stress-NG 0.11.07 (Bogo Ops/s, more is better)
  Test: Atomic
  Enabled: 364917.65  (SE +/- 4616.85, N = 3; min 360194.63 / max 374150.53)
  Repeat:  362585.30  (SE +/- 2974.51, N = 9; min 351899.71 / max 376537.6)
  Run:     356543.31  (SE +/- 4355.47, N = 4; min 352001.89 / max 369606.88)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lsctp -lz -ldl -lpthread -lc

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12 (Nodes Per Second, more is better)
  Total Time
  Enabled: 22389342  (SE +/- 188309.24, N = 3; min 22130898 / max 22755811)
  Repeat:  22712289  (SE +/- 77091.76, N = 3; min 22579311 / max 22846358)
  Run:     22203685  (SE +/- 240951.84, N = 3; min 21916054 / max 22682351)
  1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K options for gauging H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 (Frames Per Second, more is better)
  Video Input: Bosphorus 4K
  Enabled: 15.04  (SE +/- 0.21, N = 3; min 14.69 / max 15.42)
  Repeat:  15.14  (SE +/- 0.12, N = 9; min 14.39 / max 15.52)
  Run:     14.82  (SE +/- 0.13, N = 3; min 14.57 / max 15.03)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
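
The DistinctUserID throughput figure below comes from parsing a Twitter-style JSON document and collecting the unique user IDs it contains. With simdjson's DOM front-end, that workload looks roughly like this sketch; the file name and field layout are assumptions for illustration:

    // Rough sketch of a DistinctUserID-style workload using simdjson's DOM API.
    #include "simdjson.h"
    #include <cstdint>
    #include <cstdio>
    #include <set>

    int main() {
        simdjson::dom::parser parser;
        simdjson::dom::element doc = parser.load("twitter.json");     // throws on error

        std::set<uint64_t> ids;                                       // distinct user IDs
        simdjson::dom::array statuses = doc["statuses"].get_array();
        for (simdjson::dom::element tweet : statuses) {
            uint64_t id = tweet["user"]["id"].get_uint64();
            ids.insert(id);
        }
        std::printf("%zu distinct user IDs\n", ids.size());
        return 0;
    }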

simdjson 0.8.2 (GB/s, more is better)
  Throughput Test: DistinctUserID
  Enabled: 5.60  (SE +/- 0.00, N = 3; min 5.59 / max 5.6)
  Repeat:  5.72  (SE +/- 0.00, N = 3; min 5.72 / max 5.73)
  Run:     5.68  (SE +/- 0.04, N = 3; min 5.61 / max 5.72)
  1. (CXX) g++ options: -O3 -pthread

TTSIOD 3D Renderer

A portable GPL 3D software renderer that supports OpenMP and Intel Threading Building Blocks with many different rendering modes. This version does not use OpenGL but is entirely CPU/software based. Learn more via the OpenBenchmarking.org test page.

TTSIOD 3D Renderer 2.3b (FPS, more is better)
  Phong Rendering With Soft-Shadow Mapping
  Enabled: 550.73  (SE +/- 1.36, N = 3; min 548.76 / max 553.34)
  Repeat:  556.31  (SE +/- 1.14, N = 3; min 555.16 / max 558.6)
  Run:     561.83  (SE +/- 0.40, N = 3; min 561.04 / max 562.34)
  1. (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -msse -mrecip -mfpmath=sse -msse2 -mssse3 -lSDL -fopenmp -fwhole-program -lstdc++

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 (Frames Per Second, more is better)
  Binary: Pathtracer ISPC - Model: Asian Dragon Obj
  Enabled: 14.47  (SE +/- 0.02, N = 3; min 14.43 / max 14.49)
  Repeat:  14.45  (SE +/- 0.02, N = 3; min 14.43 / max 14.49)
  Run:     14.19  (SE +/- 0.05, N = 3; min 14.12 / max 14.29)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 (Frames Per Second, more is better)
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K
  Enabled: 52.82  (SE +/- 0.39, N = 14; min 47.94 / max 53.82)
  Repeat:  51.82  (SE +/- 0.63, N = 15; min 46.86 / max 53.94)
  Run:     51.88  (SE +/- 0.03, N = 3; min 51.82 / max 51.94)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 (ms, fewer is better)
  Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
  Enabled: 6.16825  (SE +/- 0.07203, N = 15; min 5.76 / max 6.72)
  Repeat:  6.19314  (SE +/- 0.09537, N = 15; min 5.5 / max 6.85)
  Run:     6.07807  (SE +/- 0.06939, N = 15; min 5.68 / max 6.61)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 (Million Grid Points Per Second, more is better)
  Test: tConvolve OpenMP - Gridding
  Enabled: 1236.02  (SE +/- 14.15, N = 4; min 1204.78 / max 1267.89)
  Repeat:  1246.17  (SE +/- 5.15, N = 3; min 1238.4 / max 1255.92)
  Run:     1223.27  (SE +/- 4.97, N = 3; min 1215.78 / max 1232.67)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio (MiB/s, more is better)
  Test: Five Back to Back FIR Filters
  Enabled: 1527.4  (SE +/- 5.99, N = 3; min 1515.4 / max 1533.9)
  Repeat:  1499.4  (SE +/- 14.53, N = 3; min 1475.4 / max 1525.6)
  Run:     1506.3  (SE +/- 14.23, N = 3; min 1478.2 / max 1524.3)
  1. 3.8.2.0

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
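
Each result is the average time for one inference of the given model on the CPU. With the TensorFlow Lite C++ API, a single timed inference follows this general shape; the model path and thread count are placeholders:

    // Sketch of loading and invoking a .tflite model with the TensorFlow Lite C++ API.
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"
    #include <cstdio>
    #include <memory>

    int main() {
        auto model = tflite::FlatBufferModel::BuildFromFile("nasnet_mobile.tflite");
        if (!model) { std::fprintf(stderr, "failed to load model\n"); return 1; }

        tflite::ops::builtin::BuiltinOpResolver resolver;
        std::unique_ptr<tflite::Interpreter> interpreter;
        tflite::InterpreterBuilder(*model, resolver)(&interpreter);

        interpreter->SetNumThreads(16);        // CPU-only execution
        interpreter->AllocateTensors();
        // ...fill interpreter->typed_input_tensor<float>(0) with preprocessed image data...
        interpreter->Invoke();                 // the step whose average time is reported
        return 0;
    }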

TensorFlow Lite 2020-08-23 (Microseconds, fewer is better)
  Model: NASNet Mobile
  Enabled: 133779  (SE +/- 436.89, N = 3; min 132914 / max 134317)
  Repeat:  135089  (SE +/- 1468.08, N = 3; min 132834 / max 137845)
  Run:     136271  (SE +/- 1287.15, N = 3; min 133819 / max 138176)

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build (Seconds, fewer is better)
  Gradle Build: Reactor
  Enabled: 173.55  (SE +/- 2.16, N = 12; min 165.84 / max 186.36)
  Repeat:  171.07  (SE +/- 1.79, N = 12; min 164.91 / max 184.85)
  Run:     174.16  (SE +/- 1.63, N = 12; min 167.1 / max 185.83)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: mobilenet (ms, fewer is better): Enabled 15.72, Repeat 16.00, Run 15.94

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, fewer is better): Enabled 5.63, Repeat 5.68, Run 5.73

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): Enabled 135.81, Repeat 137.23, Run 138.16

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: EP.C (Total Mop/s, more is better): Enabled 1554.98, Repeat 1565.92, Run 1581.87

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
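For reference, throughput of this kind can be approximated from Python with the third-party zstandard bindings; the sketch below is only an assumption-laden illustration: it reads the whole disk image into memory, uses level 8, and leaves long-distance matching ("long mode") at the library default.

import time
import zstandard

# Assumes the sample FreeBSD disk image is present in the working directory.
data = open("FreeBSD-12.2-RELEASE-amd64-memstick.img", "rb").read()
cctx = zstandard.ZstdCompressor(level=8)

start = time.perf_counter()
compressed = cctx.compress(data)
elapsed = time.perf_counter() - start
print(f"{len(data) / elapsed / 1e6:.1f} MB/s, ratio {len(data) / len(compressed):.2f}")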

Zstd Compression 1.4.9, Compression Level: 8, Long Mode - Compression Speed (MB/s, more is better): Enabled 386.2, Repeat 382.8, Run 389.4

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Tradesoap (msec, fewer is better): Enabled 3200, Repeat 3242, Run 3255

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1, Test: MD5 (Real C/S, more is better): Enabled 2065333, Repeat 2054667, Run 2088333

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.
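As a minimal sketch of how Selenium drives these browser benchmarks, the snippet below launches Firefox through WebDriver, loads a page, and reads a value back out of it; the URL and the script used to fetch a "score" are placeholders, not the actual benchmark pages run here.

from selenium import webdriver

driver = webdriver.Firefox()          # requires geckodriver on the PATH
try:
    driver.get("https://example.com/benchmark")              # placeholder URL
    score = driver.execute_script("return document.title")   # stand-in for reading a score
    print("Result:", score)
finally:
    driver.quit()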

Selenium, Benchmark: WASM imageConvolute - Browser: Firefox (ms, fewer is better): Enabled 25.1, Repeat 25.0, Run 25.4

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): Enabled 4.42, Repeat 4.49, Run 4.46

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1, Test: CPU BLAS - dGEMM-NT (GFLOPs/s, more is better): Repeat 44.7, Enabled 44.4, Run 45.1

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine whose benchmark can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
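A rough sketch of how such a search benchmark can be scripted: talk to Stockfish over the UCI protocol, run a fixed-depth search, and read the nodes-per-second figure from its info lines. It assumes a stockfish binary on the PATH; the thread count and depth are arbitrary.

import subprocess

proc = subprocess.Popen(["stockfish"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
proc.stdin.write("uci\nsetoption name Threads value 16\n"
                 "position startpos\ngo depth 20\n")
proc.stdin.flush()

nps = None
for line in proc.stdout:
    if " nps " in line:                       # "info ... nps <value> ..."
        fields = line.split()
        nps = int(fields[fields.index("nps") + 1])
    if line.startswith("bestmove"):
        break

proc.stdin.write("quit\n")
proc.stdin.flush()
print("Nodes per second:", nps)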

Stockfish 13, Total Time (Nodes Per Second, more is better): Enabled 28813121, Repeat 28469418, Run 28372238

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: LU.C (Total Mop/s, more is better): Enabled 31194.25, Repeat 31174.97, Run 30717.61

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9, Benchmark: vklBenchmark (Items / Sec, more is better): Enabled 194, Repeat 197, Run 196

OpenVKL 0.9, Benchmark: vklBenchmarkVdbVolume (Items / Sec, more is better): Enabled 24682690, Repeat 25063015, Run 24792840

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Jython (msec, fewer is better): Enabled 3182, Repeat 3134, Run 3174

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: alexnet (ms, fewer is better): Enabled 11.36, Repeat 11.19, Run 11.35

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: HWB Color Space (Iterations Per Minute, more is better): Enabled 1272, Repeat 1275, Run 1256

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1, Test: CPU BLAS - dGEMM-TT (GFLOPs/s, more is better): Repeat 47.3, Enabled 46.8, Run 47.5

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9, Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better): Enabled 4252.3, Repeat 4315.8, Run 4308.3

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
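As a sketch of what this test times, the snippet below uses the DeepSpeech 0.6 Python API to transcribe a mono 16 kHz WAV file and reports the elapsed time; the model and audio file names, and the beam width of 500, are illustrative assumptions.

import time
import wave
import numpy as np
from deepspeech import Model

# DeepSpeech 0.6.x constructor takes the model path and a beam width.
model = Model("deepspeech-0.6.0-models/output_graph.pbmm", 500)

with wave.open("recording.wav", "rb") as w:   # expects 16-bit mono 16 kHz audio
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

start = time.perf_counter()
text = model.stt(audio)
print(f"{time.perf_counter() - start:.2f} s: {text}")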

DeepSpeech 0.6, Acceleration: CPU (Seconds, fewer is better): Enabled 60.45, Repeat 59.56, Run 60.30

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: San Miguel - Renderer: SciVis (FPS, more is better): Enabled 22.06, Repeat 22.06, Run 21.74

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 2 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): Enabled 156866667, Repeat 155633333, Run 157906667

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.
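The Numpy Benchmark score aggregates many small kernels; the snippet below is not that suite, just a hypothetical sketch of timing a few representative NumPy operations.

import time
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((2048, 2048))
b = rng.random((2048, 2048))

for name, fn in [("dot", lambda: a @ b),
                 ("svd", lambda: np.linalg.svd(a[:512, :512])),
                 ("fft", lambda: np.fft.fft2(a))]:
    start = time.perf_counter()
    fn()
    print(f"{name}: {time.perf_counter() - start:.3f} s")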

Numpy Benchmark (Score, more is better): Enabled 499.40, Repeat 492.31, Run 499.05

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07, Test: Vector Math (Bogo Ops/s, more is better): Enabled 61555.92, Repeat 62160.59, Run 62436.51

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP Leukocyte (Seconds, fewer is better): Enabled 103.84, Repeat 104.96, Run 103.49

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): Enabled 629876667, Repeat 629466667, Run 621093333

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: WASM imageConvolute - Browser: Google Chrome (ms, fewer is better): Enabled 26.57, Repeat 26.65, Run 26.95

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): Enabled 3.63, Repeat 3.65, Run 3.60

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9, Compression Level: 8 - Compression Speed (MB/s, more is better): Enabled 314.5, Repeat 312.9, Run 310.2

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C2670 (Seconds, fewer is better): Enabled 108.73, Repeat 107.26, Run 108.10

BLAKE2

This is a benchmark of BLAKE2 using the blake2s binary. BLAKE2 is a high-performance crypto alternative to MD5 and SHA-2/3. Learn more via the OpenBenchmarking.org test page.

BLAKE2 20170307 (Cycles Per Byte, fewer is better): Enabled 4.40, Repeat 4.46, Run 4.43

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0, Encoder Speed: 6, Lossless (Seconds, fewer is better): Enabled 51.51, Repeat 51.57, Run 52.21

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34, Circuit: C7552 (Seconds, fewer is better): Enabled 87.39, Repeat 86.23, Run 86.39

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better): Enabled 185.98, Repeat 184.70, Run 183.50

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6, Total Time (Seconds, fewer is better): Enabled 74.88, Repeat 75.02, Run 74.02

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better): Enabled 266.00, Repeat 262.54, Run 262.47

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: BLAS (Nodes Per Second, more is better): Enabled 462, Repeat 467, Run 468

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 129 Cells Per Direction (Seconds, fewer is better): Enabled 34.13, Repeat 33.69, Run 33.77

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Resizing (Iterations Per Minute, more is better): Enabled 1101, Repeat 1103, Run 1089

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3, Model: inception-v3 (ms, fewer is better): Enabled 23.06, Repeat 22.90, Run 23.19

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471, Multiple Sequence Alignment - LSU RNA (Seconds, fewer is better): Enabled 7.999, Repeat 7.900, Run 7.984

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1, Test: CPU BLAS - dGEMM-TN (GFLOPs/s, more is better): Repeat 48.7, Enabled 48.1, Run 48.7

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (Nodes/second, more is better): Enabled 31995526, Repeat 31604040, Run 31766979

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.4.1, Test: Server Rack - Acceleration: CPU-only (Seconds, fewer is better): Enabled 0.165, Repeat 0.163, Run 0.165

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): Enabled 82626333, Repeat 81644000, Run 82631000

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26, Backend: Eigen (Nodes Per Second, more is better): Enabled 840, Repeat 831, Run 841

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
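A minimal sketch of the inference loop such a result reflects: load an ONNX model with onnxruntime, feed it a random tensor, and convert the loop time into inferences per minute. The model file name and input shape are assumptions rather than the exact ONNX Zoo configuration used here.

import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("fcn-resnet101-11.onnx")        # hypothetical local file
inp = sess.get_inputs()[0]
dummy = np.random.rand(1, 3, 520, 520).astype(np.float32)   # assumed NCHW input shape

runs = 10
start = time.perf_counter()
for _ in range(runs):
    sess.run(None, {inp.name: dummy})
elapsed = time.perf_counter() - start
print(f"{runs / elapsed * 60:.1f} inferences per minute")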

ONNX Runtime 1.6, Model: fcn-resnet101-11 - Device: OpenMP CPU (Inferences Per Minute, more is better): Enabled 84, Repeat 84, Run 85

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks, Test: Interpreter (Seconds, fewer is better): Enabled 0.00067785, Repeat 0.00067438, Run 0.00068233

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0, Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better): Enabled 14.81, Repeat 14.64, Run 14.64

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3, Input: JPEG - Encode Speed: 7 (MP/s, more is better): Enabled 74.70, Repeat 74.86, Run 75.57

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: XFrog Forest - Renderer: Path Tracer (FPS, more is better): Enabled 1.75, Repeat 1.73, Run 1.74

LibreOffice

Various benchmarking operations with the LibreOffice open-source office suite. Learn more via the OpenBenchmarking.org test page.
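The sort of batch conversion this test times can be reproduced by driving LibreOffice headlessly from Python; a sketch follows, with hypothetical input file names.

import subprocess
import time

docs = [f"doc{i:02d}.odt" for i in range(20)]   # hypothetical input documents

start = time.perf_counter()
subprocess.run(["soffice", "--headless", "--convert-to", "pdf",
                "--outdir", "pdf-out", *docs], check=True)
print(f"Converted {len(docs)} documents in {time.perf_counter() - start:.2f} s")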

LibreOffice, Test: 20 Documents To PDF (Seconds, fewer is better): Enabled 5.561, Repeat 5.530, Run 5.593

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some earlier ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0, Test: Hogbom Clean OpenMP (Iterations Per Second, more is better): Enabled 209.65, Repeat 210.97, Run 208.62

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0, Benchmark: python_startup (Milliseconds, fewer is better): Enabled 6.61, Repeat 6.65, Run 6.68

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP CFD Solver (Seconds, fewer is better): Enabled 21.08, Repeat 21.10, Run 21.30

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding 0.3.3, CPU Threads: All (MP/s, more is better): Enabled 229.86, Repeat 229.99, Run 227.61

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1, Test: CPU BLAS - sAXPY (GB/s, more is better): Repeat 38.7, Enabled 38.8, Run 39.1

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3, Scene: DLSC (M samples/sec, more is better): Enabled 1.96, Repeat 1.96, Run 1.94

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin, Panorama Photo Assistant + Stitching Time (Seconds, fewer is better): Enabled 38.73, Repeat 38.34, Run 38.67

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
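A small sketch of scripting the Sysbench memory sub-test from Python and pulling the MiB/sec figure out of its report; the thread count is arbitrary and the parsing assumes the usual sysbench 1.0 output format.

import re
import subprocess

out = subprocess.run(
    ["sysbench", "memory", "--threads=16", "run"],
    capture_output=True, text=True, check=True,
).stdout

match = re.search(r"\(([\d.]+)\s*MiB/sec\)", out)
print("Memory throughput:", match.group(1) if match else "not found", "MiB/sec")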

Sysbench 1.0.20, Test: RAM / Memory (MiB/sec, more is better): Enabled 25461.39, Repeat 25202.79, Run 25429.68

LuaJIT

This test profile is a collection of Lua scripts/benchmarks run against a locally-built copy of LuaJIT upstream. Learn more via the OpenBenchmarking.org test page.

LuaJIT 2.1-git, Test: Composite (Mflops, more is better): Enabled 1832.95, Repeat 1828.04, Run 1846.72

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3, Test: AES-256 - Decrypt (MiB/s, more is better): Repeat 7761.24, Enabled 7840.11, Run 7816.27

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: Kraken - Browser: Google Chrome (ms, fewer is better): Enabled 616.6, Repeat 610.4, Run 610.7

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3, Model: MobileNetV2_224 (ms, fewer is better): Enabled 1.971, Repeat 1.980, Run 1.991

Mobile Neural Network 1.1.3, Model: SqueezeNetV1.0 (ms, fewer is better): Enabled 3.751, Repeat 3.789, Run 3.753

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3, Test: AES-256 (MiB/s, more is better): Repeat 7778.53, Enabled 7854.71, Run 7854.05

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc, Encoder Mode: Speed 8 Realtime (Frames Per Second, more is better): Enabled 114.16, Repeat 113.96, Run 113.06

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1, Test: CPU BLAS - sDOT (GB/s, more is better): Repeat 41.3, Enabled 41.3, Run 41.7

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.

librsvg, Operation: SVG Files To PNG (Seconds, fewer is better): Enabled 16.40, Repeat 16.30, Run 16.46

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1, PHP Benchmark Suite (Score, more is better): Enabled 1010404, Repeat 1017890, Run 1020071

Optcarrot

Optcarrot is an NES emulator benchmark for the Ruby language. Learn more via the OpenBenchmarking.org test page.

Optcarrot, Optimized Benchmark (FPS, more is better): Enabled 186.13, Repeat 186.00, Run 187.75

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), Bonds (fixed-rate bond with flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
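FinanceBench itself is C++/OpenMP, but the analytic Black-Scholes-Merton European call price at the heart of its first test case is compact enough to sketch in NumPy; the option parameters below are arbitrary.

import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

n = 1_000_000
rng = np.random.default_rng(0)
prices = bs_call(S=rng.uniform(80.0, 120.0, n), K=100.0, T=1.0, r=0.02,
                 sigma=rng.uniform(0.1, 0.5, n))
print(prices.mean())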

FinanceBench 2016-07-25, Benchmark: Repo OpenMP (ms, fewer is better): Enabled 26926.41, Repeat 26809.90, Run 26676.77

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0, Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): Enabled 16.56, Repeat 16.54, Run 16.41

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126 - Encode Settings: Default (Seconds, Fewer Is Better): Enabled: 3.533 (SE +/- 0.015), Repeat: 3.545 (SE +/- 0.012), Run: 3.565 (SE +/- 0.004); N = 3 each. 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: StyleBench - Browser: Google Chrome (Runs / Minute, More Is Better): Enabled: 45.95 (SE +/- 0.13), Repeat: 45.78 (SE +/- 0.13), Run: 46.19 (SE +/- 0.16); N = 3 each. 1. chrome 89.0.4389.90

Appleseed

Appleseed is an open-source, physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, Fewer Is Better): Enabled: 182.33, Repeat: 181.85, Run: 183.47

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Leonardo Phone Case Slim (Seconds, Fewer Is Better): Enabled: 14.32 (SE +/- 0.04), Repeat: 14.20 (SE +/- 0.02), Run: 14.24 (SE +/- 0.02); N = 3 each. 1. OpenSCAD version 2021.01

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better): Enabled: 31.11 (SE +/- 0.04), Repeat: 31.05 (SE +/- 0.10), Run: 30.84 (SE +/- 0.21); N = 3 each. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better): Enabled: 12.50 (SE +/- 0.08), Repeat: 12.61 (SE +/- 0.06), Run: 12.55 (SE +/- 0.04); N = 3 each.

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dCOPY (GB/s, More Is Better): Repeat: 23.6 (SE +/- 0.00), Enabled: 23.6 (SE +/- 0.00), Run: 23.8 (SE +/- 0.03); N = 3 each. 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters (MiB/s, More Is Better): Enabled: 1541.1 (SE +/- 1.82), Repeat: 1528.4 (SE +/- 8.83), Run: 1536.0 (SE +/- 4.43); N = 3 each.

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like capabilities. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, Fewer Is Better): Enabled: 122.76 (SE +/- 0.35), Repeat: 121.75 (SE +/- 0.19), Run: 122.58 (SE +/- 0.34); N = 3 each.

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dAXPY (GB/s, More Is Better): Repeat: 36.3 (SE +/- 0.00), Enabled: 36.3 (SE +/- 0.03), Run: 36.6 (SE +/- 0.06); N = 3 each. 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

Appleseed

Appleseed is an open-source, physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better): Enabled: 174.01, Repeat: 174.61, Run: 175.44

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: Blowfish (Real C/S, More Is Better): Enabled: 20864 (SE +/- 115.33), Repeat: 20862 (SE +/- 87.55), Run: 21032 (SE +/- 35.55); N = 3 each. 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.10.20 - Time To Compile (Seconds, Fewer Is Better): Enabled: 74.51 (SE +/- 0.76), Repeat: 74.02 (SE +/- 0.81), Run: 74.62 (SE +/- 0.89); N = 3 each.

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better): Enabled: 3.76 (SE +/- 0.01, N = 3), Repeat: 3.77 (SE +/- 0.01, N = 4), Run: 3.79 (SE +/- 0.03, N = 3). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - sCOPY (GB/s, More Is Better): Repeat: 25.1 (SE +/- 0.03), Enabled: 25.1 (SE +/- 0.03), Run: 25.3 (SE +/- 0.03); N = 3 each. 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better): Enabled: 1.921 (SE +/- 0.007), Repeat: 1.923 (SE +/- 0.005), Run: 1.936 (SE +/- 0.012); N = 3 each. 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Kvazaar

This is a test of Kvazaar, a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better): Enabled: 31.73 (SE +/- 0.03), Repeat: 31.89 (SE +/- 0.01), Run: 31.65 (SE +/- 0.04); N = 3 each. 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG19 - Device: CPU (FPS, More Is Better): Enabled: 17.73 (SE +/- 0.04), Repeat: 17.79 (SE +/- 0.04), Run: 17.86 (SE +/- 0.04); N = 3 each.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: CPU Stress (Bogo Ops/s, More Is Better): Enabled: 4991.24 (SE +/- 5.24), Repeat: 4964.68 (SE +/- 12.12), Run: 5001.07 (SE +/- 21.71); N = 3 each. 1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lsctp -lz -ldl -lpthread -lc

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better): Enabled: 157.07 (SE +/- 0.11), Repeat: 156.26 (SE +/- 0.10), Run: 155.96 (SE +/- 0.23); N = 3 each. 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): Enabled: 1.43641 (SE +/- 0.01018), Repeat: 1.44288 (SE +/- 0.00726), Run: 1.43273 (SE +/- 0.01056); N = 3 each. 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better): Enabled: 714 (SE +/- 2.85), Repeat: 716 (SE +/- 2.08), Run: 711 (SE +/- 1.00); N = 3 each.

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
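
As a rough sketch of the LZ4 API this test exercises (not the test profile's own harness), a minimal in-memory round trip in C++ could look like the following; the payload string is placeholder data:

    #include <lz4.h>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    int main() {
        const char src[] = "Example payload to compress with LZ4, standing in for the Ubuntu ISO.";
        const int srcSize = static_cast<int>(sizeof(src));

        // Worst-case compressed size for this input.
        const int bound = LZ4_compressBound(srcSize);
        std::vector<char> compressed(bound);
        const int cSize = LZ4_compress_default(src, compressed.data(), srcSize, bound);
        if (cSize <= 0) { std::fprintf(stderr, "compression failed\n"); return 1; }

        // Decompress and verify the round trip.
        std::vector<char> restored(srcSize);
        const int dSize = LZ4_decompress_safe(compressed.data(), restored.data(), cSize, srcSize);
        if (dSize != srcSize || std::memcmp(src, restored.data(), srcSize) != 0) {
            std::fprintf(stderr, "round trip mismatch\n");
            return 1;
        }
        std::printf("%d bytes -> %d bytes\n", srcSize, cSize);
        return 0;
    }

Building such a snippet only needs the liblz4 development headers and -llz4 at link time.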

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s, More Is Better): Enabled: 10096.9 (SE +/- 31.07), Repeat: 10047.5 (SE +/- 6.63), Run: 10026.8 (SE +/- 31.08); N = 3 each. 1. (CC) gcc options: -O3

srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) based LTE mobile network. Learn more via the OpenBenchmarking.org test page.

srsLTE 20.10.1 - Test: OFDM_Test (Samples / Second, More Is Better): Enabled: 176933333 (SE +/- 260341.66), Repeat: 178033333 (SE +/- 2512855.83), Run: 176800000 (SE +/- 1473091.99); N = 3 each. 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Matrix Math (Bogo Ops/s, More Is Better): Enabled: 56981.91 (SE +/- 126.41), Repeat: 57137.41 (SE +/- 122.73), Run: 57377.45 (SE +/- 210.46); N = 3 each. 1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lsctp -lz -ldl -lpthread -lc

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: ARES-6 - Browser: Firefox (ms, Fewer Is Better): Enabled: 38.12 (SE +/- 0.20), Repeat: 38.09 (SE +/- 0.14), Run: 37.86 (SE +/- 0.06); N = 3 each. 1. firefox 86.0

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.2.0 - Scene: Memorial (Images / Sec, More Is Better): Enabled: 13.44 (SE +/- 0.01), Repeat: 13.42 (SE +/- 0.00), Run: 13.35 (SE +/- 0.01); N = 3 each.

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, Fewer Is Better): Enabled: 57.25 (SE +/- 0.41), Repeat: 57.64 (SE +/- 0.47), Run: 57.33 (SE +/- 0.74); N = 3 each. 1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mmpx -mabm -O3 -std=c99 -pedantic -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 10 (Seconds, Fewer Is Better): Enabled: 2.855 (SE +/- 0.003), Repeat: 2.867 (SE +/- 0.007), Run: 2.874 (SE +/- 0.008); N = 3 each. 1. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc - Encoder Mode: Speed 4 Two-Pass (Frames Per Second, More Is Better): Enabled: 7.67 (SE +/- 0.05), Repeat: 7.64 (SE +/- 0.01), Run: 7.69 (SE +/- 0.03); N = 3 each. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better): Enabled: 13.95 (SE +/- 0.12), Repeat: 13.86 (SE +/- 0.14), Run: 13.92 (SE +/- 0.15); N = 3 each.

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better): Enabled: 191.74 (SE +/- 0.25), Repeat: 191.19 (SE +/- 0.23), Run: 190.50 (SE +/- 0.55); N = 3 each. 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): Enabled: 13.95 (SE +/- 0.01), Repeat: 13.86 (SE +/- 0.01), Run: 13.93 (SE +/- 0.00); N = 3 each. 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, More Is Better): Enabled: 37.85 (SE +/- 0.08), Repeat: 37.76 (SE +/- 0.05), Run: 37.61 (SE +/- 0.04); N = 3 each. 1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 10.0 - Time To Compile (Seconds, Fewer Is Better): Enabled: 548.63 (SE +/- 0.76), Repeat: 547.94 (SE +/- 3.91), Run: 545.20 (SE +/- 1.42); N = 3 each.

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, Fewer Is Better): Enabled: 92.23 (SE +/- 0.06), Repeat: 92.10 (SE +/- 0.98), Run: 92.67 (SE +/- 0.48); N = 3 each.

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and on the CPU with OpenMP. The FinanceBench test cases cover the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte Carlo method (equity option example), a fixed-rate bond with a flat forward curve (Bonds), and a securities repurchase agreement (Repo). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25 - Benchmark: Bonds OpenMP (ms, Fewer Is Better): Enabled: 41370.77 (SE +/- 58.76), Repeat: 41346.52 (SE +/- 12.45), Run: 41115.48 (SE +/- 79.47); N = 3 each. 1. (CXX) g++ options: -O3 -march=native -fopenmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): Enabled: 3.42660 (SE +/- 0.00397), Repeat: 3.41721 (SE +/- 0.00297), Run: 3.43837 (SE +/- 0.00271); N = 3 each. 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
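
The idea being measured can be sketched roughly as follows: force context switches by ping-ponging one byte between two processes over a pair of pipes, then divide the elapsed time by the number of round trips. This is only an illustration of the technique, not ctx_clock's own code, and it reports nanoseconds per round trip rather than clock cycles:

    #include <cstdio>
    #include <time.h>
    #include <unistd.h>

    int main() {
        const int iterations = 100000;
        int ping[2], pong[2];
        if (pipe(ping) != 0 || pipe(pong) != 0) return 1;

        char byte = 0;
        if (fork() == 0) {                       // child: echo every byte back
            for (int i = 0; i < iterations; ++i) {
                if (read(ping[0], &byte, 1) != 1) return 1;
                if (write(pong[1], &byte, 1) != 1) return 1;
            }
            return 0;
        }

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < iterations; ++i) {   // parent: send, then wait for the echo
            write(ping[1], &byte, 1);
            read(pong[0], &byte, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        const double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
        std::printf("~%.0f ns per ping-pong (two context switches)\n", ns / iterations);
        return 0;
    }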

ctx_clock - Context Switch Time (Clocks, Fewer Is Better): Enabled: 162, Repeat: 163 (SE +/- 0.67, N = 3), Run: 162

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile measures JPEG XL decode performance to a PNG output file; the pts/jpegxl test covers encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding 0.3.3 - CPU Threads: 1 (MP/s, More Is Better): Enabled: 56.70 (SE +/- 0.04), Repeat: 56.71 (SE +/- 0.05), Run: 56.37 (SE +/- 0.13); N = 3 each.

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: XFrog Forest - Renderer: SciVis (FPS, More Is Better): Enabled: 3.34 (SE +/- 0.01), Repeat: 3.32 (SE +/- 0.01), Run: 3.33 (SE +/- 0.01); N = 3 each.

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 4 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better): Enabled: 321016667 (SE +/- 158359.65), Repeat: 319096667 (SE +/- 817176.71), Run: 319286667 (SE +/- 968234.36); N = 3 each. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 2D Function Plotting, 1000 Times (Seconds, Fewer Is Better): Enabled: 77.04 (SE +/- 0.25), Repeat: 77.45 (SE +/- 0.39), Run: 76.99 (SE +/- 0.37); N = 3 each.

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better): Enabled: 208.70 (SE +/- 1.22), Repeat: 208.79 (SE +/- 1.60), Run: 209.92 (SE +/- 2.75); N = 3 each. 1. (CXX) g++ options: -O2 -lOpenCL

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
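
For context, the "Long Mode" setting corresponds to Zstd's long-distance matching feature. A minimal, hedged sketch of the library API (an illustration only, not the benchmark harness; the payload is placeholder data) could look like this:

    #include <zstd.h>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    int main() {
        const char src[] = "Example payload for a Zstd round trip, standing in for the FreeBSD disk image.";
        const size_t srcSize = sizeof(src);

        // Level 8 with long-distance matching enabled, roughly mirroring "8, Long Mode".
        ZSTD_CCtx* cctx = ZSTD_createCCtx();
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 8);
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_enableLongDistanceMatching, 1);

        std::vector<char> compressed(ZSTD_compressBound(srcSize));
        const size_t cSize = ZSTD_compress2(cctx, compressed.data(), compressed.size(), src, srcSize);
        ZSTD_freeCCtx(cctx);
        if (ZSTD_isError(cSize)) { std::fprintf(stderr, "%s\n", ZSTD_getErrorName(cSize)); return 1; }

        // Decompress and verify the round trip.
        std::vector<char> restored(srcSize);
        const size_t dSize = ZSTD_decompress(restored.data(), restored.size(), compressed.data(), cSize);
        if (ZSTD_isError(dSize) || dSize != srcSize || std::memcmp(src, restored.data(), srcSize) != 0) {
            std::fprintf(stderr, "round trip failed\n");
            return 1;
        }
        std::printf("%zu bytes -> %zu bytes at level 8 (long mode)\n", srcSize, cSize);
        return 0;
    }

Linking against -lzstd is all such a snippet needs beyond the zstd development headers.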

Zstd Compression 1.4.9 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better): Enabled: 5071.8 (SE +/- 6.01), Repeat: 5042.4 (SE +/- 16.73), Run: 5065.8 (SE +/- 2.66); N = 3 each. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0 (Seconds, Fewer Is Better): Enabled: 174, Repeat: 173, Run: 173; N = 3 (SE +/- 0.67 and 0.33 reported for two of the three results). 1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lrt -lz

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
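
A minimal sketch of simdjson's DOM API, assuming exceptions are enabled; the JSON document here is a placeholder rather than the benchmark's tweets corpus:

    #include "simdjson.h"
    #include <cstdint>
    #include <iostream>
    #include <string>

    int main() {
        // Placeholder document; the PartialTweets test parses a sample Twitter JSON file.
        const std::string json = R"({"user": {"id": 42, "name": "phoronix"}})";

        simdjson::dom::parser parser;
        simdjson::dom::element doc = parser.parse(json);   // throws simdjson_error on failure

        int64_t id = doc["user"]["id"];                    // navigate and convert in one step
        std::cout << "id = " << id << std::endl;
        return 0;
    }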

simdjson 0.8.2 - Throughput Test: PartialTweets (GB/s, More Is Better): Enabled: 5.22 (SE +/- 0.02), Repeat: 5.25 (SE +/- 0.00), Run: 5.24 (SE +/- 0.01); N = 3 each. 1. (CXX) g++ options: -O3 -pthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
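
The PBKDF2-sha512 figure below measures how many key-derivation iterations the CPU can sustain per second. As a hedged illustration of the primitive involved (cryptsetup uses its own crypto backend, so this is not its code path), deriving a key via OpenSSL's PKCS5_PBKDF2_HMAC looks like this; the passphrase, salt, and iteration count are placeholders:

    #include <openssl/evp.h>
    #include <cstdio>

    int main() {
        const char passphrase[] = "benchmark-passphrase";   // placeholder secret
        const unsigned char salt[16] = {0};                 // placeholder salt
        unsigned char key[64];
        const int iterations = 1000000;                     // arbitrary iteration count

        if (PKCS5_PBKDF2_HMAC(passphrase, sizeof(passphrase) - 1,
                              salt, sizeof(salt), iterations,
                              EVP_sha512(), static_cast<int>(sizeof(key)), key) != 1) {
            std::fprintf(stderr, "PBKDF2 failed\n");
            return 1;
        }
        std::printf("derived %zu-byte key with %d PBKDF2-SHA512 iterations\n",
                    sizeof(key), iterations);
        return 0;
    }

Linking requires -lcrypto.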

Cryptsetup - PBKDF2-sha512 (Iterations Per Second, More Is Better): Enabled: 2123370 (SE +/- 5717.67), Repeat: 2123360 (SE +/- 4689.76), Run: 2129088; N = 3.

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video software on the CPU and optional GPU (OpenCL / CUDA) support. Learn more via the OpenBenchmarking.org test page.

NeatBench 5 - Acceleration: CPU (FPS, More Is Better): Enabled: 17.8 (SE +/- 0.03), Repeat: 17.7 (SE +/- 0.15), Run: 17.8 (SE +/- 0.09); N = 3 each.

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better): Enabled: 17.9 (SE +/- 0.03), Repeat: 17.8 (SE +/- 0.03), Run: 17.8 (SE +/- 0.03); N = 3 each.

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric (More Is Better): Enabled: 152552, Repeat: 153258, Run: 152408. 1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 6.1.1~hg.2021.01.26 (Seconds, Fewer Is Better): Enabled: 5.422 (SE +/- 0.013), Repeat: 5.392 (SE +/- 0.010), Run: 5.398 (SE +/- 0.007); N = 5 each.

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo. Learn more via the OpenBenchmarking.org test page.
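
As a rough sketch of the TurboJPEG decompression path that tjbench times (not tjbench itself), decoding a JPEG buffer to RGB with the TurboJPEG C API might look like the following; input.jpg is a placeholder file name and error handling is minimal:

    #include <turbojpeg.h>
    #include <cstdio>
    #include <vector>

    int main() {
        FILE *f = std::fopen("input.jpg", "rb");        // placeholder input file
        if (!f) return 1;
        std::fseek(f, 0, SEEK_END);
        const long size = std::ftell(f);
        std::fseek(f, 0, SEEK_SET);
        std::vector<unsigned char> jpeg(size);
        if (std::fread(jpeg.data(), 1, size, f) != static_cast<size_t>(size)) { std::fclose(f); return 1; }
        std::fclose(f);

        tjhandle handle = tjInitDecompress();
        int width = 0, height = 0, subsamp = 0, colorspace = 0;
        if (tjDecompressHeader3(handle, jpeg.data(), jpeg.size(), &width, &height,
                                &subsamp, &colorspace) != 0) return 1;

        std::vector<unsigned char> rgb(static_cast<size_t>(width) * height * 3);
        if (tjDecompress2(handle, jpeg.data(), jpeg.size(), rgb.data(),
                          width, 0 /* pitch */, height, TJPF_RGB, TJFLAG_FASTDCT) != 0) return 1;

        std::printf("decoded %dx%d JPEG to RGB\n", width, height);
        tjDestroy(handle);
        return 0;
    }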

libjpeg-turbo tjbench 2.0.2 - Test: Decompression Throughput (Megapixels/sec, More Is Better): Enabled: 233.50 (SE +/- 0.18), Repeat: 233.55 (SE +/- 1.35), Run: 234.79 (SE +/- 0.76); N = 3 each. 1. (CC) gcc options: -O3 -rdynamic

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better): Enabled: 24.72 (SE +/- 0.08), Repeat: 24.62 (SE +/- 0.04), Run: 24.58 (SE +/- 0.06); N = 4 each. 1. (CC) gcc options: -O2 -std=c99

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 23.2 - Time To Compile (Seconds, Fewer Is Better): Enabled: 92.22 (SE +/- 0.02), Repeat: 91.89 (SE +/- 0.17), Run: 92.39 (SE +/- 0.20); N = 3 each.

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5 - Mode: CPU (vsamples, More Is Better): Enabled: 12346 (SE +/- 14.74), Repeat: 12308 (SE +/- 42.28), Run: 12279 (SE +/- 23.78); N = 3 each.

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, More Is Better): Enabled: 4.255 (SE +/- 0.010), Repeat: 4.254 (SE +/- 0.028), Run: 4.232 (SE +/- 0.023); N = 3 each. 1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02 - Compress Speed Test (MIPS, More Is Better): Enabled: 57489 (SE +/- 321.88), Repeat: 57574 (SE +/- 56.50), Run: 57264 (SE +/- 173.26); N = 3 each. 1. (CXX) g++ options: -pipe -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126 - Encode Settings: Quality 75, Compression Effort 7 (Seconds, Fewer Is Better): Enabled: 191.35 (SE +/- 0.34), Repeat: 190.53 (SE +/- 0.25), Run: 190.33 (SE +/- 0.18); N = 3 each. 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better): Enabled: 16.39 (SE +/- 0.03), Repeat: 16.30 (SE +/- 0.10), Run: 16.38 (SE +/- 0.07); N = 3 each.

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better): Enabled: 5.112 (SE +/- 0.017), Repeat: 5.085 (SE +/- 0.002), Run: 5.092 (SE +/- 0.005); N = 3 each. 1. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better): Enabled: 22.80 (SE +/- 0.07), Repeat: 22.89 (SE +/- 0.03), Run: 22.77 (SE +/- 0.04); N = 3 each. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dDOT (GB/s, More Is Better): Repeat: 38.4 (SE +/- 0.03), Enabled: 38.4 (SE +/- 0.00), Run: 38.6 (SE +/- 0.12); N = 3 each. 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 12 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better): Enabled: 662150000 (SE +/- 2697671.84), Repeat: 658753333 (SE +/- 489534.93), Run: 659116667 (SE +/- 162924.66); N = 3 each. 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

C-Blosc

C-Blosc is a simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.
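
A minimal sketch against the classic c-blosc 1.x C API follows (the 2.0 beta tested here also ships a newer blosc2 API); the data buffer is synthetic placeholder content, and blosclz is the library's default compressor:

    #include <blosc.h>
    #include <cstdio>
    #include <vector>

    int main() {
        blosc_init();

        // Placeholder data: the benchmark itself streams much larger synthetic buffers.
        std::vector<int> data(1000000, 1);
        const size_t nbytes = data.size() * sizeof(int);

        std::vector<char> compressed(nbytes + BLOSC_MAX_OVERHEAD);
        const int csize = blosc_compress(5, BLOSC_SHUFFLE, sizeof(int), nbytes,
                                         data.data(), compressed.data(), compressed.size());
        if (csize <= 0) { std::fprintf(stderr, "compression failed\n"); blosc_destroy(); return 1; }

        std::vector<int> restored(data.size());
        const int dsize = blosc_decompress(compressed.data(), restored.data(), nbytes);
        std::printf("%zu -> %d bytes, decompressed %d bytes\n", nbytes, csize, dsize);

        blosc_destroy();
        return 0;
    }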

C-Blosc 2.0 Beta 5 - Compressor: blosclz (MB/s, More Is Better): Enabled: 11661.2 (SE +/- 9.27), Repeat: 11655.7 (SE +/- 3.41), Run: 11601.8 (SE +/- 20.07); N = 3 each. 1. (CXX) g++ options: -rdynamic

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): Enabled: 3.96 (SE +/- 0.01), Repeat: 3.96 (SE +/- 0.00), Run: 3.94 (SE +/- 0.01); N = 3 each. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Hilbert Transform (MiB/s, More Is Better): Enabled: 594.4 (SE +/- 1.46), Repeat: 593.7 (SE +/- 0.84), Run: 591.4 (SE +/- 0.59); N = 3 each. 1. 3.8.2.0

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): Enabled: 12.12 (SE +/- 0.01), Repeat: 12.06 (SE +/- 0.07), Run: 12.10 (SE +/- 0.00); N = 3 each. 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and a Linux networking stack stress test. The test runs on the local host but requires root permissions. It works by creating three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have a WireGuard device. The two WireGuard devices send traffic through the loopback device of ns0, so the test ends up exercising encryption and decryption at the same time -- a CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better): Enabled: 130.03 (SE +/- 0.56), Repeat: 129.89 (SE +/- 0.50), Run: 130.53 (SE +/- 1.20); N = 3 each.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): Enabled: 1.26765 (SE +/- 0.00109), Repeat: 1.26745 (SE +/- 0.00115), Run: 1.27367 (SE +/- 0.00054); N = 3 each. 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Kvazaar

This is a test of Kvazaar, a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better): Enabled: 65.54 (SE +/- 0.04), Repeat: 65.72 (SE +/- 0.08), Run: 65.40 (SE +/- 0.08); N = 3 each. 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Complex Phase (MiB/s, More Is Better): Enabled: 787.1 (SE +/- 0.32), Repeat: 784.4 (SE +/- 2.01), Run: 783.3 (SE +/- 1.86); N = 3 each.

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Decryption (MiB/s, More Is Better): Enabled: 5430.1 (SE +/- 15.52), Repeat: 5427.0 (SE +/- 8.66), Run: 5425.7 (SE +/- 9.91); N = 3 each.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Memory Copying (Bogo Ops/s, more is better): Enabled 1533.48, Repeat 1533.93, Run 1526.56 (SE +/- 2.01/3.20/2.42, N = 3). 1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lsctp -lz -ldl -lpthread -lc

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Pistol (Seconds, fewer is better): Enabled 77.85, Repeat 78.07, Run 78.21 (SE +/- 0.08/0.24/0.05, N = 3). 1. OpenSCAD version 2021.01

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3 - Model: resnet-v2-50 (ms, fewer is better): Enabled 19.79, Repeat 19.70, Run 19.79 (SE +/- 0.11/0.04/0.09, N = 3). 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3 - Scene: Rainbow Colors and Prism (M samples/sec, more is better): Enabled 2.14, Repeat 2.13, Run 2.14 (SE +/- 0.00/0.01/0.01, N = 3)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better): Enabled 44.86, Repeat 44.87, Run 45.07 (SE +/- 0.01/0.05/0.03, N = 3). 1. (CC) gcc options: -O2 -ldl -lz -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute, more is better): Enabled 218, Repeat 218, Run 217. 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 0 (Seconds, fewer is better): Enabled 60.40, Repeat 60.13, Run 60.35 (SE +/- 0.16/0.23/0.14, N = 3). 1. (CXX) g++ options: -O3 -fPIC -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better): Enabled 135.79, Repeat 135.17, Run 135.41 (SE +/- 0.28/0.08/0.14, N = 3). 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Mini-ITX Case (Seconds, fewer is better): Enabled 35.22, Repeat 35.14, Run 35.30 (SE +/- 0.08/0.06/0.08, N = 3). 1. OpenSCAD version 2021.01

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, fewer is better): Enabled 11.06, Repeat 11.04, Run 11.09 (SE +/- 0.09/0.07/0.07, N = 3/4/3). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
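As a rough sketch of how an ONNX Zoo model can be exercised from the onnxruntime Python API (the model path, input handling, and shapes here are hypothetical, not the exact configuration this test profile uses):

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("super-resolution-10.onnx")  # hypothetical local path
    inp = sess.get_inputs()[0]
    # naive handling of dynamic dimensions: replace them with 1 for the sketch
    dims = [d if isinstance(d, int) else 1 for d in inp.shape]
    x = np.random.rand(*dims).astype(np.float32)
    outputs = sess.run(None, {inp.name: x})                  # one inference pass
    print(len(outputs), "output tensor(s)")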

ONNX Runtime 1.6 - Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute, more is better): Enabled 7801, Repeat 7800, Run 7835 (SE +/- 21.10/36.97/2.20, N = 3). 1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 5 (MP/s, more is better): Enabled 74.61, Repeat 74.94, Run 74.94 (SE +/- 0.36/0.11/0.07, N = 3). 1. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better): Enabled 18.10, Repeat 18.18, Run 18.16 (SE +/- 0.10/0.06/0.03, N = 3/4/3). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: NASA Streamlines - Renderer: Path Tracer (FPS, more is better): Enabled 4.57, Repeat 4.55, Run 4.57 (SE +/- 0.02 each, N = 3)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - PBKDF2-whirlpool (Iterations Per Second, more is better): Enabled 852505, Repeat 855283, Run 852965 (SE +/- 1384.00/805.40/924.33, N = 3)

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.4.1 - Test: Server Room - Acceleration: CPU-only (Seconds, fewer is better): Enabled 3.468, Repeat 3.458, Run 3.473 (SE +/- 0.004/0.001/0.007, N = 3)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: PSPDFKit WASM - Browser: Firefox (Score, fewer is better): Enabled 2779, Repeat 2776, Run 2788 (SE +/- 10.17/3.33/9.87, N = 3). 1. firefox 86.0

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Encryption (MiB/s, more is better): Enabled 4819.2, Repeat 4824.1, Run 4823.2 (SE +/- 3.80/6.39/8.27, N = 3)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: PSPDFKit WASM - Browser: Google Chrome (Score, fewer is better): Enabled 2813, Repeat 2823, Run 2825. 1. chrome 89.0.4389.90

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.
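OCRMyPDF also exposes a small Python API; a minimal hedged sketch of adding a searchable text layer to a scanned document (file names are hypothetical):

    import ocrmypdf

    # Produce a new PDF with an OCR text layer over the scanned pages
    ocrmypdf.ocr("scan.pdf", "scan_searchable.pdf", language="eng")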

OCRMyPDF 10.3.1+dfsg - Processing 60 Page PDF Document (Seconds, fewer is better): Enabled 15.51, Repeat 15.49, Run 15.56 (SE +/- 0.03/0.03/0.05, N = 3)

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time (Seconds, fewer is better): Enabled 38.19, Repeat 38.35, Run 38.30 (SE +/- 0.02/0.09/0.06, N = 3). 1. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lXpm -lSM -lICE -lX11 -lIlmImf -lIlmImf-2_5 -lImath-2_5 -lHalf-2_5 -lIex-2_5 -lIexMath-2_5 -lIlmThread-2_5 -lIlmThread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (FPS, more is better): Enabled 21.49, Repeat 21.58, Run 21.57 (SE +/- 0.03/0.06/0.02, N = 3)

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-T (GB/s, more is better): Repeat 48.4, Enabled 48.2, Run 48.4 (SE +/- 0.03/0.17/0.06, N = 3). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
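PyPerformance times small pure-Python workloads (the result below is crypto_pyaes, a pure-Python AES implementation) and reports a per-iteration time in milliseconds. A hedged, simplified illustration of that style of measurement using timeit on a stand-in workload, not the actual benchmark code:

    import timeit

    def workload():
        # stand-in pure-Python loop; the real benchmark runs a pure-Python AES cipher
        return sum(i * i for i in range(100_000))

    runs = timeit.repeat(workload, number=1, repeat=5)
    print(f"best: {min(runs) * 1000:.1f} ms per iteration")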

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, fewer is better): Enabled 72.8, Repeat 73.0, Run 73.1 (SE +/- 0.03/0.07/0.09, N = 3)

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, fewer is better): Enabled 46.06, Repeat 45.87, Run 45.94 (SE +/- 0.11/0.08/0.05, N = 3)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Encryption (MiB/s, more is better): Enabled 5430.2, Repeat 5425.9, Run 5430.1 (SE +/- 13.50/10.24/7.67, N = 3)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 2 (Seconds, fewer is better): Enabled 30.97, Repeat 31.10, Run 31.05 (SE +/- 0.19/0.14/0.06, N = 3). 1. (CXX) g++ options: -O3 -fPIC -lm

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenFOAM 8 - Input: Motorbike 30M (Seconds, fewer is better): Enabled 228.01, Repeat 227.61, Run 227.09 (SE +/- 0.17/0.44/0.22, N = 3). 1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better): Enabled 264.38, Repeat 264.77, Run 265.45 (SE +/- 0.05/0.36/0.68, N = 3). 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: ETC1S (Seconds, fewer is better): Enabled 21.45, Repeat 21.39, Run 21.36 (SE +/- 0.01/0.02/0.03, N = 3). 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, more is better): Enabled 17.71, Repeat 17.71, Run 17.64 (SE +/- 0.01/0.02/0.01, N = 3). 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): Enabled 7.59, Repeat 7.56, Run 7.56 (SE +/- 0.00/0.01/0.04, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, fewer is better): Enabled 52.39, Repeat 52.28, Run 52.49 (SE +/- 0.03/0.01/0.13, N = 3). 1. RawTherapee, version 5.8, command line.

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, more is better): Enabled 5096.45, Repeat 5087.76, Run 5076.38 (SE +/- 9.45/11.80/14.75, N = 3). 1. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 1.0.2 - Time To Compile (Seconds, fewer is better): Enabled 52.85, Repeat 53.06, Run 52.88 (SE +/- 0.35/0.27/0.26, N = 3). 1. (CC) gcc options: -m64 -pie -nodefaultlibs -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, more is better): Enabled 2072.06, Repeat 2080.12, Run 2074.75 (SE +/- 4.65/0.00/5.37, N = 4/3/3). 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.
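The Mflops figures below come from FFTW's own benchmark of a stock-build 2D transform of size 4096. As a loose illustration only, a transform's throughput can be estimated with the classic 5·N·log2(N) flop-count convention (that convention is an assumption here, not FFTW's exact accounting), e.g. with NumPy rather than FFTW:

    import math, time
    import numpy as np

    n = 4096
    x = (np.random.rand(n, n) + 1j * np.random.rand(n, n)).astype(np.complex64)
    start = time.perf_counter()
    np.fft.fft2(x)
    elapsed = time.perf_counter() - start
    flops = 5 * (n * n) * math.log2(n * n)   # conventional FFT flop estimate
    print(f"~{flops / elapsed / 1e6:.0f} Mflops (NumPy, not FFTW)")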

FFTW 3.3.6 - Build: Stock - Size: 2D FFT Size 4096 (Mflops, more is better): Enabled 7133.5, Repeat 7121.2, Run 7147.5 (SE +/- 25.44/30.65/30.66, N = 3). 1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Decryption (MiB/s, more is better): Enabled 4793.8, Repeat 4799.9, Run 4800.5 (SE +/- 4.47/5.46/8.27, N = 3)

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU (FPS, more is better): Enabled 5.52, Repeat 5.54, Run 5.52 (SE +/- 0.03/0.01/0.02, N = 3)

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 5 (MP/s, more is better): Enabled 74.76, Repeat 75.02, Run 74.75 (SE +/- 0.12/0.07/0.11, N = 3). 1. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie -ldl

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better): Enabled 328827.76, Repeat 328744.22, Run 327648.06 (SE +/- 1317.00/589.67/2132.06, N = 3). 1. (CC) gcc options: -O2 -lrt" -lrt

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.22 - Test: unsharp-mask (Seconds, fewer is better): Enabled 11.21, Repeat 11.17, Run 11.20 (SE +/- 0.03/0.01/0.02, N = 3)

srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.

srsLTE 20.10.1 - Test: PHY_DL_Test (eNb Mb/s, more is better): Enabled 335.8, Repeat 335.7, Run 336.9 (SE +/- 0.55/0.32/1.22, N = 3). 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): Enabled 4.18919, Repeat 4.17427, Run 4.17526 (SE +/- 0.00971/0.00355/0.00910, N = 3). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Timed ImageMagick Compilation

This test times how long it takes to build ImageMagick. Learn more via the OpenBenchmarking.org test page.

Timed ImageMagick Compilation 6.9.0 - Time To Compile (Seconds, fewer is better): Enabled 25.77, Repeat 25.68, Run 25.77 (SE +/- 0.09/0.10/0.10, N = 3)

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, more is better): Enabled 3323.0, Repeat 3312.4, Run 3311.3 (SE +/- 17.21/18.56/24.43, N = 3). 1. (CXX) g++ options: -O3 -march=native -rdynamic

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, fewer is better): Enabled 53.73, Repeat 53.64, Run 53.83 (SE +/- 0.01/0.00/0.01, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): Enabled 3.49756, Repeat 3.49474, Run 3.48539 (SE +/- 0.00666/0.00650/0.00395, N = 3). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, fewer is better): Enabled 23.05, Repeat 23.11, Run 23.13 (SE +/- 0.17/0.14/0.20, N = 3/4/3). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Projector Mount Swivel (Seconds, fewer is better): Enabled 6.934, Repeat 6.949, Run 6.925 (SE +/- 0.005/0.042/0.008, N = 3). 1. OpenSCAD version 2021.01

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute, more is better): Enabled 17685, Repeat 17671, Run 17624 (SE +/- 14.97/41.78/28.64, N = 3). 1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): Enabled 23.17, Repeat 23.20, Run 23.12 (SE +/- 0.04/0.01/0.06, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0 - Test: AES-256 (MiB/s, more is better): Enabled 7826.31, Repeat 7848.75, Run 7852.89 (SE +/- 25.80/4.98/0.38, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks - Test: Pod2html (Seconds, fewer is better): Enabled 0.08690658, Repeat 0.08701769, Run 0.08672429 (SE +/- 0.00021979/0.00050265/0.00028187, N = 3)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, more is better): Enabled 120.06, Repeat 120.19, Run 119.79 (SE +/- 0.13/0.23/0.21, N = 3). 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: DXT1 (Mpx/s, more is better): Enabled 2387.98, Repeat 2387.90, Run 2380.09 (SE +/- 0.83/0.55/0.42, N = 3). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.4.1 - Test: Masskrug - Acceleration: CPU-only (Seconds, fewer is better): Enabled 4.573, Repeat 4.571, Run 4.558 (SE +/- 0.004/0.004/0.003, N = 3)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, fewer is better): Enabled 305, Repeat 305, Run 306

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, more is better): Enabled 9.26, Repeat 9.29, Run 9.27 (SE +/- 0.01/0.01/0.00, N = 3). 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, more is better): Enabled 310, Repeat 310, Run 309. 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.22 - Test: auto-levels (Seconds, fewer is better): Enabled 9.323, Repeat 9.304, Run 9.334 (SE +/- 0.005/0.021/0.035, N = 3)

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds, fewer is better): Enabled 329.63, Repeat 329.91, Run 330.69

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
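As a rough, hedged sketch of how a compression-speed figure in MB/s can be measured with the third-party zstandard Python bindings (the input here is synthetic, not the FreeBSD image this test uses, so absolute numbers will differ):

    import os, time
    import zstandard

    data = os.urandom(8 * 1024 * 1024) + bytes(8 * 1024 * 1024)  # synthetic 16 MiB input
    cctx = zstandard.ZstdCompressor(level=19)
    start = time.perf_counter()
    compressed = cctx.compress(data)
    elapsed = time.perf_counter() - start
    print(f"{len(data) / 1e6 / elapsed:.1f} MB/s at level 19, "
          f"ratio {len(data) / len(compressed):.2f}")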

Zstd Compression 1.4.9 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better): Enabled 31.4, Repeat 31.3, Run 31.3 (SE +/- 0.15/0.15/0.07, N = 3). 1. (CC) gcc options: -O3 -pthread -lz -llzma

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Scale (Seconds, fewer is better): Enabled 4.711, Repeat 4.726, Run 4.723 (SE +/- 0.017/0.013/0.008, N = 3)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19 - Compression Speed (MB/s, more is better): Enabled 33.0, Repeat 33.1, Run 33.1 (SE +/- 0.09/0.19/0.06, N = 3). 1. (CC) gcc options: -O3 -pthread -lz -llzma

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0 - Test: Twofish (MiB/s, more is better): Enabled 427.59, Repeat 426.30, Run 426.80 (SE +/- 0.53/0.37/0.64, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 6 (Seconds, fewer is better): Enabled 11.05, Repeat 11.06, Run 11.03 (SE +/- 0.07/0.05/0.06, N = 3). 1. (CXX) g++ options: -O3 -fPIC -lm

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, more is better): Enabled 4.725, Repeat 4.730, Run 4.716 (SE +/- 0.004/0.002/0.002, N = 3)

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better): Enabled 44.35, Repeat 44.29, Run 44.42 (SE +/- 0.03/0.02/0.07, N = 3). 1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: ChaCha20Poly1305 (MiB/s, more is better): Repeat 941.53, Enabled 938.78, Run 940.83 (SE +/- 0.57/1.45/0.89, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, fewer is better): Enabled 17.44, Repeat 17.46, Run 17.49 (SE +/- 0.06/0.06/0.03, N = 3)

Nebular Empirical Analysis Tool

NEAT is the Nebular Empirical Analysis Tool for empirical analysis of ionised nebulae, with uncertainty propagation. Learn more via the OpenBenchmarking.org test page.

Nebular Empirical Analysis Tool 2020-02-29 (Seconds, fewer is better): Enabled 13.76, Repeat 13.72, Run 13.74 (SE +/- 0.05/0.02/0.02, N = 3). 1. (F9X) gfortran options: -cpp -ffree-line-length-0 -Jsource/ -fopenmp -O3 -fno-backtrace

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, fewer is better): Enabled 34.7, Repeat 34.8, Run 34.8 (SE +/- 0.06/0.03/0.03, N = 3)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
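The days/ns metric below is the wall-clock days needed to simulate one nanosecond of the system; inverting it gives the more familiar ns/day figure. A one-line check using the Enabled value from the result below:

    days_per_ns = 1.29902                                    # Enabled run, from the result below
    print(f"{1 / days_per_ns:.2f} ns of simulation per day")  # ~0.77 ns/day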

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better): Enabled 1.29902, Repeat 1.29529, Run 1.29607 (SE +/- 0.00186/0.00320/0.00312, N = 3)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 8 - Decompression Speed (MB/s, more is better): Enabled 4763.0, Repeat 4760.7, Run 4774.4 (SE +/- 3.90/2.15/1.52, N = 3). 1. (CC) gcc options: -O3 -pthread -lz -llzma

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 256b Encryption (MiB/s, more is better): Enabled 487.4, Repeat 487.2, Run 487.6 (SE +/- 0.47/0.88/0.10, N = 3)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Kraken - Browser: Firefox (ms, fewer is better): Enabled 844.1, Repeat 842.2, Run 844.6 (SE +/- 0.82/0.73/1.71, N = 3). 1. firefox 86.0

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

Git - Time To Complete Common Git Commands - Seconds, Fewer Is Better - Enabled: 40.97, Repeat: 40.88, Run: 40.99 (SE +/- 0.03 / 0.05 / 0.08, N = 3). 1. git version 2.30.2

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2 - Throughput Test: Kostya - GB/s, More Is Better - Enabled: 3.59, Repeat: 3.60, Run: 3.60 (SE +/- 0.01 / 0.00 / 0.00, N = 3). 1. (CXX) g++ options: -O3 -pthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 256b Encryption - MiB/s, More Is Better - Enabled: 773.1, Repeat: 774.0, Run: 774.9 (SE +/- 1.04 / 1.34 / 0.25, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU - ms, Fewer Is Better - Enabled: 4.04078, Repeat: 4.03238, Run: 4.04332 (SE +/- 0.00318 / 0.00225 / 0.00367, N = 3; MIN: 3.91 / 3.91 / 3.92). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos - Milliseconds, Fewer Is Better - Enabled: 74.9, Repeat: 75.1, Run: 74.9 (SE +/- 0.09 / 0.07 / 0.09, N = 3)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Encryption - MiB/s, More Is Better - Enabled: 488.0 (SE +/- 0.41, N = 3), Repeat: 487.1 (SE +/- 0.30, N = 2), Run: 487.3 (SE +/- 0.12, N = 3)

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile - Seconds, Fewer Is Better - Enabled: 45.61, Repeat: 45.50, Run: 45.62 (SE +/- 0.07 / 0.02 / 0.04, N = 3)

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time - Nodes Per Second, More Is Better - Enabled: 10230525, Repeat: 10247883, Run: 10220881 (SE +/- 15747.82 / 11534.51 / 3853.86, N = 3). 1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

GNU GMP GMPbench

GMPbench is a test of the GNU Multiple Precision Arithmetic (GMP) Library. GMPbench is a single-threaded integer benchmark that leverages the GMP library to stress the CPU with widening integer multiplication. Learn more via the OpenBenchmarking.org test page.
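The core operation being stressed is multiplication of very large integers. A pure-Python analog is sketched below (Python's built-in int is arbitrary precision, though far slower than GMP; the operand sizes and repeat count are arbitrary choices for illustration):

    import random
    import time

    a = random.getrandbits(1_000_000)   # two ~1-million-bit operands
    b = random.getrandbits(1_000_000)

    start = time.perf_counter()
    for _ in range(100):
        c = a * b                        # widening multiply: result has ~2 million bits
    elapsed = time.perf_counter() - start
    print("%.1f multiplies/sec" % (100 / elapsed))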

GNU GMP GMPbench 6.2.1 - Total Time - GMPbench Score, More Is Better - Repeat: 6432.5, Enabled: 6449.1, Run: 6438.8. 1. (CC) gcc options: -O3 -fomit-frame-pointer -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p - Frames Per Second, More Is Better - Enabled: 7.81, Repeat: 7.82, Run: 7.80 (SE +/- 0.02 / 0.01 / 0.01, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.4.1 - Test: Boat - Acceleration: CPU-only - Seconds, Fewer Is Better - Enabled: 4.300, Repeat: 4.302, Run: 4.291 (SE +/- 0.021 / 0.022 / 0.016, N = 3)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 19 - Decompression Speed - MB/s, More Is Better - Enabled: 4390.6, Repeat: 4401.8, Run: 4394.0 (SE +/- 11.78 / 4.33 / 8.51, N = 3). 1. (CC) gcc options: -O3 -pthread -lz -llzma

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 15.11 - Time To Compile - Seconds, Fewer Is Better - Enabled: 425.30, Repeat: 425.14, Run: 426.20 (SE +/- 0.08 / 0.07 / 0.11, N = 3)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
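The equivalent operation can be sketched with Pillow's WebP support (an illustrative stand-in for the cwebp CLI; quality=100 with method=6 roughly corresponds to the "Quality 100, Highest Compression" setting, and the file names are placeholders):

    from PIL import Image

    img = Image.open("sample-6000x4000.jpg")                  # hypothetical input photo
    img.save("sample.webp", "WEBP", quality=100, method=6)    # method 6 = slowest / best compression
    img.save("sample-lossless.webp", "WEBP", lossless=True)   # analogous to the lossless test setting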

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression - Encode Time (Seconds), Fewer Is Better - Enabled: 6.095, Repeat: 6.083, Run: 6.080 (SE +/- 0.005 / 0.003 / 0.004, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: float - Milliseconds, Fewer Is Better - Enabled: 82.1, Repeat: 81.9, Run: 82.0 (SE +/- 0.06 / 0.03 / 0.12, N = 3)

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 7.4.2 - Time To Compile - Seconds, Fewer Is Better - Enabled: 49.24, Repeat: 49.17, Run: 49.29 (SE +/- 0.05 / 0.07 / 0.06, N = 3)

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p - Frames Per Second, More Is Better - Enabled: 67.47, Repeat: 67.44, Run: 67.60 (SE +/- 0.48 / 0.17 / 0.29, N = 3). 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
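A minimal sketch of driving a model through ONNX Runtime's Python API is shown below (the model path and the zero-filled float32 input are placeholders; the test profile itself uses the native onnxruntime benchmark binary):

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx")       # e.g. an ONNX Zoo model such as yolov4
    inp = sess.get_inputs()[0]
    # Fabricate an input of the right shape; dynamic dimensions are set to 1.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    dummy = np.zeros(shape, dtype=np.float32)       # assumes a float32 input tensor

    outputs = sess.run(None, {inp.name: dummy})
    print("produced", len(outputs), "output tensor(s)")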

ONNX Runtime 1.6 - Model: bertsquad-10 - Device: OpenMP CPU - Inferences Per Minute, More Is Better - Enabled: 852, Repeat: 853, Run: 854 (SE +/- 0.76 / 1.88 / 1.17, N = 3). 1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction - Seconds, Fewer Is Better - Enabled: 118.39, Repeat: 118.12, Run: 118.35 (SE +/- 0.06 / 0.12 / 0.03, N = 3). 1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent_core -levent_pthreads -lutil -lm -lrt -lz

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126 - Encode Settings: Quality 100, Compression Effort 5 - Seconds, Fewer Is Better - Enabled: 9.618, Repeat: 9.603, Run: 9.625 (SE +/- 0.004 / 0.008 / 0.016, N = 3). 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU - ms, Fewer Is Better - Enabled: 16.74, Repeat: 16.73, Run: 16.70 (SE +/- 0.03 / 0.00 / 0.00, N = 3; MIN: 16.62 / 16.63 / 16.63). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.
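In Python the same OCR step can be sketched with the pytesseract wrapper around a system Tesseract install (the image file name is a placeholder; the test profile itself drives the tesseract binary over its own sample images):

    from PIL import Image
    import pytesseract

    # Run OCR on one image; the test profile repeats this over 7 sample images and times the total.
    text = pytesseract.image_to_string(Image.open("scanned-page.png"), lang="eng")
    print(text[:200])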

Tesseract OCR 4.1.1 - Time To OCR 7 Images - Seconds, Fewer Is Better - Enabled: 18.50, Repeat: 18.51, Run: 18.46 (SE +/- 0.00 / 0.03 / 0.02, N = 3)

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
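A plain sieve of Eratosthenes, for reference, looks like the sketch below (pure Python, so orders of magnitude slower than primesieve's cache-blocked, vectorized implementation, but the same underlying idea):

    def count_primes(limit):
        # Classic sieve of Eratosthenes: mark composites, count what is left.
        sieve = bytearray([1]) * (limit + 1)
        sieve[0:2] = b"\x00\x00"
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
        return sum(sieve)

    print(count_primes(10_000_000))   # 664579 primes below 10^7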

Primesieve 7.4 - 1e12 Prime Number Generation - Seconds, Fewer Is Better - Enabled: 18.04, Repeat: 18.03, Run: 18.07 (SE +/- 0.02 / 0.02 / 0.02, N = 3). 1. (CXX) g++ options: -O3 -lpthread

srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.

srsLTE 20.10.1 - Test: PHY_DL_Test - UE Mb/s, More Is Better - Enabled: 132.7, Repeat: 132.4, Run: 132.6 (SE +/- 0.12 / 0.09 / 0.40, N = 3). 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lbladeRF -lm -lfftw3f -lmbedcrypto

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile - Seconds, Fewer Is Better - Enabled: 113.10, Repeat: 112.97, Run: 113.22 (SE +/- 0.06 / 0.05 / 0.14, N = 3)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 3, Long Mode - Decompression Speed - MB/s, More Is Better - Enabled: 4865.2, Repeat: 4875.4, Run: 4864.6 (SE +/- 12.94 / 5.59 / 11.09, N = 15). 1. (CC) gcc options: -O3 -pthread -lz -llzma

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU - Inferences Per Minute, More Is Better - Enabled: 457, Repeat: 457, Run: 456 (SE +/- 1.59 / 1.59 / 2.42, N = 3). 1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2 - Video Input: Summer Nature 1080p - FPS, More Is Better - Enabled: 707.73 (MIN 632.84 / MAX 768.08), Repeat: 708.63 (MIN 636.52 / MAX 768.39), Run: 709.28 (MIN 632.86 / MAX 774.84); SE +/- 0.85 / 0.91 / 1.92, N = 3. 1. (CC) gcc options: -pthread

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
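The same operation can be sketched with Python's built-in lzma module, which wraps liblzma, the library behind the xz tool (the file name is a placeholder; preset=9 matches compression level 9):

    import lzma
    import time

    data = open("ubuntu.img", "rb").read()          # hypothetical file-system image

    start = time.perf_counter()
    compressed = lzma.compress(data, preset=9)
    elapsed = time.perf_counter() - start
    print("compressed %d -> %d bytes in %.1f s" % (len(data), len(compressed), elapsed))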

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 - Seconds, Fewer Is Better - Enabled: 29.80, Repeat: 29.81, Run: 29.86 (SE +/- 0.04 / 0.05 / 0.06, N = 3). 1. (CC) gcc options: -pthread -fvisibility=hidden -O2

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-N - GB/s, More Is Better - Repeat: 46.7, Enabled: 46.7, Run: 46.6 (SE +/- 0.07 / 0.06 / 0.03, N = 3). 1. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 - Milli-Seconds, Fewer Is Better - Enabled: 34712, Repeat: 34770, Run: 34786 (SE +/- 21.53 / 61.74 / 28.35, N = 3). 1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.8.2 - Speed: Speed 5 - Frames Per Second, More Is Better - Enabled: 32.91, Repeat: 32.90, Run: 32.84 (SE +/- 0.09 / 0.04 / 0.05, N = 3). 1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code of modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos that are bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics - Seconds, Fewer Is Better - Enabled: 15.06, Repeat: 15.05, Run: 15.03 (SE +/- 0.04 / 0.01 / 0.01, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - Enabled: 0.703196, Repeat: 0.704665, Run: 0.704534 (SE +/- 0.001249 / 0.001477 / 0.002770, N = 3; MIN: 0.66 each). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109 - Test: VoiceMark_100 - Voices, More Is Better - Enabled: 928.21, Repeat: 928.09, Run: 926.29 (SE +/- 0.45 / 0.35 / 2.29, N = 3). 1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.1-rc - Encoder Mode: Speed 6 Realtime - Frames Per Second, More Is Better - Enabled: 29.19, Repeat: 29.13, Run: 29.14 (SE +/- 0.06 / 0.05 / 0.08, N = 3). 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FM Deemphasis Filter - MiB/s, More Is Better - Enabled: 1084.4, Repeat: 1082.2, Run: 1084.2 (SE +/- 0.57 / 2.55 / 0.47, N = 3). 1. 3.8.2.0

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 7 - MP/s, More Is Better - Enabled: 10.10, Repeat: 10.11, Run: 10.09 (SE +/- 0.01 / 0.00 / 0.02, N = 3). 1. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie -ldl

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.22 - Test: rotate - Seconds, Fewer Is Better - Enabled: 9.027, Repeat: 9.036, Run: 9.044 (SE +/- 0.017 / 0.007 / 0.008, N = 3)

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance - Nodes Per Second, More Is Better - Enabled: 1723339, Repeat: 1726583, Run: 1724418 (SE +/- 1321.50 / 1711.59 / 1321.50, N = 5). 1. (CC) gcc options: -O3 -march=native

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6 - WAV To APE - Seconds, Fewer Is Better - Enabled: 10.15, Repeat: 10.14, Run: 10.13 (SE +/- 0.02 / 0.01 / 0.01, N = 5). 1. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

Cython Benchmark

Cython provides a superset of Python that is geared to deliver C-like levels of performance. This test profile makes use of Cython's bundled benchmark tests and runs an N-Queens sample test as a simple benchmark to the system's Cython performance. Learn more via the OpenBenchmarking.org test page.
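The N-Queens kernel being compiled is essentially a recursive backtracking counter; a pure-Python version of the same idea is sketched below (the actual benchmark is the Cython-compiled variant shipped with Cython's benchmark suite, which runs the same logic as C):

    def n_queens(n, row=0, cols=0, diag1=0, diag2=0):
        # Count placements using bitmasks for attacked columns and diagonals.
        if row == n:
            return 1
        count = 0
        free = ~(cols | diag1 | diag2) & ((1 << n) - 1)
        while free:
            bit = free & -free
            free -= bit
            count += n_queens(n, row + 1, cols | bit,
                              (diag1 | bit) << 1, (diag2 | bit) >> 1)
        return count

    print(n_queens(8))   # 92 solutions on an 8x8 board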

Cython Benchmark 0.29.21 - Test: N-Queens - Seconds, Fewer Is Better - Enabled: 20.37, Repeat: 20.39, Run: 20.35 (SE +/- 0.03 / 0.02 / 0.02, N = 3)

rays1bench

This is a test of rays1bench, a simple path-tracer / ray-tracer that supports SSE and AVX instructions, multi-threading, and other features. This test profile measures the performance of the "large scene" in rays1bench. Learn more via the OpenBenchmarking.org test page.

rays1bench 2020-01-09 - Large Scene - mrays/s, More Is Better - Enabled: 71.82, Repeat: 71.69, Run: 71.69 (SE +/- 0.04 / 0.03 / 0.05, N = 3)

YafaRay

YafaRay is an open-source physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1 - Total Time For Sample Scene - Seconds, Fewer Is Better - Enabled: 121.45, Repeat: 121.24, Run: 121.33 (SE +/- 0.08 / 0.18 / 0.46, N = 3). 1. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Signal Source (Cosine) - MiB/s, More Is Better - Enabled: 3244.4, Repeat: 3240.7, Run: 3238.8 (SE +/- 2.52 / 4.70 / 1.51, N = 3). 1. 3.8.2.0

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Rotate 90 Degrees - Seconds, Fewer Is Better - Enabled: 29.54, Repeat: 29.51, Run: 29.56 (SE +/- 0.02 / 0.01 / 0.02, N = 3)

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 - Seconds, Fewer Is Better - Enabled: 6.521, Repeat: 6.532, Run: 6.526 (SE +/- 0.004 / 0.014 / 0.005, N = 3). 1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding - Frames Per Second, More Is Better - Enabled: 118.95, Repeat: 119.08, Run: 119.15 (SE +/- 0.41 / 0.83 / 1.20, N = 3). 1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 512b Decryption - MiB/s, More Is Better - Enabled: 729.3 (SE +/- 0.25, N = 2), Repeat: 729.6 (SE +/- 0.27, N = 3), Run: 729.5

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Retro Car - Seconds, Fewer Is Better - Enabled: 3.662, Repeat: 3.660, Run: 3.656 (SE +/- 0.006 / 0.005 / 0.002, N = 3). 1. OpenSCAD version 2021.01

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
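For reference, an LZ4 frame round-trip can be sketched with the third-party lz4 Python bindings (a minimal sketch assuming the lz4 package is installed and a placeholder sample file; the test itself drives the LZ4 reference implementation directly):

    import time
    import lz4.frame

    data = open("ubuntu.iso", "rb").read()          # hypothetical sample file

    start = time.perf_counter()
    compressed = lz4.frame.compress(data, compression_level=1)
    mid = time.perf_counter()
    restored = lz4.frame.decompress(compressed)
    end = time.perf_counter()

    assert restored == data
    print("compress   %.0f MB/s" % (len(data) / (mid - start) / 1e6))
    print("decompress %.0f MB/s" % (len(data) / (end - mid) / 1e6))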

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed - MB/s, More Is Better - Enabled: 10283.65, Repeat: 10277.67, Run: 10267.04 (SE +/- 8.84 / 7.03 / 13.24, N = 3). 1. (CC) gcc options: -O3

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.22 - Test: resize - Seconds, Fewer Is Better - Enabled: 6.212, Repeat: 6.213, Run: 6.222 (SE +/- 0.048 / 0.024 / 0.023, N = 3)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed - MB/s, More Is Better - Enabled: 10672.3, Repeat: 10655.3, Run: 10670.1 (SE +/- 7.93 / 5.72 / 13.00, N = 3). 1. (CC) gcc options: -O3

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2 - Video Input: Summer Nature 4K - FPS, More Is Better - Enabled: 189.80 (MIN 176.26 / MAX 202.54), Repeat: 189.52 (MIN 175.59 / MAX 203), Run: 189.50 (MIN 177.16 / MAX 202.89); SE +/- 0.18 / 0.27 / 0.27, N = 3. 1. (CC) gcc options: -pthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Medium - Frames Per Second, More Is Better - Enabled: 6.41, Repeat: 6.42, Run: 6.41 (SE +/- 0.00 each, N = 3). 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: ChaCha20Poly1305 - Decrypt - MiB/s, More Is Better - Repeat: 936.21, Enabled: 935.22, Run: 936.65 (SE +/- 1.47 / 1.07 / 0.44, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Streamcluster - Seconds, Fewer Is Better - Enabled: 17.00, Repeat: 17.01, Run: 17.02 (SE +/- 0.01 / 0.00 / 0.01, N = 3). 1. (CXX) g++ options: -O2 -lOpenCL

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: KASUMI - Decrypt - MiB/s, More Is Better - Repeat: 103.64, Enabled: 103.65, Run: 103.50 (SE +/- 0.15 / 0.19 / 0.17, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Color Enhance - Seconds, Fewer Is Better - Enabled: 42.89, Repeat: 42.87, Run: 42.93 (SE +/- 0.01 each, N = 3)

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
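A minimal sketch of the same operation using Python's tarfile module is shown below (the archive and output path names are placeholders; the test profile simply extracts the archive and reports the elapsed time):

    import tarfile
    import time

    start = time.perf_counter()
    with tarfile.open("firefox-84.0.source.tar.xz", "r:xz") as archive:
        archive.extractall(path="firefox-src")
    print("extracted in %.2f s" % (time.perf_counter() - start))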

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz - Seconds, Fewer Is Better - Enabled: 14.75, Repeat: 14.74, Run: 14.76 (SE +/- 0.02 / 0.02 / 0.01, N = 4)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM collisionDetection - Browser: Google Chrome - ms, Fewer Is Better - Enabled: 280.38, Repeat: 280.30, Run: 280.72 (SE +/- 0.22 / 0.19 / 0.15, N = 3). 1. chrome 89.0.4389.90

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile times both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Medium - Seconds, Fewer Is Better - Enabled: 5.0307, Repeat: 5.0381, Run: 5.0345 (SE +/- 0.0020 / 0.0024 / 0.0044, N = 3). 1. (CXX) g++ options: -O3 -flto -pthread

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom - M samples/s, More Is Better - Enabled: 2.051, Repeat: 2.052, Run: 2.049 (SE +/- 0.001 / 0.001 / 0.000, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better - Enabled: 0.822350, Repeat: 0.822519, Run: 0.821323 (SE +/- 0.003255 / 0.003677 / 0.003248, N = 3; MIN: 0.8 each). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 0 - Seconds, Fewer Is Better - Enabled: 6.257, Repeat: 6.250, Run: 6.248 (SE +/- 0.004 / 0.005 / 0.002, N = 3). 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode - Seconds, Fewer Is Better - Enabled: 7.230, Repeat: 7.222, Run: 7.232 (SE +/- 0.011 / 0.011 / 0.010, N = 5). 1. (CXX) g++ options: -fvisibility=hidden -logg -lm

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC2 - Mpx/s, More Is Better - Enabled: 209.63, Repeat: 209.36, Run: 209.56 (SE +/- 0.07 / 0.28 / 0.07, N = 3). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: Twofish - MiB/s, More Is Better - Repeat: 427.37, Enabled: 427.84, Run: 427.28 (SE +/- 0.25 / 0.04 / 0.19, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

N-Queens

This is a test of the OpenMP version of a test that solves the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.

N-Queens 1.0 - Elapsed Time - Seconds, Fewer Is Better - Enabled: 13.04, Repeat: 13.05, Run: 13.03 (SE +/- 0.00 / 0.02 / 0.00, N = 3). 1. (CC) gcc options: -static -fopenmp -O3 -march=native

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 512b Encryption - MiB/s, More Is Better - Enabled: 774.4 (SE +/- 0.49, N = 3), Repeat: 774.9 (SE +/- 0.50, N = 2), Run: 775.1

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: Blowfish - MiB/s, More Is Better - Repeat: 545.42, Enabled: 545.45, Run: 544.75 (SE +/- 0.04 / 0.02 / 0.28, N = 3). 1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmarkUnstructuredVolume - Items / Sec, More Is Better - Enabled: 2708582, Repeat: 2709587, Run: 2706127 (SE +/- 884.39 / 3937.22 / 2813.39, N = 3; MIN/MAX: 32681/9049793, 32743/9064631, 32688/9055017)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless - Encode Time (Seconds), Fewer Is Better - Enabled: 15.30, Repeat: 15.32, Run: 15.32 (SE +/- 0.02 / 0.03 / 0.03, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: Blowfish - Decrypt (MiB/s, more is better): Repeat / Enabled / Run = 535.87 / 535.85 / 535.21

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MT - Degridding (Million Grid Points Per Second, more is better): Enabled / Repeat / Run = 1745.70 / 1743.56 / 1745.70

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, fewer is better): Enabled / Repeat / Run = 83.0 / 83.0 / 82.9

dcraw

This test times how long it takes to convert several high-resolution RAW NEF image files to PPM image format using dcraw. Learn more via the OpenBenchmarking.org test page.

dcraw - RAW To PPM Image Conversion (Seconds, fewer is better): Enabled / Repeat / Run = 31.09 / 31.11 / 31.07
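
For reference, a single conversion of the kind being timed can be reproduced directly with dcraw; with a placeholder file name it is roughly:

  dcraw -c sample.NEF > sample.ppm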

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, fewer is better): Enabled / Repeat / Run = 96.03 / 96.05 / 96.14

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, more is better): Enabled / Repeat / Run = 259341600 / 259622700 / 259329700

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.92 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better): Enabled / Repeat / Run = 132.84 / 132.87 / 132.99

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 256b Decryption (MiB/s, more is better): Enabled / Repeat / Run = 729.5 / 729.7 / 729.6

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, more is better): Enabled / Repeat / Run = 27.89 / 27.92 / 27.92

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.8.2 - Speed: Speed 0 (Frames Per Second, more is better): Enabled / Repeat / Run = 9.34 / 9.35 / 9.35

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Decryption (MiB/s, more is better): Enabled / Repeat / Run = 487.9 / 487.9 / 488.0

Cryptsetup - Twofish-XTS 256b Decryption (MiB/s, more is better): Enabled / Repeat / Run = 488.4 / 488.2 / 488.3

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better): Enabled / Repeat / Run = 263.34 / 263.47 / 263.60

RAR Compression

This test measures the time needed to archive/compress two copies of the Linux 4.13 kernel source tree using RAR/WinRAR compression. Learn more via the OpenBenchmarking.org test page.

RAR Compression 5.6.1 - Linux Source Tree Archiving To RAR (Seconds, fewer is better): Enabled / Repeat / Run = 36.16 / 36.16 / 36.19

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Hilbert Transform (MiB/s, more is better): Enabled / Repeat / Run = 107.2 / 107.2 / 107.1

Radiance Benchmark

This is a benchmark of NREL Radiance, an open-source synthetic imaging system developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.

Radiance Benchmark 5.0 - Test: Serial (Seconds, fewer is better): Enabled / Repeat / Run = 484.58 / 484.88 / 485.03

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Crop (Seconds, fewer is better): Enabled / Repeat / Run = 6.479 / 6.485 / 6.482

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, fewer is better): Enabled / Repeat / Run = 13.20 / 13.21 / 13.20

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2 - Test: Unkeyed Algorithms (MiB/second, more is better): Enabled / Repeat / Run = 440.92 / 441.06 / 441.32
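
To illustrate the kind of unkeyed primitive this result covers, here is a minimal Crypto++ sketch that hashes a short message with SHA-256; the message, header paths, and the -lcryptopp link line are assumptions for a typical packaged install, not part of the test profile:

  // sha256_sketch.cpp - build with: g++ sha256_sketch.cpp -lcryptopp
  #include <cryptopp/sha.h>
  #include <cryptopp/hex.h>
  #include <cryptopp/filters.h>
  #include <iostream>
  #include <string>

  int main() {
      const std::string msg = "hello, world";

      // SHA-256 is one of the unkeyed (hash) algorithms grouped into this result.
      CryptoPP::SHA256 sha;
      CryptoPP::byte digest[CryptoPP::SHA256::DIGESTSIZE];
      sha.CalculateDigest(digest,
                          reinterpret_cast<const CryptoPP::byte*>(msg.data()),
                          msg.size());

      // Hex-encode the digest for printing.
      std::string hex;
      CryptoPP::HexEncoder encoder(new CryptoPP::StringSink(hex));
      encoder.Put(digest, sizeof(digest));
      encoder.MessageEnd();
      std::cout << hex << std::endl;
      return 0;
  }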

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: rnflow (Seconds, fewer is better): Enabled / Repeat = 11.66 / 11.67

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio 3.8.2.0 - Test: FIR Filter (MiB/s, more is better): Enabled / Repeat / Run = 734.0 / 733.4 / 733.7

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, more is better): Enabled / Repeat / Run = 64.67 / 64.67 / 64.72

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, more is better): Enabled / Repeat / Run = 65.95 / 66.00 / 66.00
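
Since the graphs above exercise liblz4 through the lz4 CLI, here is a minimal sketch of the underlying C API; the input buffer and sizes are arbitrary stand-ins (the real test compresses an Ubuntu ISO), and the mapping of CLI levels to the HC path is only roughly indicated:

  // lz4_sketch.cpp - build with: g++ lz4_sketch.cpp -llz4
  #include <lz4.h>
  #include <lz4hc.h>
  #include <cstdio>
  #include <string>
  #include <vector>

  int main() {
      std::string input(1 << 20, 'A');  // 1 MiB of highly compressible data

      const int bound = LZ4_compressBound(static_cast<int>(input.size()));
      std::vector<char> dst(bound);

      // liblz4 exposes a fast default path and a high-compression (HC) path
      // that takes an explicit level, roughly what the higher CLI levels use.
      int fast = LZ4_compress_default(input.data(), dst.data(),
                                      static_cast<int>(input.size()), bound);
      int hc   = LZ4_compress_HC(input.data(), dst.data(),
                                 static_cast<int>(input.size()), bound, 9);

      std::printf("fast: %d bytes, HC level 9: %d bytes\n", fast, hc);
      return 0;
  }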

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, fewer is better): Enabled / Repeat / Run = 110221 / 110166 / 110245

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better): Enabled / Repeat / Run = 10679.1 / 10683.2 / 10675.7

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): Enabled / Repeat / Run = 15.82 / 15.82 / 15.81

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: Twofish - Decrypt (MiB/s, more is better): Repeat / Enabled / Run = 425.88 / 426.10 / 426.12

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.

Himeno Benchmark 3.0 - Poisson Pressure Solver (MFLOPS, more is better): Enabled / Repeat / Run = 5671.42 / 5672.86 / 5669.60
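
As a worked illustration of the point-Jacobi idea (not the Himeno code itself: the real benchmark sweeps a 3D pressure field with coefficient arrays), a minimal 2D Poisson sweep looks like this:

  // jacobi_sketch.cpp - illustrative 2D point-Jacobi sweep for -laplace(u) = f
  #include <cstdio>
  #include <vector>

  int main() {
      const int n = 64;                              // small illustrative grid
      const int iters = 100;
      const double h2 = 1.0 / ((n - 1) * (n - 1));   // grid spacing squared

      std::vector<double> u(n * n, 0.0), unew(n * n, 0.0), f(n * n, 1.0);

      for (int it = 0; it < iters; ++it) {
          for (int i = 1; i < n - 1; ++i)
              for (int j = 1; j < n - 1; ++j)
                  // Jacobi update: the new value depends only on old neighbour values,
                  // which is what makes the method embarrassingly parallel.
                  unew[i * n + j] = 0.25 * (u[(i - 1) * n + j] + u[(i + 1) * n + j] +
                                            u[i * n + j - 1] + u[i * n + j + 1] +
                                            h2 * f[i * n + j]);
          u.swap(unew);
      }

      std::printf("centre value after %d sweeps: %f\n", iters, u[(n / 2) * n + n / 2]);
      return 0;
  }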

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0 - Test: KASUMI (MiB/s, more is better): Enabled / Repeat / Run = 105.70 / 105.76 / 105.75

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): Enabled / Repeat / Run = 8.39732 / 8.39297 / 8.39591

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Exhaustive (Seconds, fewer is better): Enabled / Repeat / Run = 87.81 / 87.85 / 87.86

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, fewer is better): Enabled / Repeat / Run = 20.89 / 20.90 / 20.89

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 2 (Seconds, fewer is better): Enabled / Repeat / Run = 27.61 / 27.60 / 27.61

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Thorough (Seconds, fewer is better): Enabled / Repeat / Run = 11.67 / 11.67 / 11.68

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: KASUMI (MiB/s, more is better): Repeat / Enabled / Run = 105.73 / 105.78 / 105.73

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, fewer is better): Enabled / Repeat / Run = 112405 / 112393 / 112439

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 3 (Seconds, fewer is better): Enabled / Repeat / Run = 51.56 / 51.54 / 51.55

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, fewer is better): Enabled / Repeat / Run = 2344130 / 2343210 / 2343680

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Crypto (Bogo Ops/s, more is better): Enabled / Repeat / Run = 2217.94 / 2218.61 / 2217.75

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): Enabled / Repeat / Run = 16.45 / 16.45 / 16.45

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MT - Gridding (Million Grid Points Per Second, more is better): Enabled / Repeat / Run = 1200.82 / 1200.37 / 1200.48

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio 3.8.2.0 - Test: IIR Filter (MiB/s, more is better): Enabled / Repeat / Run = 837.7 / 837.4 / 837.4

Smallpt

Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

Smallpt 1.0 - Global Illumination Renderer; 128 Samples (Seconds, fewer is better): Enabled / Repeat / Run = 8.880 / 8.883 / 8.883
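
Smallpt's actual source is its own sub-100-line affair; purely to illustrate the unbiased Monte Carlo plus OpenMP pattern it is built on, here is a sketch that estimates pi the same way a path tracer averages random samples per pixel (sample count and seeding are arbitrary choices):

  // mc_sketch.cpp - build with: g++ -fopenmp mc_sketch.cpp (also runs serially without OpenMP)
  #include <cstdio>
  #include <random>

  int main() {
      const long samples = 10000000;
      long hits = 0;

      // Independent random samples per thread, combined with a reduction - the same
      // shape as accumulating per-pixel radiance estimates across OpenMP threads.
      #pragma omp parallel for reduction(+:hits)
      for (long i = 0; i < samples; ++i) {
          thread_local std::mt19937_64 rng(std::random_device{}());
          thread_local std::uniform_real_distribution<double> dist(0.0, 1.0);
          const double x = dist(rng), y = dist(rng);
          if (x * x + y * y <= 1.0) ++hits;
      }

      std::printf("pi estimate: %f\n", 4.0 * static_cast<double>(hits) / samples);
      return 0;
  }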

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, fewer is better): Enabled / Repeat / Run = 163084 / 163043 / 163094

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM collisionDetection - Browser: Firefox 86.0 (ms, fewer is better): Enabled / Repeat / Run = 337.8 / 337.9 / 337.9

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

m-queens 1.2 - Time To Solve (Seconds, fewer is better): Enabled / Repeat / Run = 68.60 / 68.58 / 68.59
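
To make the parallelisation pattern concrete, here is a small, self-contained N-queens counter that splits work over the first-row column with OpenMP; the board size and the code structure are illustrative only, as m-queens' own implementation and problem size differ:

  // nqueens_sketch.cpp - build with: g++ -O2 -fopenmp nqueens_sketch.cpp
  #include <cstdio>

  // Count completions given bitmasks of attacked columns and diagonals.
  static long solve(int n, unsigned cols, unsigned diag1, unsigned diag2) {
      if (cols == (1u << n) - 1) return 1;                       // all rows placed
      long count = 0;
      unsigned free_cells = ~(cols | diag1 | diag2) & ((1u << n) - 1);
      while (free_cells) {
          const unsigned bit = free_cells & (~free_cells + 1u);  // lowest free column
          free_cells -= bit;
          count += solve(n, cols | bit, (diag1 | bit) << 1, (diag2 | bit) >> 1);
      }
      return count;
  }

  int main() {
      const int n = 14;   // small board for illustration
      long total = 0;

      // Each first-row placement is an independent subtree, so the outer loop
      // parallelises cleanly across OpenMP threads.
      #pragma omp parallel for reduction(+:total)
      for (int col = 0; col < n; ++col) {
          const unsigned bit = 1u << col;
          total += solve(n, bit, bit << 1, bit >> 1);
      }

      std::printf("%d-queens solutions: %ld\n", n, total);
      return 0;
  }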

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, fewer is better): Enabled / Repeat / Run = 7.101 / 7.103 / 7.103

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: CPU (Events Per Second, more is better): Enabled / Repeat / Run = 34101.32 / 34092.54 / 34093.33
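
For reference, an equivalent standalone run is along the lines of the following (thread count chosen here to match this CPU's 16 threads; the test profile's exact arguments may differ):

  sysbench cpu --threads=16 run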

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0 - Test: Blowfish (MiB/s, more is better): Enabled / Repeat / Run = 545.75 / 545.73 / 545.86

Caffe

This is a benchmark of the Caffe deep learning framework that currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better): Enabled / Repeat / Run = 88739 / 88720 / 88718

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, fewer is better): Enabled / Repeat / Run = 67.13 / 67.13 / 67.12

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, fewer is better): Enabled / Repeat / Run = 2119427 / 2118960 / 2119127

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC1 (Mpx/s, more is better): Enabled / Repeat / Run = 369.31 / 369.33 / 369.26

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: CAST-256 (MiB/s, more is better): Repeat / Enabled / Run = 162.60 / 162.63 / 162.60

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: FM Deemphasis Filter (MiB/s, more is better): Enabled / Repeat / Run = 547.0 / 546.9 / 547.0

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.5.2 (k/s, more is better): Enabled / Repeat / Run = 36396.54 / 36393.96 / 36390.21

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: CAST-256 - Decrypt (MiB/s, more is better): Repeat / Enabled / Run = 162.33 / 162.32 / 162.31

Botan 2.13.0 - Test: CAST-256 (MiB/s, more is better): Enabled / Repeat / Run = 162.51 / 162.52 / 162.50

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, fewer is better): Enabled / Repeat / Run = 120 / 120 / 120

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, fewer is better): Enabled / Repeat / Run = 339 / 339 / 339

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, fewer is better): Enabled / Repeat / Run = 12.8 / 12.8 / 12.8

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, fewer is better): Enabled / Repeat / Run = 231 / 231 / 231

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, fewer is better): Enabled / Repeat / Run = 184 / 184 / 184

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): Enabled / Repeat / Run = 0.50 / 0.50 / 0.50

AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): Enabled / Repeat / Run = 0.16 / 0.16 / 0.16

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: Magnetic Reconnection - Renderer: Path Tracer (FPS, more is better): Enabled / Repeat / Run = 333.33 / 333.33 / 333.33

OSPray 1.8.5 - Demo: Magnetic Reconnection - Renderer: SciVis (FPS, more is better): Enabled / Repeat / Run = 20 / 20 / 20

OSPray 1.8.5 - Demo: NASA Streamlines - Renderer: SciVis (FPS, more is better): Enabled / Repeat / Run = 25 / 25 / 25

OSPray 1.8.5 - Demo: San Miguel - Renderer: Path Tracer (FPS, more is better): Enabled / Repeat / Run = 1.59 / 1.59 / 1.59

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, more is better): Enabled / Repeat / Run = 165 / 165 / 165

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2 - Throughput Test: LargeRandom (GB/s, more is better): Enabled / Repeat / Run = 1.21 / 1.21 / 1.21
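
A minimal sketch of driving simdjson's 0.8-era DOM API follows; the file name and the build line (single-header amalgamation) are placeholders, not the benchmark's actual input or harness:

  // simdjson_sketch.cpp - build with: g++ -O3 -std=c++17 simdjson_sketch.cpp simdjson.cpp
  #include "simdjson.h"
  #include <iostream>

  int main() {
      simdjson::dom::parser parser;
      simdjson::dom::element doc;

      // Load and parse a JSON file in one step; errors come back as an error code.
      auto error = parser.load("large.json").get(doc);
      if (error) {
          std::cerr << simdjson::error_message(error) << std::endl;
          return 1;
      }

      std::cout << "parsed an object? " << std::boolalpha << doc.is_object() << std::endl;
      return 0;
  }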

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: linpk (Seconds, fewer is better): Enabled / Repeat = 2.14 / 2.14

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: StyleBench - Browser: Firefox 86.0 (Runs / Minute, more is better): Enabled / Repeat / Run = 117 / 119 / 118

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.9 - Compression Level: 3, Long Mode - Compression Speed (MB/s, more is better): Enabled / Repeat / Run = 1302.6 / 1327.1 / 1303.8
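
A minimal sketch of the libzstd call the CLI builds on; the buffer is a stand-in for the disk image, and the "Long Mode" variant additionally enables long-distance matching through the advanced ZSTD_CCtx API, which is left out here for brevity:

  // zstd_sketch.cpp - build with: g++ zstd_sketch.cpp -lzstd
  #include <zstd.h>
  #include <cstdio>
  #include <string>
  #include <vector>

  int main() {
      std::string input(1 << 20, 'z');   // 1 MiB of trivially compressible data

      const size_t bound = ZSTD_compressBound(input.size());
      std::vector<char> dst(bound);

      // One-shot compression at level 3, matching the level shown above.
      const size_t written = ZSTD_compress(dst.data(), bound,
                                           input.data(), input.size(), 3);
      if (ZSTD_isError(written)) {
          std::fprintf(stderr, "%s\n", ZSTD_getErrorName(written));
          return 1;
      }

      std::printf("compressed %zu bytes down to %zu bytes\n", input.size(), written);
      return 0;
  }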

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, fewer is better): Enabled / Repeat / Run = 70.42 / 70.25 / 72.24

413 Results Shown

ACES DGEMM
oneDNN:
  IP Shapes 3D - f32 - CPU
  IP Shapes 3D - bf16bf16bf16 - CPU
AOM AV1
OpenVKL
Stress-NG
CloverLeaf
ViennaCL
NCNN:
  CPU - googlenet
  CPU - blazeface
oneDNN:
  IP Shapes 3D - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
GraphicsMagick
oneDNN
Selenium
oneDNN
Swet
oneDNN:
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
Selenium
AOM AV1
FFTW
NCNN:
  CPU - vgg16
  CPU - resnet18
Radiance Benchmark
NCNN:
  CPU - mnasnet
  CPU - resnet50
Embree
Stress-NG:
  CPU Cache
  Atomic
Stockfish
x265
simdjson
TTSIOD 3D Renderer
Embree
AOM AV1
oneDNN
ASKAP
GNU Radio
TensorFlow Lite
Java Gradle Build
NCNN:
  CPU - mobilenet
  CPU - efficientnet-b0
AOM AV1
NAS Parallel Benchmarks
Zstd Compression
DaCapo Benchmark
John The Ripper
Selenium
NCNN
ViennaCL
Stockfish
NAS Parallel Benchmarks
OpenVKL:
  vklBenchmark
  vklBenchmarkVdbVolume
DaCapo Benchmark
NCNN
GraphicsMagick
ViennaCL
Zstd Compression
DeepSpeech
OSPray
Liquid-DSP
Numpy Benchmark
Stress-NG
Rodinia
Liquid-DSP
Selenium
NCNN
Zstd Compression
Ngspice
BLAKE2
libavif avifenc
Ngspice
SVT-VP9
Tachyon
SVT-HEVC
LeelaChessZero
Xcompact3d Incompact3d
GraphicsMagick
Mobile Neural Network
Timed MAFFT Alignment
ViennaCL
asmFish
Darktable
Liquid-DSP
LeelaChessZero
ONNX Runtime
Perl Benchmarks
Embree
JPEG XL
OSPray
LibreOffice
ASKAP
PyPerformance
Rodinia
JPEG XL Decoding
ViennaCL
LuxCoreRender
Hugin
Sysbench
LuaJIT
Botan
Selenium
Mobile Neural Network:
  MobileNetV2_224
  SqueezeNetV1.0
Botan
AOM AV1
ViennaCL
librsvg
PHPBench
Optcarrot
FinanceBench
AOM AV1
WebP2 Image Encode
Selenium
Appleseed
OpenSCAD
AOM AV1
Embree
ViennaCL
LuaRadio
Build2
ViennaCL
Appleseed
John The Ripper
Timed Linux Kernel Compilation
NCNN
ViennaCL
Mobile Neural Network
Kvazaar
PlaidML
Stress-NG
SVT-VP9
oneDNN
PyBench
LZ4 Compression
srsLTE
Stress-NG
Selenium
Intel Open Image Denoise
Timed MrBayes Analysis
libavif avifenc
AOM AV1
Embree
SVT-VP9
oneDNN
SVT-AV1
Timed LLVM Compilation
Timed GDB GNU Debugger Compilation
FinanceBench
oneDNN
ctx_clock
JPEG XL Decoding
OSPray
Liquid-DSP
G'MIC
Rodinia
Zstd Compression
Monte Carlo Simulations of Ionised Nebulae
simdjson
Cryptsetup
NeatBench
PyPerformance
BRL-CAD
GNU Octave Benchmark
libjpeg-turbo tjbench
eSpeak-NG Speech Engine
Timed Erlang/OTP Compilation
Chaos Group V-RAY
SVT-AV1
7-Zip Compression
WebP2 Image Encode
Embree
libavif avifenc
AOM AV1
ViennaCL
Liquid-DSP
C-Blosc
AOM AV1
GNU Radio
oneDNN
WireGuard + Linux Networking Stack Stress Test
oneDNN
Kvazaar
LuaRadio
Cryptsetup
Stress-NG
OpenSCAD
Mobile Neural Network
LuxCoreRender
SQLite Speedtest
GraphicsMagick
libavif avifenc
SVT-HEVC
OpenSCAD
NCNN
ONNX Runtime
JPEG XL
NCNN
OSPray
Cryptsetup
Darktable
Selenium
Cryptsetup
Selenium
OCRMyPDF
POV-Ray
PlaidML
ViennaCL
PyPerformance
Timed Mesa Compilation
Cryptsetup
libavif avifenc
OpenFOAM
TNN
Basis Universal
Kvazaar
AOM AV1
RawTherapee
LULESH
Timed Wasmer Compilation
ASKAP
FFTW
Cryptsetup
PlaidML
JPEG XL
Coremark
GIMP
srsLTE
oneDNN
Timed ImageMagick Compilation
QuantLib
Timed Eigen Compilation
oneDNN
NCNN
OpenSCAD
ONNX Runtime
AOM AV1
Botan
Perl Benchmarks
Kvazaar
Etcpak
Darktable
PyPerformance
SVT-HEVC
GraphicsMagick
GIMP
Appleseed
Zstd Compression
GEGL
Zstd Compression
Botan
libavif avifenc
IndigoBench
LibRaw
Botan
Timed Apache Compilation
Nebular Empirical Analysis Tool
PyPerformance
NAMD
Zstd Compression
Cryptsetup
Selenium
Git
simdjson
Cryptsetup
oneDNN
PyPerformance
Cryptsetup
Timed FFmpeg Compilation
Crafty
GNU GMP GMPbench
AOM AV1
Darktable
Zstd Compression
Timed Node.js Compilation
WebP Image Encode
PyPerformance
Timed PHP Compilation
x265
ONNX Runtime
Xcompact3d Incompact3d
WebP2 Image Encode
oneDNN
Tesseract OCR
Primesieve
srsLTE
Timed Godot Game Engine Compilation
Zstd Compression
ONNX Runtime
dav1d
XZ Compression
ViennaCL
Caffe
VP9 libvpx Encoding
Dolfyn
oneDNN
Google SynthMark
AOM AV1
GNU Radio
JPEG XL
GIMP
TSCP
Monkey Audio Encoding
Cython Benchmark
rays1bench
YafaRay
GNU Radio
GEGL
LAME MP3 Encoding
x264
Cryptsetup
OpenSCAD
LZ4 Compression
GIMP
LZ4 Compression
dav1d
Kvazaar
Botan
Rodinia
Botan
GEGL
Unpacking Firefox
Selenium
ASTC Encoder
IndigoBench
oneDNN
Basis Universal
Opus Codec Encoding
Etcpak
Botan
N-Queens
Cryptsetup
Botan
OpenVKL
WebP Image Encode
Botan
ASKAP
PyPerformance
dcraw
Timed HMMer Search
Algebraic Multi-Grid Benchmark
Blender
Cryptsetup
Kvazaar
VP9 libvpx Encoding
Cryptsetup:
  Twofish-XTS 512b Decryption
  Twofish-XTS 256b Decryption
TNN
RAR Compression
LuaRadio
Radiance Benchmark
GEGL
WavPack Audio Encoding
Crypto++
Polyhedron Fortran Benchmarks
GNU Radio
LZ4 Compression:
  9 - Compression Speed
  3 - Compression Speed
TensorFlow Lite
LZ4 Compression
oneDNN
Botan
Himeno Benchmark
Botan
oneDNN
ASTC Encoder
RNNoise
Basis Universal
ASTC Encoder
Botan
TensorFlow Lite
Basis Universal
TensorFlow Lite
Stress-NG
oneDNN
ASKAP
GNU Radio
Smallpt
TensorFlow Lite
Selenium
m-queens
FLAC Audio Encoding
Sysbench
Botan
Caffe
C-Ray
TensorFlow Lite
Etcpak
Botan
LuaRadio
Aircrack-ng
Botan
Botan
PyPerformance:
  regex_compile
  raytrace
  pathlib
  2to3
  go
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 1080p
  Speed 0 Two-Pass - Bosphorus 4K
OSPray:
  Magnetic Reconnection - Path Tracer
  Magnetic Reconnection - SciVis
  NASA Streamlines - SciVis
  San Miguel - Path Tracer
GraphicsMagick
simdjson
Polyhedron Fortran Benchmarks
Selenium
Zstd Compression
Rodinia