AMD Ryzen 9 7950X3D Modes On Linux

Ryzen 9 7950X3D benchmarks for a future article by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2302261-NE-7950X3DMO02

Result Identifier    Date Run             Test Duration
Auto                 February 17 2023     1 Day, 8 Hours, 38 Minutes
Prefer Cache         February 19 2023     1 Day, 9 Hours, 1 Minute
Prefer Freq          February 20 2023     1 Day, 12 Hours, 27 Minutes


AMD Ryzen 9 7950X3D Modes On Linux - OpenBenchmarking.org / Phoronix Test Suite

Processor: AMD Ryzen 9 7950X3D 16-Core @ 5.76GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG CROSSHAIR X670E HERO (9922 BIOS)
Chipset: AMD Device 14d8
Memory: 32GB
Disk: Western Digital WD_BLACK SN850X 1000GB + 2000GB
Graphics: AMD Radeon RX 7900 XTX 24GB (2304/1249MHz)
Audio: AMD Device ab30
Monitor: ASUS MG28U
Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 23.04
Kernel: 6.2.0-060200rc8daily20230213-generic (x86_64)
Desktop: GNOME Shell 43.2
Display Server: X Server 1.21.1.6
OpenGL: 4.6 Mesa 23.1.0-devel (git-efcb639 2023-02-13 lunar-oibaf-ppa) (LLVM 15.0.7 DRM 3.49)
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs / Notes:
- Transparent Huge Pages: madvise
- Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Processor Notes: Scaling Governor: amd-pstate performance (Boost: Enabled) - CPU Microcode: 0xa601203
- Java Notes: OpenJDK Runtime Environment (build 17.0.6+10-Ubuntu-0ubuntu1)
- Python Notes: Python 3.11.1
- Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Auto / Prefer Cache / Prefer Freq, 100% baseline up to 112%) - tests shown: Google Draco, Himeno Benchmark, OSPRay Studio, PyBench, PHPBench, QuantLib, DeepSpeech, Stress-NG, srsRAN, Numpy Benchmark, simdjson, LAME MP3 Encoding, KTX-Software toktx, ACES DGEMM, Pennant, Radiance Benchmark, SQLite Speedtest, RNNoise, TensorFlow Lite, oneDNN.

Per Watt Result Overview (Auto / Prefer Cache / Prefer Freq, 100% baseline up to 120%) - performance-per-Watt geometric means for tests including Numpy Benchmark, PHPBench, QuantLib, simdjson, Himeno Benchmark, LZ4 Compression, Stress-NG, srsRAN, ACES DGEMM, ASKAP, ClickHouse, GraphicsMagick, Node.js V8 Web Tooling Benchmark, libjpeg-turbo tjbench, Liquid-DSP, ASTC Encoder, and dozens of additional tests.

AMD Ryzen 9 7950X3D Modes On Linux - detailed per-test result table for the Auto, Prefer Cache, and Prefer Freq runs (OpenBenchmarking.org).

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.
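For a sense of what these throughput figures measure, below is a minimal sketch of a GNU Radio flowgraph written against the GNU Radio 3.10 Python API: a cosine signal source feeding a FIR filter and a null sink, with the run timed to get a rough samples-per-second figure. The sample rate, filter design, and sample count are arbitrary illustrative choices, not the parameters of the actual test profile.

    # Minimal GNU Radio flowgraph sketch (Python API), assuming the gnuradio
    # module from GNU Radio 3.10 is installed. It mirrors the general shape of
    # the throughput tests above: signal source -> FIR filter -> null sink.
    import time

    from gnuradio import analog, blocks, gr, filter as gr_filter

    SAMP_RATE = 32e3          # nominal sample rate used for filter design
    N_SAMPLES = 50_000_000    # number of complex samples to push through

    tb = gr.top_block()

    src = analog.sig_source_c(SAMP_RATE, analog.GR_COS_WAVE, 1e3, 1.0)
    taps = gr_filter.firdes.low_pass(1.0, SAMP_RATE, 0.2 * SAMP_RATE, 0.05 * SAMP_RATE)
    fir = gr_filter.fir_filter_ccf(1, taps)           # decimation 1, real taps on a complex stream
    head = blocks.head(gr.sizeof_gr_complex, N_SAMPLES)
    sink = blocks.null_sink(gr.sizeof_gr_complex)

    tb.connect(src, fir, head, sink)

    start = time.time()
    tb.run()                                          # blocks until `head` stops the flowgraph
    elapsed = time.time() - start
    print(f"{N_SAMPLES / elapsed / 1e6:.1f} Msamples/s through the FIR filter")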

GNU Radio - Test: Hilbert Transform (MiB/s, More Is Better)
    Prefer Cache: 721.3 (SE +/- 3.09, N = 9, Min: 702.8 / Max: 735.1)
    Auto: 720.2 (SE +/- 3.80, N = 9, Min: 703.7 / Max: 738)
    Prefer Freq: 687.4 (SE +/- 3.07, N = 9, Min: 674.7 / Max: 697.5)

GNU Radio - Test: FM Deemphasis Filter (MiB/s, More Is Better)
    Prefer Freq: 1136.9 (SE +/- 3.20, N = 9, Min: 1121 / Max: 1153.5)
    Auto: 1119.7 (SE +/- 2.62, N = 9, Min: 1100.8 / Max: 1128.7)
    Prefer Cache: 1115.0 (SE +/- 4.98, N = 9, Min: 1096.5 / Max: 1139.1)

GNU Radio - Test: IIR Filter (MiB/s, More Is Better)
    Prefer Freq: 524.7 (SE +/- 0.92, N = 9, Min: 518.5 / Max: 527.1)
    Auto: 520.0 (SE +/- 2.24, N = 9, Min: 512.4 / Max: 534.1)
    Prefer Cache: 518.2 (SE +/- 1.65, N = 9, Min: 511.7 / Max: 526.2)

GNU Radio - Test: FIR Filter (MiB/s, More Is Better)
    Auto: 1390.7 (SE +/- 2.89, N = 9, Min: 1379.7 / Max: 1404.2)
    Prefer Cache: 1390.5 (SE +/- 3.63, N = 9, Min: 1372.7 / Max: 1408.1)
    Prefer Freq: 1267.7 (SE +/- 2.90, N = 9, Min: 1255.6 / Max: 1282.4)

GNU Radio - Test: Signal Source (Cosine) (MiB/s, More Is Better)
    Prefer Freq: 5008.9 (SE +/- 41.28, N = 9, Min: 4777.3 / Max: 5150.7)
    Prefer Cache: 4918.1 (SE +/- 45.51, N = 9, Min: 4753.7 / Max: 5121.9)
    Auto: 4813.0 (SE +/- 48.78, N = 9, Min: 4541 / Max: 5000.6)

GNU Radio - Test: Five Back to Back FIR Filters (MiB/s, More Is Better)
    Prefer Cache: 1404.4 (SE +/- 25.73, N = 9, Min: 1300 / Max: 1509.6)
    Auto: 1356.5 (SE +/- 22.39, N = 9, Min: 1269.3 / Max: 1506.6)
    Prefer Freq: 1352.3 (SE +/- 17.40, N = 9, Min: 1232.5 / Max: 1395.5)

1. 3.10.5.1

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, More Is Better)
    Prefer Freq: 16.39 (SE +/- 0.09, N = 3, Min: 16.21 / Max: 16.52)
    Auto: 16.34 (SE +/- 0.09, N = 3, Min: 16.19 / Max: 16.5)
    Prefer Cache: 16.33 (SE +/- 0.10, N = 3, Min: 16.13 / Max: 16.45)
    1. (CXX) g++ options: -O3 -lm -ldl

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig configuration for building all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, Fewer Is Better)
    Prefer Cache: 517.73 (SE +/- 0.45, N = 3, Min: 516.97 / Max: 518.51)
    Prefer Freq: 519.52 (SE +/- 0.32, N = 3, Min: 518.89 / Max: 519.88)
    Auto: 523.31 (SE +/- 0.26, N = 3, Min: 522.79 / Max: 523.6)

Blender

Blender is an open-source 3D creation and modeling software project. This test measures Blender's Cycles rendering performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
    Prefer Freq: 488.91 (SE +/- 0.59, N = 3, Min: 488.32 / Max: 490.1)
    Prefer Cache: 489.98 (SE +/- 0.55, N = 3, Min: 489.16 / Max: 491.03)
    Auto: 490.19 (SE +/- 0.17, N = 3, Min: 489.98 / Max: 490.52)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
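As context for the ONNX Runtime results below, here is a rough sketch of a single CPU inference using the onnxruntime Python API. "model.onnx" is a placeholder path standing in for any ONNX Model Zoo file, and the random float32 input is an assumption made purely for illustration (some models, such as bertsquad-12, expect integer token inputs instead).

    # Rough sketch of an ONNX Runtime CPU inference in Python, assuming the
    # onnxruntime and numpy packages are installed.
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    # Build a random input matching the model's declared shape; dynamic
    # dimensions are pinned to 1 here just for illustration.
    inp = sess.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    data = np.random.rand(*shape).astype(np.float32)   # assumes a float32 input tensor

    outputs = sess.run(None, {inp.name: data})          # None = return all outputs
    print([o.shape for o in outputs])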

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
    Prefer Freq: 55.59 (SE +/- 2.10, N = 15, Min: 49.03 / Max: 68.82)
    Prefer Cache: 58.06 (SE +/- 2.61, N = 15, Min: 49.04 / Max: 70.11)
    Auto: 59.64 (SE +/- 2.18, N = 15, Min: 49.39 / Max: 68.44)

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
    Prefer Freq: 18.32 (SE +/- 0.63, N = 15, Min: 14.53 / Max: 20.39)
    Prefer Cache: 17.69 (SE +/- 0.75, N = 15, Min: 14.26 / Max: 20.39)
    Auto: 17.10 (SE +/- 0.65, N = 15, Min: 14.61 / Max: 20.25)

1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some previous ASKAP benchmarks included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MT - Degridding (Million Grid Points Per Second, More Is Better)
    Prefer Cache: 2970.04 (SE +/- 9.57, N = 15, Min: 2877.47 / Max: 3006.42)
    Auto: 2964.86 (SE +/- 12.65, N = 15, Min: 2860.08 / Max: 3004.3)
    Prefer Freq: 2959.18 (SE +/- 11.63, N = 15, Min: 2884.29 / Max: 3006.42)

ASKAP 1.0 - Test: tConvolve MT - Gridding (Million Grid Points Per Second, More Is Better)
    Prefer Cache: 2530.47 (SE +/- 31.33, N = 15, Min: 2333.66 / Max: 2694.56)
    Auto: 2371.38 (SE +/- 34.96, N = 15, Min: 2242.16 / Max: 2672.58)
    Prefer Freq: 2322.20 (SE +/- 18.48, N = 15, Min: 2248.67 / Max: 2498.59)

1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
    Prefer Cache: 297.82 (SE +/- 15.84, N = 12, Min: 277.23 / Max: 470.93)
    Prefer Freq: 337.22 (SE +/- 19.31, N = 15, Min: 276.35 / Max: 469.59)
    Auto: 346.46 (SE +/- 20.18, N = 15, Min: 276.66 / Max: 473.82)

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
    Prefer Cache: 3.42808 (SE +/- 0.12057, N = 12, Min: 2.12 / Max: 3.61)
    Prefer Freq: 3.08702 (SE +/- 0.15247, N = 15, Min: 2.13 / Max: 3.62)
    Auto: 3.01355 (SE +/- 0.15740, N = 15, Min: 2.11 / Max: 3.61)

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
    Prefer Freq: 22.98 (SE +/- 0.52, N = 12, Min: 21.57 / Max: 26.12)
    Auto: 23.65 (SE +/- 0.72, N = 15, Min: 21.59 / Max: 30.66)
    Prefer Cache: 24.14 (SE +/- 0.98, N = 15, Min: 21.52 / Max: 30.7)

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
    Prefer Freq: 43.74 (SE +/- 0.93, N = 12, Min: 38.28 / Max: 46.35)
    Auto: 42.76 (SE +/- 1.11, N = 15, Min: 32.61 / Max: 46.32)
    Prefer Cache: 42.27 (SE +/- 1.48, N = 15, Min: 32.57 / Max: 46.46)

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
    Auto: 5.52425 (SE +/- 0.23907, N = 15, Min: 4.69 / Max: 7.15)
    Prefer Freq: 5.53549 (SE +/- 0.29232, N = 12, Min: 4.7 / Max: 7.11)
    Prefer Cache: 5.55023 (SE +/- 0.21808, N = 15, Min: 4.71 / Max: 6.72)

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
    Prefer Freq: 185.95 (SE +/- 9.16, N = 12, Min: 140.55 / Max: 212.97)
    Auto: 185.54 (SE +/- 7.51, N = 15, Min: 139.91 / Max: 213.06)
    Prefer Cache: 184.02 (SE +/- 7.02, N = 15, Min: 148.88 / Max: 212.43)

1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Complex Phase (MiB/s, More Is Better)
    Prefer Cache: 1089.7 (SE +/- 4.40, N = 5, Min: 1080.7 / Max: 1102.1)
    Auto: 1078.4 (SE +/- 4.07, N = 3, Min: 1071.9 / Max: 1085.9)
    Prefer Freq: 1071.4 (SE +/- 4.41, N = 7, Min: 1056.1 / Max: 1088.3)

LuaRadio 0.9.1 - Test: Hilbert Transform (MiB/s, More Is Better)
    Prefer Cache: 155.6 (SE +/- 1.10, N = 5, Min: 152.9 / Max: 158.4)
    Prefer Freq: 154.6 (SE +/- 0.35, N = 7, Min: 153.9 / Max: 156.5)
    Auto: 153.7 (SE +/- 1.47, N = 3, Min: 150.8 / Max: 155.3)

LuaRadio 0.9.1 - Test: FM Deemphasis Filter (MiB/s, More Is Better)
    Prefer Freq: 527.9 (SE +/- 2.13, N = 7, Min: 518.6 / Max: 534)
    Prefer Cache: 527.7 (SE +/- 3.13, N = 5, Min: 515.7 / Max: 532.2)
    Auto: 527.5 (SE +/- 4.15, N = 3, Min: 519.4 / Max: 533)

LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters (MiB/s, More Is Better)
    Prefer Freq: 2002.5 (SE +/- 17.45, N = 7, Min: 1926.2 / Max: 2045.3)
    Auto: 1959.1 (SE +/- 22.01, N = 3, Min: 1915.1 / Max: 1982.9)
    Prefer Cache: 1945.7 (SE +/- 20.86, N = 5, Min: 1885.2 / Max: 2004.1)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better)
    Prefer Freq: 419 (SE +/- 1.86, N = 3, Min: 415 / Max: 421; MIN: 53 / MAX: 5936)
    Prefer Cache: 407 (SE +/- 0.33, N = 3, Min: 407 / Max: 408; MIN: 53 / MAX: 5165)
    Auto: 407 (SE +/- 0.33, N = 3, Min: 407 / Max: 408; MIN: 53 / MAX: 5149)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, More Is Better)
    Auto: 235.95 (SE +/- 0.40, N = 3, Min: 235.18 / Max: 236.51)
    Prefer Cache: 235.72 (SE +/- 0.12, N = 3, Min: 235.53 / Max: 235.95)
    Prefer Freq: 235.28 (SE +/- 0.84, N = 3, Min: 233.72 / Max: 236.58)

OpenEMS

OpenEMS is a free and open electromagnetic field solver using the FDTD method. This test profile runs OpenEMS and pyEMS benchmark demos. Learn more via the OpenBenchmarking.org test page.

OpenEMS 0.0.35-86 - Test: pyEMS Coupler (MCells/s, More Is Better)
    Prefer Cache: 62.01 (SE +/- 0.12, N = 3, Min: 61.81 / Max: 62.21)
    Auto: 61.61 (SE +/- 0.35, N = 3, Min: 61.01 / Max: 62.21)
    Prefer Freq: 60.92 (SE +/- 0.05, N = 3, Min: 60.82 / Max: 61)
    1. (CXX) g++ options: -O3 -rdynamic -ltinyxml -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -lexpat

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark Scalar (Items / Sec, More Is Better)
    Prefer Freq: 199 (SE +/- 0.00, N = 3, Min: 199 / Max: 199; MIN: 18 / MAX: 3749)
    Prefer Cache: 198 (SE +/- 0.33, N = 3, Min: 197 / Max: 198; MIN: 18 / MAX: 3736)
    Auto: 198 (SE +/- 0.67, N = 3, Min: 197 / Max: 199; MIN: 17 / MAX: 3749)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, More Is Better)
    Prefer Freq: 1931 (SE +/- 2.96, N = 3, Min: 1927 / Max: 1937)
    Auto: 1928 (SE +/- 22.10, N = 3, Min: 1897 / Max: 1971)
    Prefer Cache: 1926 (SE +/- 6.23, N = 3, Min: 1914 / Max: 1934)

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, More Is Better)
    Auto: 1825 (SE +/- 12.03, N = 3, Min: 1807 / Max: 1848)
    Prefer Cache: 1810 (SE +/- 19.08, N = 3, Min: 1778 / Max: 1844)
    Prefer Freq: 1795 (SE +/- 14.57, N = 3, Min: 1767 / Max: 1816)

1. (CXX) g++ options: -flto -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
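As a rough illustration of the operation being timed, the following sketch uses the zstandard Python bindings to compress and decompress a local copy of silesia.tar at level 19 and report throughput. It is a simplification and does not reproduce the test profile's threading behavior or its "long mode" (long-distance matching) variants.

    # Simplified sketch assuming the "zstandard" Python package is installed
    # and that silesia.tar (the corpus this test profile uses) is available.
    import time
    import zstandard

    data = open("silesia.tar", "rb").read()

    cctx = zstandard.ZstdCompressor(level=19)
    start = time.time()
    compressed = cctx.compress(data)
    comp_time = time.time() - start

    dctx = zstandard.ZstdDecompressor()
    start = time.time()
    restored = dctx.decompress(compressed)
    decomp_time = time.time() - start

    assert restored == data
    print(f"compress:   {len(data) / comp_time / 1e6:.1f} MB/s "
          f"(ratio {len(data) / len(compressed):.2f})")
    print(f"decompress: {len(data) / decomp_time / 1e6:.1f} MB/s")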

Zstd Compression 1.5.4 - Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
    Auto: 2004.9 (SE +/- 26.46, N = 15, Min: 1938.2 / Max: 2201.5)
    Prefer Freq: 1990.6 (SE +/- 22.67, N = 15, Min: 1925 / Max: 2190.1)
    Prefer Cache: 1949.1 (SE +/- 2.99, N = 15, Min: 1925.1 / Max: 1975.7)

Zstd Compression 1.5.4 - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
    Auto: 26.0 (SE +/- 0.22, N = 15, Min: 24.2 / Max: 26.9)
    Prefer Cache: 25.9 (SE +/- 0.28, N = 15, Min: 24 / Max: 27.1)
    Prefer Freq: 25.4 (SE +/- 0.35, N = 15, Min: 23 / Max: 26.9)

1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
    Prefer Cache: 7.59681 (SE +/- 0.00464, N = 3, Min: 7.59 / Max: 7.61)
    Auto: 7.56423 (SE +/- 0.00618, N = 3, Min: 7.55 / Max: 7.57)
    Prefer Freq: 7.53018 (SE +/- 0.03588, N = 3, Min: 7.46 / Max: 7.57)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
    Prefer Cache: 13.42 (SE +/- 0.06, N = 3, Min: 13.3 / Max: 13.49)
    Prefer Freq: 14.39 (SE +/- 0.43, N = 12, Min: 13.39 / Max: 17.57)
    Auto: 14.70 (SE +/- 0.51, N = 15, Min: 13.27 / Max: 18)

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
    Prefer Cache: 74.51 (SE +/- 0.35, N = 3, Min: 74.11 / Max: 75.2)
    Prefer Freq: 70.09 (SE +/- 1.82, N = 12, Min: 56.91 / Max: 74.69)
    Auto: 69.08 (SE +/- 2.15, N = 15, Min: 55.55 / Max: 75.38)

1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.
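For context, a point-Jacobi sweep repeatedly replaces each grid value with a combination of its neighbors and the right-hand side; the sketch below is a heavily simplified NumPy version of that iteration on a 3D grid. It only illustrates the bandwidth-heavy stencil access pattern this benchmark stresses; the real Himeno kernel uses a wider stencil with per-cell coefficient arrays.

    # Stripped-down illustration of a point-Jacobi sweep for a 3D Poisson
    # problem using NumPy (unit grid spacing assumed).
    import numpy as np

    n = 128
    p = np.zeros((n, n, n), dtype=np.float32)            # pressure field
    rhs = np.random.rand(n, n, n).astype(np.float32)     # right-hand side

    for _ in range(10):                                   # a handful of Jacobi sweeps
        p_new = p.copy()
        p_new[1:-1, 1:-1, 1:-1] = (
            p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1] +
            p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1] +
            p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2] -
            rhs[1:-1, 1:-1, 1:-1]
        ) / 6.0
        p = p_new

    print(float(np.abs(p).mean()))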

Himeno Benchmark 3.0 - Poisson Pressure Solver (MFLOPS, More Is Better)
    Auto: 5350.25 (SE +/- 140.79, N = 15, Min: 4682.76 / Max: 6576.31)
    Prefer Cache: 5163.44 (SE +/- 120.69, N = 15, Min: 4533.32 / Max: 5888.94)
    Prefer Freq: 4638.29 (SE +/- 83.07, N = 15, Min: 4161.95 / Max: 5055.63)
    1. (CC) gcc options: -O3 -mavx2

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Unix Makefiles (Seconds, Fewer Is Better)
    Prefer Freq: 276.21 (SE +/- 2.01, N = 3, Min: 273.48 / Max: 280.13)
    Prefer Cache: 282.59 (SE +/- 3.72, N = 3, Min: 275.48 / Max: 288.07)
    Auto: 285.53 (SE +/- 0.94, N = 3, Min: 283.92 / Max: 287.16)

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, Fewer Is Better)
    Prefer Freq: 252.07 (SE +/- 0.26, N = 3, Min: 251.67 / Max: 252.56)
    Prefer Cache: 252.32 (SE +/- 0.39, N = 3, Min: 251.79 / Max: 253.09)
    Auto: 252.37 (SE +/- 0.14, N = 3, Min: 252.1 / Max: 252.58)

Numpy Benchmark

This is a test to measure general Numpy performance. Learn more via the OpenBenchmarking.org test page.
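A general NumPy benchmark exercises kernels along these lines; the snippet below times a dense matrix multiply and a large FFT. The sizes and the choice of operations are assumptions for illustration rather than the actual Numpy Benchmark workload.

    # Tiny sketch of the kind of kernels a general NumPy benchmark times.
    import time
    import numpy as np

    a = np.random.rand(2048, 2048)
    b = np.random.rand(2048, 2048)

    start = time.time()
    c = a @ b                       # dense matrix multiply (BLAS-backed)
    print(f"matmul: {time.time() - start:.3f} s")

    x = np.random.rand(1 << 22)
    start = time.time()
    X = np.fft.rfft(x)              # large 1-D FFT
    print(f"rfft:   {time.time() - start:.3f} s")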

Numpy Benchmark (Score, More Is Better)
    Prefer Cache: 958.48 (SE +/- 9.21, N = 6, Min: 914.45 / Max: 979.92)
    Auto: 899.73 (SE +/- 6.01, N = 15, Min: 872.04 / Max: 977.41)
    Prefer Freq: 885.61 (SE +/- 1.73, N = 3, Min: 883.71 / Max: 889.07)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
    Auto: 81.70 (SE +/- 0.82, N = 5, Min: 78.51 / Max: 82.84)
    Prefer Freq: 82.10 (SE +/- 0.73, N = 3, Min: 80.65 / Max: 82.88)
    Prefer Cache: 94.13 (SE +/- 3.28, N = 15, Min: 82.6 / Max: 113.58)

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
    Auto: 12.24 (SE +/- 0.13, N = 5, Min: 12.07 / Max: 12.74)
    Prefer Freq: 12.18 (SE +/- 0.11, N = 3, Min: 12.06 / Max: 12.4)
    Prefer Cache: 10.79 (SE +/- 0.35, N = 15, Min: 8.8 / Max: 12.11)

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
    Prefer Cache: 5.29251 (SE +/- 0.01610, N = 3, Min: 5.27 / Max: 5.32)
    Auto: 5.39574 (SE +/- 0.06583, N = 4, Min: 5.22 / Max: 5.54)
    Prefer Freq: 5.46230 (SE +/- 0.08402, N = 15, Min: 5.22 / Max: 6.11)

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
    Prefer Cache: 188.88 (SE +/- 0.57, N = 3, Min: 187.79 / Max: 189.73)
    Auto: 185.34 (SE +/- 2.28, N = 4, Min: 180.45 / Max: 191.43)
    Prefer Freq: 183.57 (SE +/- 2.62, N = 15, Min: 163.72 / Max: 191.45)

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better)
    Prefer Freq: 2.00100 (SE +/- 0.00189, N = 3, Min: 2 / Max: 2)
    Prefer Cache: 2.00833 (SE +/- 0.00651, N = 3, Min: 2 / Max: 2.02)
    Auto: 2.11966 (SE +/- 0.04888, N = 15, Min: 1.98 / Max: 2.53)

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better)
    Prefer Freq: 499.70 (SE +/- 0.47, N = 3, Min: 498.89 / Max: 500.52)
    Prefer Cache: 497.88 (SE +/- 1.61, N = 3, Min: 494.71 / Max: 499.94)
    Auto: 474.94 (SE +/- 9.98, N = 15, Min: 395.89 / Max: 503.77)

1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.34 - VGR Performance Metric, More Is Better: Prefer Cache 396745, Prefer Freq 395945, Auto 394766. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.10Benchmark: particle_volume/ao/real_timeAutoPrefer FreqPrefer Cache246810SE +/- 0.00120, N = 3SE +/- 0.00336, N = 3SE +/- 0.00154, N = 37.576927.572997.56189
OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.10Benchmark: particle_volume/ao/real_timeAutoPrefer FreqPrefer Cache3691215Min: 7.57 / Avg: 7.58 / Max: 7.58Min: 7.57 / Avg: 7.57 / Max: 7.58Min: 7.56 / Avg: 7.56 / Max: 7.56

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value aggregates the processing times of all the separate queries as a geometric mean. Learn more via the OpenBenchmarking.org test page.
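As a rough illustration of how a geometric-mean aggregate like "Queries Per Minute, Geo Mean" can be derived from per-query timings (not necessarily ClickHouse's or the test profile's exact accounting), consider the following sketch; the timings listed are made-up placeholders, not ClickBench results.

# Rough illustration of a geometric-mean aggregate over per-query times.
# The timings below are hypothetical placeholders.
import math

query_seconds = [0.12, 0.45, 2.30, 0.08, 1.10]  # hypothetical per-query runtimes
geo_mean_s = math.exp(sum(math.log(t) for t in query_seconds) / len(query_seconds))
print(f"geometric mean: {geo_mean_s:.3f} s/query "
      f"~= {60 / geo_mean_s:.1f} queries per minute")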

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.12.3.5100M Rows Hits Dataset, Third RunPrefer FreqPrefer CacheAuto70140210280350SE +/- 2.63, N = 3SE +/- 0.72, N = 3SE +/- 3.39, N = 3323.36314.70313.67MIN: 18.55 / MAX: 12000MIN: 15.62 / MAX: 10000MIN: 15.76 / MAX: 10000
OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.12.3.5100M Rows Hits Dataset, Third RunPrefer FreqPrefer CacheAuto60120180240300Min: 319.52 / Avg: 323.36 / Max: 328.39Min: 313.38 / Avg: 314.7 / Max: 315.84Min: 308.81 / Avg: 313.67 / Max: 320.19

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.12.3.5100M Rows Hits Dataset, Second RunPrefer FreqPrefer CacheAuto70140210280350SE +/- 1.19, N = 3SE +/- 2.19, N = 3SE +/- 1.92, N = 3321.27316.99311.62MIN: 19.84 / MAX: 12000MIN: 19.73 / MAX: 10000MIN: 15.96 / MAX: 12000
OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.12.3.5100M Rows Hits Dataset, Second RunPrefer FreqPrefer CacheAuto60120180240300Min: 319.52 / Avg: 321.27 / Max: 323.54Min: 312.94 / Avg: 316.99 / Max: 320.48Min: 307.91 / Avg: 311.62 / Max: 314.31

OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.12.3.5100M Rows Hits Dataset, First Run / Cold CachePrefer FreqPrefer CacheAuto60120180240300SE +/- 0.59, N = 3SE +/- 1.64, N = 3SE +/- 1.37, N = 3281.13280.93275.83MIN: 12.85 / MAX: 7500MIN: 13.18 / MAX: 8571.43MIN: 13.18 / MAX: 7500
OpenBenchmarking.orgQueries Per Minute, Geo Mean, More Is BetterClickHouse 22.12.3.5100M Rows Hits Dataset, First Run / Cold CachePrefer FreqPrefer CacheAuto50100150200250Min: 279.97 / Avg: 281.13 / Max: 281.9Min: 278.71 / Avg: 280.93 / Max: 284.14Min: 273.64 / Avg: 275.83 / Max: 278.35

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
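HPCG itself is a tuned MPI/OpenMP code, but the kernel it stresses is the classic conjugate gradient iteration. The plain NumPy sketch below only illustrates that iteration (without HPCG's multigrid preconditioner or distributed sparse structures); the test matrix is an arbitrary small SPD system, not HPCG's problem setup.

# Plain (unpreconditioned) conjugate gradient on a small SPD system --
# an illustration of the kernel HPCG is built around, not HPCG itself.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

n = 200
M = np.random.rand(n, n)
A = M @ M.T + n * np.eye(n)   # symmetric positive definite test matrix
b = np.random.rand(n)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))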

OpenBenchmarking.orgGFLOP/s, More Is BetterHigh Performance Conjugate Gradient 3.1Prefer FreqAutoPrefer Cache246810SE +/- 0.00077, N = 3SE +/- 0.00551, N = 3SE +/- 0.01712, N = 38.331978.328858.328301. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi
OpenBenchmarking.orgGFLOP/s, More Is BetterHigh Performance Conjugate Gradient 3.1Prefer FreqAutoPrefer Cache3691215Min: 8.33 / Avg: 8.33 / Max: 8.33Min: 8.32 / Avg: 8.33 / Max: 8.34Min: 8.3 / Avg: 8.33 / Max: 8.351. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, Fewer Is BetterSeleniumBenchmark: PSPDFKit WASM - Browser: Google ChromePrefer CachePrefer FreqAuto7001400210028003500SE +/- 42.76, N = 15SE +/- 33.73, N = 15SE +/- 42.95, N = 153103312431391. chrome 110.0.5481.96
OpenBenchmarking.orgScore, Fewer Is BetterSeleniumBenchmark: PSPDFKit WASM - Browser: Google ChromePrefer CachePrefer FreqAuto5001000150020002500Min: 2760 / Avg: 3103.33 / Max: 3254Min: 2833 / Avg: 3124.2 / Max: 3259Min: 2743 / Avg: 3139.47 / Max: 32661. chrome 110.0.5481.96

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service, Scala, and other features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: ALS Movie LensPrefer FreqPrefer CacheAuto2K4K6K8K10KSE +/- 112.03, N = 3SE +/- 88.81, N = 3SE +/- 77.81, N = 37991.48103.68108.5MIN: 7874.73 / MAX: 9046.54MIN: 7926.83 / MAX: 8896.3MIN: 7956.94 / MAX: 8936.02
OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: ALS Movie LensPrefer FreqPrefer CacheAuto14002800420056007000Min: 7874.73 / Avg: 7991.37 / Max: 8215.37Min: 7926.83 / Avg: 8103.63 / Max: 8206.88Min: 7956.94 / Avg: 8108.52 / Max: 8214.85

libavif avifenc

This is a test of the AOMedia libavif library, encoding a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.
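The profile essentially times an avifenc invocation at a given encoder speed. A minimal wrapper that reproduces that kind of timing might look like the sketch below; the file names are placeholders, and the "--speed" flag is assumed from avifenc's usual command-line interface rather than taken from this test profile's exact arguments.

# Minimal timing wrapper around avifenc. The "--speed" flag is assumed
# from avifenc's usual CLI and the file names are placeholders.
import subprocess
import time

cmd = ["avifenc", "--speed", "0", "input.jpg", "output.avif"]
start = time.perf_counter()
subprocess.run(cmd, check=True)
print(f"encode took {time.perf_counter() - start:.2f} s")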

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 0Prefer FreqAutoPrefer Cache20406080100SE +/- 0.29, N = 3SE +/- 0.56, N = 15SE +/- 0.17, N = 371.8073.6776.171. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 0Prefer FreqAutoPrefer Cache1530456075Min: 71.24 / Avg: 71.8 / Max: 72.23Min: 71.57 / Avg: 73.67 / Max: 76.75Min: 75.88 / Avg: 76.17 / Max: 76.481. (CXX) g++ options: -O3 -fPIC -lm

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Pabellon Barcelona - Compute: CPU-OnlyPrefer FreqAutoPrefer Cache4080120160200SE +/- 0.14, N = 3SE +/- 0.11, N = 3SE +/- 0.10, N = 3167.94168.20168.24
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Pabellon Barcelona - Compute: CPU-OnlyPrefer FreqAutoPrefer Cache306090120150Min: 167.76 / Avg: 167.94 / Max: 168.22Min: 168.03 / Avg: 168.2 / Max: 168.42Min: 168.05 / Avg: 168.24 / Max: 168.38

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service, Scala, and other features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: Akka Unbalanced Cobwebbed TreeAutoPrefer CachePrefer Freq2K4K6K8K10KSE +/- 58.84, N = 3SE +/- 79.65, N = 3SE +/- 94.41, N = 47732.77772.77797.2MIN: 5663.78 / MAX: 7847.58MIN: 5787.99 / MAX: 7897.04MIN: 5695.75 / MAX: 7918.79
OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: Akka Unbalanced Cobwebbed TreeAutoPrefer CachePrefer Freq14002800420056007000Min: 7653.14 / Avg: 7732.69 / Max: 7847.58Min: 7624.29 / Avg: 7772.68 / Max: 7897.04Min: 7519.9 / Avg: 7797.2 / Max: 7918.79

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: mnasnetAutoPrefer CachePrefer Freq0.76951.5392.30853.0783.8475SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 153.373.373.42MIN: 3.32 / MAX: 3.78MIN: 3.3 / MAX: 3.84MIN: 3.32 / MAX: 3.961. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: mnasnetAutoPrefer CachePrefer Freq246810Min: 3.35 / Avg: 3.37 / Max: 3.38Min: 3.33 / Avg: 3.37 / Max: 3.39Min: 3.36 / Avg: 3.42 / Max: 3.481. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: FastestDetAutoPrefer CachePrefer Freq1.07552.1513.22654.3025.3775SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 134.734.754.78MIN: 4.62 / MAX: 9.76MIN: 4.62 / MAX: 5.91MIN: 4.62 / MAX: 10.441. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: FastestDetAutoPrefer CachePrefer Freq246810Min: 4.69 / Avg: 4.73 / Max: 4.75Min: 4.67 / Avg: 4.75 / Max: 4.8Min: 4.68 / Avg: 4.78 / Max: 51. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vision_transformerPrefer FreqAutoPrefer Cache20406080100SE +/- 0.23, N = 15SE +/- 1.09, N = 3SE +/- 0.91, N = 380.5581.5282.01MIN: 79.58 / MAX: 94.08MIN: 80.09 / MAX: 86.32MIN: 79.65 / MAX: 95.451. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vision_transformerPrefer FreqAutoPrefer Cache1632486480Min: 79.88 / Avg: 80.55 / Max: 83.7Min: 80.36 / Avg: 81.52 / Max: 83.7Min: 80.23 / Avg: 82.01 / Max: 83.251. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: regnety_400mAutoPrefer CachePrefer Freq3691215SE +/- 0.05, N = 3SE +/- 0.16, N = 3SE +/- 0.06, N = 1512.1212.1812.30MIN: 11.83 / MAX: 17.6MIN: 11.67 / MAX: 12.88MIN: 11.79 / MAX: 39.441. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: regnety_400mAutoPrefer CachePrefer Freq48121620Min: 12.01 / Avg: 12.12 / Max: 12.19Min: 11.88 / Avg: 12.18 / Max: 12.42Min: 11.97 / Avg: 12.3 / Max: 12.681. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: squeezenet_ssdPrefer CacheAutoPrefer Freq3691215SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 1511.7011.7711.87MIN: 11.52 / MAX: 17.65MIN: 11.6 / MAX: 12.48MIN: 11.61 / MAX: 18.311. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: squeezenet_ssdPrefer CacheAutoPrefer Freq3691215Min: 11.66 / Avg: 11.7 / Max: 11.73Min: 11.73 / Avg: 11.77 / Max: 11.8Min: 11.72 / Avg: 11.87 / Max: 12.161. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: yolov4-tinyAutoPrefer CachePrefer Freq48121620SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.09, N = 1514.3214.3514.41MIN: 14.06 / MAX: 19.84MIN: 14.11 / MAX: 14.93MIN: 14.02 / MAX: 20.181. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: yolov4-tinyAutoPrefer CachePrefer Freq48121620Min: 14.25 / Avg: 14.32 / Max: 14.37Min: 14.31 / Avg: 14.35 / Max: 14.39Min: 14.15 / Avg: 14.41 / Max: 15.551. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet50Prefer CacheAutoPrefer Freq3691215SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.05, N = 1511.4911.6011.61MIN: 11.32 / MAX: 12.4MIN: 11.46 / MAX: 17.5MIN: 11.23 / MAX: 17.951. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet50Prefer CacheAutoPrefer Freq3691215Min: 11.43 / Avg: 11.49 / Max: 11.55Min: 11.58 / Avg: 11.6 / Max: 11.64Min: 11.34 / Avg: 11.61 / Max: 12.031. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: alexnetPrefer FreqPrefer CacheAuto1.08452.1693.25354.3385.4225SE +/- 0.01, N = 14SE +/- 0.12, N = 3SE +/- 0.13, N = 34.744.754.82MIN: 4.6 / MAX: 10.72MIN: 4.42 / MAX: 5.49MIN: 4.45 / MAX: 5.541. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: alexnetPrefer FreqPrefer CacheAuto246810Min: 4.67 / Avg: 4.74 / Max: 4.82Min: 4.51 / Avg: 4.75 / Max: 4.87Min: 4.56 / Avg: 4.82 / Max: 4.961. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet18Prefer CachePrefer FreqAuto246810SE +/- 0.01, N = 3SE +/- 0.01, N = 15SE +/- 0.01, N = 36.907.027.03MIN: 6.76 / MAX: 7.67MIN: 6.83 / MAX: 10.65MIN: 6.9 / MAX: 7.921. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet18Prefer CachePrefer FreqAuto3691215Min: 6.89 / Avg: 6.9 / Max: 6.91Min: 6.93 / Avg: 7.02 / Max: 7.09Min: 7.01 / Avg: 7.03 / Max: 7.041. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vgg16AutoPrefer CachePrefer Freq612182430SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.08, N = 1524.5124.5724.72MIN: 24.28 / MAX: 37.93MIN: 24.29 / MAX: 29.41MIN: 24.22 / MAX: 59.521. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vgg16AutoPrefer CachePrefer Freq612182430Min: 24.45 / Avg: 24.51 / Max: 24.57Min: 24.52 / Avg: 24.57 / Max: 24.6Min: 24.48 / Avg: 24.72 / Max: 25.521. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: googlenetPrefer CacheAutoPrefer Freq246810SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 158.478.488.61MIN: 8.29 / MAX: 9.39MIN: 8.36 / MAX: 9.39MIN: 8.31 / MAX: 9.471. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: googlenetPrefer CacheAutoPrefer Freq3691215Min: 8.41 / Avg: 8.47 / Max: 8.54Min: 8.45 / Avg: 8.48 / Max: 8.52Min: 8.46 / Avg: 8.61 / Max: 8.731. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: blazefaceAutoPrefer CachePrefer Freq0.37130.74261.11391.48521.8565SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 151.621.631.65MIN: 1.59 / MAX: 1.99MIN: 1.57 / MAX: 1.98MIN: 1.58 / MAX: 2.221. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: blazefaceAutoPrefer CachePrefer Freq246810Min: 1.61 / Avg: 1.62 / Max: 1.64Min: 1.6 / Avg: 1.63 / Max: 1.65Min: 1.61 / Avg: 1.65 / Max: 1.711. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: efficientnet-b0AutoPrefer CachePrefer Freq1.0712.1423.2134.2845.355SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 154.644.664.76MIN: 4.57 / MAX: 5.09MIN: 4.56 / MAX: 10.75MIN: 4.62 / MAX: 5.41. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: efficientnet-b0AutoPrefer CachePrefer Freq246810Min: 4.63 / Avg: 4.64 / Max: 4.64Min: 4.6 / Avg: 4.66 / Max: 4.71Min: 4.68 / Avg: 4.76 / Max: 4.871. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: shufflenet-v2AutoPrefer CachePrefer Freq0.8731.7462.6193.4924.365SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 133.843.843.88MIN: 3.76 / MAX: 4.18MIN: 3.73 / MAX: 4.32MIN: 3.75 / MAX: 9.761. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: shufflenet-v2AutoPrefer CachePrefer Freq246810Min: 3.83 / Avg: 3.84 / Max: 3.86Min: 3.8 / Avg: 3.84 / Max: 3.87Min: 3.84 / Avg: 3.88 / Max: 3.981. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU-v3-v3 - Model: mobilenet-v3AutoPrefer CachePrefer Freq0.76051.5212.28153.0423.8025SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 153.313.333.38MIN: 3.25 / MAX: 3.89MIN: 3.24 / MAX: 3.96MIN: 3.27 / MAX: 3.91. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU-v3-v3 - Model: mobilenet-v3AutoPrefer CachePrefer Freq246810Min: 3.3 / Avg: 3.31 / Max: 3.33Min: 3.29 / Avg: 3.33 / Max: 3.36Min: 3.31 / Avg: 3.38 / Max: 3.451. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU-v2-v2 - Model: mobilenet-v2AutoPrefer CachePrefer Freq0.8731.7462.6193.4924.365SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 153.813.823.88MIN: 3.75 / MAX: 4.33MIN: 3.74 / MAX: 4.27MIN: 3.78 / MAX: 7.821. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU-v2-v2 - Model: mobilenet-v2AutoPrefer CachePrefer Freq246810Min: 3.79 / Avg: 3.81 / Max: 3.82Min: 3.78 / Avg: 3.82 / Max: 3.85Min: 3.83 / Avg: 3.88 / Max: 3.931. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: mobilenetAutoPrefer CachePrefer Freq3691215SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 158.918.929.16MIN: 8.81 / MAX: 9.59MIN: 8.81 / MAX: 14.7MIN: 8.92 / MAX: 10.191. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: mobilenetAutoPrefer CachePrefer Freq3691215Min: 8.86 / Avg: 8.91 / Max: 8.95Min: 8.89 / Avg: 8.92 / Max: 8.93Min: 9.01 / Avg: 9.16 / Max: 9.841. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGeometric Mean, More Is BetterSeleniumBenchmark: Octane - Browser: Google ChromePrefer CachePrefer FreqAuto20K40K60K80K100KSE +/- 916.53, N = 15SE +/- 958.51, N = 6SE +/- 950.48, N = 159564495543950991. chrome 110.0.5481.96
OpenBenchmarking.orgGeometric Mean, More Is BetterSeleniumBenchmark: Octane - Browser: Google ChromePrefer CachePrefer FreqAuto17K34K51K68K85KMin: 90881 / Avg: 95643.93 / Max: 100143Min: 91363 / Avg: 95543 / Max: 97911Min: 89650 / Avg: 95099.07 / Max: 1013041. chrome 110.0.5481.96

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
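For a sense of what is being measured, a similar compress/decompress throughput figure can be taken with the python-zstandard bindings, as in the sketch below. This is only an approximation under assumptions: the "zstandard" package is assumed installed, the input path is a placeholder, and the long-distance-matching ("long mode") setting used by some of the test profile's configurations is not enabled here.

# Rough compress/decompress throughput measurement with python-zstandard.
# Level 19 only; the "long mode" window setting is not configured here,
# and the input path is a placeholder.
import time
import zstandard as zstd

data = open("silesia.tar", "rb").read()   # placeholder sample file

cctx = zstd.ZstdCompressor(level=19)
start = time.perf_counter()
compressed = cctx.compress(data)
c_time = time.perf_counter() - start

dctx = zstd.ZstdDecompressor()
start = time.perf_counter()
dctx.decompress(compressed)
d_time = time.perf_counter() - start

mb = len(data) / (1024 * 1024)
print(f"compress:   {mb / c_time:.1f} MB/s")
print(f"decompress: {mb / d_time:.1f} MB/s")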

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 19, Long Mode - Decompression SpeedAutoPrefer CachePrefer Freq400800120016002000SE +/- 53.52, N = 3SE +/- 17.36, N = 15SE +/- 10.27, N = 31968.01847.01835.11. (CC) gcc options: -O3 -pthread -lz -llzma -llz4
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 19, Long Mode - Decompression SpeedAutoPrefer CachePrefer Freq30060090012001500Min: 1861 / Avg: 1968.03 / Max: 2021.7Min: 1746.5 / Avg: 1846.95 / Max: 2073.2Min: 1815.8 / Avg: 1835.13 / Max: 1850.81. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 19, Long Mode - Compression SpeedPrefer FreqAutoPrefer Cache48121620SE +/- 0.06, N = 3SE +/- 0.00, N = 3SE +/- 0.15, N = 1514.914.914.61. (CC) gcc options: -O3 -pthread -lz -llzma -llz4
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 19, Long Mode - Compression SpeedPrefer FreqAutoPrefer Cache48121620Min: 14.8 / Avg: 14.9 / Max: 15Min: 14.9 / Avg: 14.9 / Max: 14.9Min: 12.9 / Avg: 14.59 / Max: 151. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path TracerAutoPrefer FreqPrefer Cache30K60K90K120K150KSE +/- 142.59, N = 3SE +/- 1166.54, N = 3SE +/- 538.77, N = 31500681521481536331. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path TracerAutoPrefer FreqPrefer Cache30K60K90K120K150KMin: 149841 / Avg: 150068 / Max: 150331Min: 149844 / Avg: 152148 / Max: 153618Min: 152762 / Avg: 153633.33 / Max: 1546181. (CXX) g++ options: -O3 -lm -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Socket ActivityPrefer FreqPrefer CacheAuto8K16K24K32K40KSE +/- 393.19, N = 15SE +/- 279.96, N = 15SE +/- 433.26, N = 1535444.4235308.6334995.781. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: Socket ActivityPrefer FreqPrefer CacheAuto6K12K18K24K30KMin: 33825.18 / Avg: 35444.42 / Max: 38803.83Min: 33727.42 / Avg: 35308.63 / Max: 37543.66Min: 33412.07 / Avg: 34995.78 / Max: 39339.921. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
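The kH/s metric is simply hashes computed per second. The sketch below illustrates how such a rate can be measured, but only loosely: it uses SHA-256 from the Python standard library rather than the Myriad-Groestl algorithm cpuminer-opt actually benchmarks, it is single-threaded, and the 80-byte dummy header is a placeholder.

# Loose illustration of a hash-rate (kH/s) measurement using SHA-256 --
# not the Myriad-Groestl algorithm benchmarked here, and single-threaded.
import hashlib
import time

header = b"\x00" * 80   # dummy 80-byte block header
hashes = 200_000
start = time.perf_counter()
for nonce in range(hashes):
    hashlib.sha256(header + nonce.to_bytes(4, "little")).digest()
elapsed = time.perf_counter() - start
print(f"{hashes / elapsed / 1000:.1f} kH/s")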

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Myriad-GroestlAutoPrefer CachePrefer Freq13K26K39K52K65KSE +/- 450.34, N = 15SE +/- 588.74, N = 15SE +/- 649.11, N = 155935858791581891. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Myriad-GroestlAutoPrefer CachePrefer Freq10K20K30K40K50KMin: 56150 / Avg: 59358 / Max: 62380Min: 53740 / Avg: 58791.33 / Max: 61900Min: 51510 / Avg: 58188.67 / Max: 610101. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and Linux networking stack stress test. The test runs on the local host but requires root permissions to run. It works by creating three namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices. Those two WireGuard devices send traffic through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time -- a fairly CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWireGuard + Linux Networking Stack Stress TestPrefer CachePrefer FreqAuto306090120150SE +/- 1.26, N = 3SE +/- 1.81, N = 3SE +/- 0.72, N = 3147.19148.63149.04
OpenBenchmarking.orgSeconds, Fewer Is BetterWireGuard + Linux Networking Stack Stress TestPrefer CachePrefer FreqAuto306090120150Min: 145.52 / Avg: 147.19 / Max: 149.66Min: 146.68 / Avg: 148.63 / Max: 152.24Min: 148.04 / Avg: 149.04 / Max: 150.43

Gcrypt Library

Libgcrypt is a general-purpose cryptographic library developed as part of the GnuPG project. This is a benchmark of libgcrypt's integrated benchmark, measuring the time to run the benchmark command with a cipher/MAC/hash repetition count set to 50 as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGcrypt Library 1.9Prefer CachePrefer FreqAuto306090120150SE +/- 0.78, N = 3SE +/- 1.17, N = 3SE +/- 0.50, N = 3143.14143.80145.311. (CC) gcc options: -O2 -fvisibility=hidden
OpenBenchmarking.orgSeconds, Fewer Is BetterGcrypt Library 1.9Prefer CachePrefer FreqAuto306090120150Min: 141.61 / Avg: 143.14 / Max: 144.15Min: 142.4 / Avg: 143.79 / Max: 146.12Min: 144.55 / Avg: 145.31 / Max: 146.241. (CC) gcc options: -O2 -fvisibility=hidden

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path TracerAutoPrefer FreqPrefer Cache10002000300040005000SE +/- 3.46, N = 3SE +/- 31.34, N = 3SE +/- 4.91, N = 34581462946811. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path TracerAutoPrefer FreqPrefer Cache8001600240032004000Min: 4575 / Avg: 4581 / Max: 4587Min: 4566 / Avg: 4628.67 / Max: 4661Min: 4671 / Avg: 4680.67 / Max: 46871. (CXX) g++ options: -O3 -lm -ldl

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: DenseNetPrefer CachePrefer FreqAuto400800120016002000SE +/- 2.74, N = 3SE +/- 6.39, N = 3SE +/- 5.47, N = 32027.282028.842035.16MIN: 1978.04 / MAX: 2109.64MIN: 1972.23 / MAX: 2118.4MIN: 1988.46 / MAX: 2119.141. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: DenseNetPrefer CachePrefer FreqAuto400800120016002000Min: 2021.86 / Avg: 2027.28 / Max: 2030.67Min: 2020.55 / Avg: 2028.84 / Max: 2041.4Min: 2025.61 / Avg: 2035.16 / Max: 2044.561. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path TracerAutoPrefer FreqPrefer Cache9001800270036004500SE +/- 3.51, N = 3SE +/- 7.55, N = 3SE +/- 8.89, N = 33988398839901. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path TracerAutoPrefer FreqPrefer Cache7001400210028003500Min: 3981 / Avg: 3988 / Max: 3992Min: 3973 / Avg: 3988 / Max: 3997Min: 3977 / Avg: 3990 / Max: 40071. (CXX) g++ options: -O3 -lm -ldl

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Classroom - Compute: CPU-OnlyAutoPrefer FreqPrefer Cache306090120150SE +/- 0.07, N = 3SE +/- 0.13, N = 3SE +/- 0.10, N = 3138.42138.43138.45
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Classroom - Compute: CPU-OnlyAutoPrefer FreqPrefer Cache306090120150Min: 138.28 / Avg: 138.42 / Max: 138.53Min: 138.19 / Avg: 138.43 / Max: 138.63Min: 138.26 / Avg: 138.45 / Max: 138.56

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path TracerAutoPrefer FreqPrefer Cache8001600240032004000SE +/- 35.92, N = 3SE +/- 0.58, N = 3SE +/- 4.58, N = 33882394639481. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path TracerAutoPrefer FreqPrefer Cache7001400210028003500Min: 3833 / Avg: 3882 / Max: 3952Min: 3945 / Avg: 3946 / Max: 3947Min: 3939 / Avg: 3948 / Max: 39541. (CXX) g++ options: -O3 -lm -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: IO_uringAutoPrefer CachePrefer Freq7K14K21K28K35KSE +/- 645.93, N = 15SE +/- 592.21, N = 14SE +/- 283.23, N = 1232571.4031486.8229496.761. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: IO_uringAutoPrefer CachePrefer Freq6K12K18K24K30KMin: 28136.72 / Avg: 32571.4 / Max: 38716.83Min: 26166.76 / Avg: 31486.82 / Max: 33875.27Min: 26897.82 / Avg: 29496.76 / Max: 31276.891. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path TracerAutoPrefer FreqPrefer Cache30K60K90K120K150KSE +/- 113.14, N = 3SE +/- 208.57, N = 3SE +/- 123.29, N = 31279481311991315521. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path TracerAutoPrefer FreqPrefer Cache20K40K60K80K100KMin: 127787 / Avg: 127947.67 / Max: 128166Min: 130799 / Avg: 131199.33 / Max: 131501Min: 131322 / Avg: 131552 / Max: 1317441. (CXX) g++ options: -O3 -lm -ldl

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path TracerAutoPrefer CachePrefer Freq30K60K90K120K150KSE +/- 180.00, N = 3SE +/- 114.33, N = 3SE +/- 180.62, N = 31266041296771299001. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path TracerAutoPrefer CachePrefer Freq20K40K60K80K100KMin: 126422 / Avg: 126604 / Max: 126964Min: 129449 / Avg: 129676.67 / Max: 129809Min: 129560 / Avg: 129899.67 / Max: 1301761. (CXX) g++ options: -O3 -lm -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: NUMAPrefer FreqAutoPrefer Cache130260390520650SE +/- 4.86, N = 13SE +/- 3.50, N = 14SE +/- 3.65, N = 13581.60578.01576.941. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: NUMAPrefer FreqAutoPrefer Cache100200300400500Min: 572.26 / Avg: 581.6 / Max: 639.3Min: 570.49 / Avg: 578.01 / Max: 622.67Min: 570.19 / Avg: 576.94 / Max: 620.331. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGPAW 22.1Input: Carbon NanotubePrefer CachePrefer FreqAuto306090120150SE +/- 0.14, N = 3SE +/- 0.10, N = 3SE +/- 0.14, N = 3126.81126.82126.881. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi
OpenBenchmarking.orgSeconds, Fewer Is BetterGPAW 22.1Input: Carbon NanotubePrefer CachePrefer FreqAuto20406080100Min: 126.55 / Avg: 126.81 / Max: 127.02Min: 126.69 / Avg: 126.82 / Max: 127.03Min: 126.64 / Avg: 126.88 / Max: 127.141. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service, Scala, and other features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: Scala DottyPrefer FreqAutoPrefer Cache110220330440550SE +/- 7.93, N = 15SE +/- 9.27, N = 15SE +/- 1.94, N = 3470.7475.2521.1MIN: 360.24 / MAX: 753.62MIN: 358.08 / MAX: 752.1MIN: 368.91 / MAX: 746.18
OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: Scala DottyPrefer FreqAutoPrefer Cache90180270360450Min: 434.4 / Avg: 470.72 / Max: 522.64Min: 427.47 / Avg: 475.19 / Max: 525.69Min: 519.07 / Avg: 521.13 / Max: 525.01

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides the "toktx" tool for creating/converting image textures in the KTX container format. This benchmark times how long it takes to convert to the KTX 2.0 format with various settings using a reference PNG sample input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterKTX-Software toktx 4.0Settings: UASTC 4 + Zstd Compression 19Prefer FreqAutoPrefer Cache306090120150SE +/- 0.44, N = 3SE +/- 0.49, N = 3SE +/- 0.12, N = 3126.56127.26127.45
OpenBenchmarking.orgSeconds, Fewer Is BetterKTX-Software toktx 4.0Settings: UASTC 4 + Zstd Compression 19Prefer FreqAutoPrefer Cache20406080100Min: 125.97 / Avg: 126.56 / Max: 127.42Min: 126.28 / Avg: 127.26 / Max: 127.75Min: 127.23 / Avg: 127.44 / Max: 127.64

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path TracerAutoPrefer FreqPrefer Cache8K16K24K32K40KSE +/- 49.58, N = 3SE +/- 63.89, N = 3SE +/- 97.48, N = 33753537588376901. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path TracerAutoPrefer FreqPrefer Cache7K14K21K28K35KMin: 37459 / Avg: 37534.67 / Max: 37628Min: 37465 / Avg: 37587.67 / Max: 37680Min: 37579 / Avg: 37689.67 / Max: 378841. (CXX) g++ options: -O3 -lm -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: CPU CacheAutoPrefer CachePrefer Freq4080120160200SE +/- 3.49, N = 15SE +/- 1.74, N = 15SE +/- 0.31, N = 6183.32181.0631.911. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: CPU CacheAutoPrefer CachePrefer Freq306090120150Min: 168.28 / Avg: 183.32 / Max: 224.44Min: 171.79 / Avg: 181.06 / Max: 198.11Min: 30.82 / Avg: 31.91 / Max: 32.591. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: Noise-GaussianPrefer CacheAutoPrefer Freq130260390520650SE +/- 0.88, N = 3SE +/- 0.33, N = 3SE +/- 5.79, N = 126126116051. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread
OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.38Operation: Noise-GaussianPrefer CacheAutoPrefer Freq110220330440550Min: 611 / Avg: 612.33 / Max: 614Min: 610 / Avg: 610.67 / Max: 611Min: 542 / Avg: 604.67 / Max: 6161. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

Radiance Benchmark

This is a benchmark of NREL Radiance, an open-source synthetic imaging system developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.

Radiance Benchmark 5.0 - Test: Serial - Seconds, Fewer Is Better: Auto 331.61, Prefer Cache 360.31, Prefer Freq 366.42

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedPrefer CachePrefer FreqAuto4K8K12K16K20KSE +/- 33.45, N = 15SE +/- 123.72, N = 4SE +/- 23.00, N = 318764.518587.518495.31. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedPrefer CachePrefer FreqAuto3K6K9K12K15KMin: 18501.8 / Avg: 18764.47 / Max: 18861.9Min: 18362 / Avg: 18587.45 / Max: 18887.3Min: 18456.5 / Avg: 18495.27 / Max: 18536.11. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedPrefer CachePrefer FreqAuto1632486480SE +/- 0.53, N = 15SE +/- 0.81, N = 4SE +/- 0.40, N = 373.6172.8472.101. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedPrefer CachePrefer FreqAuto1428425670Min: 71.96 / Avg: 73.61 / Max: 79.78Min: 70.73 / Avg: 72.84 / Max: 74.48Min: 71.62 / Avg: 72.1 / Max: 72.891. (CC) gcc options: -O3

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
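The PyHPC kernels themselves (such as the isoneutral-mixing routine timed here) are fairly involved ocean-model stencils. The sketch below only illustrates the general pattern of timing a NumPy array kernel at a given "project size"; the kernel and size are placeholders, not the actual benchmark code or its configuration.

# Generic pattern for timing a NumPy kernel at a given "project size".
# The element-wise kernel here is a stand-in, not the isoneutral-mixing
# routine from PyHPC-Benchmarks.
import time
import numpy as np

size = 4_194_304
a = np.random.rand(size)
b = np.random.rand(size)

start = time.perf_counter()
out = np.sqrt(a * a + b * b)      # stand-in element-wise kernel
elapsed = time.perf_counter() - start
print(f"{elapsed:.3f} s for {size} elements (mean {out.mean():.4f})")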

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral MixingPrefer CacheAutoPrefer Freq0.24620.49240.73860.98481.231SE +/- 0.008, N = 12SE +/- 0.005, N = 3SE +/- 0.001, N = 31.0671.0851.094
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral MixingPrefer CacheAutoPrefer Freq246810Min: 0.98 / Avg: 1.07 / Max: 1.09Min: 1.08 / Avg: 1.09 / Max: 1.09Min: 1.09 / Avg: 1.09 / Max: 1.1

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: GarlicoinPrefer FreqAutoPrefer Cache8001600240032004000SE +/- 137.18, N = 15SE +/- 55.34, N = 3SE +/- 45.48, N = 153869.373841.473781.071. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: GarlicoinPrefer FreqAutoPrefer Cache7001400210028003500Min: 3119.44 / Avg: 3869.37 / Max: 4560.72Min: 3782.16 / Avg: 3841.47 / Max: 3952.05Min: 3375.88 / Avg: 3781.07 / Max: 4063.331. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 2Prefer FreqAutoPrefer Cache918273645SE +/- 0.07, N = 3SE +/- 0.32, N = 8SE +/- 0.31, N = 1535.9436.2337.121. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 2Prefer FreqAutoPrefer Cache816243240Min: 35.84 / Avg: 35.94 / Max: 36.08Min: 35.64 / Avg: 36.23 / Max: 38.41Min: 35.59 / Avg: 37.12 / Max: 38.51. (CXX) g++ options: -O3 -fPIC -lm

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path TracerPrefer CacheAutoPrefer Freq30060090012001500SE +/- 1.86, N = 3SE +/- 1.73, N = 3SE +/- 1.53, N = 31152115411781. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path TracerPrefer CacheAutoPrefer Freq2004006008001000Min: 1148 / Avg: 1151.67 / Max: 1154Min: 1151 / Avg: 1154 / Max: 1157Min: 1175 / Avg: 1178 / Max: 11801. (CXX) g++ options: -O3 -lm -ldl

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path TracerAutoPrefer FreqPrefer Cache7K14K21K28K35KSE +/- 52.98, N = 3SE +/- 33.67, N = 3SE +/- 39.37, N = 33131132005320611. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path TracerAutoPrefer FreqPrefer Cache6K12K18K24K30KMin: 31206 / Avg: 31311.33 / Max: 31374Min: 31971 / Avg: 32004.67 / Max: 32072Min: 32019 / Avg: 32061.33 / Max: 321401. (CXX) g++ options: -O3 -lm -ldl

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path TracerAutoPrefer CachePrefer Freq2004006008001000SE +/- 0.00, N = 3SE +/- 1.76, N = 3SE +/- 1.00, N = 398099710011. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path TracerAutoPrefer CachePrefer Freq2004006008001000Min: 980 / Avg: 980 / Max: 980Min: 994 / Avg: 996.67 / Max: 1000Min: 999 / Avg: 1001 / Max: 10021. (CXX) g++ options: -O3 -lm -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 4KPrefer FreqPrefer CacheAuto1.30952.6193.92855.2386.5475SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 35.825.805.801. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 4KPrefer FreqPrefer CacheAuto246810Min: 5.82 / Avg: 5.82 / Max: 5.82Min: 5.79 / Avg: 5.8 / Max: 5.8Min: 5.8 / Avg: 5.8 / Max: 5.811. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path TracerAutoPrefer FreqPrefer Cache2004006008001000SE +/- 1.33, N = 3SE +/- 1.53, N = 3SE +/- 1.86, N = 39699849861. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path TracerAutoPrefer FreqPrefer Cache2004006008001000Min: 966 / Avg: 968.67 / Max: 970Min: 981 / Avg: 984 / Max: 986Min: 984 / Avg: 986.33 / Max: 9901. (CXX) g++ options: -O3 -lm -ldl

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path TracerAutoPrefer FreqPrefer Cache7K14K21K28K35KSE +/- 52.27, N = 3SE +/- 68.70, N = 3SE +/- 38.68, N = 33094331551315931. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.11Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path TracerAutoPrefer FreqPrefer Cache5K10K15K20K25KMin: 30859 / Avg: 30943.33 / Max: 31039Min: 31423 / Avg: 31551.33 / Max: 31658Min: 31526 / Avg: 31592.67 / Max: 316601. (CXX) g++ options: -O3 -lm -ldl

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.10Benchmark: gravity_spheres_volume/dim_512/scivis/real_timePrefer FreqPrefer CacheAuto246810SE +/- 0.03167, N = 3SE +/- 0.03086, N = 3SE +/- 0.01046, N = 37.564007.507977.46570
OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.10Benchmark: gravity_spheres_volume/dim_512/scivis/real_timePrefer FreqPrefer CacheAuto3691215Min: 7.5 / Avg: 7.56 / Max: 7.61Min: 7.48 / Avg: 7.51 / Max: 7.57Min: 7.45 / Avg: 7.47 / Max: 7.48

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.10Benchmark: gravity_spheres_volume/dim_512/ao/real_timePrefer FreqAutoPrefer Cache246810SE +/- 0.01807, N = 3SE +/- 0.02259, N = 3SE +/- 0.00354, N = 37.700027.661027.61487
OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.10Benchmark: gravity_spheres_volume/dim_512/ao/real_timePrefer FreqAutoPrefer Cache3691215Min: 7.68 / Avg: 7.7 / Max: 7.74Min: 7.63 / Avg: 7.66 / Max: 7.71Min: 7.61 / Avg: 7.61 / Max: 7.62

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.10Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_timePrefer CachePrefer FreqAuto3691215SE +/- 0.00821, N = 3SE +/- 0.00141, N = 3SE +/- 0.01032, N = 38.985458.980568.95807
OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.10Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_timePrefer CachePrefer FreqAuto3691215Min: 8.97 / Avg: 8.99 / Max: 9Min: 8.98 / Avg: 8.98 / Max: 8.98Min: 8.94 / Avg: 8.96 / Max: 8.97

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily - Seconds, Fewer Is Better: Prefer Cache 145.15, Prefer Freq 145.59, Auto 146.31

simdjson

This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
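The GB/s figure is bytes of JSON parsed per second. The sketch below shows how such a throughput number is derived, using the Python standard-library json module as a stand-in parser rather than simdjson's C++ API; "kostya.json" is a placeholder name for a sample document like the one this throughput test references.

# Rough illustration of a GB/s parsing-throughput figure, using the
# standard-library json module as a stand-in parser (not simdjson) and
# a placeholder input document.
import json
import time

raw = open("kostya.json", "rb").read()   # placeholder document
runs = 20
start = time.perf_counter()
for _ in range(runs):
    json.loads(raw)
elapsed = time.perf_counter() - start
print(f"{len(raw) * runs / elapsed / 1e9:.3f} GB/s")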

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: KostyaPrefer CachePrefer FreqAuto246810SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 36.045.905.541. (CXX) g++ options: -O3
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: KostyaPrefer CachePrefer FreqAuto246810Min: 6 / Avg: 6.04 / Max: 6.08Min: 5.89 / Avg: 5.9 / Max: 5.91Min: 5.53 / Avg: 5.54 / Max: 5.551. (CXX) g++ options: -O3

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
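NAB scores streaming detectors such as KNN CAD. The snippet below is only a toy rolling z-score detector meant to illustrate the general shape of such a detector operating on a stream; it is not KNN CAD or any algorithm NAB actually ships, and the stream is synthetic.

# Toy streaming anomaly detector (rolling z-score) -- an illustration of
# the kind of detector NAB evaluates, not KNN CAD or any NAB algorithm.
from collections import deque
import math
import random

window = deque(maxlen=100)

def flag_anomaly(value, threshold=3.0):
    """Return True if value deviates strongly from the recent window."""
    if len(window) >= 10:
        mean = sum(window) / len(window)
        var = sum((x - mean) ** 2 for x in window) / len(window)
        std = math.sqrt(var) or 1e-9
        is_anomaly = abs(value - mean) / std > threshold
    else:
        is_anomaly = False
    window.append(value)
    return is_anomaly

stream = [random.gauss(0, 1) for _ in range(500)] + [15.0]  # spike at the end
print("anomalies flagged:", sum(flag_anomaly(v) for v in stream))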

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: KNN CADPrefer CachePrefer FreqAuto20406080100SE +/- 0.40, N = 3SE +/- 0.20, N = 3SE +/- 0.32, N = 389.8689.9590.68
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: KNN CADPrefer CachePrefer FreqAuto20406080100Min: 89.06 / Avg: 89.86 / Max: 90.34Min: 89.57 / Avg: 89.95 / Max: 90.23Min: 90.16 / Avg: 90.68 / Max: 91.26

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
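The CPU sub-test can also be launched directly from the sysbench CLI. A minimal wrapper is sketched below; the thread count and duration shown are arbitrary examples, not this test profile's exact settings.

# Minimal wrapper around the sysbench CPU sub-test; thread count and
# duration are arbitrary examples, not the profile's exact settings.
import subprocess

subprocess.run(
    ["sysbench", "cpu", "--threads=32", "--time=10", "run"],
    check=True,
)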

OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 1.0.20Test: CPUPrefer FreqAutoPrefer Cache20K40K60K80K100KSE +/- 32.58, N = 3SE +/- 81.18, N = 3SE +/- 67.91, N = 3107729.78107446.97107434.061. (CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm
OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 1.0.20Test: CPUPrefer FreqAutoPrefer Cache20K40K60K80K100KMin: 107683.21 / Avg: 107729.78 / Max: 107792.54Min: 107324.45 / Avg: 107446.97 / Max: 107600.5Min: 107336.86 / Avg: 107434.06 / Max: 107564.811. (CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: System V Message PassingPrefer CacheAutoPrefer Freq6M12M18M24M30MSE +/- 518902.00, N = 15SE +/- 221720.67, N = 7SE +/- 15666.97, N = 325978492.7725423648.1425052017.731. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: System V Message PassingPrefer CacheAutoPrefer Freq5M10M15M20M25MMin: 24923838.67 / Avg: 25978492.77 / Max: 30420911.82Min: 24969753.1 / Avg: 25423648.14 / Max: 26330489.33Min: 25025852.55 / Avg: 25052017.73 / Max: 25080030.151. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
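Primesieve's segmented, cache-aware sieve is far more elaborate than anything shown here, but the underlying algorithm is the textbook sieve of Eratosthenes, which in plain Python looks like the following sketch.

# Textbook sieve of Eratosthenes -- the algorithm primesieve implements
# in a heavily optimized, segmented, cache-aware form.
def sieve(limit):
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = bytearray(len(is_prime[p * p::p]))
    return [i for i, flag in enumerate(is_prime) if flag]

print(len(sieve(1_000_000)))   # 78498 primes below one million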

OpenBenchmarking.orgSeconds, Fewer Is BetterPrimesieve 8.0Length: 1e13Prefer FreqPrefer CacheAuto20406080100SE +/- 0.06, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 382.9383.0783.161. (CXX) g++ options: -O3
OpenBenchmarking.orgSeconds, Fewer Is BetterPrimesieve 8.0Length: 1e13Prefer FreqPrefer CacheAuto1632486480Min: 82.86 / Avg: 82.93 / Max: 83.05Min: 83.04 / Avg: 83.07 / Max: 83.09Min: 83.06 / Avg: 83.16 / Max: 83.241. (CXX) g++ options: -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The reported result is the total benchmark time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): Prefer Cache 1276.22, Auto 1280.48, Prefer Freq 1288.24

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): Prefer Freq 1279.16, Auto 1280.49, Prefer Cache 1286.51

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): Auto 674.84, Prefer Freq 678.66, Prefer Cache 683.48

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): Prefer Freq 673.16, Auto 673.24, Prefer Cache 685.42

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
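
The GB/s figures reported are simply bytes of JSON parsed per second. A minimal sketch of how such a throughput number is derived, using Python's built-in json module as a stand-in parser and a hypothetical input file name (simdjson itself is a C++ library and is far faster than this):

    import json, time

    def parse_throughput_gbps(path, iterations=20):
        """Measure JSON parse throughput in GB/s, analogous to simdjson's metric."""
        with open(path, "rb") as f:
            raw = f.read()
        start = time.perf_counter()
        for _ in range(iterations):
            json.loads(raw)                  # stand-in parser; simdjson is much faster
        elapsed = time.perf_counter() - start
        return (len(raw) * iterations) / elapsed / 1e9

    print(f"{parse_throughput_gbps('sample.json'):.2f} GB/s")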

simdjson 2.0 - Throughput Test: LargeRandom (GB/s, More Is Better): Prefer Freq 1.89, Prefer Cache 1.87, Auto 1.70

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
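
The benchmark drives libzstd directly; as a rough illustration of what a "Compression Level: 8" run does, here is a minimal sketch assuming the third-party python-zstandard binding is installed ("Long Mode" additionally enables long-distance matching over a larger window):

    import zstandard as zstd   # assumed: third-party python-zstandard binding to libzstd

    with open("silesia.tar", "rb") as f:     # same corpus the benchmark uses
        data = f.read()

    cctx = zstd.ZstdCompressor(level=8, threads=-1)   # level 8, all CPU cores
    compressed = cctx.compress(data)

    dctx = zstd.ZstdDecompressor()
    restored = dctx.decompress(compressed)
    assert restored == data

    print(f"compression ratio: {len(data) / len(compressed):.2f}")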

Zstd Compression 1.5.4 - Compression Level: 8 - Decompression Speed (MB/s, More Is Better): Prefer Cache 2428.5, Prefer Freq 2423.1, Auto 2414.1

Zstd Compression 1.5.4 - Compression Level: 8 - Compression Speed (MB/s, More Is Better): Auto 1051.0, Prefer Freq 1046.8, Prefer Cache 1046.4

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 512 (Render Ratio, More Is Better): Prefer Cache 3.307251, Auto 3.280874, Prefer Freq 3.242317

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.
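
The test profile automates full runs of suites such as ARES-6, JetStream 2, and Kraken inside the browser; a minimal sketch of the underlying WebDriver usage in Python, with an assumed benchmark URL, looks like this:

    from selenium import webdriver

    # Drive Chrome to a benchmark page and read its title; the real test profile
    # starts the suite and scrapes the final score from the page.
    driver = webdriver.Chrome()
    try:
        driver.get("https://browserbench.org/ARES-6/")   # assumed URL for illustration
        print(driver.title)
    finally:
        driver.quit()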

Selenium - Benchmark: ARES-6 - Browser: Google Chrome (ms, Fewer Is Better): Prefer Freq 7.04, Auto 7.24, Prefer Cache 7.36 (Chrome 110.0.5481.96)

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, Fewer Is Better): Prefer Cache 70.34, Auto 70.38, Prefer Freq 70.41

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Jetstream 2 - Browser: Google Chrome (Score, More Is Better): Auto 333.01, Prefer Cache 326.22, Prefer Freq 325.83 (Chrome 110.0.5481.96)

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5.02 - Mode: CPU (vsamples, More Is Better): Prefer Cache 31280, Prefer Freq 31238, Auto 31224

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better): Prefer Freq 2.691, Auto 2.685, Prefer Cache 2.676

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
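
A minimal sketch of the speech-to-text call being timed, assuming the deepspeech 0.6 Python package and placeholder model/audio file names (the 0.6 API takes a model path and a beam width, and expects 16 kHz, 16-bit mono audio):

    import wave
    import numpy as np
    from deepspeech import Model   # assumed: deepspeech 0.6 Python package

    ds = Model("deepspeech-0.6.0-models/output_graph.pbmm", 500)  # model path, beam width

    with wave.open("audio.wav", "rb") as w:
        audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

    print(ds.stt(audio))   # transcribed text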

DeepSpeech 0.6 - Acceleration: CPU (Seconds, Fewer Is Better): Prefer Freq 35.20, Prefer Cache 35.32, Auto 38.57

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds, Fewer Is Better): Auto 60.47, Prefer Cache 60.82, Prefer Freq 61.34

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.
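
The test profile invokes Blender's own command-line renderer against the sample .blend files; as a sketch, an equivalent CPU-only Cycles render can be scripted through Blender's bundled Python (file paths here are placeholders, and the script must be run by Blender itself, e.g. blender -b fishy_cat.blend -P render_cpu.py):

    # render_cpu.py - executed inside Blender's embedded Python interpreter
    import bpy

    scene = bpy.context.scene
    scene.cycles.device = "CPU"                 # force the CPU-only Cycles path
    scene.render.filepath = "/tmp/fishy_cat_"   # output prefix (placeholder)
    bpy.ops.render.render(write_still=True)     # render the current frame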

Blender 3.4 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better): Prefer Freq 67.48, Auto 67.61, Prefer Cache 67.62

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
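
Each operation boils down to a GraphicsMagick convert call repeated against the sample image; a minimal sketch of the resizing case from Python, assuming the gm binary is on the PATH and using a placeholder file name:

    import subprocess

    # Resize the sample JPEG to 50% of its dimensions with GraphicsMagick.
    subprocess.run(
        ["gm", "convert", "sample_6000x4000.jpg", "-resize", "50%", "resized.jpg"],
        check=True,
    )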

GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute, More Is Better): Prefer Freq 2216, Auto 2169, Prefer Cache 2158

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better): Prefer Cache 96.59, Prefer Freq 96.86, Auto 96.91

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: DistinctUserID (GB/s, More Is Better): Prefer Freq 10.06, Prefer Cache 9.15, Auto 9.14

simdjson 2.0 - Throughput Test: TopTweet (GB/s, More Is Better): Prefer Freq 9.73, Prefer Cache 9.57, Auto 8.83

simdjson 2.0 - Throughput Test: PartialTweets (GB/s, More Is Better): Auto 8.64, Prefer Freq 8.53, Prefer Cache 7.59

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M (H/s, More Is Better): Prefer Cache 16231.9, Auto 16134.4, Prefer Freq 16109.4

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
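
The test profile drives OpenVINO's bundled benchmarking support; purely as a sketch of what loading and running one of these models looks like through the 2022.x Python API (the model file name is a placeholder, and input shapes are assumed static):

    import numpy as np
    from openvino.runtime import Core   # OpenVINO 2022.x Python API

    core = Core()
    model = core.read_model("person-detection-fp16.xml")   # placeholder model file
    compiled = core.compile_model(model, "CPU")

    inp = compiled.input(0)
    request = compiled.create_infer_request()
    result = request.infer({inp: np.zeros(list(inp.shape), dtype=np.float32)})
    print(next(iter(result.values())).shape)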

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better): Prefer Freq 1080.71, Prefer Cache 1082.36, Auto 1084.60

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better): Prefer Freq 7.37, Prefer Cache 7.35, Auto 7.34

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better): Prefer Freq 1089.89, Prefer Cache 1093.40, Auto 1097.26

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better): Prefer Freq 7.29, Prefer Cache 7.27, Auto 7.25

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Orange Juice - Acceleration: CPU (M samples/sec, More Is Better): Prefer Freq 7.92, Auto 7.90, Prefer Cache 7.86

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Cartoon (Seconds, Fewer Is Better): Auto 61.89, Prefer Freq 62.34, Prefer Cache 62.60

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 12 - Decompression Speed (MB/s, More Is Better): Auto 2485.5, Prefer Cache 2476.5, Prefer Freq 2469.8

Zstd Compression 1.5.4 - Compression Level: 12 - Compression Speed (MB/s, More Is Better): Prefer Cache 299.9, Prefer Freq 297.9, Auto 296.0

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better): Auto 589.51, Prefer Freq 589.71, Prefer Cache 591.14

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better): Prefer Freq 13.53, Auto 13.53, Prefer Cache 13.48

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Danish Mood - Acceleration: CPU (M samples/sec, More Is Better): Prefer Freq 4.46, Auto 4.38, Prefer Cache 4.36

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): Auto 305.51, Prefer Cache 305.71, Prefer Freq 306.23

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better): Auto 26.14, Prefer Cache 26.11, Prefer Freq 26.09

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 3, Long Mode - Decompression Speed (MB/s, More Is Better): Prefer Freq 2267.8, Auto 2252.5, Prefer Cache 2158.6

Zstd Compression 1.5.4 - Compression Level: 3, Long Mode - Compression Speed (MB/s, More Is Better): Prefer Cache 1423.9, Auto 1418.1, Prefer Freq 1405.5

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other workloads. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark ALS (ms, Fewer Is Better): Prefer Freq 1857.1, Prefer Cache 1874.1, Auto 1874.3

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better): Prefer Cache 2210.7, Auto 2198.6, Prefer Freq 2193.4

Zstd Compression 1.5.4 - Compression Level: 3 - Compression Speed (MB/s, More Is Better): Auto 4033.3, Prefer Freq 4023.5, Prefer Cache 3967.5

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: LuxCore Benchmark - Acceleration: CPU (M samples/sec, More Is Better): Prefer Freq 4.78, Prefer Cache 4.78, Auto 4.77

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, More Is Better): Prefer Freq 256.0, Prefer Cache 223.1, Auto 215.3

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, More Is Better): Prefer Freq 682.2, Prefer Cache 622.3, Auto 609.2

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better): Auto 2433.0, Prefer Freq 2417.3, Prefer Cache 2410.1

Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better): Auto 1015.0, Prefer Freq 1012.9, Prefer Cache 1012.0

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better): Prefer Freq 11.85, Prefer Cache 11.82, Auto 11.79

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better): Prefer Freq 5.434, Auto 5.428, Prefer Cache 5.425

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better): Prefer Freq 58.93, Prefer Cache 58.99, Auto 59.08

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better): Prefer Freq 135.63, Prefer Cache 135.51, Auto 135.28

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: DLSC - Acceleration: CPU (M samples/sec, More Is Better): Prefer Cache 5.07, Prefer Freq 5.05, Auto 5.02

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
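
A minimal sketch of how an average inference time can be measured with the TensorFlow Lite Python interpreter (the model file name and thread count are placeholders, not the test profile's exact settings):

    import time
    import numpy as np
    import tflite_runtime.interpreter as tflite   # or: from tensorflow import lite

    interpreter = tflite.Interpreter(model_path="mobilenet_v1_1.0_224.tflite",
                                     num_threads=32)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    data = np.zeros(inp["shape"], dtype=inp["dtype"])
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], data)
        interpreter.invoke()
        _ = interpreter.get_tensor(out["index"])
    print(f"avg inference: {(time.perf_counter() - start) / runs * 1e6:.0f} us")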

TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds, Fewer Is Better): Prefer Cache 17748.5, Prefer Freq 17769.5, Auto 17797.3

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Kraken - Browser: Google Chrome (ms, Fewer Is Better): Prefer Freq 372.8, Auto 373.2, Prefer Cache 373.2 (Chrome 110.0.5481.96)

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better): Prefer Freq 4.77, Auto 4.80, Prefer Cache 4.82

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better): Prefer Freq 1674.00, Auto 1663.09, Prefer Cache 1659.22

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, Fewer Is Better): Auto 1108.20, Prefer Freq 1108.32, Prefer Cache 1109.84

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, Fewer Is Better): Prefer Cache 1622.75, Auto 1626.40, Prefer Freq 1627.87

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
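
The benchmark exercises RocksDB's standard key-value workloads (random fill, random read, read-while-writing, and so on); purely as an illustration of the operations involved, a minimal sketch assuming the third-party python-rocksdb binding:

    import rocksdb   # assumed: third-party python-rocksdb binding

    opts = rocksdb.Options(create_if_missing=True)
    db = rocksdb.DB("bench.db", opts)

    # A "random fill" / "random read" pattern in miniature.
    db.put(b"key-00000001", b"value-1")
    print(db.get(b"key-00000001"))

    # Batched writes amortize write-ahead-log overhead.
    batch = rocksdb.WriteBatch()
    batch.put(b"key-00000002", b"value-2")
    db.write(batch)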

RocksDB 7.9.2 - Test: Random Fill Sync (Op/s, More Is Better): Auto 39728, Prefer Cache 39653, Prefer Freq 39478

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): Prefer Freq 4.65, Prefer Cache 4.69, Auto 4.70

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better): Prefer Freq 1721.24, Prefer Cache 1702.47, Auto 1700.14

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better): Auto 0.39, Prefer Cache 0.39, Prefer Freq 0.39

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better): Auto 40664.13, Prefer Freq 40605.44, Prefer Cache 40512.39

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better): Prefer Cache 7.92, Auto 8.01, Prefer Freq 8.05

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better): Prefer Cache 1009.73, Auto 997.60, Prefer Freq 993.16

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better): Auto 0.67, Prefer Cache 0.67, Prefer Freq 0.67

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better): Prefer Freq 23592.85, Auto 23579.67, Prefer Cache 23486.36

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): Prefer Freq 6.10, Auto 6.12, Prefer Cache 6.12

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better): Prefer Freq 2621.90, Prefer Cache 2613.72, Auto 2613.42

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better): Prefer Cache 5.98, Prefer Freq 5.99, Auto 6.00

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better): Prefer Cache 1335.29, Prefer Freq 1333.24, Auto 1332.50

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Random Fill (Op/s, More Is Better): Auto 1398717, Prefer Freq 1396024, Prefer Cache 1388787

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Sharpen (Iterations Per Minute, More Is Better): Prefer Freq 261, Prefer Cache 261, Auto 260

GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, More Is Better): Prefer Freq 492, Prefer Cache 486, Auto 486

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Update Random (Op/s, More Is Better): Prefer Cache 950786, Prefer Freq 950294, Auto 947132

RocksDB 7.9.2 - Test: Read While Writing (Op/s, More Is Better): Prefer Cache 4212246, Prefer Freq 4198454, Auto 4196944

RocksDB 7.9.2 - Test: Read Random Write Random (Op/s, More Is Better): Auto 3323667, Prefer Freq 3315415, Prefer Cache 3311766

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, More Is Better): Auto 1030, Prefer Cache 1027, Prefer Freq 1012

GraphicsMagick 1.3.38 - Operation: Swirl (Iterations Per Minute, More Is Better): Prefer Cache 1162, Prefer Freq 1153, Auto 1144

GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, More Is Better): Prefer Cache 1702, Prefer Freq 1646, Auto 1634

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Random Read (Op/s, More Is Better): Prefer Cache 147760109, Prefer Freq 147740017, Auto 147296131

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.
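The result titles below map onto vpxenc speed levels and input clips; a hedged example of the kind of invocation involved, with file names and thread count as placeholders:

    # VP9 encode of a 4K Y4M clip at speed level 5 (file names are placeholders)
    vpxenc --codec=vp9 --cpu-used=5 --threads=32 -o bosphorus_4k.webm Bosphorus_3840x2160.y4m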

VP9 libvpx Encoding 1.13 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Freq 10.18, Auto 10.10, Prefer Cache 10.06

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. The sample scene used is the Teapot scene ray-traced to 8K x 8K with 32 samples. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99.2 - Total Time (Seconds, Fewer Is Better): Prefer Freq 58.96, Auto 58.99, Prefer Cache 59.10

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0 - Test: Decompression Throughput (Megapixels/sec, More Is Better): Prefer Freq 344.08, Prefer Cache 342.25, Auto 337.86

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.
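As a point of reference, libaom's aomenc front end exposes the speed levels and two-pass mode named in the results below; the clip name and thread count here are assumptions rather than the test profile's exact arguments:

    # two-pass libaom encode of a 4K clip at cpu-used (speed) 6 (file names are placeholders)
    aomenc --cpu-used=6 --passes=2 --threads=32 -o bosphorus_4k.ivf Bosphorus_3840x2160.y4m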

AOM AV1 3.6 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Freq 13.22, Auto 13.19, Prefer Cache 13.11

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM collisionDetection - Browser: Google Chrome (ms, Fewer Is Better, Chrome 110.0.5481.96): Prefer Freq 228.64, Prefer Cache 229.09, Auto 230.10

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, Fewer Is Better): Auto 84.58, Prefer Freq 84.73, Prefer Cache 84.79

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
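The Renaissance suite is distributed as a single JAR whose benchmark names correspond to the result titles below; a rough sketch, with the JAR file name and repetition count as placeholders:

    # run the Jenetics/futures genetic-algorithm and Spark PageRank benchmarks (JAR name is a placeholder)
    java -jar renaissance-gpl-0.14.2.jar -r 10 future-genetic page-rank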

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better): Prefer Freq 1111.3, Prefer Cache 1116.2, Auto 1126.5

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code of modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos that are bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, Fewer Is Better): Prefer Freq 11.00, Prefer Cache 11.09, Auto 11.12

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, Fewer Is Better): Auto 55.29, Prefer Freq 55.29, Prefer Cache 55.35

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank (ms, Fewer Is Better): Prefer Cache 1562.2, Auto 1566.9, Prefer Freq 1584.1

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): Prefer Freq 0.148585, Auto 0.148820, Prefer Cache 0.170031

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
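For context, cpuminer-opt's offline benchmark mode measures hash throughput for a chosen algorithm without connecting to a pool; a hedged sketch, with the algorithm and thread count as placeholders:

    # benchmark scrypt hashing on 32 threads without a mining pool (values are placeholders)
    cpuminer --algo=scrypt --benchmark --threads=32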

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, More Is Better): Prefer Cache 4211.63, Auto 4161.55, Prefer Freq 4152.58

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better): Prefer Freq 20.90, Auto 20.63, Prefer Cache 20.46

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.
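As an illustration, Blender's command-line mode can render a single frame of a benchmark scene headlessly; the .blend file name below is a placeholder for whichever sample scene is used:

    # render frame 1 of the BMW27 sample scene in background mode on the CPU (file name is a placeholder)
    blender -b bmw27_cpu.blend -f 1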

Blender 3.4 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better): Prefer Freq 52.25, Prefer Cache 52.37, Auto 52.54

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better): Prefer Freq 19658.1, Auto 19626.1, Prefer Cache 19605.4

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
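The lz4 command-line tool has a built-in in-memory benchmark mode that reports compression and decompression speeds at a given level; the ISO file name below is a placeholder:

    # benchmark LZ4 compression level 3 against a local Ubuntu ISO (file name is a placeholder)
    lz4 -b3 ubuntu-22.04-desktop-amd64.iso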

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better): Prefer Cache 18752.7, Auto 18483.0, Prefer Freq 18319.8

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, More Is Better): Auto 78.95, Prefer Cache 75.74, Prefer Freq 73.61

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, Fewer Is Better): Prefer Cache 123.66, Auto 125.82, Prefer Freq 129.42

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, Fewer Is Better): Prefer Freq 22.93, Auto 23.20, Prefer Cache 23.98

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
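For reference, each Stress-NG result below corresponds to one stressor; a hedged example of invoking a single stressor directly, with the instance count and duration as placeholders:

    # run the memcpy stressor on 32 workers for one minute and print a summary (values are placeholders)
    stress-ng --memcpy 32 --timeout 60s --metrics-brief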

Stress-NG 0.14.06 - Test: Memory Copying (Bogo Ops/s, More Is Better): Auto 7172.31, Prefer Cache 7097.39, Prefer Freq 7077.09

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine that is built using the SCons build system, targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.
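A rough sketch of the kind of SCons invocation a Godot 3.x build uses; the target and job count are assumptions rather than the test profile's exact options:

    # build the Godot 3.x engine for X11 using all available cores (options are assumptions)
    scons platform=x11 target=release_debug -j$(nproc)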

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, Fewer Is Better): Prefer Freq 49.37, Auto 49.38, Prefer Cache 49.45

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Freq 0.41, Prefer Cache 0.41, Auto 0.41

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 1024 (Render Ratio, More Is Better): Auto 3.659483, Prefer Freq 3.654559, Prefer Cache 3.637461

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds, Fewer Is Better): Auto 45.23, Prefer Freq 46.39, Prefer Cache 47.99

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
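The defconfig build being timed amounts to the standard two-step kernel build, with the job count simply following the available cores:

    # configure and build the kernel with the default configuration for the target architecture
    make defconfig
    make -j$(nproc)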

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, Fewer Is Better): Prefer Cache 46.19, Prefer Freq 46.43, Auto 46.57

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, Fewer Is Better): Prefer Freq 3439.3, Auto 3454.1, Prefer Cache 3489.0

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Futex (Bogo Ops/s, More Is Better): Prefer Freq 4134347.27, Prefer Cache 4120615.70, Auto 4087777.65

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds, Fewer Is Better): Prefer Freq 11.41, Auto 11.50, Prefer Cache 11.95

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Semaphores (Bogo Ops/s, More Is Better): Prefer Freq 3476539.49, Prefer Cache 3476176.55, Auto 3423759.33

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.
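A hedged sketch of a comparable x264 command-line encode; the output file and clip name are placeholders, and the test profile's real arguments may differ:

    # multi-threaded x264 encode of a 4K Y4M clip to a raw H.264 bitstream (file names are placeholders)
    x264 --threads $(nproc) -o bosphorus_4k.264 Bosphorus_3840x2160.y4m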

x264 2022-02-22 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Cache 71.79, Prefer Freq 71.78, Auto 71.39

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Wavelet Blur (Seconds, Fewer Is Better): Prefer Freq 39.22, Auto 39.34, Prefer Cache 39.51

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better): Prefer Freq 0.81379, Prefer Cache 0.81594, Auto 0.81721

Radiance Benchmark

This is a benchmark of NREL Radiance, a synthetic imaging system that is open-source and developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.

Radiance Benchmark 5.0 - Test: SMP Parallel (Seconds, Fewer Is Better): Auto 112.05, Prefer Freq 112.34, Prefer Cache 114.03

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Freq 89.87, Prefer Cache 89.72, Auto 89.07

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee 5.9 - Total Benchmark Time (Seconds, Fewer Is Better): Auto 34.83, Prefer Freq 34.86, Prefer Cache 34.94

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
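speedtest1 takes a relative size parameter, which is how this profile scales the workload up to 1,000; the database file name below is a placeholder and the exact flags the profile passes may differ:

    # run SQLite's speedtest1 with a 10x larger-than-default workload (file name is a placeholder)
    ./speedtest1 --size 1000 speedtest.db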

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better): Prefer Cache 34.52, Auto 34.84, Prefer Freq 36.09

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Speedometer - Browser: Google Chrome (Runs Per Minute, More Is Better, Chrome 110.0.5481.96): Prefer Freq 369, Auto 368, Prefer Cache 367

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second, More Is Better): Auto 22.68, Prefer Cache 22.20, Prefer Freq 21.99

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Rotate 90 Degrees (Seconds, Fewer Is Better): Prefer Cache 34.24, Auto 34.27, Prefer Freq 34.97

GEGL - Operation: Color Enhance (Seconds, Fewer Is Better): Auto 33.42, Prefer Freq 33.60, Prefer Cache 34.25

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, More Is Better): Prefer Freq 11.28, Auto 10.72, Prefer Cache 10.64

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: sedovbig (Hydro Cycle Time in Seconds, Fewer Is Better): Prefer Cache 32.39, Prefer Freq 33.05, Auto 33.12

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 0 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): Prefer Freq 20.39, Prefer Cache 20.35, Auto 20.16

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better): Auto 12.57, Prefer Cache 12.77, Prefer Freq 13.13

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 512 (Render Ratio, More Is Better): Auto 5.255289, Prefer Cache 5.193528, Prefer Freq 5.181607

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM imageConvolute - Browser: Google Chrome (ms, Fewer Is Better, Chrome 110.0.5481.96): Prefer Freq 18.64, Auto 18.84, Prefer Cache 19.33

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Cache 24.18, Auto 24.18, Prefer Freq 24.09

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
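A hedged example of the reference SVT-AV1 application driving the preset named in the result below; the file names are placeholders:

    # encode a 4K Y4M clip with SVT-AV1 at preset 4 (file names are placeholders)
    SvtAv1EncApp --preset 4 -i Bosphorus_3840x2160.y4m -b bosphorus_4k.ivf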

SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Cache 5.311, Prefer Freq 5.298, Auto 5.296

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Freq 109.76, Prefer Cache 109.57, Auto 109.37

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, More Is Better): Prefer Freq 143.9, Auto 127.7, Prefer Cache 126.3

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, More Is Better): Prefer Freq 211.5, Prefer Cache 191.6, Auto 191.2

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
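For reference, astcenc compresses an LDR image at a chosen block size and quality preset; the binary name, image, and block size below are assumptions, while the exhaustive preset matches the result title:

    # compress an LDR PNG to ASTC 4x4 blocks using the exhaustive search preset (file names are placeholders)
    astcenc-avx2 -cl sample.png sample.astc 4x4 -exhaustive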

ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, More Is Better): Prefer Freq 1.7012, Prefer Cache 1.6976, Auto 1.6950

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): Auto 5.14172, Prefer Cache 5.15456, Prefer Freq 5.42688

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 1024 (Render Ratio, More Is Better): Auto 5.560909, Prefer Cache 5.549972, Prefer Freq 5.506467

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Freq 110.91, Prefer Cache 108.48, Auto 105.96

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, More Is Better): Prefer Freq 426480, Prefer Cache 425550, Auto 425237

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, More Is Better): Prefer Freq 641.11, Prefer Cache 640.64, Auto 639.97

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: MEMFD (Bogo Ops/s, More Is Better): Auto 1281.14, Prefer Freq 1276.15, Prefer Cache 1266.52

Stress-NG 0.14.06 - Test: Glibc C String Functions (Bogo Ops/s, More Is Better): Prefer Cache 4616386.56, Prefer Freq 4613558.35, Auto 4554878.75

Stress-NG 0.14.06 - Test: Matrix Math (Bogo Ops/s, More Is Better): Prefer Freq 123819.50, Prefer Cache 122869.22, Auto 122833.10

Stress-NG 0.14.06 - Test: Atomic (Bogo Ops/s, More Is Better): Auto 212936.16, Prefer Freq 212527.91, Prefer Cache 212454.76

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, More Is Better): Auto 19773, Prefer Freq 19670, Prefer Cache 19517

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Mutex (Bogo Ops/s, More Is Better): Prefer Cache 11230656.99, Auto 11172291.29, Prefer Freq 11172285.00

Stress-NG 0.14.06 - Test: Malloc (Bogo Ops/s, More Is Better): Prefer Cache 36157266.33, Prefer Freq 36014622.73, Auto 35942000.76

Stress-NG 0.14.06 - Test: SENDFILE (Bogo Ops/s, More Is Better): Prefer Freq 489525.42, Prefer Cache 485658.97, Auto 484816.52

Stress-NG 0.14.06 - Test: Forking (Bogo Ops/s, More Is Better): Auto 73511.84, Prefer Cache 73184.04, Prefer Freq 72995.04

Stress-NG 0.14.06 - Test: Crypto (Bogo Ops/s, More Is Better): Prefer Freq 43758.76, Prefer Cache 43716.13, Auto 43690.40

Stress-NG 0.14.06 - Test: Vector Math (Bogo Ops/s, More Is Better): Prefer Freq 138137.06, Auto 138021.66, Prefer Cache 137735.11

Stress-NG 0.14.06 - Test: MMAP (Bogo Ops/s, More Is Better): Prefer Freq 381.37, Auto 381.21, Prefer Cache 379.53

Stress-NG 0.14.06 - Test: CPU Stress (Bogo Ops/s, More Is Better): Prefer Cache 58865.01, Auto 58780.75, Prefer Freq 56651.53

Stress-NG 0.14.06 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, More Is Better): Prefer Freq 373.08, Prefer Cache 371.84, Auto 368.05

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
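
As a rough illustration only, the snippet below shows how cwebp might be driven for a "Quality 100, Lossless" style run; the flags and the sample image name are assumptions, not the exact invocation used by the test profile.

    import subprocess, time

    SRC = "sample_photo_6000x4000.jpg"  # hypothetical input; the test profile ships its own sample JPEG

    start = time.time()
    # -q 100 plus -lossless approximates the "Quality 100, Lossless" setting (assumed flags)
    subprocess.run(["cwebp", "-q", "100", "-lossless", SRC, "-o", "out.webp"], check=True)
    print(f"encode took {time.time() - start:.2f}s")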

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless (MP/s, More Is Better)
  Auto:         2.29 (SE +/- 0.01, N = 5, Min 2.24 / Max 2.32)
  Prefer Cache: 2.27 (SE +/- 0.02, N = 5, Min 2.23 / Max 2.32)
  Prefer Freq:  2.23 (SE +/- 0.03, N = 15, Min 2.03 / Max 2.33)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, More Is Better)
  Prefer Cache: 136783 (SE +/- 290.08, N = 3, Min 136440 / Max 137360)
  Prefer Freq:  136613 (SE +/- 78.60, N = 3, Min 136460 / Max 136720)
  Auto:         136513 (SE +/- 27.28, N = 3, Min 136460 / Max 136550)

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, More Is Better)
  Prefer Cache: 1105.25 (SE +/- 3.26, N = 3, Min 1101.6 / Max 1111.75)
  Prefer Freq:  1102.64 (SE +/- 0.99, N = 3, Min 1101 / Max 1104.42)
  Auto:         1101.71 (SE +/- 2.10, N = 3, Min 1097.54 / Max 1104.23)

Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, More Is Better)
  Prefer Freq:  262887 (SE +/- 231.40, N = 3, Min 262430 / Max 263180)
  Auto:         262787 (SE +/- 104.14, N = 3, Min 262600 / Max 262960)
  Prefer Cache: 262420 (SE +/- 25.17, N = 3, Min 262390 / Max 262470)

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, More Is Better)
  Prefer Cache: 1116.02 (SE +/- 5.11, N = 3, Min 1106.87 / Max 1124.55)
  Auto:         1110.55 (SE +/- 3.00, N = 3, Min 1105.97 / Max 1116.2)
  Prefer Freq:  1109.97 (SE +/- 1.01, N = 3, Min 1108.03 / Max 1111.45)

All Cpuminer-Opt results: 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
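
For orientation, a minimal sketch of level-1 LZ4 compression/decompression using the python-lz4 bindings is shown below; the actual test drives the reference C lz4 code, and the file name here is a placeholder.

    import lz4.frame  # pip install lz4; illustration only, the test uses the reference C implementation

    with open("ubuntu.iso", "rb") as f:  # placeholder sample file
        data = f.read()

    compressed = lz4.frame.compress(data, compression_level=1)  # level 1, matching this test's setting
    assert lz4.frame.decompress(compressed) == data
    print(f"compression ratio: {len(data) / len(compressed):.2f}")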

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s, More Is Better)
  Prefer Freq:  19602.6 (SE +/- 23.55, N = 3, Min 19560.4 / Max 19641.8)
  Auto:         19572.9 (SE +/- 30.12, N = 3, Min 19524.4 / Max 19628.1)
  Prefer Cache: 19460.1 (SE +/- 89.48, N = 3, Min 19285.7 / Max 19582)

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, More Is Better)
  Prefer Cache: 17319.67 (SE +/- 79.02, N = 3, Min 17168.9 / Max 17436.11)
  Prefer Freq:  17314.40 (SE +/- 99.20, N = 3, Min 17126.55 / Max 17463.63)
  Auto:         17290.98 (SE +/- 78.27, N = 3, Min 17190.25 / Max 17445.12)

Both LZ4 results: 1. (CC) gcc options: -O3

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, More Is Better)
  Prefer Freq:  291953 (SE +/- 957.29, N = 3, Min 290040 / Max 292970)
  Prefer Cache: 291933 (SE +/- 356.85, N = 3, Min 291220 / Max 292310)
  Auto:         291547 (SE +/- 551.67, N = 3, Min 290490 / Max 292350)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, More Is Better)
  Prefer Freq:  20.74 (SE +/- 0.00, N = 3, Min 20.74 / Max 20.75)
  Auto:         20.71 (SE +/- 0.01, N = 3, Min 20.68 / Max 20.72)
  Prefer Cache: 20.69 (SE +/- 0.00, N = 3, Min 20.68 / Max 20.69)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, More Is Better)
  Auto:         254.8 (SE +/- 0.89, N = 5, Min 252.6 / Max 256.9)
  Prefer Cache: 220.7 (SE +/- 3.75, N = 15, Min 212.1 / Max 255.7)
  Prefer Freq:  215.9 (SE +/- 1.29, N = 5, Min 212.8 / Max 219.4)

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, More Is Better)
  Auto:         633.1 (SE +/- 2.33, N = 5, Min 627.3 / Max 638)
  Prefer Cache: 571.6 (SE +/- 6.69, N = 15, Min 544.6 / Max 632)
  Prefer Freq:  563.6 (SE +/- 4.08, N = 5, Min 552.8 / Max 572.8)

Both srsRAN results: 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, More Is Better)
  Prefer Freq:  0.84 (SE +/- 0.00, N = 3, Min 0.84 / Max 0.84)
  Prefer Cache: 0.84 (SE +/- 0.00, N = 3, Min 0.84 / Max 0.84)
  Auto:         0.83 (SE +/- 0.00, N = 3, Min 0.83 / Max 0.84)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
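
The core counting problem m-queens solves can be sketched in a few lines; the single-threaded Python below is only an illustration of the algorithm, whereas the benchmark itself is C++ and spreads the search across OpenMP threads.

    # Bitmask backtracking count of N-queens placements (single-threaded sketch).
    def count_solutions(n, row=0, cols=0, diag1=0, diag2=0):
        if row == n:
            return 1
        total = 0
        free = ~(cols | diag1 | diag2) & ((1 << n) - 1)
        while free:
            bit = free & -free          # lowest free column
            free -= bit
            total += count_solutions(n, row + 1, cols | bit,
                                     (diag1 | bit) << 1, (diag2 | bit) >> 1)
        return total

    print(count_solutions(8))  # 92 solutions on the classic 8x8 board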

m-queens 1.2 - Time To Solve (Seconds, Fewer Is Better)
  Prefer Freq:  28.48 (SE +/- 0.03, N = 3, Min 28.44 / Max 28.52)
  Auto:         28.55 (SE +/- 0.02, N = 3, Min 28.52 / Max 28.59)
  Prefer Cache: 28.55 (SE +/- 0.03, N = 3, Min 28.5 / Max 28.6)
  1. (CXX) g++ options: -fopenmp -O2 -march=native

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better)
  Prefer Freq:  21.19 (SE +/- 0.00, N = 3, Min 21.19 / Max 21.2)
  Prefer Cache: 21.16 (SE +/- 0.01, N = 3, Min 21.14 / Max 21.17)
  Auto:         21.15 (SE +/- 0.02, N = 3, Min 21.11 / Max 21.18)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, Fewer Is Better)
  Prefer Freq:  2080.4 (SE +/- 11.77, N = 3, Min 2056.93 / Max 2093.92; MIN: 1915.29 / MAX: 2096.84)
  Auto:         2083.5 (SE +/- 12.79, N = 3, Min 2067.15 / Max 2108.72; MIN: 1911.93 / MAX: 2147.25)
  Prefer Cache: 2096.1 (SE +/- 10.69, N = 3, Min 2075.08 / Max 2109.87; MIN: 1946.09 / MAX: 2151)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  22.73 (SE +/- 0.02, N = 3, Min 22.69 / Max 22.77)
  Auto:         22.66 (SE +/- 0.02, N = 3, Min 22.61 / Max 22.68)
  Prefer Cache: 22.64 (SE +/- 0.07, N = 3, Min 22.51 / Max 22.73)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Auto:         108.41 (SE +/- 0.99, N = 7, Min 103.43 / Max 111.58)
  Prefer Cache: 105.98 (SE +/- 1.09, N = 15, Min 96.96 / Max 110.43)
  Prefer Freq:  102.32 (SE +/- 0.90, N = 15, Min 98.92 / Max 108.06)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
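
The snippet below is only a stand-in for the kind of vectorized NumPy kernel PyHPC times on the CPU; the coefficients and array names are made up and do not reproduce the actual equation-of-state benchmark.

    import time
    import numpy as np

    n = 4_194_304  # matches the larger project size used below
    temp, salt, pressure = (np.random.rand(n) for _ in range(3))

    start = time.time()
    density = 1000.0 + 0.8 * salt - 0.2 * temp + 4.5e-3 * pressure  # made-up coefficients
    print(f"{time.time() - start:.3f}s for {n} elements")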

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better)
  Auto:         0.736 (SE +/- 0.002, N = 3, Min 0.73 / Max 0.74)
  Prefer Cache: 0.741 (SE +/- 0.002, N = 3, Min 0.74 / Max 0.74)
  Prefer Freq:  0.745 (SE +/- 0.004, N = 3, Min 0.74 / Max 0.75)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark Bayes (ms, Fewer Is Better)
  Prefer Freq:  786.3 (SE +/- 0.84, N = 3, Min 784.88 / Max 787.77; MIN: 582.5 / MAX: 787.77)
  Prefer Cache: 788.2 (SE +/- 3.20, N = 3, Min 783.95 / Max 794.49; MIN: 581.24 / MAX: 794.49)
  Auto:         790.2 (SE +/- 2.28, N = 3, Min 787.71 / Max 794.71; MIN: 583.89 / MAX: 794.71)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  Prefer Freq:  191.66 (SE +/- 0.45, N = 4, Min 190.68 / Max 192.64; MIN: 189.14 / MAX: 202.5)
  Prefer Cache: 192.25 (SE +/- 1.59, N = 9, Min 188.9 / Max 202.78; MIN: 185.61 / MAX: 215.47)
  Auto:         192.28 (SE +/- 0.08, N = 4, Min 192.03 / Max 192.39; MIN: 190.51 / MAX: 200.26)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Prefer Cache: 0.664627 (SE +/- 0.008538, N = 15, Min 0.61 / Max 0.75; MIN: 0.58)
  Auto:         0.665517 (SE +/- 0.001866, N = 5, Min 0.66 / Max 0.67; MIN: 0.62)
  Prefer Freq:  0.823009 (SE +/- 0.004461, N = 5, Min 0.81 / Max 0.84; MIN: 0.75)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Cache: 28.34 (SE +/- 0.05, N = 3, Min 28.26 / Max 28.44)
  Auto:         28.32 (SE +/- 0.01, N = 3, Min 28.31 / Max 28.34)
  Prefer Freq:  28.29 (SE +/- 0.05, N = 3, Min 28.21 / Max 28.37)

AOM AV1 3.6 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Cache: 109.81 (SE +/- 1.11, N = 15, Min 102.8 / Max 114.23)
  Prefer Freq:  108.60 (SE +/- 1.20, N = 15, Min 102.97 / Max 114.48)
  Auto:         106.78 (SE +/- 1.08, N = 6, Min 102.23 / Max 110.22)

Both AOM AV1 results: 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Antialias (Seconds, Fewer Is Better)
  Prefer Cache: 24.60 (SE +/- 0.09, N = 3, Min 24.49 / Max 24.78)
  Auto:         24.94 (SE +/- 0.08, N = 3, Min 24.81 / Max 25.08)
  Prefer Freq:  25.06 (SE +/- 0.10, N = 3, Min 24.91 / Max 25.25)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: OFDM_Test (Samples / Second, More Is Better)
  Prefer Freq:  241166667 (SE +/- 2069084.61, N = 3, Min 237900000 / Max 245000000)
  Prefer Cache: 204366667 (SE +/- 1311911.24, N = 3, Min 202300000 / Max 206800000)
  Auto:         202625000 (SE +/- 2423625.04, N = 4, Min 197400000 / Max 206800000)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.
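
A minimal sketch of launching the H2 workload from Python is shown below; the jar file name is an assumption for the 9.12-MR1 packaging rather than how the test profile actually invokes DaCapo.

    import subprocess

    # Run the H2 in-memory database workload from the DaCapo jar (jar name assumed).
    subprocess.run(["java", "-jar", "dacapo-9.12-MR1-bach.jar", "h2"], check=True)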

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better)
  Prefer Cache: 1593 (SE +/- 31.69, N = 20, Min 1386 / Max 1814)
  Auto:         1632 (SE +/- 32.29, N = 20, Min 1378 / Max 1882)
  Prefer Freq:  1638 (SE +/- 36.66, N = 20, Min 1358 / Max 1934)

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: leblancbig (Hydro Cycle Time - Seconds, Fewer Is Better)
  Prefer Cache: 22.62 (SE +/- 0.18, N = 3, Min 22.32 / Max 22.95)
  Prefer Freq:  23.39 (SE +/- 0.11, N = 3, Min 23.22 / Max 23.6)
  Auto:         24.85 (SE +/- 0.12, N = 3, Min 24.62 / Max 25.02)
  1. (CXX) g++ options: -fopenmp -lmpi_cxx -lmpi

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, Fewer Is Better)
  Prefer Freq:  14.62 (SE +/- 0.09, N = 4, Min 14.42 / Max 14.8)
  Prefer Cache: 14.67 (SE +/- 0.15, N = 4, Min 14.27 / Max 14.97)
  Auto:         14.95 (SE +/- 0.14, N = 6, Min 14.37 / Max 15.42)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 512 (Render Ratio, More Is Better)
  Prefer Cache: 7.208599 (SE +/- 0.018306, N = 3, Min 7.17 / Max 7.24)
  Auto:         7.196401 (SE +/- 0.071291, N = 3, Min 7.12 / Max 7.34)
  Prefer Freq:  7.195516 (SE +/- 0.049732, N = 3, Min 7.12 / Max 7.29)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, More Is Better)
  Prefer Freq:  243.1 (SE +/- 0.67, N = 3, Min 241.8 / Max 243.8)
  Auto:         242.3 (SE +/- 1.95, N = 4, Min 238 / Max 246.6)
  Prefer Cache: 204.9 (SE +/- 0.47, N = 3, Min 204 / Max 205.5)

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, More Is Better)
  Prefer Freq:  633.3 (SE +/- 1.65, N = 3, Min 630 / Max 635.3)
  Auto:         622.6 (SE +/- 6.79, N = 4, Min 605.2 / Max 638.3)
  Prefer Cache: 566.3 (SE +/- 0.12, N = 3, Min 566.1 / Max 566.5)

Both srsRAN results: 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides "toktx" for converting/creating in the KTX container format for image textures. This benchmark times how long it takes to convert to KTX 2.0 format with various settings using a reference PNG sample input. Learn more via the OpenBenchmarking.org test page.

KTX-Software toktx 4.0 - Settings: UASTC 3 + Zstd Compression 19 (Seconds, Fewer Is Better)
  Auto:         7.497 (SE +/- 0.009, N = 6, Min 7.47 / Max 7.52)
  Prefer Freq:  7.561 (SE +/- 0.080, N = 15, Min 7.44 / Max 8.68)
  Prefer Cache: 8.714 (SE +/- 0.012, N = 5, Min 8.69 / Max 8.76)

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 512 (Render Ratio, More Is Better)
  Prefer Freq:  7.391493 (SE +/- 0.040332, N = 3, Min 7.32 / Max 7.45)
  Auto:         7.339446 (SE +/- 0.076546, N = 3, Min 7.19 / Max 7.42)
  Prefer Cache: 7.324451 (SE +/- 0.057662, N = 3, Min 7.21 / Max 7.39)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Sequential Fill (Op/s, More Is Better)
  Prefer Freq:  1450599 (SE +/- 3968.04, N = 3, Min 1442663 / Max 1454597)
  Auto:         1445499 (SE +/- 2647.98, N = 3, Min 1440223 / Max 1448532)
  Prefer Cache: 1441815 (SE +/- 3546.36, N = 3, Min 1436267 / Max 1448416)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
  Prefer Cache: 39.78 (SE +/- 0.02, N = 4, Min 39.74 / Max 39.82; MIN: 39.43 / MAX: 40.87)
  Prefer Freq:  39.76 (SE +/- 0.02, N = 4, Min 39.72 / Max 39.8; MIN: 39.42 / MAX: 40.83)
  Auto:         39.74 (SE +/- 0.06, N = 4, Min 39.62 / Max 39.87; MIN: 39.35 / MAX: 40.85)

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.4 - Time To Compile (Seconds, Fewer Is Better)
  Prefer Cache: 21.82 (SE +/- 0.08, N = 3, Min 21.71 / Max 21.97)
  Prefer Freq:  21.85 (SE +/- 0.04, N = 3, Min 21.78 / Max 21.9)
  Auto:         21.96 (SE +/- 0.04, N = 3, Min 21.89 / Max 22.03)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State (Seconds, Fewer Is Better)
  Auto:         0.121 (SE +/- 0.000, N = 4, Min 0.12 / Max 0.12)
  Prefer Freq:  0.122 (SE +/- 0.001, N = 4, Min 0.12 / Max 0.12)
  Prefer Cache: 0.123 (SE +/- 0.000, N = 4, Min 0.12 / Max 0.12)

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 1024 (Render Ratio, More Is Better)
  Prefer Freq:  7.743260 (SE +/- 0.035806, N = 3, Min 7.67 / Max 7.78)
  Auto:         7.726738 (SE +/- 0.032079, N = 3, Min 7.69 / Max 7.79)
  Prefer Cache: 7.726396 (SE +/- 0.016609, N = 3, Min 7.69 / Max 7.75)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, Fewer Is Better)
  Auto:         21.21 (SE +/- 0.08, N = 3, Min 21.11 / Max 21.38)
  Prefer Freq:  21.30 (SE +/- 0.09, N = 3, Min 21.14 / Max 21.45)
  Prefer Cache: 21.44 (SE +/- 0.17, N = 3, Min 21.1 / Max 21.62)

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Auto:         0.460811 (SE +/- 0.000909, N = 3, Min 0.46 / Max 0.46; MIN: 0.44)
  Prefer Cache: 0.464426 (SE +/- 0.001471, N = 3, Min 0.46 / Max 0.47; MIN: 0.45)
  Prefer Freq:  0.465139 (SE +/- 0.000410, N = 3, Min 0.46 / Max 0.47; MIN: 0.45)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, Fewer Is Better)
  Prefer Freq:  388.9 (SE +/- 0.54, N = 3, Min 387.94 / Max 389.79; MIN: 352.2 / MAX: 453.6)
  Auto:         391.5 (SE +/- 0.34, N = 3, Min 391.19 / Max 392.23; MIN: 365 / MAX: 478.75)
  Prefer Cache: 391.5 (SE +/- 1.83, N = 3, Min 387.91 / Max 393.69; MIN: 353.06 / MAX: 470.48)

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 1024 (Render Ratio, More Is Better)
  Prefer Freq:  7.996641 (SE +/- 0.007132, N = 3, Min 7.99 / Max 8.01)
  Prefer Cache: 7.977255 (SE +/- 0.002520, N = 3, Min 7.97 / Max 7.98)
  Auto:         7.926323 (SE +/- 0.034705, N = 3, Min 7.86 / Max 7.97)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Reflect (Seconds, Fewer Is Better)
  Prefer Freq:  20.37 (SE +/- 0.17, N = 3, Min 20.13 / Max 20.71)
  Auto:         20.58 (SE +/- 0.06, N = 3, Min 20.5 / Max 20.69)
  Prefer Cache: 20.74 (SE +/- 0.07, N = 3, Min 20.65 / Max 20.88)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Cache: 40.34 (SE +/- 0.08, N = 4, Min 40.24 / Max 40.58)
  Prefer Freq:  39.83 (SE +/- 0.18, N = 4, Min 39.35 / Max 40.11)
  Auto:         39.80 (SE +/- 0.20, N = 4, Min 39.37 / Max 40.34)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, Fewer Is Better)
  Prefer Freq:  15.26 (SE +/- 0.02, N = 4, Min 15.23 / Max 15.3)
  Auto:         15.34 (SE +/- 0.05, N = 4, Min 15.25 / Max 15.5)
  Prefer Cache: 15.35 (SE +/- 0.07, N = 4, Min 15.2 / Max 15.46)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Tile Glass (Seconds, Fewer Is Better)
  Prefer Freq:  20.27 (SE +/- 0.04, N = 3, Min 20.21 / Max 20.33)
  Auto:         20.27 (SE +/- 0.06, N = 3, Min 20.19 / Max 20.38)
  Prefer Cache: 20.38 (SE +/- 0.16, N = 3, Min 20.07 / Max 20.58)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
  Prefer Freq:  32.29 (SE +/- 0.09, N = 3, Min 32.17 / Max 32.46; MIN: 31.94 / MAX: 32.98)
  Prefer Cache: 32.13 (SE +/- 0.03, N = 3, Min 32.07 / Max 32.18; MIN: 31.83 / MAX: 32.72)
  Auto:         32.00 (SE +/- 0.08, N = 3, Min 31.83 / Max 32.12; MIN: 31.58 / MAX: 32.66)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Contextual Anomaly Detector OSE (Seconds, Fewer Is Better)
  Auto:         19.90 (SE +/- 0.15, N = 3, Min 19.71 / Max 20.19)
  Prefer Freq:  19.96 (SE +/- 0.18, N = 3, Min 19.65 / Max 20.28)
  Prefer Cache: 20.78 (SE +/- 0.07, N = 3, Min 20.63 / Max 20.86)

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better)
  Prefer Cache: 438113300 (SE +/- 457867.41, N = 3, Min 437304500 / Max 438889600)
  Auto:         437586200 (SE +/- 337440.37, N = 3, Min 437088300 / Max 438229700)
  Prefer Freq:  435494933 (SE +/- 1311932.65, N = 3, Min 432874300 / Max 436918000)
  1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Prefer Freq:  1488633333 (SE +/- 2628265.17, N = 3, Min 1484000000 / Max 1493100000)
  Auto:         1487700000 (SE +/- 971253.49, N = 3, Min 1486400000 / Max 1489600000)
  Prefer Cache: 1485666667 (SE +/- 1675642.50, N = 3, Min 1482500000 / Max 1488200000)

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Prefer Freq:  1488033333 (SE +/- 1880011.82, N = 3, Min 1484900000 / Max 1491400000)
  Auto:         1485366667 (SE +/- 3012381.86, N = 3, Min 1480700000 / Max 1491000000)
  Prefer Cache: 1471433333 (SE +/- 6133605.07, N = 3, Min 1465200000 / Max 1483700000)

Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Prefer Freq:  757983333 (SE +/- 135441.66, N = 3, Min 757780000 / Max 758240000)
  Auto:         756130000 (SE +/- 813961.51, N = 3, Min 755070000 / Max 757730000)
  Prefer Cache: 751733333 (SE +/- 1271434.01, N = 3, Min 749910000 / Max 754180000)

All Liquid-DSP results: 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better)
  Auto:         1270496 (SE +/- 11338.88, N = 4, Min 1239861 / Max 1292683)
  Prefer Freq:  1247786 (SE +/- 12155.78, N = 4, Min 1217414 / Max 1276407)
  Prefer Cache: 1154341 (SE +/- 1625.68, N = 3, Min 1151835 / Max 1157388)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
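
For reference, the integrated benchmark is started with "7z b"; the sketch below simply wraps that, and the -mmt32 thread switch is an assumption matching a 32-thread run rather than the test profile's exact arguments.

    import subprocess

    # "7z b" runs 7-Zip's built-in compression/decompression benchmark.
    subprocess.run(["7z", "b", "-mmt32"], check=True)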

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
  Prefer Freq:  177024 (SE +/- 194.35, N = 3, Min 176704 / Max 177375)
  Auto:         176616 (SE +/- 73.05, N = 3, Min 176497 / Max 176749)
  Prefer Cache: 176398 (SE +/- 304.82, N = 3, Min 176076 / Max 177007)

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better)
  Prefer Freq:  190659 (SE +/- 439.96, N = 3, Min 190141 / Max 191534)
  Prefer Cache: 189871 (SE +/- 298.57, N = 3, Min 189292 / Max 190287)
  Auto:         189671 (SE +/- 177.38, N = 3, Min 189455 / Max 190023)

Both 7-Zip results: 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
  Prefer Freq:  34.62 (SE +/- 0.05, N = 3, Min 34.52 / Max 34.69; MIN: 34.35 / MAX: 35.31)
  Prefer Cache: 34.59 (SE +/- 0.02, N = 3, Min 34.57 / Max 34.62; MIN: 34.42 / MAX: 34.96)
  Auto:         34.47 (SE +/- 0.06, N = 3, Min 34.37 / Max 34.59; MIN: 34.23 / MAX: 34.9)

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, More Is Better)
  Prefer Cache: 4577.4 (SE +/- 27.33, N = 3, Min 4525.4 / Max 4618)
  Prefer Freq:  4512.4 (SE +/- 34.67, N = 3, Min 4443.4 / Max 4552.9)
  Auto:         4169.2 (SE +/- 38.40, N = 3, Min 4098.8 / Max 4231)
  1. (CXX) g++ options: -O3 -march=native -rdynamic

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Fast (MT/s, More Is Better)
  Prefer Cache: 368.25 (SE +/- 0.29, N = 5, Min 367.44 / Max 368.92)
  Prefer Freq:  368.10 (SE +/- 0.25, N = 5, Min 367.69 / Max 369.04)
  Auto:         367.89 (SE +/- 0.23, N = 5, Min 367.23 / Max 368.35)
  1. (CXX) g++ options: -O3 -flto -pthread

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, More Is Better)
  Auto:         264.9 (SE +/- 2.41, N = 5, Min 259.2 / Max 270.8)
  Prefer Freq:  223.1 (SE +/- 0.87, N = 4, Min 221.2 / Max 225.4)
  Prefer Cache: 222.5 (SE +/- 0.71, N = 4, Min 221.3 / Max 224.5)

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, More Is Better)
  Auto:         675.6 (SE +/- 5.81, N = 5, Min 660 / Max 688.2)
  Prefer Freq:  613.0 (SE +/- 1.49, N = 4, Min 610.1 / Max 616.6)
  Prefer Cache: 610.4 (SE +/- 1.16, N = 4, Min 608.7 / Max 613.8)

Both srsRAN results: 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: Hogbom Clean OpenMP (Iterations Per Second, More Is Better)
  Auto:         551.31 (SE +/- 2.26, N = 5, Min 543.48 / Max 555.56)
  Prefer Cache: 550.08 (SE +/- 2.00, N = 5, Min 543.48 / Max 555.56)
  Prefer Freq:  549.45 (SE +/- 0.00, N = 5, Min 549.45 / Max 549.45)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
  Prefer Freq:  36.24 (SE +/- 0.08, N = 3, Min 36.09 / Max 36.37; MIN: 35.81 / MAX: 36.91)
  Auto:         36.03 (SE +/- 0.08, N = 3, Min 35.87 / Max 36.12; MIN: 35.6 / MAX: 36.69)
  Prefer Cache: 36.02 (SE +/- 0.05, N = 3, Min 35.95 / Max 36.11; MIN: 35.7 / MAX: 36.65)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
  Auto:         830.58 (SE +/- 1.52, N = 5, Min 825.07 / Max 834.17)
  Prefer Cache: 830.28 (SE +/- 0.54, N = 5, Min 829.34 / Max 831.66)
  Prefer Freq:  827.99 (SE +/- 1.23, N = 5, Min 825.23 / Max 832.23)
  1. (CC) gcc options: -pthread -lm

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship (FPS, More Is Better)
  Prefer Cache: 6.0 (SE +/- 0.06, N = 3, Min 5.9 / Max 6.1)
  Prefer Freq:  5.9 (SE +/- 0.03, N = 3, Min 5.9 / Max 6)
  Auto:         5.9 (SE +/- 0.03, N = 3, Min 5.9 / Max 6)

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides "toktx" for converting/creating in the KTX container format for image textures. This benchmark times how long it takes to convert to KTX 2.0 format with various settings using a reference PNG sample input. Learn more via the OpenBenchmarking.org test page.
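
A rough sketch of a KTX 2.0 conversion in the spirit of the "Zstd Compression 19" settings string is shown below; the toktx flag names are assumptions and should be checked against toktx --help, and the file names are placeholders.

    import subprocess

    # Convert a PNG to KTX 2.0 with Zstandard supercompression level 19 (flags assumed).
    subprocess.run(["toktx", "--t2", "--zcmp", "19", "texture.ktx2", "sample.png"], check=True)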

KTX-Software toktx 4.0 - Settings: Zstd Compression 19 (Seconds, Fewer Is Better)
  Prefer Freq:  10.19 (SE +/- 0.02, N = 5, Min 10.16 / Max 10.23)
  Auto:         10.20 (SE +/- 0.03, N = 5, Min 10.12 / Max 10.25)
  Prefer Cache: 11.45 (SE +/- 0.06, N = 5, Min 11.3 / Max 11.61)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.
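
As a minimal illustration of the harness, the Python below drives Chrome through Selenium WebDriver; the URL is a placeholder rather than the Maze Solver page the test profile actually loads, and a matching chromedriver is assumed to be installed.

    from selenium import webdriver

    driver = webdriver.Chrome()                     # assumes chromedriver is available
    driver.get("https://example.com/maze-solver/")  # placeholder workload URL
    print(driver.title)
    driver.quit()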

Selenium - Benchmark: Maze Solver - Browser: Google Chrome (Seconds, Fewer Is Better)
  Auto:         3.7 (SE +/- 0.00, N = 4, Min 3.7 / Max 3.7)
  Prefer Cache: 3.7 (SE +/- 0.00, N = 4, Min 3.7 / Max 3.7)
  Prefer Freq:  3.7 (SE +/- 0.00, N = 4, Min 3.7 / Max 3.7)
  1. chrome 110.0.5481.96

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Thorough (MT/s, More Is Better)
  Prefer Freq:  16.15 (SE +/- 0.01, N = 3, Min 16.14 / Max 16.16)
  Prefer Cache: 16.12 (SE +/- 0.01, N = 3, Min 16.1 / Max 16.13)
  Auto:         16.04 (SE +/- 0.01, N = 3, Min 16.03 / Max 16.06)
  1. (CXX) g++ options: -O3 -flto -pthread

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.32 - Test: unsharp-mask (Seconds, Fewer Is Better): Auto 13.11, Prefer Freq 13.13, Prefer Cache 13.14

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better): Prefer Freq 45.92, Auto 45.84, Prefer Cache 45.78

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Freq 34.65, Prefer Cache 34.64, Auto 34.28

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1 - Speed: 5 (Frames Per Second, More Is Better): Prefer Freq 4.725, Prefer Cache 4.689, Auto 4.633

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.0 - Model: Lion (ms, Fewer Is Better): Prefer Freq 3195, Auto 3196, Prefer Cache 3543

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, More Is Better): Prefer Freq 15387380, Auto 15335667, Prefer Cache 15231602

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1 - Speed: 1 (Frames Per Second, More Is Better): Prefer Cache 1.181, Prefer Freq 1.179, Auto 1.176

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better): Prefer Freq 0.234, Prefer Cache 0.236, Auto 0.237

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better): Auto 211.00, Prefer Freq 209.98, Prefer Cache 209.86

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): Prefer Cache 1.21, Prefer Freq 1.20, Auto 1.20

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.32 - Test: resize (Seconds, Fewer Is Better): Prefer Cache 12.45, Auto 12.53, Prefer Freq 12.65

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, More Is Better): Prefer Freq 60.47, Auto 60.46, Prefer Cache 60.40

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Chimera 1080p (FPS, More Is Better): Prefer Freq 915.11, Auto 913.51, Prefer Cache 912.38

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better): Prefer Freq 3.250, Auto 3.283, Prefer Cache 3.297

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better): Auto 174.11, Prefer Freq 174.63, Prefer Cache 175.72

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1 - Speed: 6 (Frames Per Second, More Is Better): Prefer Cache 7.405, Auto 7.370, Prefer Freq 7.367

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, Fewer Is Better): Prefer Freq 4.562, Auto 4.567, Prefer Cache 4.879

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Rainbow Colors and Prism - Acceleration: CPU (M samples/sec, More Is Better): Prefer Cache 17.67, Prefer Freq 17.58, Auto 17.57

PyBench

This test profile reports the total of the average timed test results from PyBench. PyBench reports average test times for different operations such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench for 20 rounds each time. Learn more via the OpenBenchmarking.org test page.
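As a rough, hedged illustration of the style of micro-timing PyBench performs (the function names below are hypothetical stand-ins, not the actual PyBench test cases), a minimal Python sketch using timeit might look like this:

    import timeit

    # Hypothetical stand-ins for the kind of micro-benchmarks PyBench runs;
    # these are not the actual PyBench test cases.
    def builtin_function_calls():
        for _ in range(1000):
            len("phoronix")
            abs(-1)

    def nested_for_loops():
        total = 0
        for i in range(100):
            for j in range(100):
                total += i * j
        return total

    if __name__ == "__main__":
        for fn in (builtin_function_calls, nested_for_loops):
            # Report an average time per round, similar in spirit to PyBench's
            # per-test averages that feed into its total result.
            avg = timeit.timeit(fn, number=20) / 20
            print(f"{fn.__name__}: {avg * 1000:.3f} ms per round")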

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better): Auto 500, Prefer Freq 504, Prefer Cache 562

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: RAM / Memory (MiB/sec, More Is Better): Auto 12821.25, Prefer Freq 12712.24, Prefer Cache 12696.41

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Highest Compression (MP/s, More Is Better): Prefer Cache 5.53, Auto 5.46, Prefer Freq 5.32

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Freq 70.48, Prefer Cache 70.39, Auto 70.09

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.32 - Test: rotate (Seconds, Fewer Is Better): Prefer Cache 9.104, Auto 9.460, Prefer Freq 9.471

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): Auto 69.22, Prefer Freq 68.58, Prefer Cache 68.47

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.32 - Test: auto-levels (Seconds, Fewer Is Better): Auto 10.64, Prefer Cache 10.77, Prefer Freq 10.78

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): Prefer Freq 14.47, Auto 14.42, Prefer Cache 14.40

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.0 - Model: Church Facade (ms, Fewer Is Better): Prefer Freq 3623, Auto 3677, Prefer Cache 4517

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test (Requests Per Second, More Is Better): Prefer Freq 11056, Auto 11006, Prefer Cache 10871

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better): Prefer Freq 79.32, Prefer Cache 79.19, Auto 79.14

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Summer Nature 4K (FPS, More Is Better): Prefer Cache 401.81, Prefer Freq 399.60, Auto 399.46

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, More Is Better): Prefer Freq 79.84, Prefer Cache 79.79, Auto 79.72

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better): Prefer Freq 82.64, Auto 82.58, Prefer Cache 82.36

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Scale (Seconds, Fewer Is Better): Prefer Freq 7.233, Prefer Cache 7.264, Auto 7.344

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, More Is Better): Prefer Cache 9073.59, Prefer Freq 9066.29, Auto 9039.15

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy (Seconds, Fewer Is Better): Auto 7.159, Prefer Cache 7.278, Prefer Freq 7.283

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better): Prefer Cache 221.70, Prefer Freq 220.58, Auto 219.50

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds, Fewer Is Better): Auto 10.49, Prefer Freq 10.72, Prefer Cache 10.78

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Medium (MT/s, More Is Better): Prefer Freq 129.53, Prefer Cache 129.16, Auto 129.15

N-Queens

This is the OpenMP version of a test that solves the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.
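The benchmark itself is an OpenMP C program; purely as a hedged illustration of the underlying problem, a compact serial backtracking counter in Python could look like the sketch below (for the benchmark's 18x18 board this would be far too slow, so a small n is used in the example):

    def count_nqueens(n: int) -> int:
        """Count the placements of n non-attacking queens on an n x n board."""
        def place(row: int, cols: int, diag1: int, diag2: int) -> int:
            if row == n:
                return 1
            count = 0
            # Bitmask of squares still free in this row.
            free = ~(cols | diag1 | diag2) & ((1 << n) - 1)
            while free:
                bit = free & -free  # lowest free square
                free -= bit
                count += place(row + 1,
                               cols | bit,
                               (diag1 | bit) << 1,
                               (diag2 | bit) >> 1)
            return count
        return place(0, 0, 0, 0)

    if __name__ == "__main__":
        # n = 8 has the well-known 92 solutions; the benchmark uses n = 18.
        print(count_nqueens(8))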

N-Queens 1.0 - Elapsed Time (Seconds, Fewer Is Better): Auto 5.981, Prefer Freq 5.983, Prefer Cache 5.984

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1 - Speed: 10 (Frames Per Second, More Is Better): Auto 17.05, Prefer Cache 16.79, Prefer Freq 16.74

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Crop (Seconds, Fewer Is Better): Prefer Freq 6.884, Auto 6.890, Prefer Cache 6.933

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Freq 118.00, Prefer Cache 117.97, Auto 117.84

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
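Primesieve's actual implementation is a heavily optimized, cache-segmented C++ sieve; as a hedged, minimal sketch of the basic sieve of Eratosthenes idea it builds on, in Python:

    def sieve(limit: int) -> list[int]:
        """Return all primes <= limit using a simple sieve of Eratosthenes."""
        if limit < 2:
            return []
        is_prime = bytearray([1]) * (limit + 1)
        is_prime[0] = is_prime[1] = 0
        p = 2
        while p * p <= limit:
            if is_prime[p]:
                # Cross off multiples of p starting at p*p.
                is_prime[p * p:limit + 1:p] = bytearray(len(range(p * p, limit + 1, p)))
            p += 1
        return [i for i, flag in enumerate(is_prime) if flag]

    if __name__ == "__main__":
        print(sieve(50))  # [2, 3, 5, 7, ..., 47]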

Primesieve 8.0 - Length: 1e12 (Seconds, Fewer Is Better): Prefer Freq 6.845, Auto 6.862, Prefer Cache 6.868

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some previous ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, More Is Better): Auto 11519.9, Prefer Freq 11495.9, Prefer Cache 11495.9

ASKAP 1.0 - Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, More Is Better): Prefer Freq 6918.41, Prefer Cache 6857.02, Auto 6855.31

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better): Prefer Cache 241.09, Prefer Freq 238.97, Auto 238.38

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Freq 126.63, Prefer Cache 126.34, Auto 126.22

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better): Prefer Cache 42.25, Auto 42.84, Prefer Freq 43.36

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better): Auto 248.22, Prefer Cache 242.10, Prefer Freq 241.38

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Cache 101.10, Auto 101.10, Prefer Freq 101.09

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better): Prefer Freq 5.421, Auto 5.488, Prefer Cache 5.517

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better): Auto 221.03, Prefer Cache 219.56, Prefer Freq 217.88

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better): Prefer Cache 113.59, Prefer Freq 113.42, Auto 113.23

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 7.3.0 (Seconds, Fewer Is Better): Prefer Freq 4.585, Auto 4.652, Prefer Cache 4.653

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides "toktx" for converting/creating image textures in the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.

KTX-Software toktx 4.0 - Settings: UASTC 3 (Seconds, Fewer Is Better): Prefer Cache 5.130, Prefer Freq 5.272, Auto 5.279

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel source tree package. Learn more via the OpenBenchmarking.org test page.

Unpacking The Linux Kernel 5.19 - linux-5.19.tar.xz (Seconds, Fewer Is Better): Auto 3.953, Prefer Cache 4.007, Prefer Freq 4.054

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better): Auto 214.07, Prefer Cache 213.53, Prefer Freq 212.52

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better): Prefer Cache 174.96, Prefer Freq 174.93, Auto 174.66

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better): Prefer Freq 173.58, Auto 173.45, Prefer Cache 173.35

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
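As a hedged sketch of the general idea behind a "windowed Gaussian" style detector (not the actual NAB implementation), each new point can be scored by its z-score against a rolling window of recent values:

    from collections import deque
    from statistics import mean, stdev

    def windowed_gaussian_scores(values, window_size=100, threshold=3.0):
        """Yield (value, anomaly_score, is_anomaly) using a rolling Gaussian model.

        A simplified illustration of a windowed Gaussian detector: each point is
        scored by how many standard deviations it sits from the mean of the
        preceding window. This is not the actual NAB detector code.
        """
        window = deque(maxlen=window_size)
        for x in values:
            if len(window) >= 2:
                mu, sigma = mean(window), stdev(window)
                score = abs(x - mu) / sigma if sigma > 0 else 0.0
            else:
                score = 0.0
            yield x, score, score > threshold
            window.append(x)

    if __name__ == "__main__":
        data = [10, 11, 9, 10, 12, 10, 11, 50, 10, 9]  # 50 should stand out
        for x, score, flagged in windowed_gaussian_scores(data, window_size=5):
            print(f"{x:>4}  score={score:5.2f}  anomaly={flagged}")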

Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (Seconds, Fewer Is Better): Prefer Cache 3.409, Auto 3.414, Prefer Freq 3.432

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): Prefer Freq 195.81, Prefer Cache 194.84, Auto 193.76

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better): Prefer Freq 269.52, Prefer Cache 268.80, Auto 268.24

Darktable

Darktable is an open-source photography/workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 4.2.0 - Test: Masskrug - Acceleration: CPU-only (Seconds, Fewer Is Better): Prefer Freq 2.636, Auto 2.643, Prefer Cache 2.649

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6 (Seconds, Fewer Is Better): Prefer Freq 3.209, Auto 3.224, Prefer Cache 3.270

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better): Prefer Cache 2314, Prefer Freq 2321, Auto 2329

Darktable

Darktable is an open-source photography/workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 4.2.0 - Test: Server Room - Acceleration: CPU-only (Seconds, Fewer Is Better): Prefer Cache 2.326, Prefer Freq 2.331, Auto 2.346

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): Prefer Freq 0.634357, Auto 0.634455, Prefer Cache 0.635767

Darktable

Darktable is an open-source photography/workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 4.2.0 - Test: Boat - Acceleration: CPU-only (Seconds, Fewer Is Better): Prefer Freq 2.452, Auto 2.456, Prefer Cache 2.476

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Super Fast (Frames Per Second, More Is Better): Prefer Freq 230.80, Auto 230.49, Prefer Cache 229.99

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Summer Nature 1080p (FPS, More Is Better): Auto 1409.23, Prefer Freq 1407.91, Prefer Cache 1406.81

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, More Is Better): Auto 16.31, Prefer Freq 16.25, Prefer Cache 16.11

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, More Is Better): Prefer Freq 281.74, Prefer Cache 281.56, Auto 281.37

SVT-VP9

This is a test of SVT-VP9, the Intel Open Visual Cloud Scalable Video Technology CPU-based multi-threaded video encoder for the VP9 video format, with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
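
To repeat just these encoder runs locally rather than the full comparison, the individual test profiles can be invoked by name with the Phoronix Test Suite; the profile names below (svt-vp9, svt-hevc) are the expected upstream identifiers:

  # Run a single encoder test profile instead of the whole result file
  phoronix-test-suite benchmark svt-vp9
  phoronix-test-suite benchmark svt-hevc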

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  384.34  (SE +/- 1.24, N = 15; Min: 368.3 / Avg: 384.34 / Max: 388.32)
  Auto:         383.36  (SE +/- 1.44, N = 11; Min: 369.94 / Avg: 383.36 / Max: 387.45)
  Prefer Cache: 382.65  (SE +/- 1.35, N = 11; Min: 370.56 / Avg: 382.65 / Max: 386.66)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-HEVC

This is a test of SVT-HEVC, the Intel Open Visual Cloud Scalable Video Technology CPU-based multi-threaded video encoder for the HEVC / H.265 video format, with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  308.04  (SE +/- 0.30, N = 10; Min: 306.59 / Avg: 308.04 / Max: 309.6)
  Prefer Cache: 306.62  (SE +/- 0.72, N = 10; Min: 301.81 / Avg: 306.62 / Max: 309.12)
  Auto:         305.83  (SE +/- 0.39, N = 10; Min: 303.18 / Avg: 305.83 / Max: 307.22)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
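
The equivalent standalone run is a single cwebp call; the JPEG file name is a placeholder for any large source image:

  # Quality 100 encode of a 6000x4000 JPEG, matching this test's settings
  cwebp -q 100 sample_6000x4000.jpg -o sample_q100.webp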

WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s, More Is Better)
  Auto:         17.44  (SE +/- 0.14, N = 15; Min: 16.12 / Avg: 17.44 / Max: 17.71)
  Prefer Freq:  17.16  (SE +/- 0.11, N = 12; Min: 16.12 / Avg: 17.16 / Max: 17.71)
  Prefer Cache: 16.95  (SE +/- 0.18, N = 15; Min: 15.94 / Avg: 16.95 / Max: 17.7)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

SVT-VP9

This is a test of SVT-VP9, the Intel Open Visual Cloud Scalable Video Technology CPU-based multi-threaded video encoder for the VP9 video format, with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  450.47  (SE +/- 0.64, N = 12; Min: 445.8 / Avg: 450.47 / Max: 454.73)
  Auto:         449.83  (SE +/- 0.41, N = 12; Min: 447.46 / Avg: 449.83 / Max: 452.11)
  Prefer Cache: 446.12  (SE +/- 0.89, N = 12; Min: 440.25 / Avg: 446.12 / Max: 450.89)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  459.41  (SE +/- 0.43, N = 12; Min: 455.26 / Avg: 459.41 / Max: 461.28)
  Auto:         458.64  (SE +/- 0.50, N = 12; Min: 455.51 / Avg: 458.64 / Max: 461.95)
  Prefer Cache: 457.94  (SE +/- 0.64, N = 12; Min: 454.49 / Avg: 457.94 / Max: 461.05)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-HEVC

This is a test of SVT-HEVC, the Intel Open Visual Cloud Scalable Video Technology CPU-based multi-threaded video encoder for the HEVC / H.265 video format, with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Auto:         592.13  (SE +/- 1.15, N = 12; Min: 582.52 / Avg: 592.13 / Max: 598.21)
  Prefer Cache: 591.30  (SE +/- 1.13, N = 12; Min: 585.37 / Avg: 591.3 / Max: 600)
  Prefer Freq:  591.01  (SE +/- 1.08, N = 12; Min: 581.4 / Avg: 591.01 / Max: 594.65)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. This test runs SVT-AV1 as a CPU-based multi-threaded video encoder for the AV1 video format against a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
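
A standalone run of the presets benchmarked here can be sketched with SvtAv1EncApp; the Y4M input name is a placeholder:

  # Preset 13 is the fastest mode tested here; lower presets trade speed for
  # compression efficiency. The bitstream is written to an IVF container.
  SvtAv1EncApp --preset 13 -i Bosphorus_1920x1080.y4m -b bosphorus_p13.ivf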

SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  796.32  (SE +/- 4.32, N = 15; Min: 737.88 / Avg: 796.32 / Max: 807.5)
  Auto:         789.95  (SE +/- 4.42, N = 15; Min: 732.8 / Avg: 789.95 / Max: 806.84)
  Prefer Cache: 787.79  (SE +/- 4.51, N = 15; Min: 728.99 / Avg: 787.79 / Max: 802.21)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Cache: 773.41  (SE +/- 1.73, N = 13; Min: 765.28 / Avg: 773.41 / Max: 784.63)
  Auto:         772.34  (SE +/- 3.12, N = 13; Min: 744.03 / Avg: 772.34 / Max: 783.38)
  Prefer Freq:  771.61  (SE +/- 1.28, N = 13; Min: 764.14 / Avg: 771.61 / Max: 782.31)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s, More Is Better)
  Auto:         28.12  (SE +/- 0.07, N = 13; Min: 27.55 / Avg: 28.12 / Max: 28.3)
  Prefer Cache: 27.92  (SE +/- 0.02, N = 13; Min: 27.78 / Avg: 27.92 / Max: 28.04)
  Prefer Freq:  26.57  (SE +/- 0.30, N = 15; Min: 25.37 / Avg: 26.57 / Max: 28.3)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.
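
The CPU-only results can be approximated outside the test profile with darktable-cli by disabling OpenCL; this is a rough sketch and the RAW/output file names are placeholders:

  # Export one RAW file with OpenCL forced off so processing stays on the CPU
  time darktable-cli boat_sample.nef boat_export.jpg --core --disable-opencl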

Darktable 4.2.0 - Test: Server Rack - Acceleration: CPU-only (Seconds, Fewer Is Better)
  Auto:         0.152  (SE +/- 0.000, N = 14; Min: 0.15 / Avg: 0.15 / Max: 0.15)
  Prefer Freq:  0.152  (SE +/- 0.000, N = 14; Min: 0.15 / Avg: 0.15 / Max: 0.15)
  Prefer Cache: 0.156  (SE +/- 0.000, N = 14; Min: 0.16 / Avg: 0.16 / Max: 0.16)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
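
For the x86_64 RdRand case noted below, which produced no result under any of the three modes, the stressor can also be launched directly to check whether a given stress-ng build supports it; a minimal sketch:

  # One RDRAND worker per CPU for 60 seconds, with a bogo-ops summary at the end
  stress-ng --rdrand 0 --timeout 60 --metrics-brief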

Test: x86_64 RdRand

Auto: The test run did not produce a result. E: stress-ng: error: [982014] No stress workers invoked (one or more were unsupported)

Prefer Cache: The test run did not produce a result. E: stress-ng: error: [943105] No stress workers invoked (one or more were unsupported)

Prefer Freq: The test run did not produce a result. E: stress-ng: error: [939716] No stress workers invoked (one or more were unsupported)

415 Results Shown

GNU Radio:
  Hilbert Transform
  FM Deemphasis Filter
  IIR Filter
  FIR Filter
  Signal Source (Cosine)
  Five Back to Back FIR Filters
LAMMPS Molecular Dynamics Simulator
Timed Linux Kernel Compilation
Blender
ONNX Runtime:
  bertsquad-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
ASKAP:
  tConvolve MT - Degridding
  tConvolve MT - Gridding
ONNX Runtime:
  fcn-resnet101-11 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ArcFace ResNet-100 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  super-resolution-10 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
LuaRadio:
  Complex Phase
  Hilbert Transform
  FM Deemphasis Filter
  Five Back to Back FIR Filters
OpenVKL
OSPRay
OpenEMS
OpenVKL
LeelaChessZero:
  BLAS
  Eigen
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
OSPRay
ONNX Runtime:
  Faster R-CNN R-50-FPN-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
Himeno Benchmark
Timed LLVM Compilation:
  Unix Makefiles
  Ninja
Numpy Benchmark
ONNX Runtime:
  yolov4 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  GPT-2 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet50 v1-12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
BRL-CAD
OSPRay
ClickHouse:
  100M Rows Hits Dataset, Third Run
  100M Rows Hits Dataset, Second Run
  100M Rows Hits Dataset, First Run / Cold Cache
High Performance Conjugate Gradient
Selenium
Renaissance
libavif avifenc
Blender
Renaissance
NCNN:
  CPU - mnasnet
  CPU - FastestDet
  CPU - vision_transformer
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
Selenium
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
OSPRay Studio
Stress-NG
Cpuminer-Opt
WireGuard + Linux Networking Stack Stress Test
Gcrypt Library
OSPRay Studio
TNN
OSPRay Studio
Blender
OSPRay Studio
Stress-NG
OSPRay Studio:
  2 - 4K - 32 - Path Tracer
  1 - 4K - 32 - Path Tracer
Stress-NG
GPAW
Renaissance
KTX-Software toktx
OSPRay Studio
Stress-NG
GraphicsMagick
Radiance Benchmark
LZ4 Compression:
  9 - Decompression Speed
  9 - Compression Speed
PyHPC Benchmarks
Cpuminer-Opt
libavif avifenc
OSPRay Studio:
  3 - 1080p - 1 - Path Tracer
  2 - 1080p - 32 - Path Tracer
  2 - 1080p - 1 - Path Tracer
SVT-HEVC
OSPRay Studio:
  1 - 1080p - 1 - Path Tracer
  1 - 1080p - 32 - Path Tracer
OSPRay:
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/pathtracer/real_time
Appleseed
simdjson
Numenta Anomaly Benchmark
Sysbench
Stress-NG
Primesieve
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
simdjson
Zstd Compression:
  8 - Decompression Speed
  8 - Compression Speed
Stargate Digital Audio Workstation
Selenium
Timed MrBayes Analysis
Selenium
Chaos Group V-RAY
GROMACS
DeepSpeech
Xcompact3d Incompact3d
Blender
GraphicsMagick
Appleseed
simdjson:
  DistinctUserID
  TopTweet
  PartialTweets
Xmrig
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
  Person Detection FP32 - CPU:
    ms
    FPS
LuxCoreRender
GEGL
Zstd Compression:
  12 - Decompression Speed
  12 - Compression Speed
OpenVINO:
  Face Detection FP16 - CPU:
    ms
    FPS
LuxCoreRender
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
Zstd Compression:
  3, Long Mode - Decompression Speed
  3, Long Mode - Compression Speed
Renaissance
Zstd Compression:
  3 - Decompression Speed
  3 - Compression Speed
LuxCoreRender
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 256-QAM:
    UE Mb/s
    eNb Mb/s
Zstd Compression:
  8, Long Mode - Decompression Speed
  8, Long Mode - Compression Speed
IndigoBench:
  CPU - Supercar
  CPU - Bedroom
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
LuxCoreRender
TensorFlow Lite
Selenium
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
TensorFlow Lite:
  Mobilenet Float
  Mobilenet Quant
RocksDB
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
RocksDB
GraphicsMagick:
  Sharpen
  Enhanced
RocksDB:
  Update Rand
  Read While Writing
  Read Rand Write Rand
GraphicsMagick:
  Rotate
  Swirl
  HWB Color Space
RocksDB
VP9 libvpx Encoding
Tachyon
libjpeg-turbo tjbench
AOM AV1
Selenium
Appleseed
Renaissance
Dolfyn
Build2
Renaissance
oneDNN
Cpuminer-Opt
Node.js V8 Web Tooling Benchmark
Blender
Xmrig
LZ4 Compression:
  3 - Decompression Speed
  3 - Compression Speed
OpenFOAM:
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
Stress-NG
Timed Godot Game Engine Compilation
AOM AV1
Stargate Digital Audio Workstation
Numenta Anomaly Benchmark
Timed Linux Kernel Compilation
Renaissance
Stress-NG
Numenta Anomaly Benchmark
Stress-NG
x264
GEGL
NAMD
Radiance Benchmark
AOM AV1
RawTherapee
SQLite Speedtest
Selenium
VP9 libvpx Encoding
GEGL:
  Rotate 90 Degrees
  Color Enhance
ACES DGEMM
Pennant
VP9 libvpx Encoding
RNNoise
Stargate Digital Audio Workstation
Selenium
AOM AV1
SVT-AV1
SVT-VP9
srsRAN:
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
ASTC Encoder
oneDNN
Stargate Digital Audio Workstation
AOM AV1
Cpuminer-Opt:
  Triple SHA-256, Onecoin
  scrypt
Stress-NG:
  MEMFD
  Glibc C String Functions
  Matrix Math
  Atomic
Cpuminer-Opt
Stress-NG:
  Mutex
  Malloc
  SENDFILE
  Forking
  Crypto
  Vector Math
  MMAP
  CPU Stress
  Glibc Qsort Data Sorting
WebP Image Encode
Cpuminer-Opt:
  LBC, LBRY Credits
  Magi
  Skeincoin
  x25x
LZ4 Compression:
  1 - Decompression Speed
  1 - Compression Speed
Cpuminer-Opt
Kvazaar
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
WebP Image Encode
m-queens
Kvazaar
Renaissance
SVT-HEVC
AOM AV1
PyHPC Benchmarks
Renaissance
TNN
oneDNN
AOM AV1:
  Speed 4 Two-Pass - Bosphorus 1080p
  Speed 10 Realtime - Bosphorus 4K
GEGL
srsRAN
DaCapo Benchmark
Pennant
Xcompact3d Incompact3d
Stargate Digital Audio Workstation
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 64-QAM:
    UE Mb/s
    eNb Mb/s
KTX-Software toktx
Stargate Digital Audio Workstation
RocksDB
Embree
Timed FFmpeg Compilation
PyHPC Benchmarks
Stargate Digital Audio Workstation
Timed Mesa Compilation
oneDNN
Renaissance
Stargate Digital Audio Workstation
GEGL
VP9 libvpx Encoding
Timed MPlayer Compilation
GEGL
Embree
Numenta Anomaly Benchmark
Algebraic Multi-Grid Benchmark
Liquid-DSP:
  32 - 256 - 57
  16 - 256 - 57
  8 - 256 - 57
PHPBench
7-Zip Compression:
  Decompression Rating
  Compression Rating
Embree
QuantLib
ASTC Encoder
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 256-QAM:
    UE Mb/s
    eNb Mb/s
ASKAP
Embree
dav1d
Natron
KTX-Software toktx
Selenium
ASTC Encoder
GIMP
Kvazaar
x265
rav1e
Google Draco
Crafty
rav1e
PyHPC Benchmarks
SVT-AV1
AOM AV1
GIMP
Kvazaar
dav1d
libavif avifenc
TNN
rav1e
LAME MP3 Encoding
LuxCoreRender
PyBench
Sysbench
WebP Image Encode
SVT-AV1
GIMP
AOM AV1
GIMP
SVT-AV1
Google Draco
Node.js Express HTTP Load Test
Kvazaar
dav1d
Kvazaar:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Medium
GEGL
LULESH
Numenta Anomaly Benchmark
AOM AV1
Unpacking Firefox
ASTC Encoder
N-Queens
rav1e
GEGL
SVT-VP9
Primesieve
ASKAP:
  tConvolve OpenMP - Degridding
  tConvolve OpenMP - Gridding
AOM AV1
SVT-VP9
TNN
AOM AV1
SVT-HEVC
libavif avifenc
AOM AV1
x265
GNU Octave Benchmark
KTX-Software toktx
Unpacking The Linux Kernel
SVT-AV1
SVT-HEVC
Kvazaar
Numenta Anomaly Benchmark
SVT-AV1
x264
Darktable
libavif avifenc
DaCapo Benchmark
Darktable
oneDNN
Darktable
Kvazaar
dav1d
LAMMPS Molecular Dynamics Simulator
Kvazaar
SVT-VP9
SVT-HEVC
WebP Image Encode
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
SVT-HEVC
SVT-AV1:
  Preset 13 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
WebP Image Encode
Darktable