AMD Ryzen 9 7950X3D Modes On Linux

Ryzen 9 7950X3D benchmarks for a future article by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2302261-NE-7950X3DMO02

Tests in this result file, by category:

Audio Encoding: 2 tests
AV1: 5 tests
Bioinformatics: 2 tests
BLAS (Basic Linear Algebra Sub-Routines): 2 tests
C++ Boost: 4 tests
Web Browsers: 1 test
Chess Test Suite: 4 tests
Timed Code Compilation: 7 tests
C/C++ Compiler Tests: 21 tests
Compression: 3 tests
CPU Massive: 41 tests
Creator Workloads: 42 tests
Cryptocurrency Benchmarks, CPU Mining: 2 tests
Cryptography: 3 tests
Database Test Suite: 3 tests
Encoding: 13 tests
Fortran: 4 tests
Game Development: 6 tests
HPC - High Performance Computing: 28 tests
Imaging: 8 tests
Java: 2 tests
Common Kernel Benchmarks: 4 tests
Linear Algebra: 2 tests
Machine Learning: 11 tests
Molecular Dynamics: 8 tests
MPI Benchmarks: 8 tests
Multi-Core: 47 tests
Node.js + NPM: 2 tests
NVIDIA GPU Compute: 7 tests
Intel oneAPI: 6 tests
OpenMPI: 11 tests
Productivity: 3 tests
Programmer / Developer System Benchmarks: 14 tests
Python: 4 tests
Raytracing: 3 tests
Renderers: 10 tests
Scientific Computing: 14 tests
Software Defined Radio: 4 tests
Server: 7 tests
Server CPU Tests: 28 tests
Single-Threaded: 8 tests
Speech: 2 tests
Telephony: 2 tests
Texture Compression: 3 tests
Video Encoding: 11 tests
Common Workstation Benchmarks: 5 tests

Result Runs

Result Identifier   Date                Test Duration
Auto                February 17 2023    1 Day, 8 Hours, 38 Minutes
Prefer Cache        February 19 2023    1 Day, 9 Hours, 1 Minute
Prefer Freq         February 20 2023    1 Day, 12 Hours, 27 Minutes
Average run duration: 1 Day, 10 Hours, 2 Minutes



System Details

Processor: AMD Ryzen 9 7950X3D 16-Core @ 5.76GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG CROSSHAIR X670E HERO (9922 BIOS)
Chipset: AMD Device 14d8
Memory: 32GB
Disk: Western Digital WD_BLACK SN850X 1000GB + 2000GB
Graphics: AMD Radeon RX 7900 XTX 24GB (2304/1249MHz)
Audio: AMD Device ab30
Monitor: ASUS MG28U
Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 23.04
Kernel: 6.2.0-060200rc8daily20230213-generic (x86_64)
Desktop: GNOME Shell 43.2
Display Server: X Server 1.21.1.6
OpenGL: 4.6 Mesa 23.1.0-devel (git-efcb639 2023-02-13 lunar-oibaf-ppa) (LLVM 15.0.7 DRM 3.49)
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Notes:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk mount options: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Scaling Governor: amd-pstate performance (Boost: Enabled)
- CPU Microcode: 0xa601203
- OpenJDK Runtime Environment (build 17.0.6+10-Ubuntu-0ubuntu1)
- Python 3.11.1
- Security mitigations: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

[Result overview graph: relative performance (100% to 112%) of the Auto, Prefer Cache, and Prefer Freq runs across Google Draco, Himeno Benchmark, OSPRay Studio, PyBench, PHPBench, QuantLib, DeepSpeech, Stress-NG, srsRAN, Numpy Benchmark, simdjson, LAME MP3 Encoding, KTX-Software toktx, ACES DGEMM, Pennant, Radiance Benchmark, SQLite Speedtest, RNNoise, TensorFlow Lite, and oneDNN.]

[Performance-per-watt result overview graph: per-watt geometric means (100% to 120%) of the Auto, Prefer Cache, and Prefer Freq runs across Numpy Benchmark, PHPBench, QuantLib, simdjson, Himeno Benchmark, LZ4 Compression, Stress-NG, srsRAN, and dozens of further tests through GROMACS.]

[Flattened detailed results table: every per-test result for the Auto, Prefer Cache, and Prefer Freq runs, concatenated by the page export. The individual result graphs that follow present this data in readable form.]

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.
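The GNU Radio test profile streams samples through DSP blocks (FIR filters, IIR filters, signal sources) and reports throughput in MiB/s. As an illustrative sketch only, and not the GNU Radio benchmark itself, the measurement idea can be reproduced with a plain NumPy FIR filter; the tap count and sample count here are arbitrary choices:

```python
# Hypothetical sketch (not the GNU Radio benchmark): estimate the MiB/s
# throughput of a simple FIR filter over a block of float32 samples,
# similar in spirit to what the GNU Radio test profile reports.
import time
import numpy as np

def fir_throughput_mibs(n_samples=2_000_000, n_taps=64):
    taps = np.ones(n_taps, dtype=np.float32) / n_taps  # boxcar filter (arbitrary choice)
    x = np.random.randn(n_samples).astype(np.float32)
    start = time.perf_counter()
    y = np.convolve(x, taps, mode="same")              # the FIR filtering work
    elapsed = time.perf_counter() - start
    mib = x.nbytes / (1024 * 1024)                     # input data processed, in MiB
    return y, mib / elapsed

y, rate = fir_throughput_mibs()
print(f"FIR filter throughput: {rate:.1f} MiB/s")
```

A real SDR pipeline would chain many such blocks back to back, which is what the "Five Back to Back FIR Filters" test below stresses.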

GNU Radio 3.10.5.1 - Test: Hilbert Transform (MiB/s, more is better):

  Prefer Freq:  687.40  (SE +/- 3.07, N = 9; min 674.7 / max 697.5)
  Prefer Cache: 721.32  (SE +/- 3.09, N = 9; min 702.8 / max 735.1)
  Auto:         720.18  (SE +/- 3.80, N = 9; min 703.7 / max 738.0)

GNU Radio 3.10.5.1 - Test: FM Deemphasis Filter (MiB/s, more is better):

  Prefer Freq:  1136.87  (SE +/- 3.20, N = 9; min 1121.0 / max 1153.5)
  Prefer Cache: 1115.02  (SE +/- 4.98, N = 9; min 1096.5 / max 1139.1)
  Auto:         1119.71  (SE +/- 2.62, N = 9; min 1100.8 / max 1128.7)

GNU Radio 3.10.5.1 - Test: IIR Filter (MiB/s, more is better):

  Prefer Freq:  524.66  (SE +/- 0.92, N = 9; min 518.5 / max 527.1)
  Prefer Cache: 518.16  (SE +/- 1.65, N = 9; min 511.7 / max 526.2)
  Auto:         519.99  (SE +/- 2.24, N = 9; min 512.4 / max 534.1)

GNU Radio 3.10.5.1 - Test: FIR Filter (MiB/s, more is better):

  Prefer Freq:  1267.73  (SE +/- 2.90, N = 9; min 1255.6 / max 1282.4)
  Prefer Cache: 1390.50  (SE +/- 3.63, N = 9; min 1372.7 / max 1408.1)
  Auto:         1390.74  (SE +/- 2.89, N = 9; min 1379.7 / max 1404.2)

GNU Radio 3.10.5.1 - Test: Signal Source (Cosine) (MiB/s, more is better):

  Prefer Freq:  5008.90  (SE +/- 41.28, N = 9; min 4777.3 / max 5150.7)
  Prefer Cache: 4918.10  (SE +/- 45.51, N = 9; min 4753.7 / max 5121.9)
  Auto:         4813.03  (SE +/- 48.78, N = 9; min 4541.0 / max 5000.6)

GNU Radio 3.10.5.1 - Test: Five Back to Back FIR Filters (MiB/s, more is better):

  Prefer Freq:  1352.26  (SE +/- 17.40, N = 9; min 1232.5 / max 1395.5)
  Prefer Cache: 1404.44  (SE +/- 25.73, N = 9; min 1300.0 / max 1509.6)
  Auto:         1356.54  (SE +/- 22.39, N = 9; min 1269.3 / max 1506.6)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
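The core of any classical MD code is a force evaluation plus a time-integration step, repeated for every atom at every timestep. As a toy illustration only (not LAMMPS, and not its 20k-atom benchmark input), here is one velocity-Verlet step over a handful of Lennard-Jones particles in plain Python:

```python
# Toy illustration (not LAMMPS): one velocity-Verlet integration step for a
# few Lennard-Jones particles, the kind of work a classical molecular
# dynamics code performs for each of its atoms at every timestep.
import itertools

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces, O(N^2) for clarity (no neighbor lists)."""
    forces = [[0.0, 0.0, 0.0] for _ in pos]
    for i, j in itertools.combinations(range(len(pos)), 2):
        d = [pos[i][k] - pos[j][k] for k in range(3)]
        r2 = sum(c * c for c in d)
        inv6 = (sigma * sigma / r2) ** 3
        # -dU/dr of U = 4*eps*((s/r)^12 - (s/r)^6), divided by r, per component
        f_over_r = 24.0 * eps * (2.0 * inv6 * inv6 - inv6) / r2
        for k in range(3):
            forces[i][k] += f_over_r * d[k]
            forces[j][k] -= f_over_r * d[k]
    return forces

def velocity_verlet(pos, vel, dt=0.001, mass=1.0):
    f = lj_forces(pos)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += 0.5 * dt * f[i][k] / mass
            pos[i][k] += dt * vel[i][k]
    f_new = lj_forces(pos)          # forces at the updated positions
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += 0.5 * dt * f_new[i][k] / mass
    return pos, vel

# Four atoms on a small lattice, initially at rest
positions = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0], [0.0, 0.0, 1.5]]
velocities = [[0.0, 0.0, 0.0] for _ in positions]
positions, velocities = velocity_verlet(positions, velocities)
```

LAMMPS reports throughput in ns/day of simulated time, so the per-step cost of exactly this kind of loop (vectorized and parallelized) is what the benchmark measures.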

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, more is better):

  Prefer Freq:  16.39  (SE +/- 0.09, N = 3; min 16.21 / max 16.52)
  Prefer Cache: 16.33  (SE +/- 0.10, N = 3; min 16.13 / max 16.45)
  Auto:         16.34  (SE +/- 0.09, N = 3; min 16.19 / max 16.50)

1. (CXX) g++ options: -O3 -lm -ldl

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel, either in the default configuration (defconfig) for the architecture being tested, or alternatively with allmodconfig to build all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, fewer is better):

  Prefer Freq:  519.52  (SE +/- 0.32, N = 3; min 518.89 / max 519.88)
  Prefer Cache: 517.73  (SE +/- 0.45, N = 3; min 516.97 / max 518.51)
  Auto:         523.31  (SE +/- 0.26, N = 3; min 522.79 / max 523.60)

Blender

Blender is an open-source 3D creation and modeling software project. This test measures the performance of Blender's Cycles renderer with various sample files. GPU compute via NVIDIA OptiX and NVIDIA CUDA is currently supported, as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better):

  Prefer Freq:  488.91  (SE +/- 0.59, N = 3; min 488.32 / max 490.10)
  Prefer Cache: 489.98  (SE +/- 0.55, N = 3; min 489.16 / max 491.03)
  Auto:         490.19  (SE +/- 0.17, N = 3; min 489.98 / max 490.52)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost in ms, fewer is better):
    Prefer Freq  : 55.59  (SE +/- 2.10, N = 15; min 49.03 / max 68.82)
    Prefer Cache : 58.06  (SE +/- 2.61, N = 15; min 49.04 / max 70.11)
    Auto         : 59.64  (SE +/- 2.18, N = 15; min 49.39 / max 68.44)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Second, more is better):
    Prefer Freq  : 18.32  (SE +/- 0.63, N = 15; min 14.53 / max 20.39)
    Prefer Cache : 17.69  (SE +/- 0.75, N = 15; min 14.26 / max 20.39)
    Auto         : 17.10  (SE +/- 0.65, N = 15; min 14.61 / max 20.25)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt
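The ONNX Runtime results are reported both as per-inference latency (ms) and as inferences per second; for any single run the two are simple reciprocals. A sketch of the conversion:

```python
def ms_to_ips(latency_ms):
    """Per-inference latency in milliseconds -> inferences per second."""
    return 1000.0 / latency_ms

def ips_to_ms(ips):
    """Inferences per second -> per-inference latency in milliseconds."""
    return 1000.0 / ips

print(ms_to_ips(50.0))  # 20.0 inferences per second
print(ips_to_ms(20.0))  # 50.0 ms per inference
```

Note that because each table averages its own metric over the runs, the averaged latency and averaged throughput are not exact reciprocals of each other (e.g. 1000 / 55.59 ms is about 17.99, while the averaged throughput above is 18.32 inferences per second).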

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MT - Degridding (Million Grid Points Per Second, more is better):
    Prefer Freq  : 2959.18  (SE +/- 11.63, N = 15; min 2884.29 / max 3006.42)
    Prefer Cache : 2970.04  (SE +/- 9.57, N = 15; min 2877.47 / max 3006.42)
    Auto         : 2964.86  (SE +/- 12.65, N = 15; min 2860.08 / max 3004.3)
    Compiler: (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP 1.0 - Test: tConvolve MT - Gridding (Million Grid Points Per Second, more is better):
    Prefer Freq  : 2322.20  (SE +/- 18.48, N = 15; min 2248.67 / max 2498.59)
    Prefer Cache : 2530.47  (SE +/- 31.33, N = 15; min 2333.66 / max 2694.56)
    Auto         : 2371.38  (SE +/- 34.96, N = 15; min 2242.16 / max 2672.58)
    Compiler: (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost in ms, fewer is better):
    Prefer Freq  : 337.22  (SE +/- 19.31, N = 15; min 276.35 / max 469.59)
    Prefer Cache : 297.82  (SE +/- 15.84, N = 12; min 277.23 / max 470.93)
    Auto         : 346.46  (SE +/- 20.18, N = 15; min 276.66 / max 473.82)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Second, more is better):
    Prefer Freq  : 3.08702  (SE +/- 0.15247, N = 15; min 2.13 / max 3.62)
    Prefer Cache : 3.42808  (SE +/- 0.12057, N = 12; min 2.12 / max 3.61)
    Auto         : 3.01355  (SE +/- 0.15740, N = 15; min 2.11 / max 3.61)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost in ms, fewer is better):
    Prefer Freq  : 22.98  (SE +/- 0.52, N = 12; min 21.57 / max 26.12)
    Prefer Cache : 24.14  (SE +/- 0.98, N = 15; min 21.52 / max 30.7)
    Auto         : 23.65  (SE +/- 0.72, N = 15; min 21.59 / max 30.66)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Second, more is better):
    Prefer Freq  : 43.74  (SE +/- 0.93, N = 12; min 38.28 / max 46.35)
    Prefer Cache : 42.27  (SE +/- 1.48, N = 15; min 32.57 / max 46.46)
    Auto         : 42.76  (SE +/- 1.11, N = 15; min 32.61 / max 46.32)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost in ms, fewer is better):
    Prefer Freq  : 5.53549  (SE +/- 0.29232, N = 12; min 4.7 / max 7.11)
    Prefer Cache : 5.55023  (SE +/- 0.21808, N = 15; min 4.71 / max 6.72)
    Auto         : 5.52425  (SE +/- 0.23907, N = 15; min 4.69 / max 7.15)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Second, more is better):
    Prefer Freq  : 185.95  (SE +/- 9.16, N = 12; min 140.55 / max 212.97)
    Prefer Cache : 184.02  (SE +/- 7.02, N = 15; min 148.88 / max 212.43)
    Auto         : 185.54  (SE +/- 7.51, N = 15; min 139.91 / max 213.06)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Complex Phase (MiB/s, more is better):
    Prefer Freq  : 1071.4  (SE +/- 4.41, N = 7; min 1056.1 / max 1088.3)
    Prefer Cache : 1089.7  (SE +/- 4.40, N = 5; min 1080.7 / max 1102.1)
    Auto         : 1078.4  (SE +/- 4.07, N = 3; min 1071.9 / max 1085.9)

LuaRadio 0.9.1 - Test: Hilbert Transform (MiB/s, more is better):
    Prefer Freq  : 154.6  (SE +/- 0.35, N = 7; min 153.9 / max 156.5)
    Prefer Cache : 155.6  (SE +/- 1.10, N = 5; min 152.9 / max 158.4)
    Auto         : 153.7  (SE +/- 1.47, N = 3; min 150.8 / max 155.3)

LuaRadio 0.9.1 - Test: FM Deemphasis Filter (MiB/s, more is better):
    Prefer Freq  : 527.9  (SE +/- 2.13, N = 7; min 518.6 / max 534)
    Prefer Cache : 527.7  (SE +/- 3.13, N = 5; min 515.7 / max 532.2)
    Auto         : 527.5  (SE +/- 4.15, N = 3; min 519.4 / max 533)

LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters (MiB/s, more is better):
    Prefer Freq  : 2002.5  (SE +/- 17.45, N = 7; min 1926.2 / max 2045.3)
    Prefer Cache : 1945.7  (SE +/- 20.86, N = 5; min 1885.2 / max 2004.1)
    Auto         : 1959.1  (SE +/- 22.01, N = 3; min 1915.1 / max 1982.9)
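The FIR workload above streams samples through cascaded finite impulse response filters. A direct-form FIR filter can be sketched as follows (in Python rather than Lua, purely as an illustration of the algorithm being benchmarked):

```python
def fir_filter(taps, samples):
    """Direct-form FIR: y[n] = sum_k taps[k] * x[n-k], with zero initial history."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, tap in enumerate(taps):
            if n - k >= 0:
                acc += tap * samples[n - k]
        out.append(acc)
    return out

# a two-tap moving sum of adjacent samples
print(fir_filter([1.0, 1.0], [1.0, 2.0, 3.0]))  # [1.0, 3.0, 5.0]
```

LuaRadio's benchmark reports how many MiB of sample data per second such blocks can sustain; a production implementation would vectorize this inner loop.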

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, more is better):
    Prefer Freq  : 419  (SE +/- 1.86, N = 3; min 415 / max 421; overall range MIN 53 / MAX 5936)
    Prefer Cache : 407  (SE +/- 0.33, N = 3; min 407 / max 408; overall range MIN 53 / MAX 5165)
    Auto         : 407  (SE +/- 0.33, N = 3; min 407 / max 408; overall range MIN 53 / MAX 5149)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better):
    Prefer Freq  : 235.28  (SE +/- 0.84, N = 3; min 233.72 / max 236.58)
    Prefer Cache : 235.72  (SE +/- 0.12, N = 3; min 235.53 / max 235.95)
    Auto         : 235.95  (SE +/- 0.40, N = 3; min 235.18 / max 236.51)

OpenEMS

OpenEMS is a free and open electromagnetic field solver using the FDTD method. This test profile runs OpenEMS and pyEMS benchmark demos. Learn more via the OpenBenchmarking.org test page.

OpenEMS 0.0.35-86 - Test: pyEMS Coupler (MCells/s, more is better):
    Prefer Freq  : 60.92  (SE +/- 0.05, N = 3; min 60.82 / max 61)
    Prefer Cache : 62.01  (SE +/- 0.12, N = 3; min 61.81 / max 62.21)
    Auto         : 61.61  (SE +/- 0.35, N = 3; min 61.01 / max 62.21)
    Compiler: (CXX) g++ options: -O3 -rdynamic -ltinyxml -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -lexpat

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark Scalar (Items / Sec, more is better):
    Prefer Freq  : 199  (SE +/- 0.00, N = 3; min 199 / max 199; overall range MIN 18 / MAX 3749)
    Prefer Cache : 198  (SE +/- 0.33, N = 3; min 197 / max 198; overall range MIN 18 / MAX 3736)
    Auto         : 198  (SE +/- 0.67, N = 3; min 197 / max 199; overall range MIN 17 / MAX 3749)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine driven by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, more is better):
    Prefer Freq  : 1931  (SE +/- 2.96, N = 3; min 1927 / max 1937)
    Prefer Cache : 1926  (SE +/- 6.23, N = 3; min 1914 / max 1934)
    Auto         : 1928  (SE +/- 22.10, N = 3; min 1897 / max 1971)
    Compiler: (CXX) g++ options: -flto -pthread

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, more is better):
    Prefer Freq  : 1795  (SE +/- 14.57, N = 3; min 1767 / max 1816)
    Prefer Cache : 1810  (SE +/- 19.08, N = 3; min 1778 / max 1844)
    Auto         : 1825  (SE +/- 12.03, N = 3; min 1807 / max 1848)
    Compiler: (CXX) g++ options: -flto -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 19 - Decompression Speed (MB/s, more is better):
    Prefer Freq  : 1990.6  (SE +/- 22.67, N = 15; min 1925 / max 2190.1)
    Prefer Cache : 1949.1  (SE +/- 2.99, N = 15; min 1925.1 / max 1975.7)
    Auto         : 2004.9  (SE +/- 26.46, N = 15; min 1938.2 / max 2201.5)
    Compiler: (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Zstd Compression 1.5.4 - Compression Level: 19 - Compression Speed (MB/s, more is better):
    Prefer Freq  : 25.4  (SE +/- 0.35, N = 15; min 23 / max 26.9)
    Prefer Cache : 25.9  (SE +/- 0.28, N = 15; min 24 / max 27.1)
    Auto         : 26.0  (SE +/- 0.22, N = 15; min 24.2 / max 26.9)
    Compiler: (CC) gcc options: -O3 -pthread -lz -llzma -llz4
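The Zstd figures above are throughput in MB/s over silesia.tar: bytes of input processed divided by wall-clock time. The measurement itself can be sketched as follows; Python's standard library has no Zstandard binding, so stdlib zlib stands in here purely to illustrate the throughput calculation:

```python
import time
import zlib

def compress_throughput(data, level=9):
    """Return (compressed bytes, MB/s) for one compression pass."""
    start = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return packed, len(data) / elapsed / 1e6

payload = b"phoronix " * 200_000  # highly compressible stand-in for silesia.tar
packed, mbps = compress_throughput(payload)
assert zlib.decompress(packed) == payload  # round-trip sanity check
print(f"{len(payload)} -> {len(packed)} bytes at {mbps:.1f} MB/s")
```

A real harness, like the one behind these numbers, repeats the pass N times and reports the average with its standard error.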

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better):
    Prefer Freq  : 7.53018  (SE +/- 0.03588, N = 3; min 7.46 / max 7.57)
    Prefer Cache : 7.59681  (SE +/- 0.00464, N = 3; min 7.59 / max 7.61)
    Auto         : 7.56423  (SE +/- 0.00618, N = 3; min 7.55 / max 7.57)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inference Time Cost in ms, fewer is better):
    Prefer Freq  : 14.39  (SE +/- 0.43, N = 12; min 13.39 / max 17.57)
    Prefer Cache : 13.42  (SE +/- 0.06, N = 3; min 13.3 / max 13.49)
    Auto         : 14.70  (SE +/- 0.51, N = 15; min 13.27 / max 18)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences Per Second, more is better):
    Prefer Freq  : 70.09  (SE +/- 1.82, N = 12; min 56.91 / max 74.69)
    Prefer Cache : 74.51  (SE +/- 0.35, N = 3; min 74.11 / max 75.2)
    Auto         : 69.08  (SE +/- 2.15, N = 15; min 55.55 / max 75.38)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.

Himeno Benchmark 3.0 - Poisson Pressure Solver (MFLOPS, more is better):
    Prefer Freq  : 4638.29  (SE +/- 83.07, N = 15; min 4161.95 / max 5055.63)
    Prefer Cache : 5163.44  (SE +/- 120.69, N = 15; min 4533.32 / max 5888.94)
    Auto         : 5350.25  (SE +/- 140.79, N = 15; min 4682.76 / max 6576.31)
    Compiler: (CC) gcc options: -O3 -mavx2
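The point-Jacobi method named above repeatedly updates every grid point from its neighbors' previous values until the pressure field converges. A minimal 1-D Poisson sketch of the idea (a toy grid, not the Himeno problem sizes):

```python
def jacobi_poisson_1d(rhs, iterations, h=1.0):
    """Point-Jacobi sweeps for -u'' = rhs on a 1-D grid, u = 0 at both ends."""
    n = len(rhs)
    u = [0.0] * n
    for _ in range(iterations):
        new = [0.0] * n  # boundary points stay fixed at zero
        for i in range(1, n - 1):
            new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * rhs[i])
        u = new
    return u

# constant right-hand side; interior converges to the discrete parabola
u = jacobi_poisson_1d([1.0] * 9, iterations=500)
print(u[4])  # center value, ~8.0 for this grid
```

Himeno does the same style of sweep on a 3-D grid, which is why its MFLOPS rate is dominated by memory bandwidth and cache behavior.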

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Unix Makefiles (Seconds, fewer is better):
    Prefer Freq  : 276.21  (SE +/- 2.01, N = 3; min 273.48 / max 280.13)
    Prefer Cache : 282.59  (SE +/- 3.72, N = 3; min 275.48 / max 288.07)
    Auto         : 285.53  (SE +/- 0.94, N = 3; min 283.92 / max 287.16)

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, fewer is better):
    Prefer Freq  : 252.07  (SE +/- 0.26, N = 3; min 251.67 / max 252.56)
    Prefer Cache : 252.32  (SE +/- 0.39, N = 3; min 251.79 / max 253.09)
    Auto         : 252.37  (SE +/- 0.14, N = 3; min 252.1 / max 252.58)

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, more is better):
    Prefer Freq  : 885.61  (SE +/- 1.73, N = 3; min 883.71 / max 889.07)
    Prefer Cache : 958.48  (SE +/- 9.21, N = 6; min 914.45 / max 979.92)
    Auto         : 899.73  (SE +/- 6.01, N = 15; min 872.04 / max 977.41)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost in ms, fewer is better):
    Prefer Freq  : 82.10  (SE +/- 0.73, N = 3; min 80.65 / max 82.88)
    Prefer Cache : 94.13  (SE +/- 3.28, N = 15; min 82.6 / max 113.58)
    Auto         : 81.70  (SE +/- 0.82, N = 5; min 78.51 / max 82.84)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Second, more is better):
    Prefer Freq  : 12.18  (SE +/- 0.11, N = 3; min 12.06 / max 12.4)
    Prefer Cache : 10.79  (SE +/- 0.35, N = 15; min 8.8 / max 12.11)
    Auto         : 12.24  (SE +/- 0.13, N = 5; min 12.07 / max 12.74)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost in ms, fewer is better):
    Prefer Freq  : 5.46230  (SE +/- 0.08402, N = 15; min 5.22 / max 6.11)
    Prefer Cache : 5.29251  (SE +/- 0.01610, N = 3; min 5.27 / max 5.32)
    Auto         : 5.39574  (SE +/- 0.06583, N = 4; min 5.22 / max 5.54)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Second, more is better):
    Prefer Freq  : 183.57  (SE +/- 2.62, N = 15; min 163.72 / max 191.45)
    Prefer Cache : 188.88  (SE +/- 0.57, N = 3; min 187.79 / max 189.73)
    Auto         : 185.34  (SE +/- 2.28, N = 4; min 180.45 / max 191.43)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost in ms, fewer is better):
    Prefer Freq  : 2.00100  (SE +/- 0.00189, N = 3; min 2 / max 2)
    Prefer Cache : 2.00833  (SE +/- 0.00651, N = 3; min 2 / max 2.02)
    Auto         : 2.11966  (SE +/- 0.04888, N = 15; min 1.98 / max 2.53)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, more is better):
    Prefer Freq  : 499.70  (SE +/- 0.47, N = 3; min 498.89 / max 500.52)
    Prefer Cache : 497.88  (SE +/- 1.61, N = 3; min 494.71 / max 499.94)
    Auto         : 474.94  (SE +/- 9.98, N = 15; min 395.89 / max 503.77)
    Compiler: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.34 - VGR Performance Metric (more is better):
    Prefer Freq  : 395945
    Prefer Cache : 396745
    Auto         : 394766
    Compiler: (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/ao/real_time (Items Per Second, more is better):
    Prefer Freq  : 7.57299  (SE +/- 0.00336, N = 3; min 7.57 / max 7.58)
    Prefer Cache : 7.56189  (SE +/- 0.00154, N = 3; min 7.56 / max 7.56)
    Auto         : 7.57692  (SE +/- 0.00120, N = 3; min 7.57 / max 7.58)

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100-million-row web analytics dataset. The reported value is the query processing rate taken as the geometric mean across all of the separate queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean, more is better):
    Prefer Freq  : 323.36  (SE +/- 2.63, N = 3; min 319.52 / max 328.39; per-query MIN 18.55 / MAX 12000)
    Prefer Cache : 314.70  (SE +/- 0.72, N = 3; min 313.38 / max 315.84; per-query MIN 15.62 / MAX 10000)
    Auto         : 313.67  (SE +/- 3.39, N = 3; min 308.81 / max 320.19; per-query MIN 15.76 / MAX 10000)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean, more is better):
    Prefer Freq  : 321.27  (SE +/- 1.19, N = 3; min 319.52 / max 323.54; per-query MIN 19.84 / MAX 12000)
    Prefer Cache : 316.99  (SE +/- 2.19, N = 3; min 312.94 / max 320.48; per-query MIN 19.73 / MAX 10000)
    Auto         : 311.62  (SE +/- 1.92, N = 3; min 307.91 / max 314.31; per-query MIN 15.96 / MAX 12000)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better):
    Prefer Freq  : 281.13  (SE +/- 0.59, N = 3; min 279.97 / max 281.9; per-query MIN 12.85 / MAX 7500)
    Prefer Cache : 280.93  (SE +/- 1.64, N = 3; min 278.71 / max 284.14; per-query MIN 13.18 / MAX 8571.43)
    Auto         : 275.83  (SE +/- 1.37, N = 3; min 273.64 / max 278.35; per-query MIN 13.18 / MAX 7500)
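The ClickHouse aggregate is a geometric mean over the individual queries, which keeps a single extremely fast or slow query from dominating the reported figure. A sketch of the aggregation (the per-query values below are hypothetical, not from this run):

```python
import math

def geo_mean(values):
    """Geometric mean via logarithms, suitable for aggregating per-query rates."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# two hypothetical per-query queries-per-minute figures
print(geo_mean([100.0, 400.0]))  # geometric mean of 100 and 400 -> 200
```

Compare with the arithmetic mean of the same pair (250): the geometric mean weights ratios rather than absolute differences, which is why it is the standard aggregate for heterogeneous query suites.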

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better):
    Prefer Freq  : 8.33197  (SE +/- 0.00077, N = 3; min 8.33 / max 8.33)
    Prefer Cache : 8.32830  (SE +/- 0.01712, N = 3; min 8.3 / max 8.35)
    Auto         : 8.32885  (SE +/- 0.00551, N = 3; min 8.32 / max 8.34)
    Compiler: (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi
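The conjugate gradient iteration at the heart of HPCG can be sketched on a toy symmetric positive-definite system; a 1-D Laplacian stands in here for HPCG's large 3-D sparse operator:

```python
def conjugate_gradient(matvec, b, iterations=50, tol=1e-16):
    """Unpreconditioned CG for A x = b, where matvec(x) applies the SPD matrix A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual b - A x (x starts at zero)
    p = r[:]          # search direction
    rs = sum(v * v for v in r)
    for _ in range(iterations):
        ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def laplacian(x):
    """Tridiagonal (-1, 2, -1) stencil with zero Dirichlet boundaries."""
    n = len(x)
    return [2 * x[i] - (x[i - 1] if i > 0 else 0.0) - (x[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

x = conjugate_gradient(laplacian, [1.0] * 8)
print(x[3])  # center of the discrete parabola, ~10.0 for this right-hand side
```

HPCG's GFLOP/s figure measures how fast this sparse matrix-vector and vector-update pattern runs at scale, which is bandwidth-bound on most CPUs.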

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: PSPDFKit WASM - Browser: Google Chrome (Score, fewer is better):
    Prefer Freq  : 3124  (SE +/- 33.73, N = 15; min 2833 / max 3259)
    Prefer Cache : 3103  (SE +/- 42.76, N = 15; min 2760 / max 3254)
    Auto         : 3139  (SE +/- 42.95, N = 15; min 2743 / max 3266)
    Browser: chrome 110.0.5481.96

Renaissance

Renaissance is a suite of benchmarks designed to exercise the Java Virtual Machine (JVM), ranging from Apache Spark to a Twitter-like service to Scala and other workloads. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, fewer is better):
    Prefer Freq  : 7991.4  (SE +/- 112.03, N = 3; min 7874.73 / max 8215.37; iteration MIN 7874.73 / MAX 9046.54)
    Prefer Cache : 8103.6  (SE +/- 88.81, N = 3; min 7926.83 / max 8206.88; iteration MIN 7926.83 / MAX 8896.3)
    Auto         : 8108.5  (SE +/- 77.81, N = 3; min 7956.94 / max 8214.85; iteration MIN 7956.94 / MAX 8936.02)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
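The "Encoder Speed: 0" configuration below is avifenc's slowest, highest-effort setting. The conversion can be reproduced roughly as follows; the input filename is illustrative.

```shell
# Encode a JPEG to AVIF at speed 0 (slowest / highest effort), timing the run.
time avifenc --speed 0 input.jpg output.avif
```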

libavif avifenc 0.11 - Encoder Speed: 0 (Seconds, Fewer Is Better):
Prefer Freq: 71.80 (SE +/- 0.29, N = 3; Min: 71.24 / Avg: 71.8 / Max: 72.23)
Prefer Cache: 76.17 (SE +/- 0.17, N = 3; Min: 75.88 / Avg: 76.17 / Max: 76.48)
Auto: 73.67 (SE +/- 0.56, N = 15; Min: 71.57 / Avg: 73.67 / Max: 76.75)
1. (CXX) g++ options: -O3 -fPIC -lm

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.
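A CPU-only Cycles render like the ones timed below can be reproduced headlessly from the command line; the .blend filename here is illustrative, and arguments after "--" are read by the Cycles engine.

```shell
# Background (-b) render of frame 1 with the Cycles engine, forced to CPU.
blender -b pabellon_barcelona.blend -E CYCLES -f 1 -- --cycles-device CPU
```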

Blender 3.4 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better):
Prefer Freq: 167.94 (SE +/- 0.14, N = 3; Min: 167.76 / Avg: 167.94 / Max: 168.22)
Prefer Cache: 168.24 (SE +/- 0.10, N = 3; Min: 168.05 / Avg: 168.24 / Max: 168.38)
Auto: 168.20 (SE +/- 0.11, N = 3; Min: 168.03 / Avg: 168.2 / Max: 168.42)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better):
Prefer Freq: 7797.2 (SE +/- 94.41, N = 4; Min: 7519.9 / Avg: 7797.2 / Max: 7918.79; MIN: 5695.75 / MAX: 7918.79)
Prefer Cache: 7772.7 (SE +/- 79.65, N = 3; Min: 7624.29 / Avg: 7772.68 / Max: 7897.04; MIN: 5787.99 / MAX: 7897.04)
Auto: 7732.7 (SE +/- 58.84, N = 3; Min: 7653.14 / Avg: 7732.69 / Max: 7847.58; MIN: 5663.78 / MAX: 7847.58)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
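NCNN's bundled benchmark tool iterates over its reference model set (mnasnet, resnet50, vgg16, and the others graphed below). A typical CPU-only invocation looks roughly like the following; the thread count of 16 is an illustrative choice, not the setting used for these results.

```shell
# benchncnn positional arguments: loop count, thread count, powersave mode,
# GPU device (-1 = CPU only), cooling-down interval in seconds.
./benchncnn 10 16 0 -1 0
```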

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, Fewer Is Better):
Prefer Freq: 3.42 (SE +/- 0.01, N = 15; Min: 3.36 / Avg: 3.42 / Max: 3.48; MIN: 3.32 / MAX: 3.96)
Prefer Cache: 3.37 (SE +/- 0.02, N = 3; Min: 3.33 / Avg: 3.37 / Max: 3.39; MIN: 3.3 / MAX: 3.84)
Auto: 3.37 (SE +/- 0.01, N = 3; Min: 3.35 / Avg: 3.37 / Max: 3.38; MIN: 3.32 / MAX: 3.78)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, Fewer Is Better):
Prefer Freq: 4.78 (SE +/- 0.02, N = 13; Min: 4.68 / Avg: 4.78 / Max: 5; MIN: 4.62 / MAX: 10.44)
Prefer Cache: 4.75 (SE +/- 0.04, N = 3; Min: 4.67 / Avg: 4.75 / Max: 4.8; MIN: 4.62 / MAX: 5.91)
Auto: 4.73 (SE +/- 0.02, N = 3; Min: 4.69 / Avg: 4.73 / Max: 4.75; MIN: 4.62 / MAX: 9.76)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, Fewer Is Better):
Prefer Freq: 80.55 (SE +/- 0.23, N = 15; Min: 79.88 / Avg: 80.55 / Max: 83.7; MIN: 79.58 / MAX: 94.08)
Prefer Cache: 82.01 (SE +/- 0.91, N = 3; Min: 80.23 / Avg: 82.01 / Max: 83.25; MIN: 79.65 / MAX: 95.45)
Auto: 81.52 (SE +/- 1.09, N = 3; Min: 80.36 / Avg: 81.52 / Max: 83.7; MIN: 80.09 / MAX: 86.32)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better):
Prefer Freq: 12.30 (SE +/- 0.06, N = 15; Min: 11.97 / Avg: 12.3 / Max: 12.68; MIN: 11.79 / MAX: 39.44)
Prefer Cache: 12.18 (SE +/- 0.16, N = 3; Min: 11.88 / Avg: 12.18 / Max: 12.42; MIN: 11.67 / MAX: 12.88)
Auto: 12.12 (SE +/- 0.05, N = 3; Min: 12.01 / Avg: 12.12 / Max: 12.19; MIN: 11.83 / MAX: 17.6)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better):
Prefer Freq: 11.87 (SE +/- 0.03, N = 15; Min: 11.72 / Avg: 11.87 / Max: 12.16; MIN: 11.61 / MAX: 18.31)
Prefer Cache: 11.70 (SE +/- 0.02, N = 3; Min: 11.66 / Avg: 11.7 / Max: 11.73; MIN: 11.52 / MAX: 17.65)
Auto: 11.77 (SE +/- 0.02, N = 3; Min: 11.73 / Avg: 11.77 / Max: 11.8; MIN: 11.6 / MAX: 12.48)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better):
Prefer Freq: 14.41 (SE +/- 0.09, N = 15; Min: 14.15 / Avg: 14.41 / Max: 15.55; MIN: 14.02 / MAX: 20.18)
Prefer Cache: 14.35 (SE +/- 0.02, N = 3; Min: 14.31 / Avg: 14.35 / Max: 14.39; MIN: 14.11 / MAX: 14.93)
Auto: 14.32 (SE +/- 0.04, N = 3; Min: 14.25 / Avg: 14.32 / Max: 14.37; MIN: 14.06 / MAX: 19.84)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, Fewer Is Better):
Prefer Freq: 11.61 (SE +/- 0.05, N = 15; Min: 11.34 / Avg: 11.61 / Max: 12.03; MIN: 11.23 / MAX: 17.95)
Prefer Cache: 11.49 (SE +/- 0.03, N = 3; Min: 11.43 / Avg: 11.49 / Max: 11.55; MIN: 11.32 / MAX: 12.4)
Auto: 11.60 (SE +/- 0.02, N = 3; Min: 11.58 / Avg: 11.6 / Max: 11.64; MIN: 11.46 / MAX: 17.5)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: alexnet (ms, Fewer Is Better):
Prefer Freq: 4.74 (SE +/- 0.01, N = 14; Min: 4.67 / Avg: 4.74 / Max: 4.82; MIN: 4.6 / MAX: 10.72)
Prefer Cache: 4.75 (SE +/- 0.12, N = 3; Min: 4.51 / Avg: 4.75 / Max: 4.87; MIN: 4.42 / MAX: 5.49)
Auto: 4.82 (SE +/- 0.13, N = 3; Min: 4.56 / Avg: 4.82 / Max: 4.96; MIN: 4.45 / MAX: 5.54)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: resnet18 (ms, Fewer Is Better):
Prefer Freq: 7.02 (SE +/- 0.01, N = 15; Min: 6.93 / Avg: 7.02 / Max: 7.09; MIN: 6.83 / MAX: 10.65)
Prefer Cache: 6.90 (SE +/- 0.01, N = 3; Min: 6.89 / Avg: 6.9 / Max: 6.91; MIN: 6.76 / MAX: 7.67)
Auto: 7.03 (SE +/- 0.01, N = 3; Min: 7.01 / Avg: 7.03 / Max: 7.04; MIN: 6.9 / MAX: 7.92)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: vgg16 (ms, Fewer Is Better):
Prefer Freq: 24.72 (SE +/- 0.08, N = 15; Min: 24.48 / Avg: 24.72 / Max: 25.52; MIN: 24.22 / MAX: 59.52)
Prefer Cache: 24.57 (SE +/- 0.03, N = 3; Min: 24.52 / Avg: 24.57 / Max: 24.6; MIN: 24.29 / MAX: 29.41)
Auto: 24.51 (SE +/- 0.03, N = 3; Min: 24.45 / Avg: 24.51 / Max: 24.57; MIN: 24.28 / MAX: 37.93)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: googlenet (ms, Fewer Is Better):
Prefer Freq: 8.61 (SE +/- 0.02, N = 15; Min: 8.46 / Avg: 8.61 / Max: 8.73; MIN: 8.31 / MAX: 9.47)
Prefer Cache: 8.47 (SE +/- 0.04, N = 3; Min: 8.41 / Avg: 8.47 / Max: 8.54; MIN: 8.29 / MAX: 9.39)
Auto: 8.48 (SE +/- 0.02, N = 3; Min: 8.45 / Avg: 8.48 / Max: 8.52; MIN: 8.36 / MAX: 9.39)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: blazeface (ms, Fewer Is Better):
Prefer Freq: 1.65 (SE +/- 0.01, N = 15; Min: 1.61 / Avg: 1.65 / Max: 1.71; MIN: 1.58 / MAX: 2.22)
Prefer Cache: 1.63 (SE +/- 0.01, N = 3; Min: 1.6 / Avg: 1.63 / Max: 1.65; MIN: 1.57 / MAX: 1.98)
Auto: 1.62 (SE +/- 0.01, N = 3; Min: 1.61 / Avg: 1.62 / Max: 1.64; MIN: 1.59 / MAX: 1.99)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better):
Prefer Freq: 4.76 (SE +/- 0.01, N = 15; Min: 4.68 / Avg: 4.76 / Max: 4.87; MIN: 4.62 / MAX: 5.4)
Prefer Cache: 4.66 (SE +/- 0.03, N = 3; Min: 4.6 / Avg: 4.66 / Max: 4.71; MIN: 4.56 / MAX: 10.75)
Auto: 4.64 (SE +/- 0.00, N = 3; Min: 4.63 / Avg: 4.64 / Max: 4.64; MIN: 4.57 / MAX: 5.09)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better):
Prefer Freq: 3.88 (SE +/- 0.01, N = 13; Min: 3.84 / Avg: 3.88 / Max: 3.98; MIN: 3.75 / MAX: 9.76)
Prefer Cache: 3.84 (SE +/- 0.02, N = 3; Min: 3.8 / Avg: 3.84 / Max: 3.87; MIN: 3.73 / MAX: 4.32)
Auto: 3.84 (SE +/- 0.01, N = 3; Min: 3.83 / Avg: 3.84 / Max: 3.86; MIN: 3.76 / MAX: 4.18)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better):
Prefer Freq: 3.38 (SE +/- 0.01, N = 15; Min: 3.31 / Avg: 3.38 / Max: 3.45; MIN: 3.27 / MAX: 3.9)
Prefer Cache: 3.33 (SE +/- 0.02, N = 3; Min: 3.29 / Avg: 3.33 / Max: 3.36; MIN: 3.24 / MAX: 3.96)
Auto: 3.31 (SE +/- 0.01, N = 3; Min: 3.3 / Avg: 3.31 / Max: 3.33; MIN: 3.25 / MAX: 3.89)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better):
Prefer Freq: 3.88 (SE +/- 0.01, N = 15; Min: 3.83 / Avg: 3.88 / Max: 3.93; MIN: 3.78 / MAX: 7.82)
Prefer Cache: 3.82 (SE +/- 0.02, N = 3; Min: 3.78 / Avg: 3.82 / Max: 3.85; MIN: 3.74 / MAX: 4.27)
Auto: 3.81 (SE +/- 0.01, N = 3; Min: 3.79 / Avg: 3.81 / Max: 3.82; MIN: 3.75 / MAX: 4.33)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, Fewer Is Better):
Prefer Freq: 9.16 (SE +/- 0.05, N = 15; Min: 9.01 / Avg: 9.16 / Max: 9.84; MIN: 8.92 / MAX: 10.19)
Prefer Cache: 8.92 (SE +/- 0.01, N = 3; Min: 8.89 / Avg: 8.92 / Max: 8.93; MIN: 8.81 / MAX: 14.7)
Auto: 8.91 (SE +/- 0.03, N = 3; Min: 8.86 / Avg: 8.91 / Max: 8.95; MIN: 8.81 / MAX: 9.59)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Octane - Browser: Google Chrome (Geometric Mean, More Is Better):
Prefer Freq: 95543 (SE +/- 958.51, N = 6; Min: 91363 / Avg: 95543 / Max: 97911)
Prefer Cache: 95644 (SE +/- 916.53, N = 15; Min: 90881 / Avg: 95643.93 / Max: 100143)
Auto: 95099 (SE +/- 950.48, N = 15; Min: 89650 / Avg: 95099.07 / Max: 101304)
1. chrome 110.0.5481.96

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
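The level-19 long-mode configuration graphed below corresponds roughly to the following command-line usage; the output filenames are illustrative.

```shell
# Level 19 with long-distance matching, all cores (-T0), keep the input (-k).
zstd -19 --long -T0 -k silesia.tar -o silesia.tar.zst
# Decompression; --long is passed again in case the match window exceeds
# the decoder's default limit.
zstd -d --long -k silesia.tar.zst -o silesia.out.tar
```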

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better):
Prefer Freq: 1835.1 (SE +/- 10.27, N = 3; Min: 1815.8 / Avg: 1835.13 / Max: 1850.8)
Prefer Cache: 1847.0 (SE +/- 17.36, N = 15; Min: 1746.5 / Avg: 1846.95 / Max: 2073.2)
Auto: 1968.0 (SE +/- 53.52, N = 3; Min: 1861 / Avg: 1968.03 / Max: 2021.7)
1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better):
Prefer Freq: 14.9 (SE +/- 0.06, N = 3; Min: 14.8 / Avg: 14.9 / Max: 15)
Prefer Cache: 14.6 (SE +/- 0.15, N = 15; Min: 12.9 / Avg: 14.59 / Max: 15)
Auto: 14.9 (SE +/- 0.00, N = 3; Min: 14.9 / Avg: 14.9 / Max: 14.9)
1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better):
Prefer Freq: 152148 (SE +/- 1166.54, N = 3; Min: 149844 / Avg: 152148 / Max: 153618)
Prefer Cache: 153633 (SE +/- 538.77, N = 3; Min: 152762 / Avg: 153633.33 / Max: 154618)
Auto: 150068 (SE +/- 142.59, N = 3; Min: 149841 / Avg: 150068 / Max: 150331)
1. (CXX) g++ options: -O3 -lm -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
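Each Stress-NG result below exercises a single named stressor; a representative invocation for the Socket Activity test might look like the following (the 30-second timeout is an illustrative choice, not the setting used for these results).

```shell
# One --sock stressor instance per online CPU (0 = auto-detect), run for
# 30 seconds, then print bogo-ops/s metrics.
stress-ng --sock 0 --timeout 30s --metrics-brief
```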

Stress-NG 0.14.06 - Test: Socket Activity (Bogo Ops/s, More Is Better):
Prefer Freq: 35444.42 (SE +/- 393.19, N = 15; Min: 33825.18 / Avg: 35444.42 / Max: 38803.83)
Prefer Cache: 35308.63 (SE +/- 279.96, N = 15; Min: 33727.42 / Avg: 35308.63 / Max: 37543.66)
Auto: 34995.78 (SE +/- 433.26, N = 15; Min: 33412.07 / Avg: 34995.78 / Max: 39339.92)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
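Cpuminer-Opt can hash in a standalone benchmark mode without connecting to a mining pool. A sketch for the Myriad-Groestl result below; "myr-gr" is the usual cpuminer algorithm name for Myriad-Groestl, but that name is an assumption worth checking against `cpuminer --help` for a given build.

```shell
# Benchmark the Myriad-Groestl algorithm locally (no pool connection).
cpuminer -a myr-gr --benchmark
```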

Cpuminer-Opt 3.20.3 - Algorithm: Myriad-Groestl (kH/s, More Is Better):
Prefer Freq: 58189 (SE +/- 649.11, N = 15; Min: 51510 / Avg: 58188.67 / Max: 61010)
Prefer Cache: 58791 (SE +/- 588.74, N = 15; Min: 53740 / Avg: 58791.33 / Max: 61900)
Auto: 59358 (SE +/- 450.34, N = 15; Min: 56150 / Avg: 59358 / Max: 62380)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and a Linux networking stack stress test. The test runs on the local host but does require root permissions. It creates three namespaces: ns0 has a loopback device, while ns1 and ns2 each have a WireGuard device, and those two WireGuard devices send traffic through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time -- a pretty CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.
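The namespace topology described above can be sketched with iproute2 and wireguard-tools roughly as follows (requires root); the namespace and device names mirror the description, but keys, peers, ports, and traffic generation are omitted, so this is an illustrative outline rather than the test's actual script.

```shell
# Three namespaces: ns0 carries the loopback path, ns1/ns2 hold the tunnels.
ip netns add ns0
ip netns add ns1
ip netns add ns2
# Create the WireGuard devices inside ns0 so their encrypted UDP traffic
# traverses ns0's loopback, then move each device into its own namespace.
ip -n ns0 link add wg1 type wireguard
ip -n ns0 link add wg2 type wireguard
ip -n ns0 link set wg1 netns ns1
ip -n ns0 link set wg2 netns ns2
ip -n ns1 addr add 10.0.0.1/24 dev wg1
ip -n ns2 addr add 10.0.0.2/24 dev wg2
# After `wg set` key/peer configuration and bringing the links up, traffic
# between ns1 and ns2 is encrypted, looped through ns0, and decrypted --
# exercising both crypto throughput and the scheduler.
```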

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better):
Prefer Freq: 148.63 (SE +/- 1.81, N = 3; Min: 146.68 / Avg: 148.63 / Max: 152.24)
Prefer Cache: 147.19 (SE +/- 1.26, N = 3; Min: 145.52 / Avg: 147.19 / Max: 149.66)
Auto: 149.04 (SE +/- 0.72, N = 3; Min: 148.04 / Avg: 149.04 / Max: 150.43)

Gcrypt Library

Libgcrypt is a general-purpose cryptographic library developed as part of the GnuPG project. This test uses libgcrypt's integrated benchmark, measuring the time to run the benchmark command with the cipher/MAC/hash repetition count set to 50, as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.

Gcrypt Library 1.9 (Seconds, Fewer Is Better):
Prefer Freq: 143.80 (SE +/- 1.17, N = 3; Min: 142.4 / Avg: 143.79 / Max: 146.12)
Prefer Cache: 143.14 (SE +/- 0.78, N = 3; Min: 141.61 / Avg: 143.14 / Max: 144.15)
Auto: 145.31 (SE +/- 0.50, N = 3; Min: 144.55 / Avg: 145.31 / Max: 146.24)
1. (CC) gcc options: -O2 -fvisibility=hidden

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better):
Prefer Freq: 4629 (SE +/- 31.34, N = 3; Min: 4566 / Avg: 4628.67 / Max: 4661)
Prefer Cache: 4681 (SE +/- 4.91, N = 3; Min: 4671 / Avg: 4680.67 / Max: 4687)
Auto: 4581 (SE +/- 3.46, N = 3; Min: 4575 / Avg: 4581 / Max: 4587)
1. (CXX) g++ options: -O3 -lm -ldl

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, Fewer Is Better):
Prefer Freq: 2028.84 (SE +/- 6.39, N = 3; Min: 2020.55 / Avg: 2028.84 / Max: 2041.4; MIN: 1972.23 / MAX: 2118.4)
Prefer Cache: 2027.28 (SE +/- 2.74, N = 3; Min: 2021.86 / Avg: 2027.28 / Max: 2030.67; MIN: 1978.04 / MAX: 2109.64)
Auto: 2035.16 (SE +/- 5.47, N = 3; Min: 2025.61 / Avg: 2035.16 / Max: 2044.56; MIN: 1988.46 / MAX: 2119.14)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better):
Prefer Freq: 3988 (SE +/- 7.55, N = 3; Min: 3973 / Avg: 3988 / Max: 3997)
Prefer Cache: 3990 (SE +/- 8.89, N = 3; Min: 3977 / Avg: 3990 / Max: 4007)
Auto: 3988 (SE +/- 3.51, N = 3; Min: 3981 / Avg: 3988 / Max: 3992)
1. (CXX) g++ options: -O3 -lm -ldl

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better):
Prefer Freq: 138.43 (SE +/- 0.13, N = 3; Min: 138.19 / Avg: 138.43 / Max: 138.63)
Prefer Cache: 138.45 (SE +/- 0.10, N = 3; Min: 138.26 / Avg: 138.45 / Max: 138.56)
Auto: 138.42 (SE +/- 0.07, N = 3; Min: 138.28 / Avg: 138.42 / Max: 138.53)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better):
Prefer Freq: 3946 (SE +/- 0.58, N = 3; Min: 3945 / Avg: 3946 / Max: 3947)
Prefer Cache: 3948 (SE +/- 4.58, N = 3; Min: 3939 / Avg: 3948 / Max: 3954)
Auto: 3882 (SE +/- 35.92, N = 3; Min: 3833 / Avg: 3882 / Max: 3952)
1. (CXX) g++ options: -O3 -lm -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: IO_uring (Bogo Ops/s, More Is Better):
Prefer Freq: 29496.76 (SE +/- 283.23, N = 12; Min: 26897.82 / Avg: 29496.76 / Max: 31276.89)
Prefer Cache: 31486.82 (SE +/- 592.21, N = 14; Min: 26166.76 / Avg: 31486.82 / Max: 33875.27)
Auto: 32571.40 (SE +/- 645.93, N = 15; Min: 28136.72 / Avg: 32571.4 / Max: 38716.83)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better):
Prefer Freq: 131199 (SE +/- 208.57, N = 3; Min: 130799 / Avg: 131199.33 / Max: 131501)
Prefer Cache: 131552 (SE +/- 123.29, N = 3; Min: 131322 / Avg: 131552 / Max: 131744)
Auto: 127948 (SE +/- 113.14, N = 3; Min: 127787 / Avg: 127947.67 / Max: 128166)
1. (CXX) g++ options: -O3 -lm -ldl

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better):
Prefer Freq: 129900 (SE +/- 180.62, N = 3; Min: 129560 / Avg: 129899.67 / Max: 130176)
Prefer Cache: 129677 (SE +/- 114.33, N = 3; Min: 129449 / Avg: 129676.67 / Max: 129809)
Auto: 126604 (SE +/- 180.00, N = 3; Min: 126422 / Avg: 126604 / Max: 126964)
1. (CXX) g++ options: -O3 -lm -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: NUMA (Bogo Ops/s, More Is Better):
Prefer Freq: 581.60 (SE +/- 4.86, N = 13; Min: 572.26 / Avg: 581.6 / Max: 639.3)
Prefer Cache: 576.94 (SE +/- 3.65, N = 13; Min: 570.19 / Avg: 576.94 / Max: 620.33)
Auto: 578.01 (SE +/- 3.50, N = 14; Min: 570.49 / Avg: 578.01 / Max: 622.67)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1 - Input: Carbon Nanotube (Seconds, Fewer Is Better):
Prefer Freq: 126.82 (SE +/- 0.10, N = 3; Min: 126.69 / Avg: 126.82 / Max: 127.03)
Prefer Cache: 126.81 (SE +/- 0.14, N = 3; Min: 126.55 / Avg: 126.81 / Max: 127.02)
Auto: 126.88 (SE +/- 0.14, N = 3; Min: 126.64 / Avg: 126.88 / Max: 127.14)
1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty (ms, Fewer Is Better):
Prefer Freq: 470.7 (SE +/- 7.93, N = 15; Min: 434.4 / Avg: 470.72 / Max: 522.64; MIN: 360.24 / MAX: 753.62)
Prefer Cache: 521.1 (SE +/- 1.94, N = 3; Min: 519.07 / Avg: 521.13 / Max: 525.01; MIN: 368.91 / MAX: 746.18)
Auto: 475.2 (SE +/- 9.27, N = 15; Min: 427.47 / Avg: 475.19 / Max: 525.69; MIN: 358.08 / MAX: 752.1)

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides "toktx" for creating image textures in the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.
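The "UASTC 4 + Zstd Compression 19" setting below corresponds roughly to the following toktx invocation; the input/output filenames are illustrative.

```shell
# UASTC quality level 4 with Zstd supercompression level 19; --t2 selects
# KTX 2.0 container output (toktx takes the output file before the input).
toktx --t2 --uastc 4 --zcmp 19 output.ktx2 input.png
```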

KTX-Software toktx 4.0 - Settings: UASTC 4 + Zstd Compression 19 (Seconds, Fewer Is Better):
Prefer Freq: 126.56 (SE +/- 0.44, N = 3; Min: 125.97 / Avg: 126.56 / Max: 127.42)
Prefer Cache: 127.45 (SE +/- 0.12, N = 3; Min: 127.23 / Avg: 127.44 / Max: 127.64)
Auto: 127.26 (SE +/- 0.49, N = 3; Min: 126.28 / Avg: 127.26 / Max: 127.75)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better):
Prefer Freq: 37588 (SE +/- 63.89, N = 3; Min: 37465 / Avg: 37587.67 / Max: 37680)
Prefer Cache: 37690 (SE +/- 97.48, N = 3; Min: 37579 / Avg: 37689.67 / Max: 37884)
Auto: 37535 (SE +/- 49.58, N = 3; Min: 37459 / Avg: 37534.67 / Max: 37628)
1. (CXX) g++ options: -O3 -lm -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: CPU Cache (Bogo Ops/s, More Is Better):
Prefer Freq: 31.91 (SE +/- 0.31, N = 6; Min: 30.82 / Avg: 31.91 / Max: 32.59)
Prefer Cache: 181.06 (SE +/- 1.74, N = 15; Min: 171.79 / Avg: 181.06 / Max: 198.11)
Auto: 183.32 (SE +/- 3.49, N = 15; Min: 168.28 / Avg: 183.32 / Max: 224.44)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better):
Prefer Freq: 605 (SE +/- 5.79, N = 12; Min: 542 / Avg: 604.67 / Max: 616)
Prefer Cache: 612 (SE +/- 0.88, N = 3; Min: 611 / Avg: 612.33 / Max: 614)
Auto: 611 (SE +/- 0.33, N = 3; Min: 610 / Avg: 610.67 / Max: 611)
1. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

Radiance Benchmark

This is a benchmark of NREL Radiance, an open-source synthetic imaging system developed by Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.

Radiance Benchmark 5.0, Test: Serial (Seconds, fewer is better):
  Prefer Freq: 366.42
  Prefer Cache: 360.31
  Auto: 331.61

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
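The MB/s figures that follow are simply bytes processed divided by wall-clock time for each direction. A minimal sketch of that measurement, using Python's stdlib zlib as a stand-in codec (the actual test links against liblz4 and uses a far larger Ubuntu ISO input; the synthetic sample below is illustrative only):

```python
import os
import time
import zlib  # stand-in codec; the real test uses liblz4


def codec_speeds(data: bytes, level: int = 9) -> tuple[float, float]:
    """Return (compression, decompression) throughput in MB/s for one pass."""
    t0 = time.perf_counter()
    packed = zlib.compress(data, level)
    t1 = time.perf_counter()
    restored = zlib.decompress(packed)
    t2 = time.perf_counter()
    assert restored == data, "round trip must be lossless"
    mb = len(data) / 1e6
    return mb / (t1 - t0), mb / (t2 - t1)


# Synthetic compressible input standing in for the ISO sample file.
sample = os.urandom(1024) * 2048  # ~2 MB of repeated blocks
c_mbps, d_mbps = codec_speeds(sample)
print(f"compress: {c_mbps:.1f} MB/s, decompress: {d_mbps:.1f} MB/s")
```

Decompression runs far faster than level-9 compression, which is why the two directions are graphed separately below.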

LZ4 Compression 1.9.3, Compression Level: 9 - Decompression Speed (MB/s, more is better):
  Prefer Freq: 18587.45 (SE +/- 123.72, N = 4; min 18362 / max 18887.3)
  Prefer Cache: 18764.47 (SE +/- 33.45, N = 15; min 18501.8 / max 18861.9)
  Auto: 18495.27 (SE +/- 23.00, N = 3; min 18456.5 / max 18536.1)
  1. (CC) gcc options: -O3

LZ4 Compression 1.9.3, Compression Level: 9 - Compression Speed (MB/s, more is better):
  Prefer Freq: 72.84 (SE +/- 0.81, N = 4; min 70.73 / max 74.48)
  Prefer Cache: 73.61 (SE +/- 0.53, N = 15; min 71.96 / max 79.78)
  Auto: 72.10 (SE +/- 0.40, N = 3; min 71.62 / max 72.89)
  1. (CC) gcc options: -O3

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0, Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, fewer is better):
  Prefer Freq: 1.094 (SE +/- 0.001, N = 3; min 1.09 / max 1.10)
  Prefer Cache: 1.067 (SE +/- 0.008, N = 12; min 0.98 / max 1.09)
  Auto: 1.085 (SE +/- 0.005, N = 3; min 1.08 / max 1.09)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the processor's potential mining performance across a variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3, Algorithm: Garlicoin (kH/s, more is better):
  Prefer Freq: 3869.37 (SE +/- 137.18, N = 15; min 3119.44 / max 4560.72)
  Prefer Cache: 3781.07 (SE +/- 45.48, N = 15; min 3375.88 / max 4063.33)
  Auto: 3841.47 (SE +/- 55.34, N = 3; min 3782.16 / max 3952.05)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 2 (Seconds, fewer is better):
  Prefer Freq: 35.94 (SE +/- 0.07, N = 3; min 35.84 / max 36.08)
  Prefer Cache: 37.12 (SE +/- 0.31, N = 15; min 35.59 / max 38.50)
  Auto: 36.23 (SE +/- 0.32, N = 8; min 35.64 / max 38.41)
  1. (CXX) g++ options: -O3 -fPIC -lm

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better):
  Prefer Freq: 1178 (SE +/- 1.53, N = 3; min 1175 / max 1180)
  Prefer Cache: 1151.67 (SE +/- 1.86, N = 3; min 1148 / max 1154)
  Auto: 1154 (SE +/- 1.73, N = 3; min 1151 / max 1157)
  1. (CXX) g++ options: -O3 -lm -ldl

OSPRay Studio 0.11, Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better):
  Prefer Freq: 32004.67 (SE +/- 33.67, N = 3; min 31971 / max 32072)
  Prefer Cache: 32061.33 (SE +/- 39.37, N = 3; min 32019 / max 32140)
  Auto: 31311.33 (SE +/- 52.98, N = 3; min 31206 / max 31374)
  1. (CXX) g++ options: -O3 -lm -ldl

OSPRay Studio 0.11, Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better):
  Prefer Freq: 1001 (SE +/- 1.00, N = 3; min 999 / max 1002)
  Prefer Cache: 996.67 (SE +/- 1.76, N = 3; min 994 / max 1000)
  Auto: 980 (SE +/- 0.00, N = 3; min 980 / max 980)
  1. (CXX) g++ options: -O3 -lm -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 1 - Input: Bosphorus 4K (Frames Per Second, more is better):
  Prefer Freq: 5.82 (SE +/- 0.00, N = 3; min 5.82 / max 5.82)
  Prefer Cache: 5.80 (SE +/- 0.00, N = 3; min 5.79 / max 5.80)
  Auto: 5.80 (SE +/- 0.00, N = 3; min 5.80 / max 5.81)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better):
  Prefer Freq: 984 (SE +/- 1.53, N = 3; min 981 / max 986)
  Prefer Cache: 986.33 (SE +/- 1.86, N = 3; min 984 / max 990)
  Auto: 968.67 (SE +/- 1.33, N = 3; min 966 / max 970)
  1. (CXX) g++ options: -O3 -lm -ldl

OSPRay Studio 0.11, Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better):
  Prefer Freq: 31551.33 (SE +/- 68.70, N = 3; min 31423 / max 31658)
  Prefer Cache: 31592.67 (SE +/- 38.68, N = 3; min 31526 / max 31660)
  Auto: 30943.33 (SE +/- 52.27, N = 3; min 30859 / max 31039)
  1. (CXX) g++ options: -O3 -lm -ldl

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better):
  Prefer Freq: 7.56400 (SE +/- 0.03167, N = 3; min 7.50 / max 7.61)
  Prefer Cache: 7.50797 (SE +/- 0.03086, N = 3; min 7.48 / max 7.57)
  Auto: 7.46570 (SE +/- 0.01046, N = 3; min 7.45 / max 7.48)

OSPRay 2.10, Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, more is better):
  Prefer Freq: 7.70002 (SE +/- 0.01807, N = 3; min 7.68 / max 7.74)
  Prefer Cache: 7.61487 (SE +/- 0.00354, N = 3; min 7.61 / max 7.62)
  Auto: 7.66102 (SE +/- 0.02259, N = 3; min 7.63 / max 7.71)

OSPRay 2.10, Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better):
  Prefer Freq: 8.98056 (SE +/- 0.00141, N = 3; min 8.98 / max 8.98)
  Prefer Cache: 8.98545 (SE +/- 0.00821, N = 3; min 8.97 / max 9.00)
  Auto: 8.95807 (SE +/- 0.01032, N = 3; min 8.94 / max 8.97)

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta, Scene: Emily (Seconds, fewer is better):
  Prefer Freq: 145.59
  Prefer Cache: 145.15
  Auto: 146.31

simdjson

This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
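The GB/s numbers reported below are document bytes parsed per second. A rough sketch of that throughput calculation using the stdlib json module (the named inputs such as Kostya and TopTweet are sample files bundled with simdjson; the small document built here is a hypothetical stand-in):

```python
import json
import time


def parse_throughput_gbps(doc: bytes, iterations: int = 200) -> float:
    """Parse the same JSON document repeatedly and report GB/s."""
    start = time.perf_counter()
    for _ in range(iterations):
        json.loads(doc)
    elapsed = time.perf_counter() - start
    return len(doc) * iterations / elapsed / 1e9


# Small synthetic document; the real test files are each a few MB.
doc = json.dumps(
    {"statuses": [{"id": i, "text": "tweet " * 16} for i in range(200)]}
).encode()
print(f"{parse_throughput_gbps(doc):.3f} GB/s")
```

A SIMD parser like simdjson reaches multiple GB/s on this metric, whereas a conventional parser such as the one above is typically an order of magnitude slower.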

simdjson 2.0, Throughput Test: Kostya (GB/s, more is better):
  Prefer Freq: 5.90 (SE +/- 0.01, N = 3; min 5.89 / max 5.91)
  Prefer Cache: 6.04 (SE +/- 0.02, N = 3; min 6.00 / max 6.08)
  Auto: 5.54 (SE +/- 0.01, N = 3; min 5.53 / max 5.55)
  1. (CXX) g++ options: -O3

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is composed of over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
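NAB's detectors are far more elaborate (KNN CAD, measured below, uses conformal anomaly detection), but the streaming shape they all share, scoring each new point against recent history, can be sketched with a trailing z-score detector. Everything here is illustrative, not NAB code:

```python
import math
from collections import deque


def rolling_zscore_anomalies(series, window: int = 20, threshold: float = 3.0):
    """Flag indices whose value sits more than `threshold` standard
    deviations from the mean of the preceding `window` points."""
    recent = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(series):
        if len(recent) == window:
            mean = sum(recent) / window
            std = math.sqrt(sum((v - mean) ** 2 for v in recent) / window)
            if std > 0 and abs(x - mean) / std > threshold:
                flagged.append(i)
        recent.append(x)
    return flagged


# Gently oscillating signal with one injected spike.
data = [10.0 + 0.1 * math.sin(i) for i in range(100)]
data[50] = 25.0
print(rolling_zscore_anomalies(data))  # the spike at index 50 is flagged
```

The benchmark times how long such a detector takes to sweep its 50+ data files, so single-threaded CPU speed dominates the result.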

Numenta Anomaly Benchmark 1.1, Detector: KNN CAD (Seconds, fewer is better):
  Prefer Freq: 89.95 (SE +/- 0.20, N = 3; min 89.57 / max 90.23)
  Prefer Cache: 89.86 (SE +/- 0.40, N = 3; min 89.06 / max 90.34)
  Auto: 90.68 (SE +/- 0.32, N = 3; min 90.16 / max 91.26)

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
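Sysbench's CPU test reports events per second, where each event is a fixed chunk of work (by default, primality-testing numbers up to a limit via trial division). A hedged Python sketch of that event-counting scheme; the parameters below are illustrative, not sysbench's defaults:

```python
import time


def is_prime(n: int) -> bool:
    """Trial division, the style of check a sysbench CPU event performs."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True


def cpu_events_per_second(max_prime: int = 2000, duration: float = 0.25) -> float:
    """Count how many full primality sweeps complete within `duration` seconds."""
    events = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        sum(1 for n in range(3, max_prime) if is_prime(n))
        events += 1
    return events / duration


print(f"{cpu_events_per_second():.1f} events/s")
```

Because each event is identical, the metric scales almost linearly with core count and clock speed, which is why the three modes land so close together below.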

Sysbench 1.0.20, Test: CPU (Events Per Second, more is better):
  Prefer Freq: 107729.78 (SE +/- 32.58, N = 3; min 107683.21 / max 107792.54)
  Prefer Cache: 107434.06 (SE +/- 67.91, N = 3; min 107336.86 / max 107564.81)
  Auto: 107446.97 (SE +/- 81.18, N = 3; min 107324.45 / max 107600.50)
  1. (CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06, Test: System V Message Passing (Bogo Ops/s, more is better):
  Prefer Freq: 25052017.73 (SE +/- 15666.97, N = 3; min 25025852.55 / max 25080030.15)
  Prefer Cache: 25978492.77 (SE +/- 518902.00, N = 15; min 24923838.67 / max 30420911.82)
  Auto: 25423648.14 (SE +/- 221720.67, N = 7; min 24969753.10 / max 26330489.33)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
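The textbook algorithm primesieve optimizes can be sketched as below; the real implementation instead sieves in segments sized to fit the L1/L2 caches (with wheel factorization), which is exactly why this test is cache-sensitive. A plain, unsegmented Python version for reference:

```python
def primes_below(limit: int) -> list[int]:
    """Unsegmented sieve of Eratosthenes; primesieve works on
    cache-sized segments with wheel factorization instead."""
    flags = [True] * limit
    flags[0] = flags[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            # Cross off multiples starting at p*p; smaller ones are done.
            for multiple in range(p * p, limit, p):
                flags[multiple] = False
    return [n for n, prime in enumerate(flags) if prime]


print(len(primes_below(100)))  # 25 primes below 100
```

The inner crossing-off loop strides through memory, so once the flag array outgrows cache, performance is bounded by the memory hierarchy rather than raw compute.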

Primesieve 8.0, Length: 1e13 (Seconds, fewer is better):
  Prefer Freq: 82.93 (SE +/- 0.06, N = 3; min 82.86 / max 83.05)
  Prefer Cache: 83.07 (SE +/- 0.01, N = 3; min 83.04 / max 83.09)
  Auto: 83.16 (SE +/- 0.05, N = 3; min 83.06 / max 83.24)
  1. (CXX) g++ options: -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Prefer Freq: 1288.24 (SE +/- 3.20, N = 3; run min 1284.22 / max 1294.56; MIN: 1274.09)
  Prefer Cache: 1276.22 (SE +/- 5.17, N = 3; run min 1267.48 / max 1285.37; MIN: 1255.90)
  Auto: 1280.48 (SE +/- 10.71, N = 3; run min 1260.49 / max 1297.15; MIN: 1249.80)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.0, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  Prefer Freq: 1279.16 (SE +/- 12.75, N = 3; run min 1258.42 / max 1302.39; MIN: 1247.77)
  Prefer Cache: 1286.51 (SE +/- 5.43, N = 3; run min 1278.69 / max 1296.94; MIN: 1267.38)
  Auto: 1280.49 (SE +/- 3.31, N = 3; run min 1273.93 / max 1284.54; MIN: 1262.86)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.0, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  Prefer Freq: 678.66 (SE +/- 2.87, N = 3; run min 675.71 / max 684.40; MIN: 670.71)
  Prefer Cache: 683.48 (SE +/- 1.86, N = 3; run min 680.13 / max 686.56; MIN: 674.73)
  Auto: 674.84 (SE +/- 7.25, N = 3; run min 660.46 / max 683.59; MIN: 656.84)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.0, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Prefer Freq: 673.16 (SE +/- 3.86, N = 3; run min 666.58 / max 679.96; MIN: 663.46)
  Prefer Cache: 685.42 (SE +/- 3.19, N = 3; run min 679.77 / max 690.82; MIN: 674.86)
  Auto: 673.24 (SE +/- 2.26, N = 3; run min 670.49 / max 677.71; MIN: 666.56)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

simdjson

This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0, Throughput Test: LargeRandom (GB/s, more is better):
  Prefer Freq: 1.89 (SE +/- 0.01, N = 3; min 1.88 / max 1.90)
  Prefer Cache: 1.87 (SE +/- 0.00, N = 3; min 1.86 / max 1.87)
  Auto: 1.70 (SE +/- 0.00, N = 3; min 1.70 / max 1.71)
  1. (CXX) g++ options: -O3

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 8 - Decompression Speed (MB/s, more is better):
  Prefer Freq: 2423.1 (SE +/- 9.76, N = 3; min 2411.8 / max 2442.5)
  Prefer Cache: 2428.5 (SE +/- 13.16, N = 5; min 2386.1 / max 2456.1)
  Auto: 2414.1 (SE +/- 2.63, N = 3; min 2410.0 / max 2419.0)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Zstd Compression 1.5.4, Compression Level: 8 - Compression Speed (MB/s, more is better):
  Prefer Freq: 1046.8 (SE +/- 6.16, N = 3; min 1036.4 / max 1057.7)
  Prefer Cache: 1046.4 (SE +/- 11.34, N = 5; min 1001.8 / max 1065.0)
  Auto: 1051.0 (SE +/- 5.17, N = 3; min 1045.1 / max 1061.3)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5, Sample Rate: 192000 - Buffer Size: 512 (Render Ratio, more is better):
  Prefer Freq: 3.242317 (SE +/- 0.028257, N = 7; min 3.16 / max 3.37)
  Prefer Cache: 3.307251 (SE +/- 0.017434, N = 3; min 3.28 / max 3.34)
  Auto: 3.280874 (SE +/- 0.026295, N = 3; min 3.23 / max 3.32)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: ARES-6 - Browser: Google Chrome (ms, fewer is better):
  Prefer Freq: 7.04 (SE +/- 0.09, N = 3; min 6.94 / max 7.23)
  Prefer Cache: 7.36 (SE +/- 0.08, N = 15; min 6.94 / max 7.87)
  Auto: 7.24 (SE +/- 0.09, N = 3; min 7.12 / max 7.42)
  1. chrome 110.0.5481.96

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7, Primate Phylogeny Analysis (Seconds, fewer is better):
  Prefer Freq: 70.41 (SE +/- 0.02, N = 3; min 70.38 / max 70.45)
  Prefer Cache: 70.34 (SE +/- 0.15, N = 3; min 70.06 / max 70.56)
  Auto: 70.38 (SE +/- 0.09, N = 3; min 70.20 / max 70.51)
  1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm -lreadline

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: Jetstream 2 - Browser: Google Chrome (Score, more is better):
  Prefer Freq: 325.83 (SE +/- 4.12, N = 3; min 317.66 / max 330.86)
  Prefer Cache: 326.22 (SE +/- 3.84, N = 4; min 318.99 / max 333.71)
  Auto: 333.01 (SE +/- 3.14, N = 3; min 327.03 / max 337.69)
  1. chrome 110.0.5481.96

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5.02, Mode: CPU (vsamples, more is better):
  Prefer Freq: 31238 (SE +/- 97.34, N = 3; min 31093 / max 31423)
  Prefer Cache: 31279.67 (SE +/- 72.19, N = 3; min 31193 / max 31423)
  Auto: 31223.67 (SE +/- 58.09, N = 3; min 31126 / max 31327)

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2023, Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better):
  Prefer Freq: 2.691 (SE +/- 0.003, N = 3; min 2.69 / max 2.70)
  Prefer Cache: 2.676 (SE +/- 0.004, N = 3; min 2.67 / max 2.68)
  Auto: 2.685 (SE +/- 0.005, N = 3; min 2.68 / max 2.69)
  1. (CXX) g++ options: -O3

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6, Acceleration: CPU (Seconds, fewer is better):
  Prefer Freq: 35.20 (SE +/- 0.40, N = 3; min 34.55 / max 35.92)
  Prefer Cache: 35.32 (SE +/- 0.33, N = 3; min 34.68 / max 35.78)
  Auto: 38.57 (SE +/- 0.61, N = 15; min 33.72 / max 40.67)

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 193 Cells Per Direction (Seconds, fewer is better):
  Prefer Freq: 61.34 (SE +/- 0.39, N = 3; min 60.57 / max 61.80)
  Prefer Cache: 60.82 (SE +/- 0.70, N = 4; min 59.48 / max 62.78)
  Auto: 60.47 (SE +/- 0.26, N = 3; min 60.02 / max 60.91)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4, Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better):
  Prefer Freq: 67.48 (SE +/- 0.05, N = 3; min 67.43 / max 67.58)
  Prefer Cache: 67.62 (SE +/- 0.07, N = 3; min 67.49 / max 67.74)
  Auto: 67.61 (SE +/- 0.12, N = 3; min 67.43 / max 67.83)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38, Operation: Resizing (Iterations Per Minute, more is better):
  Prefer Freq: 2215.67 (SE +/- 31.52, N = 3; min 2163 / max 2272)
  Prefer Cache: 2158 (SE +/- 17.52, N = 3; min 2139 / max 2193)
  Auto: 2169.25 (SE +/- 26.27, N = 4; min 2141 / max 2248)
  1. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta, Scene: Material Tester (Seconds, fewer is better):
  Prefer Freq: 96.86
  Prefer Cache: 96.59
  Auto: 96.91

simdjson

This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0, Throughput Test: DistinctUserID (GB/s, more is better):
  Prefer Freq: 10.06 (SE +/- 0.07, N = 3; min 9.96 / max 10.20)
  Prefer Cache: 9.15 (SE +/- 0.03, N = 3; min 9.12 / max 9.21)
  Auto: 9.14 (SE +/- 0.05, N = 3; min 9.05 / max 9.19)
  1. (CXX) g++ options: -O3

simdjson 2.0, Throughput Test: TopTweet (GB/s, more is better):
  Prefer Freq: 9.73 (SE +/- 0.07, N = 3; min 9.62 / max 9.85)
  Prefer Cache: 9.57 (SE +/- 0.04, N = 3; min 9.50 / max 9.64)
  Auto: 8.83 (SE +/- 0.01, N = 3; min 8.81 / max 8.85)
  1. (CXX) g++ options: -O3

simdjson 2.0, Throughput Test: PartialTweets (GB/s, more is better):
  Prefer Freq: 8.53 (SE +/- 0.11, N = 3; min 8.31 / max 8.65)
  Prefer Cache: 7.59 (SE +/- 0.00, N = 3; min 7.58 / max 7.59)
  Auto: 8.64 (SE +/- 0.03, N = 3; min 8.59 / max 8.69)
  1. (CXX) g++ options: -O3

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
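The H/s metric is hashes completed per second. A toy sketch of that measurement loop using SHA-256 from hashlib; RandomX, the actual Monero algorithm, is deliberately memory-hard and far heavier per hash, so absolute numbers from this sketch are not comparable to the results below:

```python
import hashlib
import time


def hash_rate(duration: float = 0.5) -> float:
    """Hash an incrementing nonce against a fixed header and report H/s."""
    header = b"block-header-stand-in"  # hypothetical placeholder input
    nonce = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        nonce += 1
    return nonce / duration


print(f"{hash_rate():.0f} H/s")
```

RandomX's working set is sized to stress caches and memory, which is why a large L3, as on the 7950X3D's V-Cache die, can matter for this workload.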

Xmrig 6.18.1, Variant: Monero - Hash Count: 1M (H/s, more is better):
  Prefer Freq: 16109.37 (SE +/- 52.59, N = 3; min 16032.1 / max 16209.8)
  Prefer Cache: 16231.87 (SE +/- 57.72, N = 3; min 16173.4 / max 16347.3)
  Auto: 16134.40 (SE +/- 84.38, N = 3; min 16025.1 / max 16300.4)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq:  1080.71 (SE +/- 2.74, N = 3; Min: 1075.73 / Max: 1085.19; MIN: 591.34 / MAX: 1282.05)
  Prefer Cache: 1082.36 (SE +/- 2.11, N = 3; Min: 1079.79 / Max: 1086.54; MIN: 781.51 / MAX: 1247.1)
  Auto:         1084.60 (SE +/- 3.67, N = 3; Min: 1077.96 / Max: 1090.61; MIN: 963.74 / MAX: 1282.15)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  7.37 (SE +/- 0.03, N = 3; Min: 7.32 / Max: 7.42)
  Prefer Cache: 7.35 (SE +/- 0.02, N = 3; Min: 7.31 / Max: 7.37)
  Auto:         7.34 (SE +/- 0.02, N = 3; Min: 7.3 / Max: 7.37)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq:  1089.89 (SE +/- 2.23, N = 3; Min: 1085.75 / Max: 1093.38; MIN: 566.46 / MAX: 1284.73)
  Prefer Cache: 1093.40 (SE +/- 2.98, N = 3; Min: 1087.73 / Max: 1097.81; MIN: 638.38 / MAX: 1255.16)
  Auto:         1097.26 (SE +/- 0.42, N = 3; Min: 1096.43 / Max: 1097.79; MIN: 580.94 / MAX: 1284.21)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  7.29 (SE +/- 0.01, N = 3; Min: 7.28 / Max: 7.31)
  Prefer Cache: 7.27 (SE +/- 0.02, N = 3; Min: 7.23 / Max: 7.31)
  Auto:         7.25 (SE +/- 0.01, N = 3; Min: 7.24 / Max: 7.26)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Orange Juice - Acceleration: CPU (M samples/sec, More Is Better)
  Prefer Freq:  7.92 (SE +/- 0.01, N = 3; Min: 7.91 / Max: 7.93; MIN: 7.07 / MAX: 8.33)
  Prefer Cache: 7.86 (SE +/- 0.00, N = 3; Min: 7.86 / Max: 7.87; MIN: 7.02 / MAX: 8.29)
  Auto:         7.90 (SE +/- 0.02, N = 3; Min: 7.88 / Max: 7.93; MIN: 7.04 / MAX: 8.37)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Cartoon (Seconds, Fewer Is Better)
  Prefer Freq:  62.34 (SE +/- 0.66, N = 3; Min: 61.67 / Max: 63.67)
  Prefer Cache: 62.60 (SE +/- 0.19, N = 3; Min: 62.22 / Max: 62.85)
  Auto:         61.89 (SE +/- 0.19, N = 3; Min: 61.51 / Max: 62.12)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
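The compression-level tradeoff this test measures (higher levels trade compression speed for ratio) can be sketched with Python's standard library. zlib stands in here because zstd bindings are not in the stdlib, and a synthetic buffer replaces the silesia.tar corpus, so the absolute numbers are not comparable to the results below; only the shape of the tradeoff is.

```python
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 20_000  # stand-in corpus

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    assert zlib.decompress(compressed) == data  # round-trip sanity check
    mbps = len(data) / elapsed / 1e6
    print(f"level {level}: {len(data)} -> {len(compressed)} bytes, {mbps:.0f} MB/s")
```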

Zstd Compression 1.5.4 - Compression Level: 12 - Decompression Speed (MB/s, More Is Better)
  Prefer Freq:  2469.8 (SE +/- 23.52, N = 3; Min: 2426.2 / Max: 2506.9)
  Prefer Cache: 2476.5 (SE +/- 10.59, N = 3; Min: 2457.5 / Max: 2494.1)
  Auto:         2485.5 (SE +/- 14.86, N = 3; Min: 2469.8 / Max: 2515.2)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Zstd Compression 1.5.4 - Compression Level: 12 - Compression Speed (MB/s, More Is Better)
  Prefer Freq:  297.9 (SE +/- 2.19, N = 3; Min: 294.1 / Max: 301.7)
  Prefer Cache: 299.9 (SE +/- 1.82, N = 3; Min: 296.3 / Max: 302.3)
  Auto:         296.0 (SE +/- 1.64, N = 3; Min: 293.5 / Max: 299.1)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq:  589.71 (SE +/- 0.84, N = 3; Min: 588.03 / Max: 590.71; MIN: 329.37 / MAX: 617.43)
  Prefer Cache: 591.14 (SE +/- 0.31, N = 3; Min: 590.67 / Max: 591.72; MIN: 290.4 / MAX: 616.78)
  Auto:         589.51 (SE +/- 0.52, N = 3; Min: 588.83 / Max: 590.53; MIN: 368.88 / MAX: 615.41)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  13.53 (SE +/- 0.02, N = 3; Min: 13.5 / Max: 13.57)
  Prefer Cache: 13.48 (SE +/- 0.01, N = 3; Min: 13.47 / Max: 13.51)
  Auto:         13.53 (SE +/- 0.01, N = 3; Min: 13.51 / Max: 13.55)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Danish Mood - Acceleration: CPU (M samples/sec, More Is Better)
  Prefer Freq:  4.46 (SE +/- 0.01, N = 3; Min: 4.44 / Max: 4.48; MIN: 2.05 / MAX: 5)
  Prefer Cache: 4.36 (SE +/- 0.01, N = 3; Min: 4.33 / Max: 4.38; MIN: 1.84 / MAX: 4.92)
  Auto:         4.38 (SE +/- 0.03, N = 3; Min: 4.34 / Max: 4.43; MIN: 1.82 / MAX: 4.93)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq:  306.23 (SE +/- 0.10, N = 3; Min: 306.05 / Max: 306.38; MIN: 293.51 / MAX: 313.39)
  Prefer Cache: 305.71 (SE +/- 0.48, N = 3; Min: 304.86 / Max: 306.51; MIN: 264.82 / MAX: 314.53)
  Auto:         305.51 (SE +/- 0.44, N = 3; Min: 304.65 / Max: 306.09; MIN: 290.82 / MAX: 316.32)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  26.09 (SE +/- 0.02, N = 3; Min: 26.06 / Max: 26.12)
  Prefer Cache: 26.11 (SE +/- 0.06, N = 3; Min: 26.02 / Max: 26.22)
  Auto:         26.14 (SE +/- 0.04, N = 3; Min: 26.09 / Max: 26.21)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 3, Long Mode - Decompression Speed (MB/s, More Is Better)
  Prefer Freq:  2267.8 (SE +/- 2.30, N = 3; Min: 2263.3 / Max: 2270.8)
  Prefer Cache: 2158.6 (SE +/- 58.56, N = 3; Min: 2046.1 / Max: 2243)
  Auto:         2252.5 (SE +/- 10.66, N = 3; Min: 2235.4 / Max: 2272.1)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Zstd Compression 1.5.4 - Compression Level: 3, Long Mode - Compression Speed (MB/s, More Is Better)
  Prefer Freq:  1405.5 (SE +/- 7.52, N = 3; Min: 1390.5 / Max: 1413.2)
  Prefer Cache: 1423.9 (SE +/- 9.62, N = 3; Min: 1407.6 / Max: 1440.9)
  Auto:         1418.1 (SE +/- 1.91, N = 3; Min: 1414.7 / Max: 1421.3)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark ALS (ms, Fewer Is Better)
  Prefer Freq:  1857.1 (SE +/- 1.83, N = 3; Min: 1853.43 / Max: 1859.23; MIN: 1796.07 / MAX: 1967.45)
  Prefer Cache: 1874.1 (SE +/- 7.98, N = 3; Min: 1862.09 / Max: 1889.21; MIN: 1789.64 / MAX: 2506.58)
  Auto:         1874.3 (SE +/- 11.39, N = 3; Min: 1857.57 / Max: 1896.05; MIN: 1794.8 / MAX: 2057.95)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
  Prefer Freq:  2193.4 (SE +/- 7.91, N = 3; Min: 2177.9 / Max: 2203.9)
  Prefer Cache: 2210.7 (SE +/- 3.70, N = 3; Min: 2203.3 / Max: 2215)
  Auto:         2198.6 (SE +/- 2.23, N = 3; Min: 2195 / Max: 2202.7)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Zstd Compression 1.5.4 - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
  Prefer Freq:  4023.5 (SE +/- 29.94, N = 3; Min: 3991.5 / Max: 4083.3)
  Prefer Cache: 3967.5 (SE +/- 26.15, N = 3; Min: 3929.9 / Max: 4017.8)
  Auto:         4033.3 (SE +/- 20.17, N = 3; Min: 3993 / Max: 4055.6)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: LuxCore Benchmark - Acceleration: CPU (M samples/sec, More Is Better)
  Prefer Freq:  4.78 (SE +/- 0.01, N = 3; Min: 4.77 / Max: 4.79; MIN: 2.12 / MAX: 5.4)
  Prefer Cache: 4.78 (SE +/- 0.02, N = 3; Min: 4.75 / Max: 4.8; MIN: 2.1 / MAX: 5.41)
  Auto:         4.77 (SE +/- 0.00, N = 3; Min: 4.76 / Max: 4.77; MIN: 2.09 / MAX: 5.39)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, More Is Better)
  Prefer Freq:  256.0 (SE +/- 0.99, N = 3; Min: 254 / Max: 257.1)
  Prefer Cache: 223.1 (SE +/- 3.93, N = 15; Min: 212.5 / Max: 253)
  Auto:         215.3 (SE +/- 1.33, N = 3; Min: 213.6 / Max: 217.9)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, More Is Better)
  Prefer Freq:  682.2 (SE +/- 2.79, N = 3; Min: 676.7 / Max: 685.8)
  Prefer Cache: 622.3 (SE +/- 7.08, N = 15; Min: 602.3 / Max: 677.7)
  Auto:         609.2 (SE +/- 3.55, N = 3; Min: 605.4 / Max: 616.3)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better)
  Prefer Freq:  2417.3 (SE +/- 3.64, N = 3; Min: 2410.3 / Max: 2422.6)
  Prefer Cache: 2410.1 (SE +/- 6.75, N = 3; Min: 2397.3 / Max: 2420.2)
  Auto:         2433.0 (SE +/- 6.20, N = 3; Min: 2423.6 / Max: 2444.7)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better)
  Prefer Freq:  1012.9 (SE +/- 1.70, N = 3; Min: 1011 / Max: 1016.3)
  Prefer Cache: 1012.0 (SE +/- 4.68, N = 3; Min: 1005 / Max: 1020.9)
  Auto:         1015.0 (SE +/- 5.38, N = 3; Min: 1004.4 / Max: 1021.8)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better)
  Prefer Freq:  11.85 (SE +/- 0.04, N = 3; Min: 11.77 / Max: 11.9)
  Prefer Cache: 11.82 (SE +/- 0.02, N = 3; Min: 11.79 / Max: 11.87)
  Auto:         11.79 (SE +/- 0.03, N = 3; Min: 11.75 / Max: 11.84)

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better)
  Prefer Freq:  5.434 (SE +/- 0.009, N = 3; Min: 5.42 / Max: 5.45)
  Prefer Cache: 5.425 (SE +/- 0.010, N = 3; Min: 5.41 / Max: 5.45)
  Auto:         5.428 (SE +/- 0.011, N = 3; Min: 5.41 / Max: 5.44)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq:  58.93 (SE +/- 0.14, N = 3; Min: 58.77 / Max: 59.2; MIN: 41.49 / MAX: 69.51)
  Prefer Cache: 58.99 (SE +/- 0.10, N = 3; Min: 58.82 / Max: 59.17; MIN: 29.03 / MAX: 69.75)
  Auto:         59.08 (SE +/- 0.06, N = 3; Min: 58.97 / Max: 59.17; MIN: 27.39 / MAX: 72.82)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  135.63 (SE +/- 0.31, N = 3; Min: 135.01 / Max: 135.97)
  Prefer Cache: 135.51 (SE +/- 0.23, N = 3; Min: 135.12 / Max: 135.9)
  Auto:         135.28 (SE +/- 0.15, N = 3; Min: 135.08 / Max: 135.56)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: DLSC - Acceleration: CPU (M samples/sec, More Is Better)
  Prefer Freq:  5.05 (SE +/- 0.01, N = 3; Min: 5.04 / Max: 5.06; MIN: 4.92 / MAX: 5.35)
  Prefer Cache: 5.07 (SE +/- 0.01, N = 3; Min: 5.06 / Max: 5.08; MIN: 4.95 / MAX: 5.38)
  Auto:         5.02 (SE +/- 0.00, N = 3; Min: 5.01 / Max: 5.02; MIN: 4.9 / MAX: 5.32)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
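The average-inference-time metric can be sketched generically: warm the model up with a few discarded runs, then time repeated invocations and average. The harness below is a hypothetical stand-in that times a trivial workload; in the real test profile, `run` would correspond to invoking the TensorFlow Lite interpreter on the model.

```python
import statistics
import time

def average_inference_us(run, warmup=3, iters=10):
    # Discard warmup runs, then average wall-clock time in microseconds.
    for _ in range(warmup):
        run()
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        run()
        times.append((time.perf_counter() - start) * 1e6)
    return statistics.mean(times)

# Trivial stand-in workload, not a real model inference.
print(f"{average_inference_us(lambda: sum(range(100_000))):.1f} us")
```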

TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds, Fewer Is Better)
  Prefer Freq:  17769.5 (SE +/- 12.93, N = 3; Min: 17748.4 / Max: 17793)
  Prefer Cache: 17748.5 (SE +/- 15.43, N = 3; Min: 17732 / Max: 17779.3)
  Auto:         17797.3 (SE +/- 16.88, N = 3; Min: 17764.3 / Max: 17819.9)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Kraken - Browser: Google Chrome (ms, Fewer Is Better)
  Prefer Freq:  372.8 (SE +/- 3.94, N = 15; Min: 345.3 / Max: 390.3)
  Prefer Cache: 373.2 (SE +/- 3.24, N = 3; Min: 369.1 / Max: 379.6)
  Auto:         373.2 (SE +/- 3.39, N = 15; Min: 344.4 / Max: 388.1)
  1. chrome 110.0.5481.96

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq:  4.77 (SE +/- 0.01, N = 3; Min: 4.76 / Max: 4.78; MIN: 3.42 / MAX: 11.47)
  Prefer Cache: 4.82 (SE +/- 0.01, N = 3; Min: 4.8 / Max: 4.84; MIN: 3.53 / MAX: 14.59)
  Auto:         4.80 (SE +/- 0.01, N = 3; Min: 4.79 / Max: 4.81; MIN: 3.36 / MAX: 13.68)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  1674.00 (SE +/- 2.43, N = 3; Min: 1670.78 / Max: 1678.76)
  Prefer Cache: 1659.22 (SE +/- 3.63, N = 3; Min: 1652.57 / Max: 1665.09)
  Auto:         1663.09 (SE +/- 1.97, N = 3; Min: 1660.99 / Max: 1667.02)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
  Prefer Freq:  1108.32 (SE +/- 0.75, N = 3; Min: 1106.99 / Max: 1109.58)
  Prefer Cache: 1109.84 (SE +/- 2.07, N = 3; Min: 1105.69 / Max: 1112.02)
  Auto:         1108.20 (SE +/- 1.48, N = 3; Min: 1105.24 / Max: 1109.76)

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
  Prefer Freq:  1627.87 (SE +/- 8.09, N = 3; Min: 1619.47 / Max: 1644.05)
  Prefer Cache: 1622.75 (SE +/- 1.70, N = 3; Min: 1620.07 / Max: 1625.91)
  Auto:         1626.40 (SE +/- 6.58, N = 3; Min: 1619.63 / Max: 1639.55)

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Random Fill Sync (Op/s, More Is Better)
  Prefer Freq:  39478 (SE +/- 113.36, N = 3; Min: 39330 / Max: 39701)
  Prefer Cache: 39653 (SE +/- 84.16, N = 3; Min: 39530 / Max: 39814)
  Auto:         39728 (SE +/- 41.20, N = 3; Min: 39646 / Max: 39776)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPUPrefer FreqPrefer CacheAuto1.05752.1153.17254.235.2875SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 34.654.694.70MIN: 3 / MAX: 12.96MIN: 3.01 / MAX: 12.79MIN: 3.01 / MAX: 12.511. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPUPrefer FreqPrefer CacheAuto246810Min: 4.63 / Avg: 4.65 / Max: 4.66Min: 4.69 / Avg: 4.69 / Max: 4.7Min: 4.68 / Avg: 4.7 / Max: 4.721. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPUPrefer FreqPrefer CacheAuto400800120016002000SE +/- 3.24, N = 3SE +/- 1.37, N = 3SE +/- 3.62, N = 31721.241702.471700.141. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPUPrefer FreqPrefer CacheAuto30060090012001500Min: 1716.24 / Avg: 1721.24 / Max: 1727.32Min: 1699.85 / Avg: 1702.47 / Max: 1704.5Min: 1693.79 / Avg: 1700.14 / Max: 1706.311. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUPrefer FreqPrefer CacheAuto0.08780.17560.26340.35120.439SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.390.390.39MIN: 0.22 / MAX: 7.76MIN: 0.23 / MAX: 9.35MIN: 0.23 / MAX: 7.841. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUPrefer FreqPrefer CacheAuto12345Min: 0.39 / Avg: 0.39 / Max: 0.39Min: 0.39 / Avg: 0.39 / Max: 0.39Min: 0.39 / Avg: 0.39 / Max: 0.391. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUPrefer FreqPrefer CacheAuto9K18K27K36K45KSE +/- 18.59, N = 3SE +/- 60.47, N = 3SE +/- 26.82, N = 340605.4440512.3940664.131. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUPrefer FreqPrefer CacheAuto7K14K21K28K35KMin: 40581.55 / Avg: 40605.44 / Max: 40642.06Min: 40433.18 / Avg: 40512.39 / Max: 40631.14Min: 40617.19 / Avg: 40664.13 / Max: 40710.091. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16 - Device: CPUPrefer FreqPrefer CacheAuto246810SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 38.057.928.01MIN: 3.98 / MAX: 18.88MIN: 4.93 / MAX: 18.91MIN: 4.18 / MAX: 20.461. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16 - Device: CPUPrefer FreqPrefer CacheAuto3691215Min: 8 / Avg: 8.05 / Max: 8.09Min: 7.87 / Avg: 7.92 / Max: 7.97Min: 7.95 / Avg: 8.01 / Max: 8.081. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16 - Device: CPUPrefer FreqPrefer CacheAuto2004006008001000SE +/- 3.10, N = 3SE +/- 3.98, N = 3SE +/- 4.60, N = 3993.161009.73997.601. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16 - Device: CPUPrefer FreqPrefer CacheAuto2004006008001000Min: 987.94 / Avg: 993.16 / Max: 998.66Min: 1002.54 / Avg: 1009.73 / Max: 1016.28Min: 989.12 / Avg: 997.6 / Max: 1004.921. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq:  0.67  (SE +/- 0.00, N = 3; Min 0.67 / Max 0.67; MIN: 0.34 / MAX: 8.81)
  Prefer Cache: 0.67  (SE +/- 0.00, N = 3; Min 0.67 / Max 0.68; MIN: 0.37 / MAX: 8.32)
  Auto:         0.67  (SE +/- 0.00, N = 3; Min 0.67 / Max 0.67; MIN: 0.37 / MAX: 8.39)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  23592.85  (SE +/- 31.27, N = 3; Min 23535.81 / Max 23643.56)
  Prefer Cache: 23486.36  (SE +/- 118.07, N = 3; Min 23251.21 / Max 23622.65)
  Auto:         23579.67  (SE +/- 11.18, N = 3; Min 23560.77 / Max 23599.47)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq:  6.10  (SE +/- 0.00, N = 3; Min 6.09 / Max 6.10; MIN: 3.29 / MAX: 13.39)
  Prefer Cache: 6.12  (SE +/- 0.00, N = 3; Min 6.11 / Max 6.12; MIN: 3.21 / MAX: 13.65)
  Auto:         6.12  (SE +/- 0.01, N = 3; Min 6.11 / Max 6.13; MIN: 3.2 / MAX: 14.83)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  2621.90  (SE +/- 1.52, N = 3; Min 2619.81 / Max 2624.86)
  Prefer Cache: 2613.72  (SE +/- 1.72, N = 3; Min 2610.91 / Max 2616.85)
  Auto:         2613.42  (SE +/- 3.36, N = 3; Min 2607.11 / Max 2618.58)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq:  5.99  (SE +/- 0.01, N = 3; Min 5.98 / Max 6.01; MIN: 3.14 / MAX: 13.64)
  Prefer Cache: 5.98  (SE +/- 0.00, N = 3; Min 5.98 / Max 5.99; MIN: 3.07 / MAX: 15.39)
  Auto:         6.00  (SE +/- 0.01, N = 3; Min 5.99 / Max 6.01; MIN: 3.08 / MAX: 13.24)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  1333.24  (SE +/- 1.63, N = 3; Min 1330.44 / Max 1336.09)
  Prefer Cache: 1335.29  (SE +/- 0.83, N = 3; Min 1333.69 / Max 1336.5)
  Auto:         1332.50  (SE +/- 1.06, N = 3; Min 1330.59 / Max 1334.27)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

RocksDB

This is a benchmark of Meta/Facebook's RocksDB, an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
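RocksDB's Random Fill workload (first result below) simply inserts keys in random order into an embeddable store and reports operations per second. A rough sketch of that access pattern, using SQLite as a stand-in since RocksDB has no Python standard-library bindings (an illustrative harness, not the db_bench tool the test profile actually runs):

```python
import random
import sqlite3
import time

# Hypothetical stand-in for a "random fill" workload: insert keys in
# random order into an embeddable store and report operations/second.
# SQLite replaces RocksDB here purely so the sketch is self-contained.
N_KEYS = 20_000

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k BLOB PRIMARY KEY, v BLOB)")

keys = list(range(N_KEYS))
random.shuffle(keys)  # "random fill": keys arrive out of order

start = time.perf_counter()
with conn:  # single transaction, committed on exit
    conn.executemany(
        "INSERT INTO kv VALUES (?, ?)",
        ((k.to_bytes(8, "big"), b"x" * 100) for k in keys),
    )
elapsed = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0]
print(f"{count} keys, {count / elapsed:,.0f} op/s")
```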

RocksDB 7.9.2 - Test: Random Fill (Op/s, More Is Better)
  Prefer Freq:  1396024  (SE +/- 2340.10, N = 3; Min 1391481 / Max 1399269)
  Prefer Cache: 1388787  (SE +/- 3639.38, N = 3; Min 1381586 / Max 1393305)
  Auto:         1398717  (SE +/- 1364.86, N = 3; Min 1396002 / Max 1400322)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Sharpen (Iterations Per Minute, More Is Better)
  Prefer Freq:  261  (SE +/- 0.58, N = 3; Min 260 / Max 262)
  Prefer Cache: 261  (SE +/- 0.33, N = 3; Min 260 / Max 261)
  Auto:         260  (SE +/- 0.00, N = 3; Min 260 / Max 260)
  1. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, More Is Better)
  Prefer Freq:  492  (SE +/- 1.45, N = 3; Min 490 / Max 495)
  Prefer Cache: 486  (SE +/- 0.88, N = 3; Min 485 / Max 488)
  Auto:         486  (SE +/- 1.00, N = 3; Min 485 / Max 488)
  1. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

RocksDB


RocksDB 7.9.2 - Test: Update Random (Op/s, More Is Better)
  Prefer Freq:  950294  (SE +/- 293.65, N = 3; Min 949716 / Max 950672)
  Prefer Cache: 950786  (SE +/- 3396.18, N = 3; Min 944024 / Max 954725)
  Auto:         947132  (SE +/- 1979.12, N = 3; Min 943934 / Max 950751)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB 7.9.2 - Test: Read While Writing (Op/s, More Is Better)
  Prefer Freq:  4198454  (SE +/- 25035.49, N = 3; Min 4173149 / Max 4248524)
  Prefer Cache: 4212246  (SE +/- 24244.30, N = 3; Min 4179982 / Max 4259725)
  Auto:         4196944  (SE +/- 12110.14, N = 3; Min 4173414 / Max 4213681)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB 7.9.2 - Test: Read Random Write Random (Op/s, More Is Better)
  Prefer Freq:  3315415  (SE +/- 3924.51, N = 3; Min 3307677 / Max 3320422)
  Prefer Cache: 3311766  (SE +/- 9373.61, N = 3; Min 3298196 / Max 3329753)
  Auto:         3323667  (SE +/- 3489.00, N = 3; Min 3317011 / Max 3328810)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

GraphicsMagick


GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, More Is Better)
  Prefer Freq:  1012  (SE +/- 0.33, N = 3; Min 1012 / Max 1013)
  Prefer Cache: 1027  (SE +/- 6.51, N = 3; Min 1020 / Max 1040)
  Auto:         1030  (SE +/- 2.31, N = 3; Min 1026 / Max 1034)
  1. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

GraphicsMagick 1.3.38 - Operation: Swirl (Iterations Per Minute, More Is Better)
  Prefer Freq:  1153  (SE +/- 0.33, N = 3; Min 1153 / Max 1154)
  Prefer Cache: 1162  (SE +/- 5.17, N = 3; Min 1156 / Max 1172)
  Auto:         1144  (SE +/- 1.53, N = 3; Min 1142 / Max 1147)
  1. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, More Is Better)
  Prefer Freq:  1646  (SE +/- 6.89, N = 3; Min 1632 / Max 1654)
  Prefer Cache: 1702  (SE +/- 3.71, N = 3; Min 1697 / Max 1709)
  Auto:         1634  (SE +/- 10.37, N = 3; Min 1620 / Max 1654)
  1. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

RocksDB


RocksDB 7.9.2 - Test: Random Read (Op/s, More Is Better)
  Prefer Freq:  147740017  (SE +/- 260132.99, N = 3; Min 147323351 / Max 148218164)
  Prefer Cache: 147760109  (SE +/- 515512.16, N = 3; Min 147032125 / Max 148756389)
  Auto:         147296131  (SE +/- 134841.05, N = 3; Min 147113196 / Max 147559201)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  10.18  (SE +/- 0.03, N = 3; Min 10.13 / Max 10.23)
  Prefer Cache: 10.06  (SE +/- 0.09, N = 3; Min 9.91 / Max 10.22)
  Auto:         10.10  (SE +/- 0.06, N = 3; Min 10 / Max 10.21)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. The sample scene used is the Teapot scene ray-traced to 8K x 8K with 32 samples. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99.2 - Total Time (Seconds, Fewer Is Better)
  Prefer Freq:  58.96  (SE +/- 0.12, N = 3; Min 58.76 / Max 59.16)
  Prefer Cache: 59.10  (SE +/- 0.03, N = 3; Min 59.03 / Max 59.14)
  Auto:         58.99  (SE +/- 0.07, N = 3; Min 58.86 / Max 59.12)
  1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0 - Test: Decompression Throughput (Megapixels/sec, More Is Better)
  Prefer Freq:  344.08  (SE +/- 1.53, N = 3; Min 341.05 / Max 345.99)
  Prefer Cache: 342.25  (SE +/- 1.66, N = 3; Min 339.84 / Max 345.43)
  Auto:         337.86  (SE +/- 3.34, N = 15; Min 311.84 / Max 347.91)
  1. (CC) gcc options: -O3 -rdynamic

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom), the AV1 Codec Library developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  13.22  (SE +/- 0.05, N = 3; Min 13.14 / Max 13.32)
  Prefer Cache: 13.11  (SE +/- 0.04, N = 3; Min 13.04 / Max 13.16)
  Auto:         13.19  (SE +/- 0.05, N = 3; Min 13.12 / Max 13.28)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM collisionDetection - Browser: Google Chrome (ms, Fewer Is Better)
  Prefer Freq:  228.64  (SE +/- 2.69, N = 15; Min 217.88 / Max 239.61)
  Prefer Cache: 229.09  (SE +/- 2.79, N = 15; Min 217.88 / Max 239.71)
  Auto:         230.10  (SE +/- 2.67, N = 15; Min 217.89 / Max 239.63)
  1. chrome 110.0.5481.96

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, Fewer Is Better)
  Prefer Freq:  84.73
  Prefer Cache: 84.79
  Auto:         84.58

Renaissance

Renaissance is a suite of benchmarks designed to exercise the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better)
  Prefer Freq:  1111.3  (SE +/- 7.37, N = 3; Min 1096.78 / Max 1120.66; MIN: 1060.74 / MAX: 1144.27)
  Prefer Cache: 1116.2  (SE +/- 8.29, N = 3; Min 1100.33 / Max 1128.28; MIN: 1043.35 / MAX: 1143.74)
  Auto:         1126.5  (SE +/- 5.13, N = 3; Min 1117.08 / Max 1134.75; MIN: 1017.45 / MAX: 1147.28)

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code employing modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, Fewer Is Better)
  Prefer Freq:  11.00  (SE +/- 0.20, N = 15; Min 10.18 / Max 11.92)
  Prefer Cache: 11.09  (SE +/- 0.20, N = 15; Min 10.31 / Max 11.96)
  Auto:         11.12  (SE +/- 0.19, N = 15; Min 10.24 / Max 11.93)

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, Fewer Is Better)
  Prefer Freq:  55.29  (SE +/- 0.42, N = 3; Min 54.56 / Max 56.02)
  Prefer Cache: 55.35  (SE +/- 0.31, N = 3; Min 54.73 / Max 55.67)
  Auto:         55.29  (SE +/- 0.37, N = 3; Min 54.63 / Max 55.91)

Renaissance


Renaissance 0.14 - Test: Apache Spark PageRank (ms, Fewer Is Better)
  Prefer Freq:  1584.1  (SE +/- 15.65, N = 3; Min 1553.25 / Max 1604.13; MIN: 1414.69 / MAX: 1754.86)
  Prefer Cache: 1562.2  (SE +/- 2.07, N = 3; Min 1559.13 / Max 1566.15; MIN: 1439.9 / MAX: 1667.7)
  Auto:         1566.9  (SE +/- 15.84, N = 3; Min 1547.01 / Max 1598.2; MIN: 1420.34 / MAX: 1721.97)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
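The harness name u8s8f32 encodes the data types involved: uint8 activations multiplied by int8 weights, accumulated in integers, and emitted as float32. A toy sketch of that convention as I read it from the naming (purely illustrative, not oneDNN's actual optimized kernel):

```python
# Toy u8s8f32 matmul: uint8 activations x int8 weights -> float32 output,
# with integer accumulation and a float dequantization scale. This mirrors
# the type convention in the harness name, not oneDNN's real kernels.

def u8s8f32_matmul(a_u8, b_s8, scale):
    m, k = len(a_u8), len(a_u8[0])
    n = len(b_s8[0])
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = 0  # int32-style accumulator
            for p in range(k):
                acc += a_u8[i][p] * b_s8[p][j]
            out[i][j] = acc * scale  # dequantize to float32
    return out

a = [[255, 1], [0, 128]]    # uint8 activations
b = [[-128, 127], [1, -1]]  # int8 weights
print(u8s8f32_matmul(a, b, 0.01))
```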

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Prefer Freq:  0.148585  (SE +/- 0.001025, N = 13; Min 0.14 / Max 0.16; MIN: 0.13)
  Prefer Cache: 0.170031  (SE +/- 0.002974, N = 15; Min 0.15 / Max 0.2; MIN: 0.14)
  Auto:         0.148820  (SE +/- 0.002416, N = 12; Min 0.14 / Max 0.17; MIN: 0.13)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi carrying a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed of the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, More Is Better)
  Prefer Freq:  4152.58  (SE +/- 13.77, N = 3; Min 4132.54 / Max 4178.96)
  Prefer Cache: 4211.63  (SE +/- 32.16, N = 10; Min 4143.37 / Max 4416.2)
  Auto:         4161.55  (SE +/- 12.63, N = 3; Min 4141.86 / Max 4185.1)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, like Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better)
  Prefer Freq:  20.90  (SE +/- 0.14, N = 3; Min 20.64 / Max 21.11)
  Prefer Cache: 20.46  (SE +/- 0.24, N = 3; Min 19.99 / Max 20.7)
  Auto:         20.63  (SE +/- 0.08, N = 3; Min 20.5 / Max 20.77)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  Prefer Freq:  52.25  (SE +/- 0.06, N = 3; Min 52.14 / Max 52.34)
  Prefer Cache: 52.37  (SE +/- 0.13, N = 3; Min 52.15 / Max 52.59)
  Auto:         52.54  (SE +/- 0.12, N = 3; Min 52.34 / Max 52.76)

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better)
  Prefer Freq:  19658.1  (SE +/- 18.99, N = 3; Min 19629.4 / Max 19694)
  Prefer Cache: 19605.4  (SE +/- 7.50, N = 3; Min 19590.6 / Max 19614.8)
  Auto:         19626.1  (SE +/- 24.03, N = 3; Min 19589 / Max 19671.1)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
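The measurement itself is simple: compress and decompress a buffer, then divide uncompressed bytes by elapsed time. A minimal sketch with zlib standing in for LZ4 (lz4 bindings are not in the Python standard library) and a synthetic buffer standing in for the Ubuntu ISO:

```python
import time
import zlib

# Synthetic stand-in for the sample file; zlib stands in for LZ4 so the
# sketch runs anywhere. Throughput = uncompressed bytes / elapsed seconds.
data = b"the quick brown fox jumps over the lazy dog " * 50_000

start = time.perf_counter()
compressed = zlib.compress(data, level=3)  # loosely analogous to level 3 here
t_comp = time.perf_counter() - start

start = time.perf_counter()
restored = zlib.decompress(compressed)
t_decomp = time.perf_counter() - start

assert restored == data  # lossless round trip
mb = len(data) / 1e6
print(f"compress:   {mb / t_comp:,.0f} MB/s")
print(f"decompress: {mb / t_decomp:,.0f} MB/s")
```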

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
  Prefer Freq:  18319.8  (SE +/- 30.05, N = 3; Min 18270.2 / Max 18374)
  Prefer Cache: 18752.7  (SE +/- 28.63, N = 3; Min 18696.4 / Max 18790)
  Auto:         18483.0  (SE +/- 78.84, N = 4; Min 18347 / Max 18710.3)
  1. (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
  Prefer Freq:  73.61  (SE +/- 0.68, N = 3; Min 72.25 / Max 74.34)
  Prefer Cache: 75.74  (SE +/- 0.15, N = 3; Min 75.56 / Max 76.04)
  Auto:         78.95  (SE +/- 0.95, N = 4; Min 77.54 / Max 81.66)
  1. (CC) gcc options: -O3

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, Fewer Is Better)
  Prefer Freq:  129.42
  Prefer Cache: 123.66
  Auto:         125.82
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, Fewer Is Better)
  Prefer Freq:  22.93
  Prefer Cache: 23.98
  Auto:         23.20
  1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Memory Copying (Bogo Ops/s, More Is Better)
  Prefer Freq:  7077.09  (SE +/- 19.10, N = 3; Min 7055.4 / Max 7115.17)
  Prefer Cache: 7097.39  (SE +/- 23.08, N = 3; Min 7052.27 / Max 7128.37)
  Auto:         7172.31  (SE +/- 59.45, N = 9; Min 7069.74 / Max 7642.63)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, Fewer Is Better)
  Prefer Freq:  49.37  (SE +/- 0.10, N = 3; Min 49.2 / Max 49.56)
  Prefer Cache: 49.45  (SE +/- 0.05, N = 3; Min 49.34 / Max 49.5)
  Auto:         49.38  (SE +/- 0.02, N = 3; Min 49.35 / Max 49.42)

AOM AV1


AOM AV1 3.6 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  0.41  (SE +/- 0.00, N = 3; Min 0.41 / Max 0.41)
  Prefer Cache: 0.41  (SE +/- 0.00, N = 3; Min 0.41 / Max 0.41)
  Auto:         0.41  (SE +/- 0.00, N = 3; Min 0.41 / Max 0.41)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 1024 (Render Ratio, More Is Better)
  Prefer Freq:  3.654559  (SE +/- 0.021337, N = 3; Min 3.61 / Max 3.68)
  Prefer Cache: 3.637461  (SE +/- 0.006243, N = 3; Min 3.63 / Max 3.65)
  Auto:         3.659483  (SE +/- 0.014082, N = 3; Min 3.63 / Max 3.67)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds, Fewer Is Better)
  Prefer Freq:  46.39  (SE +/- 0.33, N = 3; Min 45.94 / Max 47.04)
  Prefer Cache: 47.99  (SE +/- 0.43, N = 3; Min 47.14 / Max 48.56)
  Auto:         45.23  (SE +/- 0.45, N = 3; Min 44.35 / Max 45.85)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, Fewer Is Better)
  Prefer Freq:  46.43  (SE +/- 0.30, N = 3; Min 46.12 / Max 47.02)
  Prefer Cache: 46.19  (SE +/- 0.32, N = 3; Min 45.74 / Max 46.81)
  Auto:         46.57  (SE +/- 0.31, N = 3; Min 46.26 / Max 47.19)

Renaissance


Renaissance 0.14 - Test: Savina Reactors.IO (ms, Fewer Is Better)
  Prefer Freq:  3439.3 (SE +/- 17.17, N = 3; Min: 3421.31 / Max: 3473.62; per-run MIN: 3421.31 / MAX: 5134.14)
  Prefer Cache: 3489.0 (SE +/- 47.73, N = 3; Min: 3424.57 / Max: 3582.21; per-run MIN: 3424.57 / MAX: 5127.41)
  Auto:         3454.1 (SE +/- 31.81, N = 3; Min: 3399.67 / Max: 3509.85; per-run MIN: 3399.67 / MAX: 5374.55)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
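Stress-NG reports throughput in bogus operations per second (Bogo Ops/s): the number of stressor iterations completed divided by the wall-clock run time. A minimal sketch of that arithmetic (the operation count and 30-second duration are illustrative, chosen to land near the Futex figures below):

```python
def bogo_ops_per_sec(bogo_ops, seconds):
    """Bogo Ops/s: stressor iterations completed per wall-clock second."""
    return bogo_ops / seconds

# A hypothetical 30-second futex stressor run:
print(round(bogo_ops_per_sec(124_030_418, 30.0), 2))  # -> 4134347.27
```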

Stress-NG 0.14.06 - Test: Futex (Bogo Ops/s, More Is Better)
  Prefer Freq:  4134347.27 (SE +/- 37858.78, N = 7; Min: 3957674.19 / Max: 4267266.62)
  Prefer Cache: 4120615.70 (SE +/- 54535.09, N = 3; Min: 4035049.55 / Max: 4221973.96)
  Auto:         4087777.65 (SE +/- 41048.77, N = 3; Min: 4005700.16 / Max: 4130387.92)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds, Fewer Is Better)
  Prefer Freq:  11.41 (SE +/- 0.14, N = 15; Min: 10.53 / Max: 12.34)
  Prefer Cache: 11.95 (SE +/- 0.09, N = 4; Min: 11.71 / Max: 12.14)
  Auto:         11.50 (SE +/- 0.12, N = 15; Min: 10.6 / Max: 12.11)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Semaphores (Bogo Ops/s, More Is Better)
  Prefer Freq:  3476539.49 (SE +/- 783.77, N = 3; Min: 3475025.32 / Max: 3477647.78)
  Prefer Cache: 3476176.55 (SE +/- 214.95, N = 3; Min: 3475952.16 / Max: 3476606.31)
  Auto:         3423759.33 (SE +/- 32095.22, N = 7; Min: 3287977.32 / Max: 3481227.12)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  71.78 (SE +/- 0.71, N = 15; Min: 63.23 / Max: 73.18)
  Prefer Cache: 71.79 (SE +/- 0.72, N = 15; Min: 62.96 / Max: 73.15)
  Auto:         71.39 (SE +/- 0.68, N = 15; Min: 63.5 / Max: 72.72)
  1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Wavelet Blur (Seconds, Fewer Is Better)
  Prefer Freq:  39.22 (SE +/- 0.42, N = 3; Min: 38.4 / Max: 39.78)
  Prefer Cache: 39.51 (SE +/- 0.34, N = 3; Min: 39.09 / Max: 40.2)
  Auto:         39.34 (SE +/- 0.39, N = 3; Min: 38.77 / Max: 40.08)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
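NAMD's metric here is days/ns: the number of wall-clock days needed to simulate one nanosecond, so lower is better. Inverting it gives the more familiar ns/day figure; a quick sketch using the Prefer Freq value below:

```python
def days_per_ns_to_ns_per_day(days_per_ns):
    """Invert NAMD's days/ns metric into the more familiar ns/day."""
    return 1.0 / days_per_ns

# The ~0.814 days/ns results below correspond to roughly 1.23 ns/day:
print(round(days_per_ns_to_ns_per_day(0.81379), 2))  # -> 1.23
```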

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
  Prefer Freq:  0.81379 (SE +/- 0.00066, N = 3; Min: 0.81 / Max: 0.81)
  Prefer Cache: 0.81594 (SE +/- 0.00028, N = 3; Min: 0.82 / Max: 0.82)
  Auto:         0.81721 (SE +/- 0.00047, N = 3; Min: 0.82 / Max: 0.82)

Radiance Benchmark

This is a benchmark of NREL Radiance, an open-source synthetic imaging system developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.

Radiance Benchmark 5.0 - Test: SMP Parallel (Seconds, Fewer Is Better)
  Prefer Freq:  112.34
  Prefer Cache: 114.03
  Auto:         112.05

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  89.87 (SE +/- 0.70, N = 15; Min: 86.02 / Max: 93.81)
  Prefer Cache: 89.72 (SE +/- 0.85, N = 15; Min: 85.24 / Max: 93.66)
  Auto:         89.07 (SE +/- 0.86, N = 15; Min: 84.52 / Max: 93.91)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better)
  Prefer Freq:  34.86 (SE +/- 0.10, N = 3; Min: 34.66 / Max: 34.99)
  Prefer Cache: 34.94 (SE +/- 0.08, N = 3; Min: 34.79 / Max: 35.03)
  Auto:         34.83 (SE +/- 0.11, N = 3; Min: 34.69 / Max: 35.05)
  1. RawTherapee, version 5.9, command line.

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 test program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
  Prefer Freq:  36.09 (SE +/- 0.09, N = 3; Min: 35.92 / Max: 36.2)
  Prefer Cache: 34.52 (SE +/- 0.08, N = 3; Min: 34.37 / Max: 34.63)
  Auto:         34.84 (SE +/- 0.20, N = 3; Min: 34.52 / Max: 35.22)
  1. (CC) gcc options: -O2 -lz

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Speedometer - Browser: Google Chrome (Runs Per Minute, More Is Better)
  Prefer Freq:  369 (SE +/- 4.91, N = 3; Min: 361 / Max: 378)
  Prefer Cache: 367 (SE +/- 2.08, N = 3; Min: 363 / Max: 370)
  Auto:         368 (SE +/- 2.65, N = 3; Min: 364 / Max: 373)
  1. chrome 110.0.5481.96

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  21.99 (SE +/- 0.19, N = 3; Min: 21.75 / Max: 22.37)
  Prefer Cache: 22.20 (SE +/- 0.24, N = 5; Min: 21.68 / Max: 22.77)
  Auto:         22.68 (SE +/- 0.27, N = 3; Min: 22.14 / Max: 22.97)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Rotate 90 Degrees (Seconds, Fewer Is Better)
  Prefer Freq:  34.97 (SE +/- 0.15, N = 3; Min: 34.78 / Max: 35.27)
  Prefer Cache: 34.24 (SE +/- 0.13, N = 3; Min: 34 / Max: 34.44)
  Auto:         34.27 (SE +/- 0.22, N = 3; Min: 33.91 / Max: 34.66)

GEGL - Operation: Color Enhance (Seconds, Fewer Is Better)
  Prefer Freq:  33.60 (SE +/- 0.27, N = 3; Min: 33.2 / Max: 34.12)
  Prefer Cache: 34.25 (SE +/- 0.20, N = 3; Min: 34.05 / Max: 34.64)
  Auto:         33.42 (SE +/- 0.10, N = 3; Min: 33.27 / Max: 33.61)

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.
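A DGEMM's floating-point work is conventionally counted as 2·m·n·k (one multiply and one add per inner-product term); dividing by the run time yields GFLOP/s. A sketch of that accounting, with matrix sizes and a run time chosen purely for illustration:

```python
def dgemm_gflops(m, n, k, seconds):
    """GFLOP/s for an m x n x k DGEMM, using the conventional
    2*m*n*k flop count (a multiply and an add per inner-product term)."""
    return (2.0 * m * n * k) / seconds / 1e9

# A hypothetical 4096^3 DGEMM finishing in 12.2 s:
print(round(dgemm_gflops(4096, 4096, 4096, 12.2), 2))  # -> 11.27
```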

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, More Is Better)
  Prefer Freq:  11.28 (SE +/- 0.09, N = 3; Min: 11.11 / Max: 11.4)
  Prefer Cache: 10.64 (SE +/- 0.13, N = 4; Min: 10.27 / Max: 10.84)
  Auto:         10.72 (SE +/- 0.10, N = 7; Min: 10.4 / Max: 11.2)
  1. (CC) gcc options: -O3 -march=native -fopenmp

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: sedovbig (Hydro Cycle Time - Seconds, Fewer Is Better)
  Prefer Freq:  33.05 (SE +/- 0.12, N = 3; Min: 32.81 / Max: 33.21)
  Prefer Cache: 32.39 (SE +/- 0.18, N = 3; Min: 32.14 / Max: 32.74)
  Auto:         33.12 (SE +/- 0.21, N = 3; Min: 32.79 / Max: 33.52)
  1. (CXX) g++ options: -fopenmp -lmpi_cxx -lmpi

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 0 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  20.39 (SE +/- 0.28, N = 3; Min: 20.03 / Max: 20.95)
  Prefer Cache: 20.35 (SE +/- 0.26, N = 3; Min: 19.97 / Max: 20.85)
  Auto:         20.16 (SE +/- 0.25, N = 4; Min: 19.7 / Max: 20.86)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better)
  Prefer Freq:  13.13 (SE +/- 0.15, N = 15; Min: 12.55 / Max: 14.13)
  Prefer Cache: 12.77 (SE +/- 0.07, N = 4; Min: 12.63 / Max: 12.96)
  Auto:         12.57 (SE +/- 0.02, N = 4; Min: 12.55 / Max: 12.62)
  1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.
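The render ratio here is presumably offline-render throughput relative to realtime: the duration of audio produced divided by the wall-clock time taken to render it, so values above 1 mean faster-than-realtime rendering (my reading of the metric, not confirmed by the test page). A sketch of that calculation:

```python
def render_ratio(audio_seconds, render_seconds):
    """Seconds of audio produced per second of wall-clock rendering time
    (> 1 means faster than realtime). Interpretation assumed, not confirmed."""
    return audio_seconds / render_seconds

# e.g. rendering a 60 s project in 16.4 s of wall time:
print(round(render_ratio(60.0, 16.4), 2))  # -> 3.66
```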

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 512 (Render Ratio, More Is Better)
  Prefer Freq:  5.181607 (SE +/- 0.035122, N = 3; Min: 5.12 / Max: 5.24)
  Prefer Cache: 5.193528 (SE +/- 0.017595, N = 3; Min: 5.16 / Max: 5.22)
  Auto:         5.255289 (SE +/- 0.033936, N = 3; Min: 5.19 / Max: 5.31)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM imageConvolute - Browser: Google Chrome (ms, Fewer Is Better)
  Prefer Freq:  18.64 (SE +/- 0.37, N = 15; Min: 16.89 / Max: 20.16)
  Prefer Cache: 19.33 (SE +/- 0.30, N = 15; Min: 16.99 / Max: 20.14)
  Auto:         18.84 (SE +/- 0.38, N = 15; Min: 16.88 / Max: 20.32)
  1. chrome 110.0.5481.96

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  24.09 (SE +/- 0.05, N = 3; Min: 23.99 / Max: 24.15)
  Prefer Cache: 24.18 (SE +/- 0.04, N = 3; Min: 24.13 / Max: 24.25)
  Auto:         24.18 (SE +/- 0.11, N = 3; Min: 24.04 / Max: 24.4)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  5.298 (SE +/- 0.016, N = 3; Min: 5.27 / Max: 5.32)
  Prefer Cache: 5.311 (SE +/- 0.018, N = 3; Min: 5.29 / Max: 5.35)
  Auto:         5.296 (SE +/- 0.009, N = 3; Min: 5.28 / Max: 5.31)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  109.76 (SE +/- 0.77, N = 15; Min: 101.79 / Max: 111.5)
  Prefer Cache: 109.57 (SE +/- 0.75, N = 15; Min: 101.96 / Max: 111.08)
  Auto:         109.37 (SE +/- 0.76, N = 15; Min: 101.56 / Max: 111.28)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
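srsRAN is one of the few tests in this file with a sizable spread between the scheduling modes: Prefer Cache trails Prefer Freq by roughly 12% on the UE throughput result that follows. A small helper to express such gaps as a signed percentage change (the two values come from the results below):

```python
def percent_change(baseline, value):
    """Signed percentage change of value relative to baseline."""
    return (value - baseline) / baseline * 100.0

# Prefer Cache vs. Prefer Freq on the 5G PHY_DL_NR UE Mb/s result:
print(round(percent_change(143.9, 126.3), 1))  # -> -12.2
```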

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, More Is Better)
  Prefer Freq:  143.9 (SE +/- 1.16, N = 5; Min: 141.5 / Max: 147)
  Prefer Cache: 126.3 (SE +/- 0.93, N = 15; Min: 113.9 / Max: 130)
  Auto:         127.7 (SE +/- 0.19, N = 5; Min: 127.3 / Max: 128.2)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, More Is Better)
  Prefer Freq:  211.5 (SE +/- 2.19, N = 5; Min: 207.5 / Max: 217.8)
  Prefer Cache: 191.6 (SE +/- 1.36, N = 15; Min: 187.6 / Max: 209.3)
  Auto:         191.2 (SE +/- 0.39, N = 5; Min: 189.9 / Max: 192.2)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile runs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, More Is Better)
  Prefer Freq:  1.7012 (SE +/- 0.0015, N = 3; Min: 1.7 / Max: 1.7)
  Prefer Cache: 1.6976 (SE +/- 0.0022, N = 3; Min: 1.69 / Max: 1.7)
  Auto:         1.6950 (SE +/- 0.0015, N = 3; Min: 1.69 / Max: 1.7)
  1. (CXX) g++ options: -O3 -flto -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Prefer Freq:  5.42688 (SE +/- 0.07946, N = 15; Min: 5.11 / Max: 5.83; MIN: 5.02)
  Prefer Cache: 5.15456 (SE +/- 0.04406, N = 15; Min: 4.97 / Max: 5.7; MIN: 4.87)
  Auto:         5.14172 (SE +/- 0.03961, N = 15; Min: 4.9 / Max: 5.35; MIN: 4.8)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 1024 (Render Ratio, More Is Better)
  Prefer Freq:  5.506467 (SE +/- 0.026611, N = 3; Min: 5.45 / Max: 5.54)
  Prefer Cache: 5.549972 (SE +/- 0.052075, N = 3; Min: 5.48 / Max: 5.65)
  Auto:         5.560909 (SE +/- 0.032795, N = 3; Min: 5.52 / Max: 5.63)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  110.91 (SE +/- 0.99, N = 15; Min: 102.54 / Max: 114.22)
  Prefer Cache: 108.48 (SE +/- 1.26, N = 15; Min: 101.71 / Max: 115.27)
  Auto:         105.96 (SE +/- 1.44, N = 15; Min: 100.04 / Max: 115.18)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed of the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, More Is Better)
  Prefer Freq:  426480 (SE +/- 1105.18, N = 3; Min: 425340 / Max: 428690)
  Prefer Cache: 425550 (SE +/- 260.58, N = 3; Min: 425260 / Max: 426070)
  Auto:         425237 (SE +/- 126.80, N = 3; Min: 425100 / Max: 425490)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, More Is Better)
  Prefer Freq:  641.11 (SE +/- 0.23, N = 3; Min: 640.65 / Max: 641.37)
  Prefer Cache: 640.64 (SE +/- 0.56, N = 3; Min: 639.54 / Max: 641.34)
  Auto:         639.97 (SE +/- 0.42, N = 3; Min: 639.24 / Max: 640.69)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: MEMFD (Bogo Ops/s, More Is Better)
  Prefer Freq:  1276.15 (SE +/- 3.18, N = 3; Min: 1270.32 / Max: 1281.26)
  Prefer Cache: 1266.52 (SE +/- 2.63, N = 3; Min: 1261.88 / Max: 1270.99)
  Auto:         1281.14 (SE +/- 2.10, N = 3; Min: 1277.19 / Max: 1284.34)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - Test: Glibc C String Functions (Bogo Ops/s, More Is Better)
  Prefer Freq:  4613558.35 (SE +/- 46384.47, N = 3; Min: 4562787.85 / Max: 4706184.4)
  Prefer Cache: 4616386.56 (SE +/- 55352.76, N = 3; Min: 4556657.78 / Max: 4726973.55)
  Auto:         4554878.75 (SE +/- 4595.10, N = 3; Min: 4547240.33 / Max: 4563123.54)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - Test: Matrix Math (Bogo Ops/s, More Is Better)
  Prefer Freq:  123819.50 (SE +/- 688.41, N = 3; Min: 123045.2 / Max: 125192.59)
  Prefer Cache: 122869.22 (SE +/- 79.62, N = 3; Min: 122719.33 / Max: 122990.74)
  Auto:         122833.10 (SE +/- 105.15, N = 3; Min: 122716.48 / Max: 123042.97)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - Test: Atomic (Bogo Ops/s, More Is Better)
  Prefer Freq:  212527.91 (SE +/- 147.40, N = 3; Min: 212243.57 / Max: 212737.49)
  Prefer Cache: 212454.76 (SE +/- 84.51, N = 3; Min: 212320.94 / Max: 212611.07)
  Auto:         212936.16 (SE +/- 178.50, N = 3; Min: 212694.94 / Max: 213284.7)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed of the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, More Is Better)
  Prefer Freq:  19670 (SE +/- 190.35, N = 3; Min: 19460 / Max: 20050)
  Prefer Cache: 19517 (SE +/- 83.53, N = 3; Min: 19350 / Max: 19610)
  Auto:         19773 (SE +/- 193.42, N = 3; Min: 19410 / Max: 20070)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Mutex (Bogo Ops/s, More Is Better)
  Prefer Freq:  11172285.00 (SE +/- 29801.71, N = 3; Min: 11114475.24 / Max: 11213757.65)
  Prefer Cache: 11230656.99 (SE +/- 82230.41, N = 3; Min: 11072978.22 / Max: 11349975.69)
  Auto:         11172291.29 (SE +/- 41091.60, N = 3; Min: 11092121.74 / Max: 11228034.62)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - Test: Malloc (Bogo Ops/s, More Is Better)
  Prefer Freq:  36014622.73 (SE +/- 10732.17, N = 3; Min: 35993327.94 / Max: 36027601.97)
  Prefer Cache: 36157266.33 (SE +/- 50781.26, N = 3; Min: 36068008.71 / Max: 36243859.62)
  Auto:         35942000.76 (SE +/- 130418.56, N = 3; Min: 35770048.52 / Max: 36197833.65)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - Test: SENDFILE (Bogo Ops/s, More Is Better)
  Prefer Freq:  489525.42 (SE +/- 3764.61, N = 3; Min: 485507.56 / Max: 497048.82)
  Prefer Cache: 485658.97 (SE +/- 398.35, N = 3; Min: 484951.47 / Max: 486329.95)
  Auto:         484816.52 (SE +/- 821.42, N = 3; Min: 483174.46 / Max: 485681.65)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - Test: Forking (Bogo Ops/s, More Is Better)
Prefer Freq:  72995.04 (SE +/- 268.01, N = 3; Min: 72571.37 / Max: 73491.25)
Prefer Cache: 73184.04 (SE +/- 174.31, N = 3; Min: 72838.85 / Max: 73398.91)
Auto:         73511.84 (SE +/- 262.93, N = 3; Min: 73179.23 / Max: 74030.88)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - Test: Crypto (Bogo Ops/s, More Is Better)
Prefer Freq:  43758.76 (SE +/- 33.31, N = 3; Min: 43705.09 / Max: 43819.79)
Prefer Cache: 43716.13 (SE +/- 82.33, N = 3; Min: 43551.7 / Max: 43805.96)
Auto:         43690.40 (SE +/- 19.05, N = 3; Min: 43652.32 / Max: 43710.45)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - Test: Vector Math (Bogo Ops/s, More Is Better)
Prefer Freq:  138137.06 (SE +/- 73.54, N = 3; Min: 138007.51 / Max: 138262.13)
Prefer Cache: 137735.11 (SE +/- 118.20, N = 3; Min: 137538.82 / Max: 137947.33)
Auto:         138021.66 (SE +/- 98.70, N = 3; Min: 137855.12 / Max: 138196.72)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - Test: MMAP (Bogo Ops/s, More Is Better)
Prefer Freq:  381.37 (SE +/- 0.31, N = 3; Min: 381.01 / Max: 381.98)
Prefer Cache: 379.53 (SE +/- 0.19, N = 3; Min: 379.34 / Max: 379.9)
Auto:         381.21 (SE +/- 0.60, N = 3; Min: 380.25 / Max: 382.32)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - Test: CPU Stress (Bogo Ops/s, More Is Better)
Prefer Freq:  56651.53 (SE +/- 352.22, N = 3; Min: 56156.62 / Max: 57333.11)
Prefer Cache: 58865.01 (SE +/- 137.26, N = 3; Min: 58611.67 / Max: 59083.27)
Auto:         58780.75 (SE +/- 307.96, N = 3; Min: 58415.66 / Max: 59392.88)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, More Is Better)
Prefer Freq:  373.08 (SE +/- 0.94, N = 3; Min: 371.2 / Max: 374.13)
Prefer Cache: 371.84 (SE +/- 1.15, N = 3; Min: 369.57 / Max: 373.23)
Auto:         368.05 (SE +/- 0.71, N = 3; Min: 367.3 / Max: 369.47)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless (MP/s, More Is Better)
Prefer Freq:  2.23 (SE +/- 0.03, N = 15; Min: 2.03 / Max: 2.33)
Prefer Cache: 2.27 (SE +/- 0.02, N = 5; Min: 2.23 / Max: 2.32)
Auto:         2.29 (SE +/- 0.01, N = 5; Min: 2.24 / Max: 2.32)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring a processor's potential cryptocurrency mining performance across a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, More Is Better)
Prefer Freq:  136613 (SE +/- 78.60, N = 3; Min: 136460 / Max: 136720)
Prefer Cache: 136783 (SE +/- 290.08, N = 3; Min: 136440 / Max: 137360)
Auto:         136513 (SE +/- 27.28, N = 3; Min: 136460 / Max: 136550)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, More Is Better)
Prefer Freq:  1102.64 (SE +/- 0.99, N = 3; Min: 1101 / Max: 1104.42)
Prefer Cache: 1105.25 (SE +/- 3.26, N = 3; Min: 1101.6 / Max: 1111.75)
Auto:         1101.71 (SE +/- 2.10, N = 3; Min: 1097.54 / Max: 1104.23)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, More Is Better)
Prefer Freq:  262887 (SE +/- 231.40, N = 3; Min: 262430 / Max: 263180)
Prefer Cache: 262420 (SE +/- 25.17, N = 3; Min: 262390 / Max: 262470)
Auto:         262787 (SE +/- 104.14, N = 3; Min: 262600 / Max: 262960)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, More Is Better)
Prefer Freq:  1109.97 (SE +/- 1.01, N = 3; Min: 1108.03 / Max: 1111.45)
Prefer Cache: 1116.02 (SE +/- 5.11, N = 3; Min: 1106.87 / Max: 1124.55)
Auto:         1110.55 (SE +/- 3.00, N = 3; Min: 1105.97 / Max: 1116.2)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
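The throughput figures below are simply bytes processed divided by elapsed time. LZ4 itself is not in the Python standard library, so this sketch uses zlib as a stand-in codec; the measurement pattern is the same, but the absolute numbers are not comparable to the LZ4 results:

```python
import time
import zlib  # stand-in codec: the benchmark itself uses liblz4, not zlib

def throughput_mb_s(data: bytes):
    """Round-trip a payload once and report (compress, decompress) MB/s."""
    t0 = time.perf_counter()
    packed = zlib.compress(data, 1)   # fastest level, analogous to LZ4 level 1
    t1 = time.perf_counter()
    unpacked = zlib.decompress(packed)
    t2 = time.perf_counter()
    assert unpacked == data           # verify the round trip before trusting timings
    mb = len(data) / 1e6
    return mb / (t1 - t0), mb / (t2 - t1)

payload = b"phoronix-test-suite " * 500_000   # ~10 MB of compressible data
c_speed, d_speed = throughput_mb_s(payload)
```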

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s, More Is Better)
Prefer Freq:  19602.6 (SE +/- 23.55, N = 3; Min: 19560.4 / Max: 19641.8)
Prefer Cache: 19460.1 (SE +/- 89.48, N = 3; Min: 19285.7 / Max: 19582)
Auto:         19572.9 (SE +/- 30.12, N = 3; Min: 19524.4 / Max: 19628.1)
1. (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, More Is Better)
Prefer Freq:  17314.40 (SE +/- 99.20, N = 3; Min: 17126.55 / Max: 17463.63)
Prefer Cache: 17319.67 (SE +/- 79.02, N = 3; Min: 17168.9 / Max: 17436.11)
Auto:         17290.98 (SE +/- 78.27, N = 3; Min: 17190.25 / Max: 17445.12)
1. (CC) gcc options: -O3

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring a processor's potential cryptocurrency mining performance across a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, More Is Better)
Prefer Freq:  291953 (SE +/- 957.29, N = 3; Min: 290040 / Max: 292970)
Prefer Cache: 291933 (SE +/- 356.85, N = 3; Min: 291220 / Max: 292310)
Auto:         291547 (SE +/- 551.67, N = 3; Min: 290490 / Max: 292350)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, More Is Better)
Prefer Freq:  20.74 (SE +/- 0.00, N = 3; Min: 20.74 / Max: 20.75)
Prefer Cache: 20.69 (SE +/- 0.00, N = 3; Min: 20.68 / Max: 20.69)
Auto:         20.71 (SE +/- 0.01, N = 3; Min: 20.68 / Max: 20.72)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, More Is Better)
Prefer Freq:  215.9 (SE +/- 1.29, N = 5; Min: 212.8 / Max: 219.4)
Prefer Cache: 220.7 (SE +/- 3.75, N = 15; Min: 212.1 / Max: 255.7)
Auto:         254.8 (SE +/- 0.89, N = 5; Min: 252.6 / Max: 256.9)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, More Is Better)
Prefer Freq:  563.6 (SE +/- 4.08, N = 5; Min: 552.8 / Max: 572.8)
Prefer Cache: 571.6 (SE +/- 6.69, N = 15; Min: 544.6 / Max: 632)
Auto:         633.1 (SE +/- 2.33, N = 5; Min: 627.3 / Max: 638)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, More Is Better)
Prefer Freq:  0.84 (SE +/- 0.00, N = 3; Min: 0.84 / Max: 0.84)
Prefer Cache: 0.84 (SE +/- 0.00, N = 3; Min: 0.84 / Max: 0.84)
Auto:         0.83 (SE +/- 0.00, N = 3; Min: 0.83 / Max: 0.84)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
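The benchmark itself is a C++/OpenMP solver; as a sketch of what it computes, here is a sequential bitmask backtracking counter for the same N-queens problem (the function name and structure are illustrative, not taken from m-queens):

```python
def count_queens(n: int) -> int:
    """Count N-queens placements via bitmask backtracking: each recursion
    level places one queen in the next row, tracking attacked columns and
    both diagonal directions as bit sets."""
    full = (1 << n) - 1

    def walk(cols: int, ldiag: int, rdiag: int) -> int:
        if cols == full:                     # a queen in every column: one solution
            return 1
        total = 0
        free = full & ~(cols | ldiag | rdiag)
        while free:
            bit = free & -free               # lowest available square in this row
            free -= bit
            total += walk(cols | bit,
                          ((ldiag | bit) << 1) & full,
                          (rdiag | bit) >> 1)
        return total

    return walk(0, 0, 0)

# count_queens(8) -> 92, the classic 8-queens solution count
```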

m-queens 1.2 - Time To Solve (Seconds, Fewer Is Better)
Prefer Freq:  28.48 (SE +/- 0.03, N = 3; Min: 28.44 / Max: 28.52)
Prefer Cache: 28.55 (SE +/- 0.03, N = 3; Min: 28.5 / Max: 28.6)
Auto:         28.55 (SE +/- 0.02, N = 3; Min: 28.52 / Max: 28.59)
1. (CXX) g++ options: -fopenmp -O2 -march=native

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better)
Prefer Freq:  21.19 (SE +/- 0.00, N = 3; Min: 21.19 / Max: 21.2)
Prefer Cache: 21.16 (SE +/- 0.01, N = 3; Min: 21.14 / Max: 21.17)
Auto:         21.15 (SE +/- 0.02, N = 3; Min: 21.11 / Max: 21.18)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, Fewer Is Better)
Prefer Freq:  2080.4 (SE +/- 11.77, N = 3; Min: 2056.93 / Max: 2093.92; per-run MIN: 1915.29 / MAX: 2096.84)
Prefer Cache: 2096.1 (SE +/- 10.69, N = 3; Min: 2075.08 / Max: 2109.87; per-run MIN: 1946.09 / MAX: 2151)
Auto:         2083.5 (SE +/- 12.79, N = 3; Min: 2067.15 / Max: 2108.72; per-run MIN: 1911.93 / MAX: 2147.25)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq:  22.73 (SE +/- 0.02, N = 3; Min: 22.69 / Max: 22.77)
Prefer Cache: 22.64 (SE +/- 0.07, N = 3; Min: 22.51 / Max: 22.73)
Auto:         22.66 (SE +/- 0.02, N = 3; Min: 22.61 / Max: 22.68)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Prefer Freq:  102.32 (SE +/- 0.90, N = 15; Min: 98.92 / Max: 108.06)
Prefer Cache: 105.98 (SE +/- 1.09, N = 15; Min: 96.96 / Max: 110.43)
Auto:         108.41 (SE +/- 0.99, N = 7; Min: 103.43 / Max: 111.58)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
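The PyHPC results below are wall-clock seconds per kernel invocation. A minimal timing harness in that spirit, using a hypothetical pure-Python stand-in expression rather than the suite's actual NumPy equation-of-state kernel:

```python
import time

def bench_seconds(fn, repeats=3):
    """Time a kernel and report (min, avg, max) seconds over a few repeats -
    the same shape of statistics the result blocks below show."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return min(times), sum(times) / len(times), max(times)

# Hypothetical stand-in kernel: an element-wise arithmetic sweep
# (the real benchmark evaluates a seawater equation of state on NumPy arrays).
state = [x * 1e-3 for x in range(100_000)]
lo, avg, hi = bench_seconds(lambda: [t * t * 0.5 - 1.0 for t in state])
```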

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better)
Prefer Freq:  0.745 (SE +/- 0.004, N = 3; Min: 0.74 / Max: 0.75)
Prefer Cache: 0.741 (SE +/- 0.002, N = 3; Min: 0.74 / Max: 0.74)
Auto:         0.736 (SE +/- 0.002, N = 3; Min: 0.73 / Max: 0.74)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark Bayes (ms, Fewer Is Better)
Prefer Freq:  786.3 (SE +/- 0.84, N = 3; Min: 784.88 / Max: 787.77; per-run MIN: 582.5 / MAX: 787.77)
Prefer Cache: 788.2 (SE +/- 3.20, N = 3; Min: 783.95 / Max: 794.49; per-run MIN: 581.24 / MAX: 794.49)
Auto:         790.2 (SE +/- 2.28, N = 3; Min: 787.71 / Max: 794.71; per-run MIN: 583.89 / MAX: 794.71)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
Prefer Freq:  191.66 (SE +/- 0.45, N = 4; Min: 190.68 / Max: 192.64; per-run MIN: 189.14 / MAX: 202.5)
Prefer Cache: 192.25 (SE +/- 1.59, N = 9; Min: 188.9 / Max: 202.78; per-run MIN: 185.61 / MAX: 215.47)
Auto:         192.28 (SE +/- 0.08, N = 4; Min: 192.03 / Max: 192.39; per-run MIN: 190.51 / MAX: 200.26)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Prefer Freq:  0.823009 (SE +/- 0.004461, N = 5; Min: 0.81 / Max: 0.84; per-run MIN: 0.75)
Prefer Cache: 0.664627 (SE +/- 0.008538, N = 15; Min: 0.61 / Max: 0.75; per-run MIN: 0.58)
Auto:         0.665517 (SE +/- 0.001866, N = 5; Min: 0.66 / Max: 0.67; per-run MIN: 0.62)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq:  28.29 (SE +/- 0.05, N = 3; Min: 28.21 / Max: 28.37)
Prefer Cache: 28.34 (SE +/- 0.05, N = 3; Min: 28.26 / Max: 28.44)
Auto:         28.32 (SE +/- 0.01, N = 3; Min: 28.31 / Max: 28.34)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.6 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Prefer Freq:  108.60 (SE +/- 1.20, N = 15; Min: 102.97 / Max: 114.48)
Prefer Cache: 109.81 (SE +/- 1.11, N = 15; Min: 102.8 / Max: 114.23)
Auto:         106.78 (SE +/- 1.08, N = 6; Min: 102.23 / Max: 110.22)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Antialias (Seconds, Fewer Is Better)
Prefer Freq:  25.06 (SE +/- 0.10, N = 3; Min: 24.91 / Max: 25.25)
Prefer Cache: 24.60 (SE +/- 0.09, N = 3; Min: 24.49 / Max: 24.78)
Auto:         24.94 (SE +/- 0.08, N = 3; Min: 24.81 / Max: 25.08)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: OFDM_Test (Samples / Second, More Is Better)
Prefer Freq:  241166667 (SE +/- 2069084.61, N = 3; Min: 237900000 / Max: 245000000)
Prefer Cache: 204366667 (SE +/- 1311911.24, N = 3; Min: 202300000 / Max: 206800000)
Auto:         202625000 (SE +/- 2423625.04, N = 4; Min: 197400000 / Max: 206800000)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better)
Prefer Freq:  1638 (SE +/- 36.66, N = 20; Min: 1358 / Max: 1934)
Prefer Cache: 1593 (SE +/- 31.69, N = 20; Min: 1386 / Max: 1814)
Auto:         1632 (SE +/- 32.29, N = 20; Min: 1378 / Max: 1882)

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: leblancbig (Hydro Cycle Time - Seconds, Fewer Is Better)
Prefer Freq:  23.39 (SE +/- 0.11, N = 3; Min: 23.22 / Max: 23.6)
Prefer Cache: 22.62 (SE +/- 0.18, N = 3; Min: 22.32 / Max: 22.95)
Auto:         24.85 (SE +/- 0.12, N = 3; Min: 24.62 / Max: 25.02)
1. (CXX) g++ options: -fopenmp -lmpi_cxx -lmpi

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, Fewer Is Better)
Prefer Freq:  14.62 (SE +/- 0.09, N = 4; Min: 14.42 / Max: 14.8)
Prefer Cache: 14.67 (SE +/- 0.15, N = 4; Min: 14.27 / Max: 14.97)
Auto:         14.95 (SE +/- 0.14, N = 6; Min: 14.37 / Max: 15.42)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.
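The Render Ratio metric reported below is audio time rendered divided by the wall time spent rendering it, so higher is faster. The arithmetic, with made-up numbers:

```python
def render_ratio(audio_seconds: float, wall_seconds: float) -> float:
    """Render Ratio = seconds of audio produced per second of wall time;
    anything above 1.0 renders faster than real time."""
    return audio_seconds / wall_seconds

# Hypothetical example: a 60-second project rendered in 8.33 s of wall time
ratio = render_ratio(60.0, 8.33)   # ~7.2, in the range the results below show
```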

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 512 (Render Ratio, More Is Better)
Prefer Freq:  7.195516 (SE +/- 0.049732, N = 3; Min: 7.12 / Max: 7.29)
Prefer Cache: 7.208599 (SE +/- 0.018306, N = 3; Min: 7.17 / Max: 7.24)
Auto:         7.196401 (SE +/- 0.071291, N = 3; Min: 7.12 / Max: 7.34)
1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, More Is Better)
Prefer Freq:  243.1 (SE +/- 0.67, N = 3; Min: 241.8 / Max: 243.8)
Prefer Cache: 204.9 (SE +/- 0.47, N = 3; Min: 204 / Max: 205.5)
Auto:         242.3 (SE +/- 1.95, N = 4; Min: 238 / Max: 246.6)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, More Is Better)
Prefer Freq:  633.3 (SE +/- 1.65, N = 3; Min: 630 / Max: 635.3)
Prefer Cache: 566.3 (SE +/- 0.12, N = 3; Min: 566.1 / Max: 566.5)
Auto:         622.6 (SE +/- 6.79, N = 4; Min: 605.2 / Max: 638.3)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides "toktx" for converting/creating in the KTX container format for image textures. This benchmark times how long it takes to convert to KTX 2.0 format with various settings using a reference PNG sample input. Learn more via the OpenBenchmarking.org test page.

KTX-Software toktx 4.0 - Settings: UASTC 3 + Zstd Compression 19 (Seconds, Fewer Is Better)
Prefer Freq:  7.561 (SE +/- 0.080, N = 15; Min: 7.44 / Max: 8.68)
Prefer Cache: 8.714 (SE +/- 0.012, N = 5; Min: 8.69 / Max: 8.76)
Auto:         7.497 (SE +/- 0.009, N = 6; Min: 7.47 / Max: 7.52)

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 512 (Render Ratio, More Is Better)
  Prefer Freq:  7.391493 (SE +/- 0.040332, N = 3; Min 7.32 / Max 7.45)
  Prefer Cache: 7.324451 (SE +/- 0.057662, N = 3; Min 7.21 / Max 7.39)
  Auto:         7.339446 (SE +/- 0.076546, N = 3; Min 7.19 / Max 7.42)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Sequential Fill (Op/s, More Is Better)
  Prefer Freq:  1450599 (SE +/- 3968.04, N = 3; Min 1442663 / Max 1454597)
  Prefer Cache: 1441815 (SE +/- 3546.36, N = 3; Min 1436267 / Max 1448416)
  Auto:         1445499 (SE +/- 2647.98, N = 3; Min 1440223 / Max 1448532)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
  Prefer Freq:  39.76 (SE +/- 0.02, N = 4; run Min 39.72 / Max 39.8; MIN 39.42 / MAX 40.83)
  Prefer Cache: 39.78 (SE +/- 0.02, N = 4; run Min 39.74 / Max 39.82; MIN 39.43 / MAX 40.87)
  Auto:         39.74 (SE +/- 0.06, N = 4; run Min 39.62 / Max 39.87; MIN 39.35 / MAX 40.85)

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.4 - Time To Compile (Seconds, Fewer Is Better)
  Prefer Freq:  21.85 (SE +/- 0.04, N = 3; Min 21.78 / Max 21.9)
  Prefer Cache: 21.82 (SE +/- 0.08, N = 3; Min 21.71 / Max 21.97)
  Auto:         21.96 (SE +/- 0.04, N = 3; Min 21.89 / Max 22.03)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
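The "Seconds, Fewer Is Better" figures below follow the usual timed-benchmark pattern: run the kernel several times and report the mean wall-clock time. A minimal stdlib-only sketch of that pattern, with a toy pure-Python stand-in for the equation-of-state kernel (not the actual PyHPC NumPy code):

```python
import time

# A stand-in workload: simple equation-of-state-style arithmetic over a
# range of values. The real PyHPC benchmark runs a vectorized NumPy
# kernel; this toy loop only illustrates the timing harness.
def workload(n=100_000):
    rho = 0.0
    for i in range(n):
        t = i * 1e-5
        rho += 1000.0 * (1.0 - 2.0e-4 * t)  # toy linear equation of state
    return rho

times = []
for _ in range(4):  # N = 4, matching the result below
    start = time.perf_counter()
    workload()
    times.append(time.perf_counter() - start)

avg = sum(times) / len(times)
print(f"Avg: {avg:.4f} s over N = {len(times)} runs")
```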

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State (Seconds, Fewer Is Better)
  Prefer Freq:  0.122 (SE +/- 0.001, N = 4; Min 0.12 / Max 0.12)
  Prefer Cache: 0.123 (SE +/- 0.000, N = 4; Min 0.12 / Max 0.12)
  Auto:         0.121 (SE +/- 0.000, N = 4; Min 0.12 / Max 0.12)

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3-licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 1024 (Render Ratio, More Is Better)
  Prefer Freq:  7.743260 (SE +/- 0.035806, N = 3; Min 7.67 / Max 7.78)
  Prefer Cache: 7.726396 (SE +/- 0.016609, N = 3; Min 7.69 / Max 7.75)
  Auto:         7.726738 (SE +/- 0.032079, N = 3; Min 7.69 / Max 7.79)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, Fewer Is Better)
  Prefer Freq:  21.30 (SE +/- 0.09, N = 3; Min 21.14 / Max 21.45)
  Prefer Cache: 21.44 (SE +/- 0.17, N = 3; Min 21.1 / Max 21.62)
  Auto:         21.21 (SE +/- 0.08, N = 3; Min 21.11 / Max 21.38)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Prefer Freq:  0.465139 (SE +/- 0.000410, N = 3; run Min 0.46 / Max 0.47; MIN 0.45)
  Prefer Cache: 0.464426 (SE +/- 0.001471, N = 3; run Min 0.46 / Max 0.47; MIN 0.45)
  Auto:         0.460811 (SE +/- 0.000909, N = 3; run Min 0.46 / Max 0.46; MIN 0.44)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other workloads. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, Fewer Is Better)
  Prefer Freq:  388.9 (SE +/- 0.54, N = 3; run Min 387.94 / Max 389.79; MIN 352.2 / MAX 453.6)
  Prefer Cache: 391.5 (SE +/- 1.83, N = 3; run Min 387.91 / Max 393.69; MIN 353.06 / MAX 470.48)
  Auto:         391.5 (SE +/- 0.34, N = 3; run Min 391.19 / Max 392.23; MIN 365 / MAX 478.75)

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3-licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 1024 (Render Ratio, More Is Better)
  Prefer Freq:  7.996641 (SE +/- 0.007132, N = 3; Min 7.99 / Max 8.01)
  Prefer Cache: 7.977255 (SE +/- 0.002520, N = 3; Min 7.97 / Max 7.98)
  Auto:         7.926323 (SE +/- 0.034705, N = 3; Min 7.86 / Max 7.97)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Reflect (Seconds, Fewer Is Better)
  Prefer Freq:  20.37 (SE +/- 0.17, N = 3; Min 20.13 / Max 20.71)
  Prefer Cache: 20.74 (SE +/- 0.07, N = 3; Min 20.65 / Max 20.88)
  Auto:         20.58 (SE +/- 0.06, N = 3; Min 20.5 / Max 20.69)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  39.83 (SE +/- 0.18, N = 4; Min 39.35 / Max 40.11)
  Prefer Cache: 40.34 (SE +/- 0.08, N = 4; Min 40.24 / Max 40.58)
  Auto:         39.80 (SE +/- 0.20, N = 4; Min 39.37 / Max 40.34)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, Fewer Is Better)
  Prefer Freq:  15.26 (SE +/- 0.02, N = 4; Min 15.23 / Max 15.3)
  Prefer Cache: 15.35 (SE +/- 0.07, N = 4; Min 15.2 / Max 15.46)
  Auto:         15.34 (SE +/- 0.05, N = 4; Min 15.25 / Max 15.5)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Tile Glass (Seconds, Fewer Is Better)
  Prefer Freq:  20.27 (SE +/- 0.04, N = 3; Min 20.21 / Max 20.33)
  Prefer Cache: 20.38 (SE +/- 0.16, N = 3; Min 20.07 / Max 20.58)
  Auto:         20.27 (SE +/- 0.06, N = 3; Min 20.19 / Max 20.38)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
  Prefer Freq:  32.29 (SE +/- 0.09, N = 3; run Min 32.17 / Max 32.46; MIN 31.94 / MAX 32.98)
  Prefer Cache: 32.13 (SE +/- 0.03, N = 3; run Min 32.07 / Max 32.18; MIN 31.83 / MAX 32.72)
  Auto:         32.00 (SE +/- 0.08, N = 3; run Min 31.83 / Max 32.12; MIN 31.58 / MAX 32.66)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Contextual Anomaly Detector OSE (Seconds, Fewer Is Better)
  Prefer Freq:  19.96 (SE +/- 0.18, N = 3; Min 19.65 / Max 20.28)
  Prefer Cache: 20.78 (SE +/- 0.07, N = 3; Min 20.63 / Max 20.86)
  Auto:         19.90 (SE +/- 0.15, N = 3; Min 19.71 / Max 20.19)

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.
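AMG solves A x = b by combining simple relaxation sweeps with a hierarchy of coarser grids. As a hedged illustration of just the relaxation ingredient — not the AMG solver benchmarked here — below is a weighted Jacobi sweep on the 1-D Poisson matrix (2 on the diagonal, -1 off-diagonal):

```python
# Weighted Jacobi relaxation for A x = b with A = tridiag(-1, 2, -1).
# This is only the smoother that multigrid methods are built on, not
# the parallel AMG solver itself.
def jacobi_sweep(x, b, w=2.0 / 3.0):
    n = len(x)
    new = x[:]
    for i in range(n):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < n - 1 else 0.0
        # Jacobi update x_i = (b_i + x_{i-1} + x_{i+1}) / 2, damped by w
        new[i] = (1.0 - w) * x[i] + w * (b[i] + left + right) / 2.0
    return new

def residual_norm(x, b):
    total = 0.0
    for i in range(len(x)):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < len(x) - 1 else 0.0
        total += (b[i] - (2.0 * x[i] - left - right)) ** 2
    return total ** 0.5

b = [1.0] * 32
x = [0.0] * 32
r0 = residual_norm(x, b)
for _ in range(50):
    x = jacobi_sweep(x, b)
print(f"residual: {r0:.3f} -> {residual_norm(x, b):.3f}")
```

A multigrid method applies a few such sweeps per level, then restricts the remaining residual to a coarser grid, which is what makes solvers like AMG scale to large unstructured problems.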

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better)
  Prefer Freq:  435494933 (SE +/- 1311932.65, N = 3; Min 432874300 / Max 436918000)
  Prefer Cache: 438113300 (SE +/- 457867.41, N = 3; Min 437304500 / Max 438889600)
  Auto:         437586200 (SE +/- 337440.37, N = 3; Min 437088300 / Max 438229700)
  1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
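The results below stream samples through a 57-tap FIR filter. As a rough pure-Python sketch of what a FIR filter does — a dot product of the taps with a sliding delay line, nothing like the library's SIMD-optimized implementation — consider:

```python
# Minimal FIR filter: each output sample is the dot product of the
# filter taps with the most recent input samples (the delay line).
def fir_filter(taps, samples):
    hist = [0.0] * len(taps)  # delay line, newest sample first
    out = []
    for s in samples:
        hist = [s] + hist[:-1]
        out.append(sum(t * h for t, h in zip(taps, hist)))
    return out

# Tiny 2-tap moving-average filter as a worked example (the benchmark
# uses 57 taps and pushes through over a billion samples per second).
out = fir_filter([0.5, 0.5], [1.0, 2.0, 3.0, 4.0])
print(out)  # [0.5, 1.5, 2.5, 3.5]
```

The per-sample cost grows with filter length, which is why the benchmark fixes the filter length and buffer size while scaling the thread count.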

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Prefer Freq:  1488633333 (SE +/- 2628265.17, N = 3; Min 1484000000 / Max 1493100000)
  Prefer Cache: 1485666667 (SE +/- 1675642.50, N = 3; Min 1482500000 / Max 1488200000)
  Auto:         1487700000 (SE +/- 971253.49, N = 3; Min 1486400000 / Max 1489600000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Prefer Freq:  1488033333 (SE +/- 1880011.82, N = 3; Min 1484900000 / Max 1491400000)
  Prefer Cache: 1471433333 (SE +/- 6133605.07, N = 3; Min 1465200000 / Max 1483700000)
  Auto:         1485366667 (SE +/- 3012381.86, N = 3; Min 1480700000 / Max 1491000000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Prefer Freq:  757983333 (SE +/- 135441.66, N = 3; Min 757780000 / Max 758240000)
  Prefer Cache: 751733333 (SE +/- 1271434.01, N = 3; Min 749910000 / Max 754180000)
  Auto:         756130000 (SE +/- 813961.51, N = 3; Min 755070000 / Max 757730000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better)
  Prefer Freq:  1247786 (SE +/- 12155.78, N = 4; Min 1217414 / Max 1276407)
  Prefer Cache: 1154341 (SE +/- 1625.68, N = 3; Min 1151835 / Max 1157388)
  Auto:         1270496 (SE +/- 11338.88, N = 4; Min 1239861 / Max 1292683)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
  Prefer Freq:  177024 (SE +/- 194.35, N = 3; Min 176704 / Max 177375)
  Prefer Cache: 176398 (SE +/- 304.82, N = 3; Min 176076 / Max 177007)
  Auto:         176616 (SE +/- 73.05, N = 3; Min 176497 / Max 176749)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better)
  Prefer Freq:  190659 (SE +/- 439.96, N = 3; Min 190141 / Max 191534)
  Prefer Cache: 189871 (SE +/- 298.57, N = 3; Min 189292 / Max 190287)
  Auto:         189671 (SE +/- 177.38, N = 3; Min 189455 / Max 190023)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
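The result viewer's "Show Overall Geometric Mean" option condenses many per-test scores into one figure per configuration; the geometric mean is used so that no single high-magnitude test dominates the aggregate. A sketch of that calculation over just the two 7-Zip sub-tests above (the real overall mean spans every test in this file):

```python
from statistics import geometric_mean

# 7-Zip MIPS results from above: [decompression, compression].
results = {
    "Prefer Freq":  [177024, 190659],
    "Prefer Cache": [176398, 189871],
    "Auto":         [176616, 189671],
}

# One geometric mean per configuration, highest first.
overall = {cfg: geometric_mean(scores) for cfg, scores in results.items()}
for cfg, gm in sorted(overall.items(), key=lambda kv: -kv[1]):
    print(f"{cfg}: {gm:.0f} MIPS (geometric mean)")
```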

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
  Prefer Freq:  34.62 (SE +/- 0.05, N = 3; run Min 34.52 / Max 34.69; MIN 34.35 / MAX 35.31)
  Prefer Cache: 34.59 (SE +/- 0.02, N = 3; run Min 34.57 / Max 34.62; MIN 34.42 / MAX 34.96)
  Auto:         34.47 (SE +/- 0.06, N = 3; run Min 34.37 / Max 34.59; MIN 34.23 / MAX 34.9)

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, More Is Better)
  Prefer Freq:  4512.4 (SE +/- 34.67, N = 3; Min 4443.4 / Max 4552.9)
  Prefer Cache: 4577.4 (SE +/- 27.33, N = 3; Min 4525.4 / Max 4618)
  Auto:         4169.2 (SE +/- 38.40, N = 3; Min 4098.8 / Max 4231)
  1. (CXX) g++ options: -O3 -march=native -rdynamic

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Fast (MT/s, More Is Better)
  Prefer Freq:  368.10 (SE +/- 0.25, N = 5; Min 367.69 / Max 369.04)
  Prefer Cache: 368.25 (SE +/- 0.29, N = 5; Min 367.44 / Max 368.92)
  Auto:         367.89 (SE +/- 0.23, N = 5; Min 367.23 / Max 368.35)
  1. (CXX) g++ options: -O3 -flto -pthread

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, More Is Better)
  Prefer Freq:  223.1 (SE +/- 0.87, N = 4; Min 221.2 / Max 225.4)
  Prefer Cache: 222.5 (SE +/- 0.71, N = 4; Min 221.3 / Max 224.5)
  Auto:         264.9 (SE +/- 2.41, N = 5; Min 259.2 / Max 270.8)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, More Is Better)
  Prefer Freq:  613.0 (SE +/- 1.49, N = 4; Min 610.1 / Max 616.6)
  Prefer Cache: 610.4 (SE +/- 1.16, N = 4; Min 608.7 / Max 613.8)
  Auto:         675.6 (SE +/- 5.81, N = 5; Min 660 / Max 688.2)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: Hogbom Clean OpenMP (Iterations Per Second, More Is Better)
  Prefer Freq:  549.45 (SE +/- 0.00, N = 5; Min 549.45 / Max 549.45)
  Prefer Cache: 550.08 (SE +/- 2.00, N = 5; Min 543.48 / Max 555.56)
  Auto:         551.31 (SE +/- 2.26, N = 5; Min 543.48 / Max 555.56)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
  Prefer Freq:  36.24 (SE +/- 0.08, N = 3; run Min 36.09 / Max 36.37; MIN 35.81 / MAX 36.91)
  Prefer Cache: 36.02 (SE +/- 0.05, N = 3; run Min 35.95 / Max 36.11; MIN 35.7 / MAX 36.65)
  Auto:         36.03 (SE +/- 0.08, N = 3; run Min 35.87 / Max 36.12; MIN 35.6 / MAX 36.69)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
  Prefer Freq:  827.99 (SE +/- 1.23, N = 5; Min 825.23 / Max 832.23)
  Prefer Cache: 830.28 (SE +/- 0.54, N = 5; Min 829.34 / Max 831.66)
  Auto:         830.58 (SE +/- 1.52, N = 5; Min 825.07 / Max 834.17)
  1. (CC) gcc options: -pthread -lm

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship (FPS, More Is Better)
  Prefer Freq:  5.9 (SE +/- 0.03, N = 3; Min 5.9 / Max 6)
  Prefer Cache: 6.0 (SE +/- 0.06, N = 3; Min 5.9 / Max 6.1)
  Auto:         5.9 (SE +/- 0.03, N = 3; Min 5.9 / Max 6)

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides the "toktx" tool for creating/converting image textures into the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.

KTX-Software toktx 4.0 - Settings: Zstd Compression 19 (Seconds, Fewer Is Better)
  Prefer Freq:  10.19 (SE +/- 0.02, N = 5; Min 10.16 / Max 10.23)
  Prefer Cache: 11.45 (SE +/- 0.06, N = 5; Min 11.3 / Max 11.61)
  Auto:         10.20 (SE +/- 0.03, N = 5; Min 10.12 / Max 10.25)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Maze Solver - Browser: Google Chrome (Seconds, Fewer Is Better)
  Prefer Freq:  3.7 (SE +/- 0.00, N = 4; Min 3.7 / Max 3.7)
  Prefer Cache: 3.7 (SE +/- 0.00, N = 4; Min 3.7 / Max 3.7)
  Auto:         3.7 (SE +/- 0.00, N = 4; Min 3.7 / Max 3.7)
  1. chrome 110.0.5481.96

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Thorough (MT/s, More Is Better)
  Prefer Freq:  16.15 (SE +/- 0.01, N = 3; Min 16.14 / Max 16.16)
  Prefer Cache: 16.12 (SE +/- 0.01, N = 3; Min 16.1 / Max 16.13)
  Auto:         16.04 (SE +/- 0.01, N = 3; Min 16.03 / Max 16.06)
  1. (CXX) g++ options: -O3 -flto -pthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program, or on Windows relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.32 - Test: unsharp-mask (Seconds, Fewer Is Better)
  Prefer Freq:  13.13 (SE +/- 0.05, N = 4; Min 13.05 / Max 13.27)
  Prefer Cache: 13.14 (SE +/- 0.05, N = 4; Min 13.02 / Max 13.23)
  Auto:         13.11 (SE +/- 0.03, N = 4; Min 13.04 / Max 13.18)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
  Prefer Freq:  45.92 (SE +/- 0.03, N = 4; Min 45.86 / Max 45.96)
  Prefer Cache: 45.78 (SE +/- 0.04, N = 4; Min 45.72 / Max 45.9)
  Auto:         45.84 (SE +/- 0.05, N = 4; Min 45.71 / Max 45.95)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  34.65 (SE +/- 0.10, N = 3; Min 34.47 / Max 34.83)
  Prefer Cache: 34.64 (SE +/- 0.05, N = 3; Min 34.55 / Max 34.69)
  Auto:         34.28 (SE +/- 0.24, N = 3; Min 34.03 / Max 34.76)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1 - Speed: 5 (Frames Per Second, More Is Better)
  Prefer Freq:  4.725 (SE +/- 0.011, N = 4; Min 4.7 / Max 4.75)
  Prefer Cache: 4.689 (SE +/- 0.008, N = 4; Min 4.66 / Max 4.7)
  Auto:         4.633 (SE +/- 0.033, N = 4; Min 4.55 / Max 4.71)

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.0 - Model: Lion (ms, Fewer Is Better)
  Prefer Freq:  3195 (SE +/- 27.55, N = 15; Min 3141 / Max 3578)
  Prefer Cache: 3543 (SE +/- 23.31, N = 8; Min 3497 / Max 3692)
  Auto:         3196 (SE +/- 26.74, N = 15; Min 3145 / Max 3566)
  1. (CXX) g++ options: -O3

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2, Elapsed Time (Nodes Per Second, More Is Better):
  Prefer Freq:  15387380 (SE +/- 134359.73, N = 4, Min: 14989523 / Max: 15576360)
  Prefer Cache: 15231602 (SE +/- 154949.74, N = 4, Min: 14777842 / Max: 15471981)
  Auto:         15335667 (SE +/- 107676.89, N = 4, Min: 15144082 / Max: 15533991)
  Compiler: (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1, Speed: 1 (Frames Per Second, More Is Better):
  Prefer Freq:  1.179 (SE +/- 0.002, N = 3, Min: 1.18 / Max: 1.18)
  Prefer Cache: 1.181 (SE +/- 0.006, N = 3, Min: 1.17 / Max: 1.19)
  Auto:         1.176 (SE +/- 0.009, N = 3, Min: 1.16 / Max: 1.19)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0, Device: CPU, Backend: Numpy, Project Size: 1048576, Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better):
  Prefer Freq:  0.234 (SE +/- 0.002, N = 4, Min: 0.23 / Max: 0.24)
  Prefer Cache: 0.236 (SE +/- 0.001, N = 4, Min: 0.23 / Max: 0.24)
  Auto:         0.237 (SE +/- 0.001, N = 4, Min: 0.23 / Max: 0.24)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 13, Input: Bosphorus 4K (Frames Per Second, More Is Better):
  Prefer Freq:  209.98 (SE +/- 3.40, N = 15, Min: 174.83 / Max: 217.39)
  Prefer Cache: 209.86 (SE +/- 3.44, N = 15, Min: 176.67 / Max: 217.20)
  Auto:         211.00 (SE +/- 3.51, N = 15, Min: 175.90 / Max: 217.65)
  Compiler: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 0 Two-Pass, Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  Prefer Freq:  1.20 (SE +/- 0.01, N = 3, Min: 1.19 / Max: 1.21)
  Prefer Cache: 1.21 (SE +/- 0.00, N = 3, Min: 1.20 / Max: 1.21)
  Auto:         1.20 (SE +/- 0.00, N = 3, Min: 1.19 / Max: 1.20)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.32, Test: resize (Seconds, Fewer Is Better):
  Prefer Freq:  12.65 (SE +/- 0.09, N = 4, Min: 12.44 / Max: 12.85)
  Prefer Cache: 12.45 (SE +/- 0.08, N = 4, Min: 12.20 / Max: 12.53)
  Auto:         12.53 (SE +/- 0.08, N = 4, Min: 12.34 / Max: 12.72)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 4K, Video Preset: Super Fast (Frames Per Second, More Is Better):
  Prefer Freq:  60.47 (SE +/- 0.06, N = 5, Min: 60.28 / Max: 60.63)
  Prefer Cache: 60.40 (SE +/- 0.04, N = 5, Min: 60.30 / Max: 60.53)
  Auto:         60.46 (SE +/- 0.04, N = 5, Min: 60.36 / Max: 60.60)
  Compiler: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1, Video Input: Chimera 1080p (FPS, More Is Better):
  Prefer Freq:  915.11 (SE +/- 0.73, N = 5, Min: 913.32 / Max: 916.99)
  Prefer Cache: 912.38 (SE +/- 0.55, N = 5, Min: 910.78 / Max: 913.66)
  Auto:         913.51 (SE +/- 0.76, N = 5, Min: 911.00 / Max: 915.71)
  Compiler: (CC) gcc options: -pthread -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 10, Lossless (Seconds, Fewer Is Better):
  Prefer Freq:  3.250 (SE +/- 0.022, N = 15, Min: 3.15 / Max: 3.52)
  Prefer Cache: 3.297 (SE +/- 0.022, N = 15, Min: 3.20 / Max: 3.49)
  Auto:         3.283 (SE +/- 0.027, N = 15, Min: 3.18 / Max: 3.48)
  Compiler: (CXX) g++ options: -O3 -fPIC -lm

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU, Model: SqueezeNet v1.1 (ms, Fewer Is Better; MIN/MAX are the per-run extremes reported by the test):
  Prefer Freq:  174.63 (SE +/- 0.45, N = 4, Min: 173.92 / Max: 175.82, MIN: 173.65 / MAX: 179.74)
  Prefer Cache: 175.72 (SE +/- 1.78, N = 4, Min: 172.62 / Max: 179.00, MIN: 172.53 / MAX: 179.14)
  Auto:         174.11 (SE +/- 0.42, N = 4, Min: 173.68 / Max: 175.38, MIN: 173.47 / MAX: 175.49)
  Compiler: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1, Speed: 6 (Frames Per Second, More Is Better):
  Prefer Freq:  7.367 (SE +/- 0.056, N = 6, Min: 7.23 / Max: 7.53)
  Prefer Cache: 7.405 (SE +/- 0.013, N = 6, Min: 7.35 / Max: 7.43)
  Auto:         7.370 (SE +/- 0.038, N = 6, Min: 7.22 / Max: 7.50)

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100, WAV To MP3 (Seconds, Fewer Is Better):
  Prefer Freq:  4.562 (SE +/- 0.019, N = 8, Min: 4.50 / Max: 4.66)
  Prefer Cache: 4.879 (SE +/- 0.064, N = 15, Min: 4.51 / Max: 5.11)
  Auto:         4.567 (SE +/- 0.019, N = 8, Min: 4.50 / Max: 4.67)
  Compiler: (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6, Scene: Rainbow Colors and Prism, Acceleration: CPU (M samples/sec, More Is Better; MIN/MAX are the per-run extremes reported by the test):
  Prefer Freq:  17.58 (SE +/- 0.08, N = 5, Min: 17.44 / Max: 17.88, MIN: 15.92 / MAX: 18.06)
  Prefer Cache: 17.67 (SE +/- 0.08, N = 5, Min: 17.40 / Max: 17.85, MIN: 15.83 / MAX: 18.03)
  Auto:         17.57 (SE +/- 0.03, N = 5, Min: 17.49 / Max: 17.66, MIN: 15.92 / MAX: 17.78)

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
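The flavor of PyBench's rounds such as BuiltinFunctionCalls and NestedForLoops can be sketched with Python's own timeit module. This is only an illustrative micro-benchmark, not PyBench's actual test code; the function bodies, repetition counts, and names below are invented for the example.

```python
import timeit

# Invented stand-ins for PyBench-style rounds (not the real PyBench tests).
def builtin_calls():
    # Exercises built-in function call overhead, like BuiltinFunctionCalls.
    for _ in range(1000):
        len("hello")

def nested_loops():
    # Exercises loop overhead, like NestedForLoops.
    total = 0
    for i in range(100):
        for j in range(10):
            total += 1
    return total

for fn in (builtin_calls, nested_loops):
    # Best-of-5 timing of 100 executions, reported in milliseconds.
    best = min(timeit.repeat(fn, number=100, repeat=5))
    print(f"{fn.__name__}: {best * 1000:.3f} ms")
```

PyBench aggregates many such per-function averages into the single total-time figure reported here.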

PyBench 2018-02-16, Total For Average Test Times (Milliseconds, Fewer Is Better):
  Prefer Freq:  504 (SE +/- 4.09, N = 4, Min: 497 / Max: 514)
  Prefer Cache: 562 (SE +/- 3.30, N = 4, Min: 555 / Max: 570)
  Auto:         500 (SE +/- 2.96, N = 4, Min: 493 / Max: 505)

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
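As a rough illustration of what a RAM throughput test measures, the sketch below times repeated large buffer copies in pure Python. The memory_copy_rate helper is invented for this example; sysbench's C implementation is far more efficient and reports much higher MiB/sec figures.

```python
import time

def memory_copy_rate(mib: int = 64, rounds: int = 2) -> float:
    """Time repeated full copies of a large buffer and return MiB/sec.
    A crude analogue of a memory-bandwidth test; Python object overhead
    means the absolute number is far below what sysbench measures."""
    src = bytearray(mib * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(rounds):
        dst = bytes(src)  # one full copy of the buffer through memory
    elapsed = time.perf_counter() - start
    return (mib * rounds) / elapsed

rate = memory_copy_rate()
print(f"{rate:.0f} MiB/sec (pure Python copy loop)")
```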

Sysbench 1.0.20, Test: RAM / Memory (MiB/sec, More Is Better):
  Prefer Freq:  12712.24 (SE +/- 62.18, N = 6, Min: 12544.47 / Max: 12956.74)
  Prefer Cache: 12696.41 (SE +/- 53.25, N = 6, Min: 12633.98 / Max: 12962.45)
  Auto:         12821.25 (SE +/- 52.79, N = 6, Min: 12702.69 / Max: 12980.61)
  Compiler: (CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4, Encode Settings: Quality 100, Highest Compression (MP/s, More Is Better):
  Prefer Freq:  5.32 (SE +/- 0.07, N = 15, Min: 4.98 / Max: 5.62)
  Prefer Cache: 5.53 (SE +/- 0.01, N = 8, Min: 5.48 / Max: 5.55)
  Auto:         5.46 (SE +/- 0.01, N = 8, Min: 5.41 / Max: 5.51)
  Compiler: (CC) gcc options: -fvisibility=hidden -O2 -lm

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 8, Input: Bosphorus 4K (Frames Per Second, More Is Better):
  Prefer Freq:  70.48 (SE +/- 0.12, N = 5, Min: 70.25 / Max: 70.84)
  Prefer Cache: 70.39 (SE +/- 0.17, N = 5, Min: 69.83 / Max: 70.79)
  Auto:         70.09 (SE +/- 0.21, N = 5, Min: 69.46 / Max: 70.73)
  Compiler: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.32, Test: rotate (Seconds, Fewer Is Better):
  Prefer Freq:  9.471 (SE +/- 0.017, N = 5, Min: 9.42 / Max: 9.52)
  Prefer Cache: 9.104 (SE +/- 0.008, N = 5, Min: 9.07 / Max: 9.12)
  Auto:         9.460 (SE +/- 0.009, N = 5, Min: 9.44 / Max: 9.49)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 6 Two-Pass, Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  Prefer Freq:  68.58 (SE +/- 0.09, N = 4, Min: 68.39 / Max: 68.81)
  Prefer Cache: 68.47 (SE +/- 0.28, N = 4, Min: 67.97 / Max: 69.22)
  Auto:         69.22 (SE +/- 0.36, N = 4, Min: 68.28 / Max: 70.04)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.32, Test: auto-levels (Seconds, Fewer Is Better):
  Prefer Freq:  10.78 (SE +/- 0.03, N = 4, Min: 10.72 / Max: 10.88)
  Prefer Cache: 10.77 (SE +/- 0.04, N = 4, Min: 10.70 / Max: 10.85)
  Auto:         10.64 (SE +/- 0.04, N = 5, Min: 10.48 / Max: 10.74)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 4, Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  Prefer Freq:  14.47 (SE +/- 0.02, N = 4, Min: 14.43 / Max: 14.52)
  Prefer Cache: 14.40 (SE +/- 0.05, N = 4, Min: 14.30 / Max: 14.49)
  Auto:         14.42 (SE +/- 0.04, N = 4, Min: 14.30 / Max: 14.48)
  Compiler: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.0, Model: Church Facade (ms, Fewer Is Better):
  Prefer Freq:  3623 (SE +/- 2.19, N = 8, Min: 3610 / Max: 3628)
  Prefer Cache: 4517 (SE +/- 6.68, N = 7, Min: 4500 / Max: 4537)
  Auto:         3677 (SE +/- 57.92, N = 15, Min: 3603 / Max: 4486)
  Compiler: (CXX) g++ options: -O3

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.
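The requests-per-second metric this test reports can be illustrated with a minimal, serial HTTP load loop using only the Python standard library. This is not the actual Express/loadtest harness, which is Node-based and drives the server concurrently; the handler class and request loop below are invented for the sketch.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Hello(BaseHTTPRequestHandler):
    """Trivial stand-in for the Express "hello" endpoint."""
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging so timing is not skewed by I/O

# Bind to an ephemeral port on localhost and serve in a background thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

n = 200
start = time.perf_counter()
for _ in range(n):
    with urllib.request.urlopen(url) as resp:
        resp.read()
elapsed = time.perf_counter() - start
print(f"{n / elapsed:.0f} requests/sec (serial client)")
server.shutdown()
```

A real load tester issues many requests in parallel, so the figures in this result file are far higher than a serial loop like this would produce.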

Node.js Express HTTP Load Test (Requests Per Second, More Is Better):
  Prefer Freq:  11056 (SE +/- 51.45, N = 5, Min: 10921 / Max: 11237)
  Prefer Cache: 10871 (SE +/- 21.96, N = 5, Min: 10838 / Max: 10949)
  Auto:         11006 (SE +/- 17.51, N = 5, Min: 10958 / Max: 11055)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 4K, Video Preset: Ultra Fast (Frames Per Second, More Is Better):
  Prefer Freq:  79.32 (SE +/- 0.02, N = 6, Min: 79.24 / Max: 79.39)
  Prefer Cache: 79.19 (SE +/- 0.06, N = 6, Min: 78.97 / Max: 79.43)
  Auto:         79.14 (SE +/- 0.20, N = 6, Min: 78.88 / Max: 80.13)
  Compiler: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1, Video Input: Summer Nature 4K (FPS, More Is Better):
  Prefer Freq:  399.60 (SE +/- 2.66, N = 5, Min: 395.01 / Max: 409.75)
  Prefer Cache: 401.81 (SE +/- 1.20, N = 5, Min: 399.75 / Max: 406.32)
  Auto:         399.46 (SE +/- 1.15, N = 5, Min: 395.53 / Max: 401.80)
  Compiler: (CC) gcc options: -pthread -lm

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 1080p, Video Preset: Slow (Frames Per Second, More Is Better):
  Prefer Freq:  79.84 (SE +/- 0.07, N = 6, Min: 79.60 / Max: 80.04)
  Prefer Cache: 79.79 (SE +/- 0.06, N = 6, Min: 79.57 / Max: 79.96)
  Auto:         79.72 (SE +/- 0.12, N = 6, Min: 79.49 / Max: 80.26)
  Compiler: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2, Video Input: Bosphorus 1080p, Video Preset: Medium (Frames Per Second, More Is Better):
  Prefer Freq:  82.64 (SE +/- 0.09, N = 6, Min: 82.21 / Max: 82.84)
  Prefer Cache: 82.36 (SE +/- 0.09, N = 6, Min: 82.09 / Max: 82.65)
  Auto:         82.58 (SE +/- 0.11, N = 6, Min: 82.15 / Max: 82.85)
  Compiler: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Scale (Seconds, Fewer Is Better):
  Prefer Freq:  7.233 (SE +/- 0.025, N = 6, Min: 7.14 / Max: 7.30)
  Prefer Cache: 7.264 (SE +/- 0.021, N = 6, Min: 7.20 / Max: 7.32)
  Auto:         7.344 (SE +/- 0.033, N = 6, Min: 7.20 / Max: 7.41)

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, More Is Better):
  Prefer Freq:  9066.29 (SE +/- 18.86, N = 7, Min: 8978.96 / Max: 9133.40)
  Prefer Cache: 9073.59 (SE +/- 24.38, N = 7, Min: 8947.88 / Max: 9137.29)
  Auto:         9039.15 (SE +/- 16.15, N = 7, Min: 8983.36 / Max: 9119.02)
  Compiler: (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
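The Relative Entropy detector timed below is built around comparing distributions of recent versus historical data. As a simplified, illustrative sketch only (NAB's actual detector adds windowing, binning, and significance testing on top), the discrete KL divergence the detector is named after can be computed as:

```python
import math

def relative_entropy(p, q):
    """KL divergence D(P || Q) in nats between two discrete
    distributions given as equal-length probability lists.
    Terms with p_i = 0 contribute nothing by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A distribution that drifts away from a uniform baseline scores > 0,
# which is the kind of signal an entropy-based detector flags.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.70, 0.10, 0.10, 0.10]
print(round(relative_entropy(skewed, uniform), 4))
```

Identical distributions give a divergence of zero, so the statistic grows only as the observed data departs from the baseline.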

Numenta Anomaly Benchmark 1.1, Detector: Relative Entropy (Seconds, Fewer Is Better):
  Prefer Freq:  7.283 (SE +/- 0.015, N = 6, Min: 7.21 / Max: 7.31)
  Prefer Cache: 7.278 (SE +/- 0.052, N = 6, Min: 7.09 / Max: 7.40)
  Auto:         7.159 (SE +/- 0.049, N = 6, Min: 7.04 / Max: 7.30)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 6 Realtime, Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  Prefer Freq:  220.58 (SE +/- 1.96, N = 15, Min: 203.06 / Max: 227.22)
  Prefer Cache: 221.70 (SE +/- 2.36, N = 15, Min: 202.92 / Max: 231.61)
  Auto:         219.50 (SE +/- 2.75, N = 15, Min: 205.03 / Max: 236.38)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
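What the profile measures, the wall time to extract a .tar.xz archive, can be reproduced with Python's tarfile module. A hedged sketch using a throwaway demo archive follows; timed_extract is an invented helper, and the real profile simply times the extraction of the much larger Firefox source tarball.

```python
import pathlib
import tarfile
import tempfile
import time

def timed_extract(archive: str, dest: str) -> float:
    """Extract a .tar.xz archive and return elapsed wall time in seconds,
    mirroring what the test profile measures for firefox-84.0.source.tar.xz."""
    start = time.perf_counter()
    with tarfile.open(archive, mode="r:xz") as tar:
        tar.extractall(dest)
    return time.perf_counter() - start

# Tiny self-contained demo with a throwaway one-file archive:
with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp, "hello.txt")
    src.write_text("hello")
    archive = pathlib.Path(tmp, "demo.tar.xz")
    with tarfile.open(archive, mode="w:xz") as tar:
        tar.add(src, arcname="hello.txt")
    elapsed = timed_extract(str(archive), str(pathlib.Path(tmp, "out")))
    print(f"extracted in {elapsed:.4f} s")
    assert pathlib.Path(tmp, "out", "hello.txt").read_text() == "hello"
```

The result is dominated by xz decompression throughput plus the filesystem's small-file creation speed, which is why this test responds to both CPU and storage configuration.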

Unpacking Firefox 84.0, Extracting: firefox-84.0.source.tar.xz (Seconds, Fewer Is Better):
  Prefer Freq:  10.72 (SE +/- 0.08, N = 4, Min: 10.50 / Max: 10.87)
  Prefer Cache: 10.78 (SE +/- 0.03, N = 4, Min: 10.70 / Max: 10.85)
  Auto:         10.49 (SE +/- 0.09, N = 4, Min: 10.24 / Max: 10.67)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0, Preset: Medium (MT/s, More Is Better):
  Prefer Freq:  129.53 (SE +/- 0.06, N = 7, Min: 129.18 / Max: 129.73)
  Prefer Cache: 129.16 (SE +/- 0.05, N = 7, Min: 128.93 / Max: 129.26)
  Auto:         129.15 (SE +/- 0.04, N = 7, Min: 128.98 / Max: 129.23)
  Compiler: (CXX) g++ options: -O3 -flto -pthread

N-Queens

This is a test of the OpenMP version of a test that solves the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.
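The benchmark itself is an OpenMP C program counting solutions on an 18x18 board. A minimal serial backtracking counter, invented here for illustration and run on much smaller boards since Python is far slower, looks like:

```python
def count_queens(n: int) -> int:
    """Count placements of n non-attacking queens via backtracking,
    using bitmasks for occupied columns and both diagonal directions."""
    def place(row: int, cols: int, diag1: int, diag2: int) -> int:
        if row == n:
            return 1  # every row filled: one complete solution
        total = 0
        for col in range(n):
            c = 1 << col
            d1 = 1 << (row + col)          # "/" diagonal index
            d2 = 1 << (row - col + n - 1)  # "\" diagonal index
            if not (cols & c or diag1 & d1 or diag2 & d2):
                total += place(row + 1, cols | c, diag1 | d1, diag2 | d2)
        return total
    return place(0, 0, 0, 0)

print(count_queens(8))  # the classic 8x8 board has 92 solutions
```

The OpenMP version parallelizes essentially this search by splitting the first-row column choices across threads, which is why the benchmark scales with core count.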

N-Queens 1.0, Elapsed Time (Seconds, Fewer Is Better):
  Prefer Freq:  5.983 (SE +/- 0.003, N = 7, Min: 5.97 / Max: 5.99)
  Prefer Cache: 5.984 (SE +/- 0.004, N = 7, Min: 5.97 / Max: 6.00)
  Auto:         5.981 (SE +/- 0.005, N = 7, Min: 5.97 / Max: 6.01)
  Compiler: (CC) gcc options: -static -fopenmp -O3 -march=native

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1, Speed: 10 (Frames Per Second, More Is Better):
  Prefer Freq:  16.74 (SE +/- 0.16, N = 7, Min: 16.13 / Max: 17.42)
  Prefer Cache: 16.79 (SE +/- 0.09, N = 7, Min: 16.53 / Max: 17.23)
  Auto:         17.05 (SE +/- 0.11, N = 7, Min: 16.64 / Max: 17.48)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Crop (Seconds, Fewer Is Better):
  Prefer Freq:  6.884 (SE +/- 0.014, N = 6, Min: 6.85 / Max: 6.94)
  Prefer Cache: 6.933 (SE +/- 0.019, N = 6, Min: 6.87 / Max: 6.99)
  Auto:         6.890 (SE +/- 0.010, N = 6, Min: 6.86 / Max: 6.93)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: VMAF Optimized, Input: Bosphorus 4K (Frames Per Second, More Is Better):
  Prefer Freq:  118.00 (SE +/- 0.22, N = 7, Min: 116.99 / Max: 118.55)
  Prefer Cache: 117.97 (SE +/- 0.23, N = 7, Min: 117.30 / Max: 118.74)
  Auto:         117.84 (SE +/- 0.16, N = 7, Min: 117.19 / Max: 118.61)
  Compiler: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
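A plain, unsegmented sieve of Eratosthenes conveys the core algorithm. Primesieve's speed comes from a segmented variant whose working set fits in L1/L2 cache, which is exactly why the benchmark stresses cache performance; this simple sketch makes no such attempt.

```python
def sieve(limit: int) -> list[int]:
    """Return all primes <= limit via a plain sieve of Eratosthenes.
    (primesieve itself sieves in cache-sized segments and uses a wheel
    to skip multiples of small primes.)"""
    if limit < 2:
        return []
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross off multiples of p starting at p*p in one slice write.
            is_prime[p * p :: p] = bytes(len(range(p * p, limit + 1, p)))
    return [i for i, flag in enumerate(is_prime) if flag]

print(len(sieve(100)))  # there are 25 primes <= 100
```

Note that the crossing-off loop walks memory with stride p, so once the bit array outgrows the caches, performance becomes memory-bound; segmenting the sieve avoids that.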

Primesieve 8.0, Length: 1e12 (Seconds, Fewer Is Better):
  Prefer Freq:  6.845 (SE +/- 0.007, N = 6, Min: 6.82 / Max: 6.87)
  Prefer Cache: 6.868 (SE +/- 0.004, N = 6, Min: 6.86 / Max: 6.89)
  Auto:         6.862 (SE +/- 0.005, N = 6, Min: 6.84 / Max: 6.88)
  Compiler: (CXX) g++ options: -O3

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0, Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, More Is Better):
  Prefer Freq:  11495.9 (SE +/- 80.38, N = 6, Min: 11094.0 / Max: 11576.3)
  Prefer Cache: 11495.9 (SE +/- 80.38, N = 6, Min: 11094.0 / Max: 11576.3)
  Auto:         11519.9 (SE +/- 171.09, N = 7, Min: 11094.0 / Max: 12102.5)
  Compiler: (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP 1.0, Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, More Is Better):
  Prefer Freq:  6918.41 (SE +/- 60.06, N = 6, Min: 6656.40 / Max: 7006.74)
  Prefer Cache: 6857.02 (SE +/- 29.94, N = 6, Min: 6827.08 / Max: 7006.74)
  Auto:         6855.31 (SE +/- 59.67, N = 7, Min: 6656.40 / Max: 7006.74)
  Compiler: (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 10 Realtime, Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  Prefer Freq:  238.97 (SE +/- 2.18, N = 15, Min: 219.67 / Max: 251.74)
  Prefer Cache: 241.09 (SE +/- 2.64, N = 15, Min: 219.74 / Max: 251.58)
  Auto:         238.38 (SE +/- 2.55, N = 15, Min: 224.19 / Max: 257.99)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: PSNR/SSIM Optimized, Input: Bosphorus 4K (Frames Per Second, More Is Better):
  Prefer Freq:  126.63 (SE +/- 0.18, N = 7, Min: 125.90 / Max: 127.36)
  Prefer Cache: 126.34 (SE +/- 0.11, N = 7, Min: 125.95 / Max: 126.78)
  Auto:         126.22 (SE +/- 0.17, N = 7, Min: 125.47 / Max: 126.73)
  Compiler: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
Prefer Freq: 43.36 (SE +/- 0.43, N = 15; Min: 40.02 / Avg: 43.36 / Max: 46.02; sample MIN: 39.83 / MAX: 47.05)
Prefer Cache: 42.25 (SE +/- 0.35, N = 9; Min: 40.34 / Avg: 42.25 / Max: 43.37; sample MIN: 40.19 / MAX: 43.45)
Auto: 42.84 (SE +/- 0.36, N = 15; Min: 40.01 / Avg: 42.84 / Max: 45.47; sample MIN: 39.8 / MAX: 45.69)
(CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq: 241.38 (SE +/- 2.13, N = 15; Min: 231.29 / Avg: 241.38 / Max: 254.7)
Prefer Cache: 242.10 (SE +/- 2.80, N = 15; Min: 228.36 / Avg: 242.1 / Max: 260.91)
Auto: 248.22 (SE +/- 3.46, N = 15; Min: 232.03 / Avg: 248.22 / Max: 269.08)
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Prefer Freq: 101.09 (SE +/- 0.10, N = 6; Min: 100.67 / Avg: 101.09 / Max: 101.32)
Prefer Cache: 101.10 (SE +/- 0.21, N = 6; Min: 100.38 / Avg: 101.1 / Max: 101.57)
Auto: 101.10 (SE +/- 0.12, N = 6; Min: 100.69 / Avg: 101.1 / Max: 101.51)
(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 6, Lossless (Seconds, Fewer Is Better)
Prefer Freq: 5.421 (SE +/- 0.012, N = 7; Min: 5.38 / Avg: 5.42 / Max: 5.47)
Prefer Cache: 5.517 (SE +/- 0.021, N = 7; Min: 5.45 / Avg: 5.52 / Max: 5.6)
Auto: 5.488 (SE +/- 0.036, N = 7; Min: 5.4 / Avg: 5.49 / Max: 5.64)
(CXX) g++ options: -O3 -fPIC -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq: 217.88 (SE +/- 0.85, N = 10; Min: 214.6 / Avg: 217.88 / Max: 221.51)
Prefer Cache: 219.56 (SE +/- 1.80, N = 15; Min: 202.55 / Avg: 219.56 / Max: 224.26)
Auto: 221.03 (SE +/- 2.26, N = 15; Min: 201.99 / Avg: 221.03 / Max: 230.37)
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq: 113.42 (SE +/- 0.26, N = 7; Min: 112.37 / Avg: 113.42 / Max: 114.29)
Prefer Cache: 113.59 (SE +/- 0.22, N = 7; Min: 112.92 / Avg: 113.59 / Max: 114.7)
Auto: 113.23 (SE +/- 0.17, N = 7; Min: 112.67 / Avg: 113.23 / Max: 114.05)
(CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 7.3.0 (Seconds, Fewer Is Better)
Prefer Freq: 4.585 (SE +/- 0.012, N = 8; Min: 4.54 / Avg: 4.58 / Max: 4.63)
Prefer Cache: 4.653 (SE +/- 0.013, N = 8; Min: 4.62 / Avg: 4.65 / Max: 4.73)
Auto: 4.652 (SE +/- 0.007, N = 8; Min: 4.62 / Avg: 4.65 / Max: 4.69)

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides "toktx" for creating/converting image textures in the KTX container format. This benchmark times how long it takes to convert to KTX 2.0 format with various settings using a reference PNG sample input. Learn more via the OpenBenchmarking.org test page.

KTX-Software toktx 4.0, Settings: UASTC 3 (Seconds, Fewer Is Better)
Prefer Freq: 5.272 (SE +/- 0.002, N = 7; Min: 5.26 / Avg: 5.27 / Max: 5.28)
Prefer Cache: 5.130 (SE +/- 0.005, N = 7; Min: 5.12 / Avg: 5.13 / Max: 5.15)
Auto: 5.279 (SE +/- 0.003, N = 7; Min: 5.27 / Avg: 5.28 / Max: 5.29)

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel source tree package. Learn more via the OpenBenchmarking.org test page.

Unpacking The Linux Kernel 5.19, linux-5.19.tar.xz (Seconds, Fewer Is Better)
Prefer Freq: 4.054 (SE +/- 0.011, N = 8; Min: 4 / Avg: 4.05 / Max: 4.11)
Prefer Cache: 4.007 (SE +/- 0.033, N = 9; Min: 3.82 / Avg: 4.01 / Max: 4.1)
Auto: 3.953 (SE +/- 0.030, N = 8; Min: 3.81 / Avg: 3.95 / Max: 4.05)
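The kernel-unpack test above times a single tar.xz extraction. A minimal, self-contained sketch of the same operation in Python (it builds a small stand-in archive rather than downloading linux-5.19.tar.xz, so the absolute times are not comparable to the results above):

```python
import tarfile
import tempfile
import time
from pathlib import Path

# Build a small .tar.xz tree as a stand-in for the kernel tarball,
# then time its extraction -- the operation this test profile measures.
with tempfile.TemporaryDirectory() as tmpdir:
    tmp = Path(tmpdir)
    src = tmp / "src"
    src.mkdir()
    for i in range(50):
        (src / f"file{i}.c").write_text("int main(void) { return 0; }\n" * 100)

    archive = tmp / "tree.tar.xz"
    with tarfile.open(archive, "w:xz") as tar:  # xz = LZMA compression
        tar.add(src, arcname="tree")

    out = tmp / "out"
    start = time.perf_counter()
    with tarfile.open(archive, "r:xz") as tar:
        tar.extractall(out)
    elapsed = time.perf_counter() - start

    assert (out / "tree" / "file0.c").exists()
    print(f"Extracted in {elapsed:.3f} seconds")
```

Extraction of a .tar.xz is largely single-threaded LZMA decompression plus filesystem writes, which is why this test is sensitive to single-core frequency.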

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Prefer Freq: 212.52 (SE +/- 0.30, N = 9; Min: 211.27 / Avg: 212.52 / Max: 214.39)
Prefer Cache: 213.53 (SE +/- 0.53, N = 9; Min: 211.31 / Avg: 213.53 / Max: 216.06)
Auto: 214.07 (SE +/- 0.30, N = 9; Min: 212.69 / Avg: 214.07 / Max: 215.69)
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Prefer Freq: 174.93 (SE +/- 0.34, N = 8; Min: 173.86 / Avg: 174.93 / Max: 176.68)
Prefer Cache: 174.96 (SE +/- 0.10, N = 8; Min: 174.32 / Avg: 174.96 / Max: 175.18)
Auto: 174.66 (SE +/- 0.36, N = 8; Min: 173.66 / Avg: 174.66 / Max: 176.68)
(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better)
Prefer Freq: 173.58 (SE +/- 0.11, N = 9; Min: 173.19 / Avg: 173.58 / Max: 174.06)
Prefer Cache: 173.35 (SE +/- 0.11, N = 9; Min: 172.86 / Avg: 173.35 / Max: 173.92)
Auto: 173.45 (SE +/- 0.10, N = 9; Min: 173.16 / Avg: 173.45 / Max: 174.16)
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1, Detector: Windowed Gaussian (Seconds, Fewer Is Better)
Prefer Freq: 3.432 (SE +/- 0.013, N = 9; Min: 3.39 / Avg: 3.43 / Max: 3.51)
Prefer Cache: 3.409 (SE +/- 0.010, N = 9; Min: 3.36 / Avg: 3.41 / Max: 3.47)
Auto: 3.414 (SE +/- 0.020, N = 9; Min: 3.35 / Avg: 3.41 / Max: 3.53)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq: 195.81 (SE +/- 0.49, N = 9; Min: 192.74 / Avg: 195.81 / Max: 197.17)
Prefer Cache: 194.84 (SE +/- 0.51, N = 9; Min: 192.17 / Avg: 194.84 / Max: 196.62)
Auto: 193.76 (SE +/- 0.63, N = 9; Min: 191.37 / Avg: 193.76 / Max: 196.49)
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22, Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq: 269.52 (SE +/- 1.82, N = 15; Min: 244.55 / Avg: 269.52 / Max: 273.74)
Prefer Cache: 268.80 (SE +/- 1.51, N = 15; Min: 248.45 / Avg: 268.8 / Max: 272.84)
Auto: 268.24 (SE +/- 2.12, N = 10; Min: 249.58 / Avg: 268.24 / Max: 272.07)
(CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 4.2.0, Test: Masskrug - Acceleration: CPU-only (Seconds, Fewer Is Better)
Prefer Freq: 2.636 (SE +/- 0.010, N = 9; Min: 2.59 / Avg: 2.64 / Max: 2.69)
Prefer Cache: 2.649 (SE +/- 0.008, N = 9; Min: 2.61 / Avg: 2.65 / Max: 2.67)
Auto: 2.643 (SE +/- 0.003, N = 9; Min: 2.62 / Avg: 2.64 / Max: 2.65)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 6 (Seconds, Fewer Is Better)
Prefer Freq: 3.209 (SE +/- 0.008, N = 9; Min: 3.18 / Avg: 3.21 / Max: 3.25)
Prefer Cache: 3.270 (SE +/- 0.013, N = 9; Min: 3.21 / Avg: 3.27 / Max: 3.33)
Auto: 3.224 (SE +/- 0.012, N = 9; Min: 3.18 / Avg: 3.22 / Max: 3.3)
(CXX) g++ options: -O3 -fPIC -lm

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Jython (msec, Fewer Is Better)
Prefer Freq: 2321 (SE +/- 3.91, N = 9; Min: 2301 / Avg: 2320.56 / Max: 2341)
Prefer Cache: 2314 (SE +/- 6.57, N = 9; Min: 2292 / Avg: 2314.22 / Max: 2352)
Auto: 2329 (SE +/- 8.79, N = 9; Min: 2298 / Avg: 2328.89 / Max: 2371)

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 4.2.0, Test: Server Room - Acceleration: CPU-only (Seconds, Fewer Is Better)
Prefer Freq: 2.331 (SE +/- 0.002, N = 9; Min: 2.32 / Avg: 2.33 / Max: 2.34)
Prefer Cache: 2.326 (SE +/- 0.006, N = 9; Min: 2.31 / Avg: 2.33 / Max: 2.35)
Auto: 2.346 (SE +/- 0.006, N = 9; Min: 2.32 / Avg: 2.35 / Max: 2.38)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Prefer Freq: 0.634357 (SE +/- 0.001721, N = 9; Min: 0.63 / Avg: 0.63 / Max: 0.64; sample MIN: 0.61)
Prefer Cache: 0.635767 (SE +/- 0.001156, N = 9; Min: 0.63 / Avg: 0.64 / Max: 0.64; sample MIN: 0.62)
Auto: 0.634455 (SE +/- 0.001435, N = 9; Min: 0.63 / Avg: 0.63 / Max: 0.64; sample MIN: 0.61)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 4.2.0, Test: Boat - Acceleration: CPU-only (Seconds, Fewer Is Better)
Prefer Freq: 2.452 (SE +/- 0.003, N = 9; Min: 2.44 / Avg: 2.45 / Max: 2.46)
Prefer Cache: 2.476 (SE +/- 0.005, N = 9; Min: 2.45 / Avg: 2.48 / Max: 2.5)
Auto: 2.456 (SE +/- 0.004, N = 9; Min: 2.43 / Avg: 2.46 / Max: 2.47)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Super Fast (Frames Per Second, More Is Better)
Prefer Freq: 230.80 (SE +/- 0.21, N = 10; Min: 229.36 / Avg: 230.8 / Max: 231.56)
Prefer Cache: 229.99 (SE +/- 0.18, N = 10; Min: 229.06 / Avg: 229.99 / Max: 230.71)
Auto: 230.49 (SE +/- 0.15, N = 10; Min: 229.95 / Avg: 230.49 / Max: 231.34)
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1, Video Input: Summer Nature 1080p (FPS, More Is Better)
Prefer Freq: 1407.91 (SE +/- 1.15, N = 10; Min: 1402.75 / Avg: 1407.91 / Max: 1412.86)
Prefer Cache: 1406.81 (SE +/- 1.44, N = 10; Min: 1401.97 / Avg: 1406.81 / Max: 1416.87)
Auto: 1409.23 (SE +/- 1.39, N = 10; Min: 1400.89 / Avg: 1409.23 / Max: 1415.08)
(CC) gcc options: -pthread -lm

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022, Model: Rhodopsin Protein (ns/day, More Is Better)
Prefer Freq: 16.25 (SE +/- 0.14, N = 15; Min: 15.12 / Avg: 16.25 / Max: 16.7)
Prefer Cache: 16.11 (SE +/- 0.17, N = 15; Min: 14.2 / Avg: 16.11 / Max: 16.75)
Auto: 16.31 (SE +/- 0.11, N = 13; Min: 15.4 / Avg: 16.31 / Max: 16.73)
(CXX) g++ options: -O3 -lm -ldl

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2, Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
Prefer Freq: 281.74 (SE +/- 0.31, N = 11; Min: 280.09 / Avg: 281.74 / Max: 283.51)
Prefer Cache: 281.56 (SE +/- 0.34, N = 11; Min: 279.87 / Avg: 281.56 / Max: 283.02)
Auto: 281.37 (SE +/- 0.28, N = 11; Min: 279.95 / Avg: 281.37 / Max: 282.93)
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq: 384.34 (SE +/- 1.24, N = 15; Min: 368.3 / Avg: 384.34 / Max: 388.32)
Prefer Cache: 382.65 (SE +/- 1.35, N = 11; Min: 370.56 / Avg: 382.65 / Max: 386.66)
Auto: 383.36 (SE +/- 1.44, N = 11; Min: 369.94 / Avg: 383.36 / Max: 387.45)
(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq: 308.04 (SE +/- 0.30, N = 10; Min: 306.59 / Avg: 308.04 / Max: 309.6)
Prefer Cache: 306.62 (SE +/- 0.72, N = 10; Min: 301.81 / Avg: 306.62 / Max: 309.12)
Auto: 305.83 (SE +/- 0.39, N = 10; Min: 303.18 / Avg: 305.83 / Max: 307.22)
(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4, Encode Settings: Quality 100 (MP/s, More Is Better)
Prefer Freq: 17.16 (SE +/- 0.11, N = 12; Min: 16.12 / Avg: 17.16 / Max: 17.71)
Prefer Cache: 16.95 (SE +/- 0.18, N = 15; Min: 15.94 / Avg: 16.95 / Max: 17.7)
Auto: 17.44 (SE +/- 0.14, N = 15; Min: 16.12 / Avg: 17.44 / Max: 17.71)
(CC) gcc options: -fvisibility=hidden -O2 -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq: 450.47 (SE +/- 0.64, N = 12; Min: 445.8 / Avg: 450.47 / Max: 454.73)
Prefer Cache: 446.12 (SE +/- 0.89, N = 12; Min: 440.25 / Avg: 446.12 / Max: 450.89)
Auto: 449.83 (SE +/- 0.41, N = 12; Min: 447.46 / Avg: 449.83 / Max: 452.11)
(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq: 459.41 (SE +/- 0.43, N = 12; Min: 455.26 / Avg: 459.41 / Max: 461.28)
Prefer Cache: 457.94 (SE +/- 0.64, N = 12; Min: 454.49 / Avg: 457.94 / Max: 461.05)
Auto: 458.64 (SE +/- 0.50, N = 12; Min: 455.51 / Avg: 458.64 / Max: 461.95)
(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq: 591.01 (SE +/- 1.08, N = 12; Min: 581.4 / Avg: 591.01 / Max: 594.65)
Prefer Cache: 591.30 (SE +/- 1.13, N = 12; Min: 585.37 / Avg: 591.3 / Max: 600)
Auto: 592.13 (SE +/- 1.15, N = 12; Min: 582.52 / Avg: 592.13 / Max: 598.21)
(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq: 796.32 (SE +/- 4.32, N = 15; Min: 737.88 / Avg: 796.32 / Max: 807.5)
Prefer Cache: 787.79 (SE +/- 4.51, N = 15; Min: 728.99 / Avg: 787.79 / Max: 802.21)
Auto: 789.95 (SE +/- 4.42, N = 15; Min: 732.8 / Avg: 789.95 / Max: 806.84)
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.4, Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Prefer Freq: 771.61 (SE +/- 1.28, N = 13; Min: 764.14 / Avg: 771.61 / Max: 782.31)
Prefer Cache: 773.41 (SE +/- 1.73, N = 13; Min: 765.28 / Avg: 773.41 / Max: 784.63)
Auto: 772.34 (SE +/- 3.12, N = 13; Min: 744.03 / Avg: 772.34 / Max: 783.38)
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4, Encode Settings: Default (MP/s, More Is Better)
Prefer Freq: 26.57 (SE +/- 0.30, N = 15; Min: 25.37 / Avg: 26.57 / Max: 28.3)
Prefer Cache: 27.92 (SE +/- 0.02, N = 13; Min: 27.78 / Avg: 27.92 / Max: 28.04)
Auto: 28.12 (SE +/- 0.07, N = 13; Min: 27.55 / Avg: 28.12 / Max: 28.3)
(CC) gcc options: -fvisibility=hidden -O2 -lm

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 4.2.0, Test: Server Rack - Acceleration: CPU-only (Seconds, Fewer Is Better)
Prefer Freq: 0.152 (SE +/- 0.000, N = 14; Min: 0.15 / Avg: 0.15 / Max: 0.15)
Prefer Cache: 0.156 (SE +/- 0.000, N = 14; Min: 0.16 / Avg: 0.16 / Max: 0.16)
Auto: 0.152 (SE +/- 0.000, N = 14; Min: 0.15 / Avg: 0.15 / Max: 0.15)
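These results mix "More Is Better" (FPS, MP/s) and "Fewer Is Better" (seconds, ms) units, so comparing the three CPPC modes across tests requires normalizing each result against the best mode first. A minimal sketch of that normalization (Python; the two example result sets are copied from the SVT-HEVC Tuning 10 1080p and Darktable Server Rack tables above):

```python
def normalize(results, higher_is_better):
    """Scale each mode's result so the best-performing mode = 1.0,
    regardless of whether the raw unit is more- or fewer-is-better."""
    vals = results.values()
    best = max(vals) if higher_is_better else min(vals)
    if higher_is_better:
        return {mode: val / best for mode, val in results.items()}
    return {mode: best / val for mode, val in results.items()}

# SVT-HEVC 1.5.0, Tuning 10, 1080p (FPS, more is better)
fps = {"Prefer Freq": 591.01, "Prefer Cache": 591.30, "Auto": 592.13}
# Darktable 4.2.0, Server Rack (seconds, fewer is better)
secs = {"Prefer Freq": 0.152, "Prefer Cache": 0.156, "Auto": 0.152}

for name, (res, hib) in {"SVT-HEVC 10 1080p": (fps, True),
                         "Darktable Server Rack": (secs, False)}.items():
    scaled = normalize(res, hib)
    print(name, {m: round(v, 4) for m, v in scaled.items()})
```

Once every test is on this 0-to-1 scale, a geometric mean per mode (as offered under the Statistics options) summarizes overall standing without letting any one unit dominate.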

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Test: x86_64 RdRand

Auto: The test run did not produce a result. E: stress-ng: error: [982014] No stress workers invoked (one or more were unsupported)

Prefer Cache: The test run did not produce a result. E: stress-ng: error: [943105] No stress workers invoked (one or more were unsupported)

Prefer Freq: The test run did not produce a result. E: stress-ng: error: [939716] No stress workers invoked (one or more were unsupported)

415 Results Shown

GNU Radio:
  Hilbert Transform
  FM Deemphasis Filter
  IIR Filter
  FIR Filter
  Signal Source (Cosine)
  Five Back to Back FIR Filters
LAMMPS Molecular Dynamics Simulator
Timed Linux Kernel Compilation
Blender
ONNX Runtime:
  bertsquad-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
ASKAP:
  tConvolve MT - Degridding
  tConvolve MT - Gridding
ONNX Runtime:
  fcn-resnet101-11 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ArcFace ResNet-100 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  super-resolution-10 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
LuaRadio:
  Complex Phase
  Hilbert Transform
  FM Deemphasis Filter
  Five Back to Back FIR Filters
OpenVKL
OSPRay
OpenEMS
OpenVKL
LeelaChessZero:
  BLAS
  Eigen
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
OSPRay
ONNX Runtime:
  Faster R-CNN R-50-FPN-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
Himeno Benchmark
Timed LLVM Compilation:
  Unix Makefiles
  Ninja
Numpy Benchmark
ONNX Runtime:
  yolov4 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  GPT-2 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet50 v1-12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
BRL-CAD
OSPRay
ClickHouse:
  100M Rows Hits Dataset, Third Run
  100M Rows Hits Dataset, Second Run
  100M Rows Hits Dataset, First Run / Cold Cache
High Performance Conjugate Gradient
Selenium
Renaissance
libavif avifenc
Blender
Renaissance
NCNN:
  CPU - mnasnet
  CPU - FastestDet
  CPU - vision_transformer
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
Selenium
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
OSPRay Studio
Stress-NG
Cpuminer-Opt
WireGuard + Linux Networking Stack Stress Test
Gcrypt Library
OSPRay Studio
TNN
OSPRay Studio
Blender
OSPRay Studio
Stress-NG
OSPRay Studio:
  2 - 4K - 32 - Path Tracer
  1 - 4K - 32 - Path Tracer
Stress-NG
GPAW
Renaissance
KTX-Software toktx
OSPRay Studio
Stress-NG
GraphicsMagick
Radiance Benchmark
LZ4 Compression:
  9 - Decompression Speed
  9 - Compression Speed
PyHPC Benchmarks
Cpuminer-Opt
libavif avifenc
OSPRay Studio:
  3 - 1080p - 1 - Path Tracer
  2 - 1080p - 32 - Path Tracer
  2 - 1080p - 1 - Path Tracer
SVT-HEVC
OSPRay Studio:
  1 - 1080p - 1 - Path Tracer
  1 - 1080p - 32 - Path Tracer
OSPRay:
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/pathtracer/real_time
Appleseed
simdjson
Numenta Anomaly Benchmark
Sysbench
Stress-NG
Primesieve
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
simdjson
Zstd Compression:
  8 - Decompression Speed
  8 - Compression Speed
Stargate Digital Audio Workstation
Selenium
Timed MrBayes Analysis
Selenium
Chaos Group V-RAY
GROMACS
DeepSpeech
Xcompact3d Incompact3d
Blender
GraphicsMagick
Appleseed
simdjson:
  DistinctUserID
  TopTweet
  PartialTweets
Xmrig
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
  Person Detection FP32 - CPU:
    ms
    FPS
LuxCoreRender
GEGL
Zstd Compression:
  12 - Decompression Speed
  12 - Compression Speed
OpenVINO:
  Face Detection FP16 - CPU:
    ms
    FPS
LuxCoreRender
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
Zstd Compression:
  3, Long Mode - Decompression Speed
  3, Long Mode - Compression Speed
Renaissance
Zstd Compression:
  3 - Decompression Speed
  3 - Compression Speed
LuxCoreRender
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 256-QAM:
    UE Mb/s
    eNb Mb/s
Zstd Compression:
  8, Long Mode - Decompression Speed
  8, Long Mode - Compression Speed
IndigoBench:
  CPU - Supercar
  CPU - Bedroom
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
LuxCoreRender
TensorFlow Lite
Selenium
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
TensorFlow Lite:
  Mobilenet Float
  Mobilenet Quant
RocksDB
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
RocksDB
GraphicsMagick:
  Sharpen
  Enhanced
RocksDB:
  Update Rand
  Read While Writing
  Read Rand Write Rand
GraphicsMagick:
  Rotate
  Swirl
  HWB Color Space
RocksDB
VP9 libvpx Encoding
Tachyon
libjpeg-turbo tjbench
AOM AV1
Selenium
Appleseed
Renaissance
Dolfyn
Build2
Renaissance
oneDNN
Cpuminer-Opt
Node.js V8 Web Tooling Benchmark
Blender
Xmrig
LZ4 Compression:
  3 - Decompression Speed
  3 - Compression Speed
OpenFOAM:
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
Stress-NG
Timed Godot Game Engine Compilation
AOM AV1
Stargate Digital Audio Workstation
Numenta Anomaly Benchmark
Timed Linux Kernel Compilation
Renaissance
Stress-NG
Numenta Anomaly Benchmark
Stress-NG
x264
GEGL
NAMD
Radiance Benchmark
AOM AV1
RawTherapee
SQLite Speedtest
Selenium
VP9 libvpx Encoding
GEGL:
  Rotate 90 Degrees
  Color Enhance
ACES DGEMM
Pennant
VP9 libvpx Encoding
RNNoise
Stargate Digital Audio Workstation
Selenium
AOM AV1
SVT-AV1
SVT-VP9
srsRAN:
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
ASTC Encoder
oneDNN
Stargate Digital Audio Workstation
AOM AV1
Cpuminer-Opt:
  Triple SHA-256, Onecoin
  scrypt
Stress-NG:
  MEMFD
  Glibc C String Functions
  Matrix Math
  Atomic
Cpuminer-Opt
Stress-NG:
  Mutex
  Malloc
  SENDFILE
  Forking
  Crypto
  Vector Math
  MMAP
  CPU Stress
  Glibc Qsort Data Sorting
WebP Image Encode
Cpuminer-Opt:
  LBC, LBRY Credits
  Magi
  Skeincoin
  x25x
LZ4 Compression:
  1 - Decompression Speed
  1 - Compression Speed
Cpuminer-Opt
Kvazaar
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
WebP Image Encode
m-queens
Kvazaar
Renaissance
SVT-HEVC
AOM AV1
PyHPC Benchmarks
Renaissance
TNN
oneDNN
AOM AV1:
  Speed 4 Two-Pass - Bosphorus 1080p
  Speed 10 Realtime - Bosphorus 4K
GEGL
srsRAN
DaCapo Benchmark
Pennant
Xcompact3d Incompact3d
Stargate Digital Audio Workstation
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 64-QAM:
    UE Mb/s
    eNb Mb/s
KTX-Software toktx
Stargate Digital Audio Workstation
RocksDB
Embree
Timed FFmpeg Compilation
PyHPC Benchmarks
Stargate Digital Audio Workstation
Timed Mesa Compilation
oneDNN
Renaissance
Stargate Digital Audio Workstation
GEGL
VP9 libvpx Encoding
Timed MPlayer Compilation
GEGL
Embree
Numenta Anomaly Benchmark
Algebraic Multi-Grid Benchmark
Liquid-DSP:
  32 - 256 - 57
  16 - 256 - 57
  8 - 256 - 57
PHPBench
7-Zip Compression:
  Decompression Rating
  Compression Rating
Embree
QuantLib
ASTC Encoder
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 256-QAM:
    UE Mb/s
    eNb Mb/s
ASKAP
Embree
dav1d
Natron
KTX-Software toktx
Selenium
ASTC Encoder
GIMP
Kvazaar
x265
rav1e
Google Draco
Crafty
rav1e
PyHPC Benchmarks
SVT-AV1
AOM AV1
GIMP
Kvazaar
dav1d
libavif avifenc
TNN
rav1e
LAME MP3 Encoding
LuxCoreRender
PyBench
Sysbench
WebP Image Encode
SVT-AV1
GIMP
AOM AV1
GIMP
SVT-AV1
Google Draco
Node.js Express HTTP Load Test
Kvazaar
dav1d
Kvazaar:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Medium
GEGL
LULESH
Numenta Anomaly Benchmark
AOM AV1
Unpacking Firefox
ASTC Encoder
N-Queens
rav1e
GEGL
SVT-VP9
Primesieve
ASKAP:
  tConvolve OpenMP - Degridding
  tConvolve OpenMP - Gridding
AOM AV1
SVT-VP9
TNN
AOM AV1
SVT-HEVC
libavif avifenc
AOM AV1
x265
GNU Octave Benchmark
KTX-Software toktx
Unpacking The Linux Kernel
SVT-AV1
SVT-HEVC
Kvazaar
Numenta Anomaly Benchmark
SVT-AV1
x264
Darktable
libavif avifenc
DaCapo Benchmark
Darktable
oneDNN
Darktable
Kvazaar
dav1d
LAMMPS Molecular Dynamics Simulator
Kvazaar
SVT-VP9
SVT-HEVC
WebP Image Encode
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
SVT-HEVC
SVT-AV1:
  Preset 13 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
WebP Image Encode
Darktable