AMD Ryzen 9 7950X3D Modes On Linux

Ryzen 9 7950X3D benchmarks for a future article by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2302261-NE-7950X3DMO02
Run Details

- Auto: tested February 17 2023; test duration: 1 Day, 8 Hours, 38 Minutes
- Prefer Cache: tested February 19 2023; test duration: 1 Day, 9 Hours, 1 Minute
- Prefer Freq: tested February 20 2023; test duration: 1 Day, 12 Hours, 27 Minutes

Average test duration across the three runs: 1 Day, 10 Hours, 2 Minutes.


AMD Ryzen 9 7950X3D Modes On Linux - OpenBenchmarking.org / Phoronix Test Suite

System Under Test:
Processor: AMD Ryzen 9 7950X3D 16-Core @ 5.76GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG CROSSHAIR X670E HERO (9922 BIOS)
Chipset: AMD Device 14d8
Memory: 32GB
Disk: Western Digital WD_BLACK SN850X 1000GB + 2000GB
Graphics: AMD Radeon RX 7900 XTX 24GB (2304/1249MHz)
Audio: AMD Device ab30
Monitor: ASUS MG28U
Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 23.04
Kernel: 6.2.0-060200rc8daily20230213-generic (x86_64)
Desktop: GNOME Shell 43.2
Display Server: X Server 1.21.1.6
OpenGL: 4.6 Mesa 23.1.0-devel (git-efcb639 2023-02-13 lunar-oibaf-ppa) (LLVM 15.0.7 DRM 3.49)
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Notes:
- Transparent Huge Pages: madvise
- Compiler configure: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk mount options: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Scaling Governor: amd-pstate performance (Boost: Enabled); CPU Microcode: 0xa601203
- OpenJDK Runtime Environment (build 17.0.6+10-Ubuntu-0ubuntu1)
- Python 3.11.1
- Security mitigations: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

[Result overview chart: relative performance (100% to 112%) of the Auto, Prefer Cache, and Prefer Freq runs across tests including Google Draco, Himeno Benchmark, OSPRay Studio, PyBench, PHPBench, QuantLib, DeepSpeech, Stress-NG, srsRAN, Numpy Benchmark, simdjson, LAME MP3 Encoding, KTX-Software toktx, ACES DGEMM, Pennant, Radiance Benchmark, SQLite Speedtest, RNNoise, TensorFlow Lite, and oneDNN.]

[Performance-per-watt overview chart: normalized geometric means (100% to 120%) of the three modes across dozens of tests, from Numpy Benchmark, PHPBench, QuantLib, and simdjson through rav1e and GROMACS.]

[Flattened index of the complete per-test results for the Auto, Prefer Cache, and Prefer Freq runs; the full table is not reproduced here. Individual result graphs follow.]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and, before that, MKL-DNN, prior to being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: IP Shapes 3D, Data Type: u8s8f32, Engine: CPU (ms, fewer is better):

- Prefer Freq: 0.823009 (SE +/- 0.004461, N = 5; Min: 0.81 / Avg: 0.82 / Max: 0.84; MIN: 0.75)
- Prefer Cache: 0.664627 (SE +/- 0.008538, N = 15; Min: 0.61 / Avg: 0.66 / Max: 0.75; MIN: 0.58)
- Auto: 0.665517 (SE +/- 0.001866, N = 5; Min: 0.66 / Avg: 0.67 / Max: 0.67; MIN: 0.62)

Compiled with: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
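The "SE +/-" figures reported alongside each result are the standard error of the mean across the N runs. A minimal sketch of that calculation in Python (the sample values below are hypothetical; per-run samples are not published in this result file):

```python
import statistics


def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    return statistics.stdev(samples) / len(samples) ** 0.5


# Hypothetical per-run timings in ms (not the actual run data).
runs = [0.663, 0.668, 0.664, 0.667, 0.666]
print(f"avg: {statistics.mean(runs):.4f} ms, SE +/- {standard_error(runs):.4f}")
```

A smaller standard error relative to the mean indicates a tighter, more repeatable result across runs.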

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, more is better):

- Prefer Freq: 223.1 (SE +/- 0.87, N = 4; Min: 221.2 / Avg: 223.08 / Max: 225.4)
- Prefer Cache: 222.5 (SE +/- 0.71, N = 4; Min: 221.3 / Avg: 222.45 / Max: 224.5)
- Auto: 264.9 (SE +/- 2.41, N = 5; Min: 259.2 / Avg: 264.92 / Max: 270.8)

All srsRAN results compiled with: (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

srsRAN 22.04.1 - Test: OFDM_Test (Samples / Second, more is better):

- Prefer Freq: 241166667 (SE +/- 2069084.61, N = 3; Min: 237900000 / Avg: 241166666.67 / Max: 245000000)
- Prefer Cache: 204366667 (SE +/- 1311911.24, N = 3; Min: 202300000 / Avg: 204366666.67 / Max: 206800000)
- Auto: 202625000 (SE +/- 2423625.04, N = 4; Min: 197400000 / Avg: 202625000 / Max: 206800000)

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, more is better):

- Prefer Freq: 243.1 (SE +/- 0.67, N = 3; Min: 241.8 / Avg: 243.13 / Max: 243.8)
- Prefer Cache: 204.9 (SE +/- 0.47, N = 3; Min: 204 / Avg: 204.93 / Max: 205.5)
- Auto: 242.3 (SE +/- 1.95, N = 4; Min: 238 / Avg: 242.25 / Max: 246.6)

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides the "toktx" tool for converting/creating image textures in the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.

KTX-Software toktx 4.0 - Settings: UASTC 3 + Zstd Compression 19 (Seconds, fewer is better):

- Prefer Freq: 7.561 (SE +/- 0.080, N = 15; Min: 7.44 / Avg: 7.56 / Max: 8.68)
- Prefer Cache: 8.714 (SE +/- 0.012, N = 5; Min: 8.69 / Avg: 8.71 / Max: 8.76)
- Auto: 7.497 (SE +/- 0.009, N = 6; Min: 7.47 / Avg: 7.5 / Max: 7.52)

srsRAN


srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, more is better)
  Prefer Freq:  143.9 (SE +/- 1.16, N = 5; min 141.5 / max 147)
  Prefer Cache: 126.3 (SE +/- 0.93, N = 15; min 113.9 / max 130)
  Auto:         127.7 (SE +/- 0.19, N = 5; min 127.3 / max 128.2)

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects such as Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: PartialTweets (GB/s, more is better)
  Prefer Freq:  8.53 (SE +/- 0.11, N = 3; min 8.31 / max 8.65)
  Prefer Cache: 7.59 (SE +/- 0.00, N = 3; min 7.58 / max 7.59)
  Auto:         8.64 (SE +/- 0.03, N = 3; min 8.59 / max 8.69)
  Compiled with: g++ -O3
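simdjson itself is C++, but the shape of a parsing-throughput figure like the GB/s numbers above is simple: parse a document repeatedly and divide bytes processed by elapsed time. A sketch using Python's stdlib json module as a stand-in parser (this illustrates only the measurement pattern, not simdjson's performance; the document contents are made up):

```python
import json
import time

def parse_throughput_mb_s(raw: bytes, iterations: int = 200) -> float:
    """Parse the same document repeatedly and report input MB/s."""
    start = time.perf_counter()
    for _ in range(iterations):
        json.loads(raw)
    elapsed = time.perf_counter() - start
    return len(raw) * iterations / elapsed / 1e6

# Synthetic tweet-like document, loosely echoing the PartialTweets test input
doc = json.dumps({"tweets": [{"id": i, "text": "x" * 80} for i in range(500)]}).encode()
print(f"{parse_throughput_mb_s(doc):.1f} MB/s")
```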

PyBench

This test profile reports the total of the averaged test times from PyBench. PyBench reports average times for different micro-tests such as BuiltinFunctionCalls and NestedForLoops, with the total providing a rough estimate of Python's overall performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, fewer is better)
  Prefer Freq:  504 (SE +/- 4.09, N = 4; min 497 / max 514)
  Prefer Cache: 562 (SE +/- 3.30, N = 4; min 555 / max 570)
  Auto:         500 (SE +/- 2.96, N = 4; min 493 / max 505)
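A comparable micro-measurement can be reproduced with the stdlib timeit module. The function below is an illustrative stand-in loosely modeled on PyBench's BuiltinFunctionCalls, not an actual PyBench test:

```python
import timeit

def builtin_function_calls():
    # Call cheap builtins in a tight loop, PyBench-style
    for _ in range(1000):
        len("abc")
        abs(-7)
        min(1, 2)

# Average milliseconds per round over several rounds, as PyBench reports averages
rounds = timeit.repeat(builtin_function_calls, number=10, repeat=5)
avg_ms = sum(rounds) / len(rounds) * 1000 / 10
print(f"BuiltinFunctionCalls: {avg_ms:.3f} ms per round")
```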

srsRAN


srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, more is better)
  Prefer Freq:  563.6 (SE +/- 4.08, N = 5; min 552.8 / max 572.8)
  Prefer Cache: 571.6 (SE +/- 6.69, N = 15; min 544.6 / max 632)
  Auto:         633.1 (SE +/- 2.33, N = 5; min 627.3 / max 638)

KTX-Software toktx


KTX-Software toktx 4.0 - Settings: Zstd Compression 19 (Seconds, fewer is better)
  Prefer Freq:  10.19 (SE +/- 0.02, N = 5; min 10.16 / max 10.23)
  Prefer Cache: 11.45 (SE +/- 0.06, N = 5; min 11.3 / max 11.61)
  Auto:         10.20 (SE +/- 0.03, N = 5; min 10.12 / max 10.25)

srsRAN


srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, more is better)
  Prefer Freq:  682.2 (SE +/- 2.79, N = 3; min 676.7 / max 685.8)
  Prefer Cache: 622.3 (SE +/- 7.08, N = 15; min 602.3 / max 677.7)
  Auto:         609.2 (SE +/- 3.55, N = 3; min 605.4 / max 616.3)

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, more is better)
  Prefer Freq:  633.3 (SE +/- 1.65, N = 3; min 630 / max 635.3)
  Prefer Cache: 566.3 (SE +/- 0.12, N = 3; min 566.1 / max 566.5)
  Auto:         622.6 (SE +/- 6.79, N = 4; min 605.2 / max 638.3)

simdjson


simdjson 2.0 - Throughput Test: LargeRandom (GB/s, more is better)
  Prefer Freq:  1.89 (SE +/- 0.01, N = 3; min 1.88 / max 1.9)
  Prefer Cache: 1.87 (SE +/- 0.00, N = 3; min 1.86 / max 1.87)
  Auto:         1.70 (SE +/- 0.00, N = 3; min 1.7 / max 1.71)

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.0 - Model: Lion (ms, fewer is better)
  Prefer Freq:  3195 (SE +/- 27.55, N = 15; min 3141 / max 3578)
  Prefer Cache: 3543 (SE +/- 23.31, N = 8; min 3497 / max 3692)
  Auto:         3196 (SE +/- 26.74, N = 15; min 3145 / max 3566)
  Compiled with: g++ -O3

srsRAN


srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, more is better)
  Prefer Freq:  613.0 (SE +/- 1.49, N = 4; min 610.1 / max 616.6)
  Prefer Cache: 610.4 (SE +/- 1.16, N = 4; min 608.7 / max 613.8)
  Auto:         675.6 (SE +/- 5.81, N = 5; min 660 / max 688.2)

srsRAN 22.04.1 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, more is better)
  Prefer Freq:  211.5 (SE +/- 2.19, N = 5; min 207.5 / max 217.8)
  Prefer Cache: 191.6 (SE +/- 1.36, N = 15; min 187.6 / max 209.3)
  Auto:         191.2 (SE +/- 0.39, N = 5; min 189.9 / max 192.2)

Radiance Benchmark

This is a benchmark of NREL Radiance, an open-source synthetic imaging system developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.

Radiance Benchmark 5.0 - Test: Serial (Seconds, fewer is better)
  Prefer Freq:  366.42
  Prefer Cache: 360.31
  Auto:         331.61

simdjson


simdjson 2.0 - Throughput Test: TopTweet (GB/s, more is better)
  Prefer Freq:  9.73 (SE +/- 0.07, N = 3; min 9.62 / max 9.85)
  Prefer Cache: 9.57 (SE +/- 0.04, N = 3; min 9.5 / max 9.64)
  Auto:         8.83 (SE +/- 0.01, N = 3; min 8.81 / max 8.85)

simdjson 2.0 - Throughput Test: DistinctUserID (GB/s, more is better)
  Prefer Freq:  10.06 (SE +/- 0.07, N = 3; min 9.96 / max 10.2)
  Prefer Cache: 9.15 (SE +/- 0.03, N = 3; min 9.12 / max 9.21)
  Auto:         9.14 (SE +/- 0.05, N = 3; min 9.05 / max 9.19)

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better)
  Prefer Freq:  1,247,786 (SE +/- 12,155.78, N = 4; min 1,217,414 / max 1,276,407)
  Prefer Cache: 1,154,341 (SE +/- 1,625.68, N = 3; min 1,151,835 / max 1,157,388)
  Auto:         1,270,496 (SE +/- 11,338.88, N = 4; min 1,239,861 / max 1,292,683)

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: leblancbig (Hydro Cycle Time - Seconds, fewer is better)
  Prefer Freq:  23.39 (SE +/- 0.11, N = 3; min 23.22 / max 23.6)
  Prefer Cache: 22.62 (SE +/- 0.18, N = 3; min 22.32 / max 22.95)
  Auto:         24.85 (SE +/- 0.12, N = 3; min 24.62 / max 25.02)
  Compiled with: g++ -fopenmp -lmpi_cxx -lmpi

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports a QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, more is better)
  Prefer Freq:  4512.4 (SE +/- 34.67, N = 3; min 4443.4 / max 4552.9)
  Prefer Cache: 4577.4 (SE +/- 27.33, N = 3; min 4525.4 / max 4618)
  Auto:         4169.2 (SE +/- 38.40, N = 3; min 4098.8 / max 4231)
  Compiled with: g++ -O3 -march=native -rdynamic

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FIR Filter (MiB/s, more is better)
  Prefer Freq:  1267.7 (SE +/- 2.90, N = 9; min 1255.6 / max 1282.4)
  Prefer Cache: 1390.5 (SE +/- 3.63, N = 9; min 1372.7 / max 1408.1)
  Auto:         1390.7 (SE +/- 2.89, N = 9; min 1379.7 / max 1404.2)
  GNU Radio 3.10.5.1
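The FIR Filter test streams samples through a finite impulse response filter; the core operation is a dot product of the tap coefficients with a sliding window of input. A minimal pure-Python sketch of that operation (GNU Radio's actual implementation is optimized C++, typically vectorized via VOLK):

```python
def fir_filter(taps, samples):
    """Convolve input samples with FIR taps (direct form, no optimization)."""
    out = []
    history = [0.0] * len(taps)
    for x in samples:
        history = [x] + history[:-1]  # slide the window of recent inputs
        out.append(sum(t * h for t, h in zip(taps, history)))
    return out

# 3-tap moving average, the simplest FIR example
print(fir_filter([1 / 3, 1 / 3, 1 / 3], [3.0, 3.0, 3.0, 3.0]))
```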

simdjson


simdjson 2.0 - Throughput Test: Kostya (GB/s, more is better)
  Prefer Freq:  5.90 (SE +/- 0.01, N = 3; min 5.89 / max 5.91)
  Prefer Cache: 6.04 (SE +/- 0.02, N = 3; min 6 / max 6.08)
  Auto:         5.54 (SE +/- 0.01, N = 3; min 5.53 / max 5.55)

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MT - Gridding (Million Grid Points Per Second, more is better)
  Prefer Freq:  2322.20 (SE +/- 18.48, N = 15; min 2248.67 / max 2498.59)
  Prefer Cache: 2530.47 (SE +/- 31.33, N = 15; min 2333.66 / max 2694.56)
  Auto:         2371.38 (SE +/- 34.96, N = 15; min 2242.16 / max 2672.58)
  Compiled with: g++ -O3 -fstrict-aliasing -fopenmp

Numpy Benchmark

This test measures general NumPy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, more is better)
  Prefer Freq:  885.61 (SE +/- 1.73, N = 3; min 883.71 / max 889.07)
  Prefer Cache: 958.48 (SE +/- 9.21, N = 6; min 914.45 / max 979.92)
  Auto:         899.73 (SE +/- 6.01, N = 15; min 872.04 / max 977.41)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, more is better)
  Prefer Freq:  73.61 (SE +/- 0.68, N = 3; min 72.25 / max 74.34)
  Prefer Cache: 75.74 (SE +/- 0.15, N = 3; min 75.56 / max 76.04)
  Auto:         78.95 (SE +/- 0.95, N = 4; min 77.54 / max 81.66)
  Compiled with: gcc -O3
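A compression-speed figure like this is bytes of input divided by wall time at a given level. The same measurement pattern can be sketched with Python's stdlib zlib as a stand-in codec (LZ4 and Zstd have no stdlib bindings, so this shows only the methodology, not their performance; the payload is synthetic):

```python
import time
import zlib

def compression_speed_mb_s(data: bytes, level: int) -> float:
    """Time one compression pass and report input MB/s."""
    start = time.perf_counter()
    zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return len(data) / elapsed / 1e6

payload = b"the quick brown fox jumps over the lazy dog " * 50_000
for level in (1, 6, 9):
    print(f"level {level}: {compression_speed_mb_s(payload, level):.1f} MB/s")
```

Higher levels trade compression speed for ratio, which is why the result file benchmarks levels 3 and 19 separately.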

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better)
  Prefer Freq:  1835.1 (SE +/- 10.27, N = 3; min 1815.8 / max 1850.8)
  Prefer Cache: 1847.0 (SE +/- 17.36, N = 15; min 1746.5 / max 2073.2)
  Auto:         1968.0 (SE +/- 53.52, N = 3; min 1861 / max 2021.7)
  Compiled with: gcc -O3 -pthread -lz -llzma -llz4

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, fewer is better)
  Prefer Freq:  4.562 (SE +/- 0.019, N = 8; min 4.5 / max 4.66)
  Prefer Cache: 4.879 (SE +/- 0.064, N = 15; min 4.51 / max 5.11)
  Auto:         4.567 (SE +/- 0.019, N = 8; min 4.5 / max 4.67)
  Compiled with: gcc -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds, fewer is better)
  Prefer Freq:  46.39 (SE +/- 0.33, N = 3; min 45.94 / max 47.04)
  Prefer Cache: 47.99 (SE +/- 0.43, N = 3; min 47.14 / max 48.56)
  Auto:         45.23 (SE +/- 0.45, N = 3; min 44.35 / max 45.85)
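NAB times detectors that flag anomalous points as data arrives. As a minimal illustration of the idea only, here is a rolling mean/stddev threshold detector, far simpler than the Skyline or Bayesian changepoint detectors actually benchmarked; the stream and threshold are made up:

```python
import statistics
from collections import deque

def detect(stream, window=20, threshold=3.0):
    """Flag points more than `threshold` stddevs from the rolling mean."""
    history = deque(maxlen=window)
    flags = []
    for x in stream:
        if len(history) == window:
            mean = statistics.mean(history)
            sd = statistics.stdev(history)
            flags.append(sd > 0 and abs(x - mean) > threshold * sd)
        else:
            flags.append(False)  # not enough history yet
        history.append(x)
    return flags

# Mildly noisy baseline with one large spike
data = [10.0, 10.5] * 15 + [50.0] + [10.0] * 10
print(sum(detect(data)), "anomalies flagged")
```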

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 0 (Seconds, fewer is better)
  Prefer Freq:  71.80 (SE +/- 0.29, N = 3; min 71.24 / max 72.23)
  Prefer Cache: 76.17 (SE +/- 0.17, N = 3; min 75.88 / max 76.48)
  Auto:         73.67 (SE +/- 0.56, N = 15; min 71.57 / max 76.75)
  Compiled with: g++ -O3 -fPIC -lm

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, more is better)
  Prefer Freq:  11.28 (SE +/- 0.09, N = 3; min 11.11 / max 11.4)
  Prefer Cache: 10.64 (SE +/- 0.13, N = 4; min 10.27 / max 10.84)
  Auto:         10.72 (SE +/- 0.10, N = 7; min 10.4 / max 11.2)
  Compiled with: gcc -O3 -march=native -fopenmp
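DGEMM multiplies dense double-precision matrices, and the GFLOP/s figure is the nominal 2*n^3 floating-point operation count divided by elapsed time. A naive pure-Python version, orders of magnitude slower than the benchmark's optimized kernels but shown to make the arithmetic concrete:

```python
import time

def dgemm_gflops(n: int) -> float:
    """Naive n x n matrix multiply; returns achieved GFLOP/s (2*n^3 flops)."""
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    c = [[0.0] * n for _ in range(n)]
    start = time.perf_counter()
    for i in range(n):
        for k in range(n):
            aik = a[i][k]  # hoist the loop-invariant load
            for j in range(n):
                c[i][j] += aik * b[k][j]
    elapsed = time.perf_counter() - start
    return 2.0 * n ** 3 / elapsed / 1e9

print(f"{dgemm_gflops(64):.4f} GFLOP/s")
```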

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better)
  Prefer Freq:  102.32 (SE +/- 0.90, N = 15; min 98.92 / max 108.06)
  Prefer Cache: 105.98 (SE +/- 1.09, N = 15; min 96.96 / max 110.43)
  Auto:         108.41 (SE +/- 0.99, N = 7; min 103.43 / max 111.58)
  Compiled with: g++ -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s, more is better)
  Prefer Freq:  26.57 (SE +/- 0.30, N = 15; min 25.37 / max 28.3)
  Prefer Cache: 27.92 (SE +/- 0.02, N = 13; min 27.78 / max 28.04)
  Auto:         28.12 (SE +/- 0.07, N = 13; min 27.55 / max 28.3)
  Compiled with: gcc -fvisibility=hidden -O2 -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Prefer Freq:  5.42688 (SE +/- 0.07946, N = 15; min 5.11 / max 5.83; benchdnn MIN: 5.02)
  Prefer Cache: 5.15456 (SE +/- 0.04406, N = 15; min 4.97 / max 5.7; benchdnn MIN: 4.87)
  Auto:         5.14172 (SE +/- 0.03961, N = 15; min 4.9 / max 5.35; benchdnn MIN: 4.8)
  Compiled with: g++ -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Zstd Compression


Zstd Compression 1.5.4 - Compression Level: 3, Long Mode - Decompression Speed (MB/s, more is better)
  Prefer Freq:  2267.8 (SE +/- 2.30, N = 3; min 2263.3 / max 2270.8)
  Prefer Cache: 2158.6 (SE +/- 58.56, N = 3; min 2046.1 / max 2243)
  Auto:         2252.5 (SE +/- 10.66, N = 3; min 2235.4 / max 2272.1)

GNU Radio


GNU Radio - Test: Hilbert Transform (MiB/s, more is better)
  Prefer Freq:  687.4 (SE +/- 3.07, N = 9; min 674.7 / max 697.5)
  Prefer Cache: 721.3 (SE +/- 3.09, N = 9; min 702.8 / max 735.1)
  Auto:         720.2 (SE +/- 3.80, N = 9; min 703.7 / max 738)

Numenta Anomaly Benchmark


Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds, fewer is better)
  Prefer Freq:  11.41 (SE +/- 0.14, N = 15; min 10.53 / max 12.34)
  Prefer Cache: 11.95 (SE +/- 0.09, N = 4; min 11.71 / max 12.14)
  Auto:         11.50 (SE +/- 0.12, N = 15; min 10.6 / max 12.11)

AOM AV1


AOM AV1 3.6 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better)
  Prefer Freq:  110.91 (SE +/- 0.99, N = 15; min 102.54 / max 114.22)
  Prefer Cache: 108.48 (SE +/- 1.26, N = 15; min 101.71 / max 115.27)
  Auto:         105.96 (SE +/- 1.44, N = 15; min 100.04 / max 115.18)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better)
  Prefer Freq:  129.42
  Prefer Cache: 123.66
  Auto:         125.82
  Compiled with: g++ -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds, fewer is better)
  Prefer Freq:  22.93
  Prefer Cache: 23.98
  Auto:         23.20

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better)
  Prefer Freq:  36.09 (SE +/- 0.09, N = 3; min 35.92 / max 36.2)
  Prefer Cache: 34.52 (SE +/- 0.08, N = 3; min 34.37 / max 34.63)
  Auto:         34.84 (SE +/- 0.20, N = 3; min 34.52 / max 35.22)
  Compiled with: gcc -O2 -lz
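speedtest1 is a C harness, but the kind of workload it times (bulk inserts and queries against a database) can be sketched with Python's stdlib sqlite3 module. The table and column names here are illustrative, not speedtest1's actual schema:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (a INTEGER PRIMARY KEY, b TEXT)")

start = time.perf_counter()
with conn:  # one transaction, as speedtest1-style harnesses batch their inserts
    conn.executemany(
        "INSERT INTO t1 (a, b) VALUES (?, ?)",
        ((i, f"row-{i}") for i in range(100_000)),
    )
count = conn.execute("SELECT COUNT(*) FROM t1").fetchone()[0]
elapsed = time.perf_counter() - start
print(f"{count} rows inserted and counted in {elapsed:.3f}s")
```

Batching the inserts into a single transaction matters: per-statement commits would be dominated by journal overhead rather than CPU work.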

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: ARES-6 - Browser: Google Chrome (ms, fewer is better)
  Prefer Freq:  7.04 (SE +/- 0.09, N = 3; min 6.94 / max 7.23)
  Prefer Cache: 7.36 (SE +/- 0.08, N = 15; min 6.94 / max 7.87)
  Auto:         7.24 (SE +/- 0.09, N = 3; min 7.12 / max 7.42)
  Tested with Chrome 110.0.5481.96

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute-long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, fewer is better)
  Prefer Freq:  13.13 (SE +/- 0.15, N = 15; min 12.55 / max 14.13)
  Prefer Cache: 12.77 (SE +/- 0.07, N = 4; min 12.63 / max 12.96)
  Auto:         12.57 (SE +/- 0.02, N = 4; min 12.55 / max 12.62)
  Compiled with: gcc -O2 -pedantic -fvisibility=hidden

Numenta Anomaly Benchmark


Numenta Anomaly Benchmark 1.1 - Detector: Contextual Anomaly Detector OSE (Seconds, fewer is better)
  Prefer Freq:  19.96 (SE +/- 0.18, N = 3; min 19.65 / max 20.28)
  Prefer Cache: 20.78 (SE +/- 0.07, N = 3; min 20.63 / max 20.86)
  Auto:         19.90 (SE +/- 0.15, N = 3; min 19.71 / max 20.19)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, more is better)
  Prefer Freq:  1646 (SE +/- 6.89, N = 3; min 1632 / max 1654)
  Prefer Cache: 1702 (SE +/- 3.71, N = 3; min 1697 / max 1709)
  Auto:         1634 (SE +/- 10.37, N = 3; min 1620 / max 1654)
  Compiled with: gcc -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Signal Source (Cosine) (MiB/s; more is better):
  Prefer Freq:  5008.9 (SE +/- 41.28, N = 9; Min: 4777.3 / Max: 5150.7)
  Prefer Cache: 4918.1 (SE +/- 45.51, N = 9; Min: 4753.7 / Max: 5121.9)
  Auto:         4813.0 (SE +/- 48.78, N = 9; Min: 4541 / Max: 5000.6)
  1. 3.10.5.1

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.32 - Test: rotate (Seconds; fewer is better):
  Prefer Freq:  9.471 (SE +/- 0.017, N = 5; Min: 9.42 / Max: 9.52)
  Prefer Cache: 9.104 (SE +/- 0.008, N = 5; Min: 9.07 / Max: 9.12)
  Auto:         9.460 (SE +/- 0.009, N = 5; Min: 9.44 / Max: 9.49)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
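As a concrete illustration of what a cwebp invocation behind a run like "Quality 100, Highest Compression" might look like: the -q (quality) and -m (compression method/effort) flags are real cwebp options, but the exact flags this test profile passes are an assumption here.

```python
import subprocess

def cwebp_command(src_jpeg, dst_webp, quality=100, method=6):
    # -q sets quality (0-100); -m sets compression method/effort (0 = fastest,
    # 6 = slowest/best). Mapping "Highest Compression" to -m 6 is an assumption.
    return ["cwebp", "-q", str(quality), "-m", str(method),
            src_jpeg, "-o", dst_webp]

# To actually run it (requires cwebp on PATH):
# subprocess.run(cwebp_command("sample.jpg", "out.webp"), check=True)
```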

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Highest Compression (MP/s; more is better):
  Prefer Freq:  5.32 (SE +/- 0.07, N = 15; Min: 4.98 / Max: 5.62)
  Prefer Cache: 5.53 (SE +/- 0.01, N = 8; Min: 5.48 / Max: 5.55)
  Auto:         5.46 (SE +/- 0.01, N = 8; Min: 5.41 / Max: 5.51)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: CPU Stress (Bogo Ops/s; more is better):
  Prefer Freq:  56651.53 (SE +/- 352.22, N = 3; Min: 56156.62 / Max: 57333.11)
  Prefer Cache: 58865.01 (SE +/- 137.26, N = 3; Min: 58611.67 / Max: 59083.27)
  Auto:         58780.75 (SE +/- 307.96, N = 3; Min: 58415.66 / Max: 59392.88)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Five Back to Back FIR Filters (MiB/s; more is better):
  Prefer Freq:  1352.3 (SE +/- 17.40, N = 9; Min: 1232.5 / Max: 1395.5)
  Prefer Cache: 1404.4 (SE +/- 25.73, N = 9; Min: 1300 / Max: 1509.6)
  Auto:         1356.5 (SE +/- 22.39, N = 9; Min: 1269.3 / Max: 1506.6)
  1. 3.10.5.1

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Unix Makefiles (Seconds; fewer is better):
  Prefer Freq:  276.21 (SE +/- 2.01, N = 3; Min: 273.48 / Max: 280.13)
  Prefer Cache: 282.59 (SE +/- 3.72, N = 3; Min: 275.48 / Max: 288.07)
  Auto:         285.53 (SE +/- 0.94, N = 3; Min: 283.92 / Max: 287.16)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 2 (Seconds; fewer is better):
  Prefer Freq:  35.94 (SE +/- 0.07, N = 3; Min: 35.84 / Max: 36.08)
  Prefer Cache: 37.12 (SE +/- 0.31, N = 15; Min: 35.59 / Max: 38.5)
  Auto:         36.23 (SE +/- 0.32, N = 8; Min: 35.64 / Max: 38.41)
  1. (CXX) g++ options: -O3 -fPIC -lm

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second; more is better):
  Prefer Freq:  21.99 (SE +/- 0.19, N = 3; Min: 21.75 / Max: 22.37)
  Prefer Cache: 22.20 (SE +/- 0.24, N = 5; Min: 21.68 / Max: 22.77)
  Auto:         22.68 (SE +/- 0.27, N = 3; Min: 22.14 / Max: 22.97)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile follows ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ and https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million row web analytics dataset. The reported value aggregates the query processing results as the geometric mean across all of the separate queries performed. Learn more via the OpenBenchmarking.org test page.
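The geometric-mean aggregation used for the ClickHouse result can be sketched as follows; the per-query rates shown are hypothetical, not taken from this result file:

```python
import math

def geometric_mean(values):
    # nth root of the product of n values, computed via logarithms for
    # numerical stability with many factors
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-query rates (queries per minute) for three queries
rates = [250.0, 320.0, 410.0]
aggregate = geometric_mean(rates)
```

Unlike the arithmetic mean, the geometric mean keeps one extremely fast (or slow) query from dominating the aggregate, which suits a suite with widely varying query costs.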

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean; more is better):
  Prefer Freq:  321.27 (SE +/- 1.19, N = 3; Min: 319.52 / Max: 323.54) MIN: 19.84 / MAX: 12000
  Prefer Cache: 316.99 (SE +/- 2.19, N = 3; Min: 312.94 / Max: 320.48) MIN: 19.73 / MAX: 10000
  Auto:         311.62 (SE +/- 1.92, N = 3; Min: 307.91 / Max: 314.31) MIN: 15.96 / MAX: 12000

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean; more is better):
  Prefer Freq:  323.36 (SE +/- 2.63, N = 3; Min: 319.52 / Max: 328.39) MIN: 18.55 / MAX: 12000
  Prefer Cache: 314.70 (SE +/- 0.72, N = 3; Min: 313.38 / Max: 315.84) MIN: 15.62 / MAX: 10000
  Auto:         313.67 (SE +/- 3.39, N = 3; Min: 308.81 / Max: 320.19) MIN: 15.76 / MAX: 10000

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec; more is better):
  Prefer Freq:  419 (SE +/- 1.86, N = 3; Min: 415 / Max: 421) MIN: 53 / MAX: 5936
  Prefer Cache: 407 (SE +/- 0.33, N = 3; Min: 407 / Max: 408) MIN: 53 / MAX: 5165
  Auto:         407 (SE +/- 0.33, N = 3; Min: 407 / Max: 408) MIN: 53 / MAX: 5149

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters (MiB/s; more is better):
  Prefer Freq:  2002.5 (SE +/- 17.45, N = 7; Min: 1926.2 / Max: 2045.3)
  Prefer Cache: 1945.7 (SE +/- 20.86, N = 5; Min: 1885.2 / Max: 2004.1)
  Auto:         1959.1 (SE +/- 22.01, N = 3; Min: 1915.1 / Max: 1982.9)

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides "toktx" for converting/creating image textures in the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.

KTX-Software toktx 4.0 - Settings: UASTC 3 (Seconds; fewer is better):
  Prefer Freq:  5.272 (SE +/- 0.002, N = 7; Min: 5.26 / Max: 5.28)
  Prefer Cache: 5.130 (SE +/- 0.005, N = 7; Min: 5.12 / Max: 5.15)
  Auto:         5.279 (SE +/- 0.003, N = 7; Min: 5.27 / Max: 5.29)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Second; more is better):
  Prefer Freq:  183.57 (SE +/- 2.62, N = 15; Min: 163.72 / Max: 191.45)
  Prefer Cache: 188.88 (SE +/- 0.57, N = 3; Min: 187.79 / Max: 189.73)
  Auto:         185.34 (SE +/- 2.28, N = 4; Min: 180.45 / Max: 191.43)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s; more is better):
  Prefer Freq:  17.16 (SE +/- 0.11, N = 12; Min: 16.12 / Max: 17.71)
  Prefer Cache: 16.95 (SE +/- 0.18, N = 15; Min: 15.94 / Max: 17.7)
  Auto:         17.44 (SE +/- 0.14, N = 15; Min: 16.12 / Max: 17.71)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
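For reference, a level-19 zstd compression run like the one timed here can be invoked as sketched below; -19, -k (keep input), and -f (force overwrite) are standard zstd CLI flags, though the exact invocation used by the test profile is an assumption.

```python
import subprocess

def zstd_compress_command(src, level=19):
    # -<level> selects the compression level; -k keeps the input file;
    # -f overwrites any existing .zst output
    return ["zstd", "-%d" % level, "-k", "-f", src]

# To actually run it (requires zstd on PATH):
# subprocess.run(zstd_compress_command("silesia.tar"), check=True)
```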

Zstd Compression 1.5.4 - Compression Level: 19 - Decompression Speed (MB/s; more is better):
  Prefer Freq:  1990.6 (SE +/- 22.67, N = 15; Min: 1925 / Max: 2190.1)
  Prefer Cache: 1949.1 (SE +/- 2.99, N = 15; Min: 1925.1 / Max: 1975.7)
  Auto:         2004.9 (SE +/- 26.46, N = 15; Min: 1938.2 / Max: 2201.5)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second; more is better):
  Prefer Freq:  108.60 (SE +/- 1.20, N = 15; Min: 102.97 / Max: 114.48)
  Prefer Cache: 109.81 (SE +/- 1.11, N = 15; Min: 102.8 / Max: 114.23)
  Auto:         106.78 (SE +/- 1.08, N = 6; Min: 102.23 / Max: 110.22)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.6 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second; more is better):
  Prefer Freq:  241.38 (SE +/- 2.13, N = 15; Min: 231.29 / Max: 254.7)
  Prefer Cache: 242.10 (SE +/- 2.80, N = 15; Min: 228.36 / Max: 260.91)
  Auto:         248.22 (SE +/- 3.46, N = 15; Min: 232.03 / Max: 269.08)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms; fewer is better):
  Prefer Freq:  131199 (SE +/- 208.57, N = 3; Min: 130799 / Max: 131501)
  Prefer Cache: 131552 (SE +/- 123.29, N = 3; Min: 131322 / Max: 131744)
  Auto:         127948 (SE +/- 113.14, N = 3; Min: 127787 / Max: 128166)
  1. (CXX) g++ options: -O3 -lm -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mobilenet (ms; fewer is better):
  Prefer Freq:  9.16 (SE +/- 0.05, N = 15; Min: 9.01 / Max: 9.84) MIN: 8.92 / MAX: 10.19
  Prefer Cache: 8.92 (SE +/- 0.01, N = 3; Min: 8.89 / Max: 8.93) MIN: 8.81 / MAX: 14.7
  Auto:         8.91 (SE +/- 0.03, N = 3; Min: 8.86 / Max: 8.95) MIN: 8.81 / MAX: 9.59
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
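The measurement itself amounts to timing a tar.xz extraction; a minimal sketch follows (the function name and interface here are illustrative, not the test profile's own):

```python
import tarfile
import time

def time_extract(archive_path, dest_dir):
    # Wall-clock time to extract a .tar.xz archive to dest_dir
    start = time.perf_counter()
    with tarfile.open(archive_path, mode="r:xz") as tar:
        tar.extractall(path=dest_dir)
    return time.perf_counter() - start
```

This workload stresses single-threaded xz decompression plus filesystem metadata writes, which is why the three scheduling modes land within a few percent of each other here.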

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds; fewer is better):
  Prefer Freq:  10.72 (SE +/- 0.08, N = 4; Min: 10.5 / Max: 10.87)
  Prefer Cache: 10.78 (SE +/- 0.03, N = 4; Min: 10.7 / Max: 10.85)
  Auto:         10.49 (SE +/- 0.09, N = 4; Min: 10.24 / Max: 10.67)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless (MP/s; more is better):
  Prefer Freq:  2.23 (SE +/- 0.03, N = 15; Min: 2.03 / Max: 2.33)
  Prefer Cache: 2.27 (SE +/- 0.02, N = 5; Min: 2.23 / Max: 2.32)
  Auto:         2.29 (SE +/- 0.01, N = 5; Min: 2.24 / Max: 2.32)
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute; more is better):
  Prefer Freq:  2216 (SE +/- 31.52, N = 3; Min: 2163 / Max: 2272)
  Prefer Cache: 2158 (SE +/- 17.52, N = 3; Min: 2139 / Max: 2193)
  Auto:         2169 (SE +/- 26.27, N = 4; Min: 2141 / Max: 2248)
  1. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms; fewer is better):
  Prefer Freq:  43.36 (SE +/- 0.43, N = 15; Min: 40.02 / Max: 46.02) MIN: 39.83 / MAX: 47.05
  Prefer Cache: 42.25 (SE +/- 0.35, N = 9; Min: 40.34 / Max: 43.37) MIN: 40.19 / MAX: 43.45
  Auto:         42.84 (SE +/- 0.36, N = 15; Min: 40.01 / Max: 45.47) MIN: 39.8 / MAX: 45.69
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 4.2.0 - Test: Server Rack - Acceleration: CPU-only (Seconds; fewer is better):
  Prefer Freq:  0.152 (SE +/- 0.000, N = 14; Min: 0.15 / Max: 0.15)
  Prefer Cache: 0.156 (SE +/- 0.000, N = 14; Min: 0.16 / Max: 0.16)
  Auto:         0.152 (SE +/- 0.000, N = 14; Min: 0.15 / Max: 0.15)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms; fewer is better):
  Prefer Freq:  129900 (SE +/- 180.62, N = 3; Min: 129560 / Max: 130176)
  Prefer Cache: 129677 (SE +/- 114.33, N = 3; Min: 129449 / Max: 129809)
  Auto:         126604 (SE +/- 180.00, N = 3; Min: 126422 / Max: 126964)
  1. (CXX) g++ options: -O3 -lm -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms; fewer is better):
  Prefer Freq:  4.76 (SE +/- 0.01, N = 15; Min: 4.68 / Max: 4.87) MIN: 4.62 / MAX: 5.4
  Prefer Cache: 4.66 (SE +/- 0.03, N = 3; Min: 4.6 / Max: 4.71) MIN: 4.56 / MAX: 10.75
  Auto:         4.64 (SE +/- 0.00, N = 3; Min: 4.63 / Max: 4.64) MIN: 4.57 / MAX: 5.09
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel source tree package. Learn more via the OpenBenchmarking.org test page.

Unpacking The Linux Kernel 5.19 - linux-5.19.tar.xz (Seconds; fewer is better):
  Prefer Freq:  4.054 (SE +/- 0.011, N = 8; Min: 4 / Max: 4.11)
  Prefer Cache: 4.007 (SE +/- 0.033, N = 9; Min: 3.82 / Max: 4.1)
  Auto:         3.953 (SE +/- 0.030, N = 8; Min: 3.81 / Max: 4.05)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds; fewer is better):
  Prefer Freq:  1.094 (SE +/- 0.001, N = 3; Min: 1.09 / Max: 1.1)
  Prefer Cache: 1.067 (SE +/- 0.008, N = 12; Min: 0.98 / Max: 1.09)
  Auto:         1.085 (SE +/- 0.005, N = 3; Min: 1.08 / Max: 1.09)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Color Enhance (Seconds; fewer is better):
  Prefer Freq:  33.60 (SE +/- 0.27, N = 3; Min: 33.2 / Max: 34.12)
  Prefer Cache: 34.25 (SE +/- 0.20, N = 3; Min: 34.05 / Max: 34.64)
  Auto:         33.42 (SE +/- 0.10, N = 3; Min: 33.27 / Max: 33.61)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms; fewer is better):
  Prefer Freq:  32005 (SE +/- 33.67, N = 3; Min: 31971 / Max: 32072)
  Prefer Cache: 32061 (SE +/- 39.37, N = 3; Min: 32019 / Max: 32140)
  Auto:         31311 (SE +/- 52.98, N = 3; Min: 31206 / Max: 31374)
  1. (CXX) g++ options: -O3 -lm -ldl

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms; fewer is better):
  Prefer Freq:  152148 (SE +/- 1166.54, N = 3; Min: 149844 / Max: 153618)
  Prefer Cache: 153633 (SE +/- 538.77, N = 3; Min: 152762 / Max: 154618)
  Auto:         150068 (SE +/- 142.59, N = 3; Min: 149841 / Max: 150331)
  1. (CXX) g++ options: -O3 -lm -ldl

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s; more is better):
  Prefer Freq:  18319.8 (SE +/- 30.05, N = 3; Min: 18270.2 / Max: 18374)
  Prefer Cache: 18752.7 (SE +/- 28.63, N = 3; Min: 18696.4 / Max: 18790)
  Auto:         18483.0 (SE +/- 78.84, N = 4; Min: 18347 / Max: 18710.3)
  1. (CC) gcc options: -O3

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 19 - Compression Speed (MB/s; more is better):
  Prefer Freq:  25.4 (SE +/- 0.35, N = 15; Min: 23 / Max: 26.9)
  Prefer Cache: 25.9 (SE +/- 0.28, N = 15; Min: 24 / Max: 27.1)
  Auto:         26.0 (SE +/- 0.22, N = 15; Min: 24.2 / Max: 26.9)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Danish Mood - Acceleration: CPU (M samples/sec; more is better):
  Prefer Freq:  4.46 (SE +/- 0.01, N = 3; Min: 4.44 / Max: 4.48) MIN: 2.05 / MAX: 5
  Prefer Cache: 4.36 (SE +/- 0.01, N = 3; Min: 4.33 / Max: 4.38) MIN: 1.84 / MAX: 4.92
  Auto:         4.38 (SE +/- 0.03, N = 3; Min: 4.34 / Max: 4.43) MIN: 1.82 / MAX: 4.93

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds; fewer is better):
  Prefer Freq:  14.62 (SE +/- 0.09, N = 4; Min: 14.42 / Max: 14.8)
  Prefer Cache: 14.67 (SE +/- 0.15, N = 4; Min: 14.27 / Max: 14.97)
  Auto:         14.95 (SE +/- 0.14, N = 6; Min: 14.37 / Max: 15.42)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms; fewer is better):
  Prefer Freq:  1178 (SE +/- 1.53, N = 3; Min: 1175 / Max: 1180)
  Prefer Cache: 1152 (SE +/- 1.86, N = 3; Min: 1148 / Max: 1154)
  Auto:         1154 (SE +/- 1.73, N = 3; Min: 1151 / Max: 1157)
  1. (CXX) g++ options: -O3 -lm -ldl

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: sedovbig (Hydro Cycle Time - Seconds; fewer is better):
  Prefer Freq:  33.05 (SE +/- 0.12, N = 3; Min: 32.81 / Max: 33.21)
  Prefer Cache: 32.39 (SE +/- 0.18, N = 3; Min: 32.14 / Max: 32.74)
  Auto:         33.12 (SE +/- 0.21, N = 3; Min: 32.79 / Max: 33.52)
  1. (CXX) g++ options: -fopenmp -lmpi_cxx -lmpi

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Jetstream 2 - Browser: Google Chrome (Score; more is better):
  Prefer Freq:  325.83 (SE +/- 4.12, N = 3; Min: 317.66 / Max: 330.86)
  Prefer Cache: 326.22 (SE +/- 3.84, N = 4; Min: 318.99 / Max: 333.71)
  Auto:         333.01 (SE +/- 3.14, N = 3; Min: 327.03 / Max: 337.69)
  1. chrome 110.0.5481.96

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms; fewer is better):
  Prefer Freq:  4629 (SE +/- 31.34, N = 3; Min: 4566 / Max: 4661)
  Prefer Cache: 4681 (SE +/- 4.91, N = 3; Min: 4671 / Max: 4687)
  Auto:         4581 (SE +/- 3.46, N = 3; Min: 4575 / Max: 4587)
  1. (CXX) g++ options: -O3 -lm -ldl

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s; more is better)
  Prefer Freq:  avg 20.90 (SE +/- 0.14, N = 3; min 20.64 / max 21.11)
  Prefer Cache: avg 20.46 (SE +/- 0.24, N = 3; min 19.99 / max 20.7)
  Auto:         avg 20.63 (SE +/- 0.08, N = 3; min 20.5 / max 20.77)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Rotate 90 Degrees (seconds; fewer is better)
  Prefer Freq:  avg 34.97 (SE +/- 0.15, N = 3; min 34.78 / max 35.27)
  Prefer Cache: avg 34.24 (SE +/- 0.13, N = 3; min 34 / max 34.44)
  Auto:         avg 34.27 (SE +/- 0.22, N = 3; min 33.91 / max 34.66)
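GEGL's Rotate 90 Degrees operation is a lossless quarter-turn of the image. For a pixel grid held as a list of rows, that transform reduces to a reverse plus a transpose; a toy sketch (GEGL itself operates on its own tiled buffer format, not Python lists):

```python
def rotate_90_cw(grid):
    """Rotate a row-major pixel grid 90 degrees clockwise."""
    # reversed(grid) makes columns read bottom-up; zip(*...) transposes
    return [list(row) for row in zip(*reversed(grid))]

image = [[1, 2, 3],
         [4, 5, 6]]
print(rotate_90_cw(image))  # [[4, 1], [5, 2], [6, 3]]
```

Four successive quarter-turns return the original grid, which is a handy sanity check for any such implementation.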

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 2, Resolution: 1080p, Samples Per Pixel: 1, Renderer: Path Tracer (ms; fewer is better)
  Prefer Freq:  avg 1001 (SE +/- 1.00, N = 3; min 999 / max 1002)
  Prefer Cache: avg 997 (SE +/- 1.76, N = 3; min 994 / max 1000)
  Auto:         avg 980 (SE +/- 0.00, N = 3; min 980 / max 980)
  1. (CXX) g++ options: -O3 -lm -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU-v3-v3, Model: mobilenet-v3 (ms; fewer is better)
  Prefer Freq:  avg 3.38 (SE +/- 0.01, N = 15; min 3.31 / max 3.45; per-run MIN 3.27 / MAX 3.9)
  Prefer Cache: avg 3.33 (SE +/- 0.02, N = 3; min 3.29 / max 3.36; per-run MIN 3.24 / MAX 3.96)
  Auto:         avg 3.31 (SE +/- 0.01, N = 3; min 3.3 / max 3.33; per-run MIN 3.25 / MAX 3.89)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 1, Resolution: 1080p, Samples Per Pixel: 32, Renderer: Path Tracer (ms; fewer is better)
  Prefer Freq:  avg 31551 (SE +/- 68.70, N = 3; min 31423 / max 31658)
  Prefer Cache: avg 31593 (SE +/- 38.68, N = 3; min 31526 / max 31660)
  Auto:         avg 30943 (SE +/- 52.27, N = 3; min 30859 / max 31039)
  1. (CXX) g++ options: -O3 -lm -ldl

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 9, Compression Speed (MB/s; more is better)
  Prefer Freq:  avg 72.84 (SE +/- 0.81, N = 4; min 70.73 / max 74.48)
  Prefer Cache: avg 73.61 (SE +/- 0.53, N = 15; min 71.96 / max 79.78)
  Auto:         avg 72.10 (SE +/- 0.40, N = 3; min 71.62 / max 72.89)
  1. (CC) gcc options: -O3

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 19, Long Mode, Compression Speed (MB/s; more is better)
  Prefer Freq:  avg 14.9 (SE +/- 0.06, N = 3; min 14.8 / max 15)
  Prefer Cache: avg 14.6 (SE +/- 0.15, N = 15; min 12.9 / max 15)
  Auto:         avg 14.9 (SE +/- 0.00, N = 3; min 14.9 / max 14.9)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4
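The trade-off these Zstd runs probe, with higher levels spending more CPU time to shrink the output further, can be sketched with the Python standard library. Zstd bindings are not in the stdlib, so zlib stands in here as an analogous general-purpose codec with the same level-vs-speed knob:

```python
import zlib

def compress_sizes(data):
    """Compare output size at a fast vs a thorough compression level."""
    fast = zlib.compress(data, level=1)   # analogous to zstd's low levels
    slow = zlib.compress(data, level=9)   # analogous to zstd -19
    # Both must round-trip losslessly
    assert zlib.decompress(fast) == data and zlib.decompress(slow) == data
    return len(fast), len(slow)

sample = b"the quick brown fox jumps over the lazy dog " * 2000
fast_len, slow_len = compress_sizes(sample)
print(fast_len, slow_len)
```

On compressible input the thorough level should never produce a larger result; what it costs is exactly the compression speed this test measures.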

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi carrying a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3, Algorithm: Myriad-Groestl (kH/s; more is better)
  Prefer Freq:  avg 58189 (SE +/- 649.11, N = 15; min 51510 / max 61010)
  Prefer Cache: avg 58791 (SE +/- 588.74, N = 15; min 53740 / max 61900)
  Auto:         avg 59358 (SE +/- 450.34, N = 15; min 56150 / max 62380)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5, Sample Rate: 192000, Buffer Size: 512 (Render Ratio; more is better)
  Prefer Freq:  avg 3.242317 (SE +/- 0.028257, N = 7; min 3.16 / max 3.37)
  Prefer Cache: avg 3.307251 (SE +/- 0.017434, N = 3; min 3.28 / max 3.34)
  Auto:         avg 3.280874 (SE +/- 0.026295, N = 3; min 3.23 / max 3.32)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1, Speed: 5 (Frames Per Second; more is better)
  Prefer Freq:  avg 4.725 (SE +/- 0.011, N = 4; min 4.7 / max 4.75)
  Prefer Cache: avg 4.689 (SE +/- 0.008, N = 4; min 4.66 / max 4.7)
  Auto:         avg 4.633 (SE +/- 0.033, N = 4; min 4.55 / max 4.71)

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio, Test: FM Deemphasis Filter (MiB/s; more is better)
  Prefer Freq:  avg 1136.9 (SE +/- 3.20, N = 9; min 1121 / max 1153.5)
  Prefer Cache: avg 1115.0 (SE +/- 4.98, N = 9; min 1096.5 / max 1139.1)
  Auto:         avg 1119.7 (SE +/- 2.62, N = 9; min 1100.8 / max 1128.7)
  1. 3.10.5.1

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all separate queries performed as an aggregate. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.12.3.5, 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean; more is better)
  Prefer Freq:  avg 281.13 (SE +/- 0.59, N = 3; min 279.97 / max 281.9; per-query MIN 12.85 / MAX 7500)
  Prefer Cache: avg 280.93 (SE +/- 1.64, N = 3; min 278.71 / max 284.14; per-query MIN 13.18 / MAX 8571.43)
  Auto:         avg 275.83 (SE +/- 1.37, N = 3; min 273.64 / max 278.35; per-query MIN 13.18 / MAX 7500)
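The ClickHouse figure above is a geometric mean across all of the individual queries, which keeps a handful of very fast (or very slow) queries from dominating the aggregate the way an arithmetic mean would. A minimal sketch of that aggregation, using hypothetical per-query rates rather than values from this result file:

```python
import math

def geometric_mean(values):
    """nth root of the product, computed in log space for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

queries_per_minute = [7500.0, 120.0, 12.85]  # hypothetical per-query rates
print(round(geometric_mean(queries_per_minute), 2))
```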

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 6 (seconds; fewer is better)
  Prefer Freq:  avg 3.209 (SE +/- 0.008, N = 9; min 3.18 / max 3.25)
  Prefer Cache: avg 3.270 (SE +/- 0.013, N = 9; min 3.21 / max 3.33)
  Auto:         avg 3.224 (SE +/- 0.012, N = 9; min 3.18 / max 3.3)
  1. (CXX) g++ options: -O3 -fPIC -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU, Model: resnet18 (ms; fewer is better)
  Prefer Freq:  avg 7.02 (SE +/- 0.01, N = 15; min 6.93 / max 7.09; per-run MIN 6.83 / MAX 10.65)
  Prefer Cache: avg 6.90 (SE +/- 0.01, N = 3; min 6.89 / max 6.91; per-run MIN 6.76 / MAX 7.67)
  Auto:         avg 7.03 (SE +/- 0.01, N = 3; min 7.01 / max 7.04; per-run MIN 6.9 / MAX 7.92)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Antialias (seconds; fewer is better)
  Prefer Freq:  avg 25.06 (SE +/- 0.10, N = 3; min 24.91 / max 25.25)
  Prefer Cache: avg 24.60 (SE +/- 0.09, N = 3; min 24.49 / max 24.78)
  Auto:         avg 24.94 (SE +/- 0.08, N = 3; min 24.81 / max 25.08)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU, Model: blazeface (ms; fewer is better)
  Prefer Freq:  avg 1.65 (SE +/- 0.01, N = 15; min 1.61 / max 1.71; per-run MIN 1.58 / MAX 2.22)
  Prefer Cache: avg 1.63 (SE +/- 0.01, N = 3; min 1.6 / max 1.65; per-run MIN 1.57 / MAX 1.98)
  Auto:         avg 1.62 (SE +/- 0.01, N = 3; min 1.61 / max 1.64; per-run MIN 1.59 / MAX 1.99)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0, Test: Decompression Throughput (Megapixels/sec; more is better)
  Prefer Freq:  avg 344.08 (SE +/- 1.53, N = 3; min 341.05 / max 345.99)
  Prefer Cache: avg 342.25 (SE +/- 1.66, N = 3; min 339.84 / max 345.43)
  Auto:         avg 337.86 (SE +/- 3.34, N = 15; min 311.84 / max 347.91)
  1. (CC) gcc options: -O3 -rdynamic

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU-v2-v2, Model: mobilenet-v2 (ms; fewer is better)
  Prefer Freq:  avg 3.88 (SE +/- 0.01, N = 15; min 3.83 / max 3.93; per-run MIN 3.78 / MAX 7.82)
  Prefer Cache: avg 3.82 (SE +/- 0.02, N = 3; min 3.78 / max 3.85; per-run MIN 3.74 / MAX 4.27)
  Auto:         avg 3.81 (SE +/- 0.01, N = 3; min 3.79 / max 3.82; per-run MIN 3.75 / MAX 4.33)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1, Speed: 10 (Frames Per Second; more is better)
  Prefer Freq:  avg 16.74 (SE +/- 0.16, N = 7; min 16.13 / max 17.42)
  Prefer Cache: avg 16.79 (SE +/- 0.09, N = 7; min 16.53 / max 17.23)
  Auto:         avg 17.05 (SE +/- 0.11, N = 7; min 16.64 / max 17.48)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0, Harness: Recurrent Neural Network Inference, Data Type: u8s8f32, Engine: CPU (ms; fewer is better)
  Prefer Freq:  avg 673.16 (SE +/- 3.86, N = 3; min 666.58 / max 679.96; per-run MIN 663.46)
  Prefer Cache: avg 685.42 (SE +/- 3.19, N = 3; min 679.77 / max 690.82; per-run MIN 674.86)
  Auto:         avg 673.24 (SE +/- 2.26, N = 3; min 670.49 / max 677.71; per-run MIN 666.56)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU, Model: vision_transformer (ms; fewer is better)
  Prefer Freq:  avg 80.55 (SE +/- 0.23, N = 15; min 79.88 / max 83.7; per-run MIN 79.58 / MAX 94.08)
  Prefer Cache: avg 82.01 (SE +/- 0.91, N = 3; min 80.23 / max 83.25; per-run MIN 79.65 / MAX 95.45)
  Auto:         avg 81.52 (SE +/- 1.09, N = 3; min 80.36 / max 83.7; per-run MIN 80.09 / MAX 86.32)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Reflect (seconds; fewer is better)
  Prefer Freq:  avg 20.37 (SE +/- 0.17, N = 3; min 20.13 / max 20.71)
  Prefer Cache: avg 20.74 (SE +/- 0.07, N = 3; min 20.65 / max 20.88)
  Auto:         avg 20.58 (SE +/- 0.06, N = 3; min 20.5 / max 20.69)

OpenEMS

OpenEMS is a free and open electromagnetic field solver using the FDTD method. This test profile runs OpenEMS and pyEMS benchmark demos. Learn more via the OpenBenchmarking.org test page.

OpenEMS 0.0.35-86, Test: pyEMS Coupler (MCells/s; more is better)
  Prefer Freq:  avg 60.92 (SE +/- 0.05, N = 3; min 60.82 / max 61)
  Prefer Cache: avg 62.01 (SE +/- 0.12, N = 3; min 61.81 / max 62.21)
  Auto:         avg 61.61 (SE +/- 0.35, N = 3; min 61.01 / max 62.21)
  1. (CXX) g++ options: -O3 -rdynamic -ltinyxml -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -lexpat

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38, Operation: Rotate (Iterations Per Minute; more is better)
  Prefer Freq:  avg 1012 (SE +/- 0.33, N = 3; min 1012 / max 1013)
  Prefer Cache: avg 1027 (SE +/- 6.51, N = 3; min 1020 / max 1040)
  Auto:         avg 1030 (SE +/- 2.31, N = 3; min 1026 / max 1034)
  1. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 6, Lossless (seconds; fewer is better)
  Prefer Freq:  avg 5.421 (SE +/- 0.012, N = 7; min 5.38 / max 5.47)
  Prefer Cache: avg 5.517 (SE +/- 0.021, N = 7; min 5.45 / max 5.6)
  Auto:         avg 5.488 (SE +/- 0.036, N = 7; min 5.4 / max 5.64)
  1. (CXX) g++ options: -O3 -fPIC -lm

Radiance Benchmark

This is a benchmark of NREL Radiance, a synthetic imaging system that is open-source and developed by the Lawrence Berkeley National Laboratory in California. Learn more via the OpenBenchmarking.org test page.

Radiance Benchmark 5.0, Test: SMP Parallel (seconds; fewer is better)
  Prefer Freq:  112.34
  Prefer Cache: 114.03
  Auto:         112.05

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 1, Resolution: 1080p, Samples Per Pixel: 1, Renderer: Path Tracer (ms; fewer is better)
  Prefer Freq:  avg 984 (SE +/- 1.53, N = 3; min 981 / max 986)
  Prefer Cache: avg 986 (SE +/- 1.86, N = 3; min 984 / max 990)
  Auto:         avg 969 (SE +/- 1.33, N = 3; min 966 / max 970)
  1. (CXX) g++ options: -O3 -lm -ldl

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1, Detector: Relative Entropy (seconds; fewer is better)
  Prefer Freq:  avg 7.283 (SE +/- 0.015, N = 6; min 7.21 / max 7.31)
  Prefer Cache: avg 7.278 (SE +/- 0.052, N = 6; min 7.09 / max 7.4)
  Auto:         avg 7.159 (SE +/- 0.049, N = 6; min 7.04 / max 7.3)

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1, Test: Complex Phase (MiB/s; more is better)
  Prefer Freq:  avg 1071.4 (SE +/- 4.41, N = 7; min 1056.1 / max 1088.3)
  Prefer Cache: avg 1089.7 (SE +/- 4.40, N = 5; min 1080.7 / max 1102.1)
  Auto:         avg 1078.4 (SE +/- 4.07, N = 3; min 1071.9 / max 1085.9)

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test (Requests Per Second; more is better)
  Prefer Freq:  avg 11056 (SE +/- 51.45, N = 5; min 10921 / max 11237)
  Prefer Cache: avg 10871 (SE +/- 21.96, N = 5; min 10838 / max 10949)
  Auto:         avg 11006 (SE +/- 17.51, N = 5; min 10958 / max 11055)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 1, Resolution: 4K, Samples Per Pixel: 1, Renderer: Path Tracer (ms; fewer is better)
  Prefer Freq:  avg 3946 (SE +/- 0.58, N = 3; min 3945 / max 3947)
  Prefer Cache: avg 3948 (SE +/- 4.58, N = 3; min 3939 / max 3954)
  Auto:         avg 3882 (SE +/- 35.92, N = 3; min 3833 / max 3952)
  1. (CXX) g++ options: -O3 -lm -ldl

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3, Input: Spaceship (FPS; more is better)
  Prefer Freq:  avg 5.9 (SE +/- 0.03, N = 3; min 5.9 / max 6)
  Prefer Cache: avg 6.0 (SE +/- 0.06, N = 3; min 5.9 / max 6.1)
  Auto:         avg 5.9 (SE +/- 0.03, N = 3; min 5.9 / max 6)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU, Model: alexnet (ms; fewer is better)
  Prefer Freq:  avg 4.74 (SE +/- 0.01, N = 14; min 4.67 / max 4.82; per-run MIN 4.6 / MAX 10.72)
  Prefer Cache: avg 4.75 (SE +/- 0.12, N = 3; min 4.51 / max 4.87; per-run MIN 4.42 / MAX 5.49)
  Auto:         avg 4.82 (SE +/- 0.13, N = 3; min 4.56 / max 4.96; per-run MIN 4.45 / MAX 5.54)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28, Backend: Eigen (Nodes Per Second; more is better)
  Prefer Freq:  avg 1795 (SE +/- 14.57, N = 3; min 1767 / max 1816)
  Prefer Cache: avg 1810 (SE +/- 19.08, N = 3; min 1778 / max 1844)
  Auto:         avg 1825 (SE +/- 12.03, N = 3; min 1807 / max 1848)
  1. (CXX) g++ options: -flto -pthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Vehicle Detection FP16, Device: CPU (FPS; more is better)
  Prefer Freq:  avg 993.16 (SE +/- 3.10, N = 3; min 987.94 / max 998.66)
  Prefer Cache: avg 1009.73 (SE +/- 3.98, N = 3; min 1002.54 / max 1016.28)
  Auto:         avg 997.60 (SE +/- 4.60, N = 3; min 989.12 / max 1004.92)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 3, Compression Speed (MB/s; more is better)
  Prefer Freq:  avg 4023.5 (SE +/- 29.94, N = 3; min 3991.5 / max 4083.3)
  Prefer Cache: avg 3967.5 (SE +/- 26.15, N = 3; min 3929.9 / max 4017.8)
  Auto:         avg 4033.3 (SE +/- 20.17, N = 3; min 3993 / max 4055.6)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0, Device: CPU, Backend: Numpy, Project Size: 1048576, Benchmark: Equation of State (seconds; fewer is better)
  Prefer Freq:  avg 0.122 (SE +/- 0.001, N = 4; min 0.12 / max 0.12)
  Prefer Cache: avg 0.123 (SE +/- 0.000, N = 4; min 0.12 / max 0.12)
  Auto:         avg 0.121 (SE +/- 0.000, N = 4; min 0.12 / max 0.12)
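The PyHPC harness times short numerical kernels and reports how long each takes. A stdlib-only sketch of that timing pattern, using a hypothetical stand-in kernel rather than the real NumPy equation-of-state benchmark:

```python
import timeit

def stand_in_kernel(n):
    """Toy stand-in for an equation-of-state kernel: a polynomial over n points."""
    total = 0.0
    for i in range(n):
        t = i / n
        total += 1.0 + 0.5 * t - 0.2 * t * t   # hypothetical polynomial
    return total

# Benchmark-style measurement: best wall-clock time over several repeats,
# which filters out one-off scheduling noise
best = min(timeit.repeat(lambda: stand_in_kernel(10_000), number=5, repeat=3))
print(f"best of 3 repeats: {best:.4f} s")
```

Taking the minimum of several repeats is a common convention for CPU micro-benchmarks, since the fastest run is the one least disturbed by the rest of the system.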

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU, Model: googlenet (ms; fewer is better)
  Prefer Freq:  avg 8.61 (SE +/- 0.02, N = 15; min 8.46 / max 8.73; per-run MIN 8.31 / MAX 9.47)
  Prefer Cache: avg 8.47 (SE +/- 0.04, N = 3; min 8.41 / max 8.54; per-run MIN 8.29 / MAX 9.39)
  Auto:         avg 8.48 (SE +/- 0.02, N = 3; min 8.45 / max 8.52; per-run MIN 8.36 / MAX 9.39)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Vehicle Detection FP16, Device: CPU (ms; fewer is better)
  Prefer Freq:  avg 8.05 (SE +/- 0.03, N = 3; min 8 / max 8.09; per-run MIN 3.98 / MAX 18.88)
  Prefer Cache: avg 7.92 (SE +/- 0.03, N = 3; min 7.87 / max 7.97; per-run MIN 4.93 / MAX 18.91)
  Auto:         avg 8.01 (SE +/- 0.04, N = 3; min 7.95 / max 8.08; per-run MIN 4.18 / MAX 20.46)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.32, Test: resize (seconds; fewer is better)
  Prefer Freq:  avg 12.65 (SE +/- 0.09, N = 4; min 12.44 / max 12.85)
  Prefer Cache: avg 12.45 (SE +/- 0.08, N = 4; min 12.2 / max 12.53)
  Auto:         avg 12.53 (SE +/- 0.08, N = 4; min 12.34 / max 12.72)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Swirl (Iterations Per Minute, more is better)
  Prefer Freq:  1153 (SE +/- 0.33, N = 3; runs 1153-1154)
  Prefer Cache: 1162 (SE +/- 5.17, N = 3; runs 1156-1172)
  Auto:         1144 (SE +/- 1.53, N = 3; runs 1142-1147)
  (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
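Stress-NG's "bogo ops" metric is simply the number of stressor loop iterations completed, reported per second. A minimal sketch of what a semaphore stressor counts, using Python's standard threading primitives rather than stress-ng's actual C implementation (illustrative only; the function name and timings are hypothetical):

```python
import threading
import time

def semaphore_bogo_ops(duration=0.5):
    """Count semaphore acquire/release pairs completed in `duration` seconds."""
    sem = threading.Semaphore(1)
    ops = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        sem.acquire()   # one "bogo op" = one acquire/release pair
        sem.release()
        ops += 1
    return ops / duration  # bogo ops per second

print(f"{semaphore_bogo_ops():.0f} bogo ops/s")
```

The absolute number is meaningless across implementations; like stress-ng's figures, it is only useful for comparing the same stressor on different configurations.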

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: SemaphoresPrefer FreqPrefer CacheAuto700K1400K2100K2800K3500KSE +/- 783.77, N = 3SE +/- 214.95, N = 3SE +/- 32095.22, N = 73476539.493476176.553423759.331. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.14.06Test: SemaphoresPrefer FreqPrefer CacheAuto600K1200K1800K2400K3000KMin: 3475025.32 / Avg: 3476539.49 / Max: 3477647.78Min: 3475952.16 / Avg: 3476176.55 / Max: 3476606.31Min: 3287977.32 / Avg: 3423759.33 / Max: 3481227.121. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Scale (Seconds, fewer is better)
  Prefer Freq:  7.233 (SE +/- 0.025, N = 6; runs 7.14-7.3)
  Prefer Cache: 7.264 (SE +/- 0.021, N = 6; runs 7.2-7.32)
  Auto:         7.344 (SE +/- 0.033, N = 6; runs 7.2-7.41)

Gcrypt Library

Libgcrypt is a general purpose cryptographic library developed as part of the GnuPG project. This benchmark runs libgcrypt's integrated benchmark command with the cipher/MAC/hash repetition count set to 50, as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.
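The shape of such a "repeat the primitive N times and time it" benchmark can be sketched with Python's stdlib hashlib instead of libgcrypt (an analogy only, not libgcrypt's actual benchmark; the function name and buffer size are assumptions):

```python
import hashlib
import time

def hash_throughput(algorithm="sha256", repetitions=50, chunk=b"x" * 65536):
    """Time `repetitions` hashes of a 64 KiB buffer and return MiB/s."""
    start = time.perf_counter()
    for _ in range(repetitions):
        hashlib.new(algorithm, chunk).digest()
    elapsed = time.perf_counter() - start
    return (repetitions * len(chunk)) / elapsed / 2**20

print(f"SHA-256: {hash_throughput():.1f} MiB/s")
```

As with the libgcrypt numbers above, lower total runtime (or higher throughput) reflects better overall crypto performance for the configuration under test.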

Gcrypt Library 1.9 (Seconds, fewer is better)
  Prefer Freq:  143.80 (SE +/- 1.17, N = 3; runs 142.4-146.12)
  Prefer Cache: 143.14 (SE +/- 0.78, N = 3; runs 141.61-144.15)
  Auto:         145.31 (SE +/- 0.50, N = 3; runs 144.55-146.24)
  (CC) gcc options: -O2 -fvisibility=hidden

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, fewer is better)
  Prefer Freq:  12.30 (SE +/- 0.06, N = 15; runs 11.97-12.68; samples 11.79-39.44)
  Prefer Cache: 12.18 (SE +/- 0.16, N = 3; runs 11.88-12.42; samples 11.67-12.88)
  Auto:         12.12 (SE +/- 0.05, N = 3; runs 12.01-12.19; samples 11.83-17.6)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better)
  Prefer Freq:  3.42 (SE +/- 0.01, N = 15; runs 3.36-3.48; samples 3.32-3.96)
  Prefer Cache: 3.37 (SE +/- 0.02, N = 3; runs 3.33-3.39; samples 3.3-3.84)
  Auto:         3.37 (SE +/- 0.01, N = 3; runs 3.35-3.38; samples 3.32-3.78)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 7.3.0 (Seconds, fewer is better)
  Prefer Freq:  4.585 (SE +/- 0.012, N = 8; runs 4.54-4.63)
  Prefer Cache: 4.653 (SE +/- 0.013, N = 8; runs 4.62-4.73)
  Auto:         4.652 (SE +/- 0.007, N = 8; runs 4.62-4.69)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, spanning Apache Spark, a Twitter-like service, Scala, and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, fewer is better)
  Prefer Freq:  7991.4 (SE +/- 112.03, N = 3; runs 7874.73-8215.37; samples 7874.73-9046.54)
  Prefer Cache: 8103.6 (SE +/- 88.81, N = 3; runs 7926.83-8206.88; samples 7926.83-8896.3)
  Auto:         8108.5 (SE +/- 77.81, N = 3; runs 7956.94-8214.85; samples 7956.94-8936.02)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
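The compress/decompress speeds reported here are bytes of original data processed per unit of wall time. Since LZ4 is not in the Python standard library, a sketch of that measurement using stdlib zlib as a stand-in codec (the function name and sample data are illustrative assumptions):

```python
import time
import zlib

def compression_speeds(data, level=9):
    """Return (compress_MBps, decompress_MBps) for zlib at `level`."""
    t0 = time.perf_counter()
    packed = zlib.compress(data, level)
    t1 = time.perf_counter()
    unpacked = zlib.decompress(packed)
    t2 = time.perf_counter()
    assert unpacked == data  # round-trip must be lossless
    mb = len(data) / 1e6
    return mb / (t1 - t0), mb / (t2 - t1)

sample = b"benchmarkable " * 200_000  # ~2.8 MB of compressible data
c, d = compression_speeds(sample)
print(f"compress: {c:.0f} MB/s, decompress: {d:.0f} MB/s")
```

Decompression is typically far faster than compression at high levels, which is why LZ4's level-9 decompression figures below sit in the multi-GB/s range.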

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better)
  Prefer Freq:  18587.5 (SE +/- 123.72, N = 4; runs 18362-18887.3)
  Prefer Cache: 18764.5 (SE +/- 33.45, N = 15; runs 18501.8-18861.9)
  Auto:         18495.3 (SE +/- 23.00, N = 3; runs 18456.5-18536.1)
  (CC) gcc options: -O3

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  Prefer Freq:  11.87 (SE +/- 0.03, N = 15; runs 11.72-12.16; samples 11.61-18.31)
  Prefer Cache: 11.70 (SE +/- 0.02, N = 3; runs 11.66-11.73; samples 11.52-17.65)
  Auto:         11.77 (SE +/- 0.02, N = 3; runs 11.73-11.8; samples 11.6-12.48)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 10, Lossless (Seconds, fewer is better)
  Prefer Freq:  3.250 (SE +/- 0.022, N = 15; runs 3.15-3.52)
  Prefer Cache: 3.297 (SE +/- 0.022, N = 15; runs 3.2-3.49)
  Auto:         3.283 (SE +/- 0.027, N = 15; runs 3.18-3.48)
  (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Prefer Freq:  217.88 (SE +/- 0.85, N = 10; runs 214.6-221.51)
  Prefer Cache: 219.56 (SE +/- 1.80, N = 15; runs 202.55-224.26)
  Auto:         221.03 (SE +/- 2.26, N = 15; runs 201.99-230.37)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, spanning Apache Spark, a Twitter-like service, Scala, and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, fewer is better)
  Prefer Freq:  3439.3 (SE +/- 17.17, N = 3; runs 3421.31-3473.62; samples 3421.31-5134.14)
  Prefer Cache: 3489.0 (SE +/- 47.73, N = 3; runs 3424.57-3582.21; samples 3424.57-5127.41)
  Auto:         3454.1 (SE +/- 31.81, N = 3; runs 3399.67-3509.85; samples 3399.67-5374.55)

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with any number of scalar transport equations. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds, fewer is better)
  Prefer Freq:  61.34 (SE +/- 0.39, N = 3; runs 60.57-61.8)
  Prefer Cache: 60.82 (SE +/- 0.70, N = 4; runs 59.48-62.78)
  Auto:         60.47 (SE +/- 0.26, N = 3; runs 60.02-60.91)
  (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi carrying a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, more is better)
  Prefer Freq:  4152.58 (SE +/- 13.77, N = 3; runs 4132.54-4178.96)
  Prefer Cache: 4211.63 (SE +/- 32.16, N = 10; runs 4143.37-4416.2)
  Auto:         4161.55 (SE +/- 12.63, N = 3; runs 4141.86-4185.1)
  (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 512 (Render Ratio, more is better)
  Prefer Freq:  5.181607 (SE +/- 0.035122, N = 3; runs 5.12-5.24)
  Prefer Cache: 5.193528 (SE +/- 0.017595, N = 3; runs 5.16-5.22)
  Auto:         5.255289 (SE +/- 0.033936, N = 3; runs 5.19-5.31)
  (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, spanning Apache Spark, a Twitter-like service, Scala, and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank (ms, fewer is better)
  Prefer Freq:  1584.1 (SE +/- 15.65, N = 3; runs 1553.25-1604.13; samples 1414.69-1754.86)
  Prefer Cache: 1562.2 (SE +/- 2.07, N = 3; runs 1559.13-1566.15; samples 1439.9-1667.7)
  Auto:         1566.9 (SE +/- 15.84, N = 3; runs 1547.01-1598.2; samples 1420.34-1721.97)

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms, fewer is better)
  Prefer Freq:  1111.3 (SE +/- 7.37, N = 3; runs 1096.78-1120.66; samples 1060.74-1144.27)
  Prefer Cache: 1116.2 (SE +/- 8.29, N = 3; runs 1100.33-1128.28; samples 1043.35-1143.74)
  Auto:         1126.5 (SE +/- 5.13, N = 3; runs 1117.08-1134.75; samples 1017.45-1147.28)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better)
  Prefer Freq:  373.08 (SE +/- 0.94, N = 3; runs 371.2-374.13)
  Prefer Cache: 371.84 (SE +/- 1.15, N = 3; runs 369.57-373.23)
  Auto:         368.05 (SE +/- 0.71, N = 3; runs 367.3-369.47)
  (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Prefer Freq:  39.83 (SE +/- 0.18, N = 4; runs 39.35-40.11)
  Prefer Cache: 40.34 (SE +/- 0.08, N = 4; runs 40.24-40.58)
  Auto:         39.80 (SE +/- 0.20, N = 4; runs 39.37-40.34)
  (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Glibc C String Functions (Bogo Ops/s, more is better)
  Prefer Freq:  4613558.35 (SE +/- 46384.47, N = 3; runs 4562787.85-4706184.4)
  Prefer Cache: 4616386.56 (SE +/- 55352.76, N = 3; runs 4556657.78-4726973.55)
  Auto:         4554878.75 (SE +/- 4595.10, N = 3; runs 4547240.33-4563123.54)
  (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - Test: Memory Copying (Bogo Ops/s, more is better)
  Prefer Freq:  7077.09 (SE +/- 19.10, N = 3; runs 7055.4-7115.17)
  Prefer Cache: 7097.39 (SE +/- 23.08, N = 3; runs 7052.27-7128.37)
  Auto:         7172.31 (SE +/- 59.45, N = 9; runs 7069.74-7642.63)
  (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.32 - Test: auto-levels (Seconds, fewer is better)
  Prefer Freq:  10.78 (SE +/- 0.03, N = 4; runs 10.72-10.88)
  Prefer Cache: 10.77 (SE +/- 0.04, N = 4; runs 10.7-10.85)
  Auto:         10.64 (SE +/- 0.04, N = 5; runs 10.48-10.74)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 12 - Compression Speed (MB/s, more is better)
  Prefer Freq:  297.9 (SE +/- 2.19, N = 3; runs 294.1-301.7)
  Prefer Cache: 299.9 (SE +/- 1.82, N = 3; runs 296.3-302.3)
  Auto:         296.0 (SE +/- 1.64, N = 3; runs 293.5-299.1)
  (CC) gcc options: -O3 -pthread -lz -llzma -llz4

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better)
  Prefer Freq:  7.56400 (SE +/- 0.03167, N = 3; runs 7.5-7.61)
  Prefer Cache: 7.50797 (SE +/- 0.03086, N = 3; runs 7.48-7.57)
  Auto:         7.46570 (SE +/- 0.01046, N = 3; runs 7.45-7.48)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi carrying a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, more is better)
  Prefer Freq:  19670 (SE +/- 190.35, N = 3; runs 19460-20050)
  Prefer Cache: 19517 (SE +/- 83.53, N = 3; runs 19350-19610)
  Auto:         19773 (SE +/- 193.42, N = 3; runs 19410-20070)
  (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 3, Long Mode - Compression Speed (MB/s, more is better)
  Prefer Freq:  1405.5 (SE +/- 7.52, N = 3; runs 1390.5-1413.2)
  Prefer Cache: 1423.9 (SE +/- 9.62, N = 3; runs 1407.6-1440.9)
  Auto:         1418.1 (SE +/- 1.91, N = 3; runs 1414.7-1421.3)
  (CC) gcc options: -O3 -pthread -lz -llzma -llz4

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
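The PyHPC kernels, such as the Equation of State benchmark below, time the evaluation of a numerical expression over large arrays. A toy pure-Python analogue of that pattern (the polynomial and function name here are simplified illustrations; the real benchmark evaluates a far larger seawater density polynomial using NumPy and other array backends):

```python
import time

def equation_of_state(temps, salts):
    """Toy polynomial 'equation of state' kernel over parallel lists."""
    return [1000.0 + 0.8 * s - 0.2 * t + 0.01 * t * t
            for t, s in zip(temps, salts)]

n = 100_000
temps = [10.0 + (i % 20) for i in range(n)]
salts = [34.0 + (i % 3) * 0.1 for i in range(n)]
start = time.perf_counter()
rho = equation_of_state(temps, salts)
print(f"{n} points in {time.perf_counter() - start:.4f} s")
```

The reported metric is simply wall-clock seconds for a fixed problem size, which is why smaller is better in the results below.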

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing (Seconds, fewer is better)
  Prefer Freq:  0.234 (SE +/- 0.002, N = 4; runs 0.23-0.24)
  Prefer Cache: 0.236 (SE +/- 0.001, N = 4; runs 0.23-0.24)
  Auto:         0.237 (SE +/- 0.001, N = 4; runs 0.23-0.24)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Socket Activity (Bogo Ops/s, more is better)
  Prefer Freq:  35444.42 (SE +/- 393.19, N = 15; runs 33825.18-38803.83)
  Prefer Cache: 35308.63 (SE +/- 279.96, N = 15; runs 33727.42-37543.66)
  Auto:         34995.78 (SE +/- 433.26, N = 15; runs 33412.07-39339.92)
  (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Prefer Freq:  678.66 (SE +/- 2.87, N = 3; runs 675.71-684.4; sample min 670.71)
  Prefer Cache: 683.48 (SE +/- 1.86, N = 3; runs 680.13-686.56; sample min 674.73)
  Auto:         674.84 (SE +/- 7.25, N = 3; runs 660.46-683.59; sample min 656.84)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and the Linux networking stack. The test runs on the local host but requires root permissions. It creates three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices that send traffic through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time -- a fairly CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.
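The three-namespace topology the description outlines can be sketched with iproute2 commands (a simplified, illustrative configuration fragment, not the actual test script; it requires root, and the key/peer configuration is elided):

```shell
# Sketch of the three-namespace WireGuard topology described above (requires root).
ip netns add ns0
ip netns add ns1
ip netns add ns2
ip -n ns0 link set lo up          # ns0 provides the shared loopback path
ip link add wg1 type wireguard
ip link set wg1 netns ns1         # WireGuard device for ns1
ip link add wg2 type wireguard
ip link set wg2 netns ns2         # WireGuard device for ns2
# Each wg device would then be given keys, addresses, and peers whose
# endpoints point at 127.0.0.1 inside ns0, so ns1<->ns2 traffic is
# encrypted, routed through ns0's loopback, and decrypted again.
```

Because both encryption and decryption happen on the same machine, the measured time reflects raw crypto throughput plus how well the scheduler spreads the work.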

WireGuard + Linux Networking Stack Stress Test (Seconds, fewer is better)
  Prefer Freq:  148.63 (SE +/- 1.81, N = 3; runs 146.68-152.24)
  Prefer Cache: 147.19 (SE +/- 1.26, N = 3; runs 145.52-149.66)
  Auto:         149.04 (SE +/- 0.72, N = 3; runs 148.04-150.43)

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: IIR Filter (MiB/s, more is better)
  Prefer Freq:  524.7 (SE +/- 0.92, N = 9; runs 518.5-527.1)
  Prefer Cache: 518.2 (SE +/- 1.65, N = 9; runs 511.7-526.2)
  Auto:         520.0 (SE +/- 2.24, N = 9; runs 512.4-534.1)
  GNU Radio version: 3.10.5.1

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, more is better)
  Prefer Freq:  16.25 (SE +/- 0.14, N = 15; runs 15.12-16.7)
  Prefer Cache: 16.11 (SE +/- 0.17, N = 15; runs 14.2-16.75)
  Auto:         16.31 (SE +/- 0.11, N = 13; runs 15.4-16.73)
  (CXX) g++ options: -O3 -lm -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better)
  Prefer Freq:  1721.24 (SE +/- 3.24, N = 3; runs 1716.24-1727.32)
  Prefer Cache: 1702.47 (SE +/- 1.37, N = 3; runs 1699.85-1704.5)
  Auto:         1700.14 (SE +/- 3.62, N = 3; runs 1693.79-1706.31)
  (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Hilbert Transform (MiB/s, more is better)
  Prefer Freq:  154.6 (SE +/- 0.35, N = 7; runs 153.9-156.5)
  Prefer Cache: 155.6 (SE +/- 1.10, N = 5; runs 152.9-158.4)
  Auto:         153.7 (SE +/- 1.47, N = 3; runs 150.8-155.3)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, more is better)
  Prefer Freq:  492 (SE +/- 1.45, N = 3; runs 490-495)
  Prefer Cache: 486 (SE +/- 0.88, N = 3; runs 485-488)
  Auto:         486 (SE +/- 1.00, N = 3; runs 485-488)
  (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (Seconds, fewer is better)
  Prefer Freq:  0.745 (SE +/- 0.004, N = 3; runs 0.74-0.75)
  Prefer Cache: 0.741 (SE +/- 0.002, N = 3; runs 0.74-0.74)
  Auto:         0.736 (SE +/- 0.002, N = 3; runs 0.73-0.74)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, more is better)
  Prefer Freq:  0.84 (SE +/- 0.00, N = 3; runs 0.84-0.84)
  Prefer Cache: 0.84 (SE +/- 0.00, N = 3; runs 0.84-0.84)
  Auto:         0.83 (SE +/- 0.00, N = 3; runs 0.83-0.84)
  (CC) gcc options: -fvisibility=hidden -O2 -lm

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, more is better)
  Prefer Freq:  10.18 (SE +/- 0.03, N = 3; runs 10.13-10.23)
  Prefer Cache: 10.06 (SE +/- 0.09, N = 3; runs 9.91-10.22)
  Auto:         10.10 (SE +/- 0.06, N = 3; runs 10-10.21)
  (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: PSPDFKit WASM - Browser: Google Chrome (Score, fewer is better):
  Prefer Freq:  3124 (SE +/- 33.73, N = 15, range 2833 - 3259)
  Prefer Cache: 3103 (SE +/- 42.76, N = 15, range 2760 - 3254)
  Auto:         3139 (SE +/- 42.95, N = 15, range 2743 - 3266)
1. chrome 110.0.5481.96

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute, more is better):
  Prefer Freq:  605 (SE +/- 5.79, N = 12, range 542 - 616)
  Prefer Cache: 612 (SE +/- 0.88, N = 3, range 611 - 614)
  Auto:         611 (SE +/- 0.33, N = 3, range 610 - 611)
1. (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: MEMFD (Bogo Ops/s, more is better):
  Prefer Freq:  1276.15 (SE +/- 3.18, N = 3, range 1270.32 - 1281.26)
  Prefer Cache: 1266.52 (SE +/- 2.63, N = 3, range 1261.88 - 1270.99)
  Auto:         1281.14 (SE +/- 2.10, N = 3, range 1277.19 - 1284.34)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Cartoon (Seconds, fewer is better):
  Prefer Freq:  62.34 (SE +/- 0.66, N = 3, range 61.67 - 63.67)
  Prefer Cache: 62.60 (SE +/- 0.19, N = 3, range 62.22 - 62.85)
  Auto:         61.89 (SE +/- 0.19, N = 3, range 61.51 - 62.12)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.13 - Speed: Speed 0 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Prefer Freq:  20.39 (SE +/- 0.28, N = 3, range 20.03 - 20.95)
  Prefer Cache: 20.35 (SE +/- 0.26, N = 3, range 19.97 - 20.85)
  Auto:         20.16 (SE +/- 0.25, N = 4, range 19.70 - 20.86)
1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Futex (Bogo Ops/s, more is better):
  Prefer Freq:  4134347.27 (SE +/- 37858.78, N = 7, range 3957674.19 - 4267266.62)
  Prefer Cache: 4120615.70 (SE +/- 54535.09, N = 3, range 4035049.55 - 4221973.96)
  Auto:         4087777.65 (SE +/- 41048.77, N = 3, range 4005700.16 - 4130387.92)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Prefer Freq:  238.97 (SE +/- 2.18, N = 15, range 219.67 - 251.74)
  Prefer Cache: 241.09 (SE +/- 2.64, N = 15, range 219.74 - 251.58)
  Auto:         238.38 (SE +/- 2.55, N = 15, range 224.19 - 257.99)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  Prefer Freq:  1488033333 (SE +/- 1880011.82, N = 3, range 1484900000 - 1491400000)
  Prefer Cache: 1471433333 (SE +/- 6133605.07, N = 3, range 1465200000 - 1483700000)
  Auto:         1485366667 (SE +/- 3012381.86, N = 3, range 1480700000 - 1491000000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, more is better):
  Prefer Freq:  7.70002 (SE +/- 0.01807, N = 3, range 7.68 - 7.74)
  Prefer Cache: 7.61487 (SE +/- 0.00354, N = 3, range 7.61 - 7.62)
  Auto:         7.66102 (SE +/- 0.02259, N = 3, range 7.63 - 7.71)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Prefer Freq:  68.58 (SE +/- 0.09, N = 4, range 68.39 - 68.81)
  Prefer Cache: 68.47 (SE +/- 0.28, N = 4, range 67.97 - 69.22)
  Auto:         69.22 (SE +/- 0.36, N = 4, range 68.28 - 70.04)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Prefer Freq:  796.32 (SE +/- 4.32, N = 15, range 737.88 - 807.50)
  Prefer Cache: 787.79 (SE +/- 4.51, N = 15, range 728.99 - 802.21)
  Auto:         789.95 (SE +/- 4.42, N = 15, range 732.80 - 806.84)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better):
  Prefer Freq:  34.65 (SE +/- 0.10, N = 3, range 34.47 - 34.83)
  Prefer Cache: 34.64 (SE +/- 0.05, N = 3, range 34.55 - 34.69)
  Auto:         34.28 (SE +/- 0.24, N = 3, range 34.03 - 34.76)
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build that enables all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, fewer is better):
  Prefer Freq:  519.52 (SE +/- 0.32, N = 3, range 518.89 - 519.88)
  Prefer Cache: 517.73 (SE +/- 0.45, N = 3, range 516.97 - 518.51)
  Auto:         523.31 (SE +/- 0.26, N = 3, range 522.79 - 523.60)

OpenVINO

This is a test of Intel OpenVINO, a toolkit built around neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better):
  Prefer Freq:  4.65 (SE +/- 0.01, N = 3, range 4.63 - 4.66; per-run min/max 3.00 / 12.96)
  Prefer Cache: 4.69 (SE +/- 0.00, N = 3, range 4.69 - 4.70; per-run min/max 3.01 / 12.79)
  Auto:         4.70 (SE +/- 0.01, N = 3, range 4.68 - 4.72; per-run min/max 3.01 / 12.51)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Prefer Freq:  195.81 (SE +/- 0.49, N = 9, range 192.74 - 197.17)
  Prefer Cache: 194.84 (SE +/- 0.51, N = 9, range 192.17 - 196.62)
  Auto:         193.76 (SE +/- 0.63, N = 9, range 191.37 - 196.49)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, fewer is better):
  Prefer Freq:  4.78 (SE +/- 0.02, N = 13, range 4.68 - 5.00; per-run min/max 4.62 / 10.44)
  Prefer Cache: 4.75 (SE +/- 0.04, N = 3, range 4.67 - 4.80; per-run min/max 4.62 / 5.91)
  Auto:         4.73 (SE +/- 0.02, N = 3, range 4.69 - 4.75; per-run min/max 4.62 / 9.76)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, fewer is better):
  Prefer Freq:  21.30 (SE +/- 0.09, N = 3, range 21.14 - 21.45)
  Prefer Cache: 21.44 (SE +/- 0.17, N = 3, range 21.10 - 21.62)
  Auto:         21.21 (SE +/- 0.08, N = 3, range 21.11 - 21.38)

OpenVINO

This is a test of Intel OpenVINO, a toolkit built around neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better):
  Prefer Freq:  4.77 (SE +/- 0.01, N = 3, range 4.76 - 4.78; per-run min/max 3.42 / 11.47)
  Prefer Cache: 4.82 (SE +/- 0.01, N = 3, range 4.80 - 4.84; per-run min/max 3.53 / 14.59)
  Auto:         4.80 (SE +/- 0.01, N = 3, range 4.79 - 4.81; per-run min/max 3.36 / 13.68)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, fewer is better):
  Prefer Freq:  11.61 (SE +/- 0.05, N = 15, range 11.34 - 12.03; per-run min/max 11.23 / 17.95)
  Prefer Cache: 11.49 (SE +/- 0.03, N = 3, range 11.43 - 11.55; per-run min/max 11.32 / 12.40)
  Auto:         11.60 (SE +/- 0.02, N = 3, range 11.58 - 11.64; per-run min/max 11.46 / 17.50)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better):
  Prefer Freq:  3.88 (SE +/- 0.01, N = 13, range 3.84 - 3.98; per-run min/max 3.75 / 9.76)
  Prefer Cache: 3.84 (SE +/- 0.02, N = 3, range 3.80 - 3.87; per-run min/max 3.73 / 4.32)
  Auto:         3.84 (SE +/- 0.01, N = 3, range 3.83 - 3.86; per-run min/max 3.76 / 4.18)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, more is better):
  Prefer Freq:  15387380 (SE +/- 134359.73, N = 4, range 14989523 - 15576360)
  Prefer Cache: 15231602 (SE +/- 154949.74, N = 4, range 14777842 - 15471981)
  Auto:         15335667 (SE +/- 107676.89, N = 4, range 15144082 - 15533991)
1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Prefer Freq:  220.58 (SE +/- 1.96, N = 15, range 203.06 - 227.22)
  Prefer Cache: 221.70 (SE +/- 2.36, N = 15, range 202.92 - 231.61)
  Auto:         219.50 (SE +/- 2.75, N = 15, range 205.03 - 236.38)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: DLSC - Acceleration: CPU (M samples/sec, more is better):
  Prefer Freq:  5.05 (SE +/- 0.01, N = 3, range 5.04 - 5.06; per-run min/max 4.92 / 5.35)
  Prefer Cache: 5.07 (SE +/- 0.01, N = 3, range 5.06 - 5.08; per-run min/max 4.95 / 5.38)
  Auto:         5.02 (SE +/- 0.00, N = 3, range 5.01 - 5.02; per-run min/max 4.90 / 5.32)

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 1024 (Render Ratio, more is better):
  Prefer Freq:  5.506467 (SE +/- 0.026611, N = 3, range 5.45 - 5.54)
  Prefer Cache: 5.549972 (SE +/- 0.052075, N = 3, range 5.48 - 5.65)
  Auto:         5.560909 (SE +/- 0.032795, N = 3, range 5.52 - 5.63)
1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: RAM / Memory (MiB/sec, more is better):
  Prefer Freq:  12712.24 (SE +/- 62.18, N = 6, range 12544.47 - 12956.74)
  Prefer Cache: 12696.41 (SE +/- 53.25, N = 6, range 12633.98 - 12962.45)
  Auto:         12821.25 (SE +/- 52.79, N = 6, range 12702.69 - 12980.61)
1. (CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 4.2.0 - Test: Boat - Acceleration: CPU-only (Seconds, fewer is better):
  Prefer Freq:  2.452 (SE +/- 0.003, N = 9, range 2.44 - 2.46)
  Prefer Cache: 2.476 (SE +/- 0.005, N = 9, range 2.45 - 2.50)
  Auto:         2.456 (SE +/- 0.004, N = 9, range 2.43 - 2.47)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Prefer Freq:  450.47 (SE +/- 0.64, N = 12, range 445.80 - 454.73)
  Prefer Cache: 446.12 (SE +/- 0.89, N = 12, range 440.25 - 450.89)
  Auto:         449.83 (SE +/- 0.41, N = 12, range 447.46 - 452.11)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: SENDFILE (Bogo Ops/s, more is better):
  Prefer Freq:  489525.42 (SE +/- 3764.61, N = 3, range 485507.56 - 497048.82)
  Prefer Cache: 485658.97 (SE +/- 398.35, N = 3, range 484951.47 - 486329.95)
  Auto:         484816.52 (SE +/- 821.42, N = 3, range 483174.46 - 485681.65)
1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better):
  Prefer Freq:  2417.3 (SE +/- 3.64, N = 3, range 2410.3 - 2422.6)
  Prefer Cache: 2410.1 (SE +/- 6.75, N = 3, range 2397.3 - 2420.2)
  Auto:         2433.0 (SE +/- 6.20, N = 3, range 2423.6 - 2444.7)
1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4
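The test's measurement pattern is simple: compress the sample file at a given level, decompress it, time both, and verify the round trip. A minimal sketch of that pattern using Python's built-in zlib as a stand-in (Zstandard itself is not in the Python standard library, and the payload below is synthetic rather than silesia.tar):

```python
import time
import zlib

# Synthetic, highly compressible stand-in for the silesia.tar sample
payload = b"the quick brown fox jumps over the lazy dog " * 20000

start = time.perf_counter()
compressed = zlib.compress(payload, level=8)
compress_s = time.perf_counter() - start

start = time.perf_counter()
restored = zlib.decompress(compressed)
decompress_s = time.perf_counter() - start

assert restored == payload  # lossless round trip
print(f"compression ratio: {len(payload) / len(compressed):.1f}x")
```

Throughput in MB/s would then be the payload size divided by the measured time; a real harness repeats the measurement N times and reports the mean with its standard error, as in the tables here.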

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Prefer Freq:  1288.24 (SE +/- 3.20, N = 3, range 1284.22 - 1294.56; per-run min 1274.09)
  Prefer Cache: 1276.22 (SE +/- 5.17, N = 3, range 1267.48 - 1285.37; per-run min 1255.90)
  Auto:         1280.48 (SE +/- 10.71, N = 3, range 1260.49 - 1297.15; per-run min 1249.80)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Prefer Freq:  0.465139 (SE +/- 0.000410, N = 3, range 0.46 - 0.47; per-run min 0.45)
  Prefer Cache: 0.464426 (SE +/- 0.001471, N = 3, range 0.46 - 0.47; per-run min 0.45)
  Auto:         0.460811 (SE +/- 0.000909, N = 3, range 0.46 - 0.46; per-run min 0.44)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better):
  Prefer Freq:  32.29 (SE +/- 0.09, N = 3, range 32.17 - 32.46; per-run min/max 31.94 / 32.98)
  Prefer Cache: 32.13 (SE +/- 0.03, N = 3, range 32.07 - 32.18; per-run min/max 31.83 / 32.72)
  Auto:         32.00 (SE +/- 0.08, N = 3, range 31.83 - 32.12; per-run min/max 31.58 / 32.66)
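At their core, ray-tracing kernels like Embree's answer queries of the form "does this ray hit this primitive, and at what distance?". A toy, pure-Python ray/sphere intersection illustrates that core operation; it is a scalar sketch for illustration only, nothing like Embree's vectorized, BVH-accelerated kernels:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    Assumes direction is a unit vector, so the quadratic's leading
    coefficient is 1.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0 else None

# Ray from the origin along +z toward a unit sphere centered at (0, 0, 5):
# it should hit the front of the sphere at t = 4
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))
```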

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark ALS (ms, fewer is better):
  Prefer Freq:  1857.1 (SE +/- 1.83, N = 3, range 1853.43 - 1859.23; per-run min/max 1796.07 / 1967.45)
  Prefer Cache: 1874.1 (SE +/- 7.98, N = 3, range 1862.09 - 1889.21; per-run min/max 1789.64 / 2506.58)
  Auto:         1874.3 (SE +/- 11.39, N = 3, range 1857.57 - 1896.05; per-run min/max 1794.80 / 2057.95)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better):
  Prefer Freq:  174.63 (SE +/- 0.45, N = 4, range 173.92 - 175.82; per-run min/max 173.65 / 179.74)
  Prefer Cache: 175.72 (SE +/- 1.78, N = 4, range 172.62 - 179.00; per-run min/max 172.53 / 179.14)
  Auto:         174.11 (SE +/- 0.42, N = 4, range 173.68 - 175.38; per-run min/max 173.47 / 175.49)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, more is better):
  Prefer Freq:  6918.41 (SE +/- 60.06, N = 6, range 6656.40 - 7006.74)
  Prefer Cache: 6857.02 (SE +/- 29.94, N = 6, range 6827.08 - 7006.74)
  Auto:         6855.31 (SE +/- 59.67, N = 7, range 6656.40 - 7006.74)
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 44100 - Buffer Size: 512 (Render Ratio, more is better):
  Prefer Freq:  7.391493 (SE +/- 0.040332, N = 3, range 7.32 - 7.45)
  Prefer Cache: 7.324451 (SE +/- 0.057662, N = 3, range 7.21 - 7.39)
  Auto:         7.339446 (SE +/- 0.076546, N = 3, range 7.19 - 7.42)
1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: KNN CAD (Seconds, fewer is better):
  Prefer Freq:  89.95 (SE +/- 0.20, N = 3, range 89.57 - 90.23)
  Prefer Cache: 89.86 (SE +/- 0.40, N = 3, range 89.06 - 90.34)
  Auto:         90.68 (SE +/- 0.32, N = 3, range 90.16 - 91.26)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
    Prefer Freq  : 89.87 (SE +/- 0.70, N = 15; Min: 86.02 / Max: 93.81)
    Prefer Cache : 89.72 (SE +/- 0.85, N = 15; Min: 85.24 / Max: 93.66)
    Auto         : 89.07 (SE +/- 0.86, N = 15; Min: 84.52 / Max: 93.91)
    1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
    Prefer Freq  : 1674.00 (SE +/- 2.43, N = 3; Min: 1670.78 / Max: 1678.76)
    Prefer Cache : 1659.22 (SE +/- 3.63, N = 3; Min: 1652.57 / Max: 1665.09)
    Auto         : 1663.09 (SE +/- 1.97, N = 3; Min: 1660.99 / Max: 1667.02)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience" and scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5, Sample Rate: 44100 - Buffer Size: 1024 (Render Ratio, More Is Better)
    Prefer Freq  : 7.996641 (SE +/- 0.007132, N = 3; Min: 7.99 / Max: 8.01)
    Prefer Cache : 7.977255 (SE +/- 0.002520, N = 3; Min: 7.97 / Max: 7.98)
    Auto         : 7.926323 (SE +/- 0.034705, N = 3; Min: 7.86 / Max: 7.97)
    1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
    Prefer Freq  : 7.53018 (SE +/- 0.03588, N = 3; Min: 7.46 / Max: 7.57)
    Prefer Cache : 7.59681 (SE +/- 0.00464, N = 3; Min: 7.59 / Max: 7.61)
    Auto         : 7.56423 (SE +/- 0.00618, N = 3; Min: 7.55 / Max: 7.57)

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 4.2.0, Test: Server Room - Acceleration: CPU-only (Seconds, Fewer Is Better)
    Prefer Freq  : 2.331 (SE +/- 0.002, N = 9; Min: 2.32 / Max: 2.34)
    Prefer Cache : 2.326 (SE +/- 0.006, N = 9; Min: 2.31 / Max: 2.35)
    Auto         : 2.346 (SE +/- 0.006, N = 9; Min: 2.32 / Max: 2.38)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU - Model: vgg16 (ms, Fewer Is Better)
    Prefer Freq  : 24.72 (SE +/- 0.08, N = 15; Min: 24.48 / Max: 25.52; run MIN: 24.22 / MAX: 59.52)
    Prefer Cache : 24.57 (SE +/- 0.03, N = 3; Min: 24.52 / Max: 24.6; run MIN: 24.29 / MAX: 29.41)
    Auto         : 24.51 (SE +/- 0.03, N = 3; Min: 24.45 / Max: 24.57; run MIN: 24.28 / MAX: 37.93)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
    Prefer Freq  : 13.22 (SE +/- 0.05, N = 3; Min: 13.14 / Max: 13.32)
    Prefer Cache : 13.11 (SE +/- 0.04, N = 3; Min: 13.04 / Max: 13.16)
    Auto         : 13.19 (SE +/- 0.05, N = 3; Min: 13.12 / Max: 13.28)
    1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
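The defconfig build that this profile times can be reproduced by hand; a rough sketch of the equivalent commands (the kernel version shown is illustrative, matching the 6.1 release benchmarked here):

```shell
# Unpack a kernel source tree and enter it
tar xf linux-6.1.tar.xz && cd linux-6.1

# Generate the default configuration for the host architecture
make defconfig

# Timed portion: build with one job per CPU thread
time make -j"$(nproc)"
```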

Timed Linux Kernel Compilation 6.1, Build: defconfig (Seconds, Fewer Is Better)
    Prefer Freq  : 46.43 (SE +/- 0.30, N = 3; Min: 46.12 / Max: 47.02)
    Prefer Cache : 46.19 (SE +/- 0.32, N = 3; Min: 45.74 / Max: 46.81)
    Auto         : 46.57 (SE +/- 0.31, N = 3; Min: 46.26 / Max: 47.19)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better)
    Prefer Freq  : 7797.2 (SE +/- 94.41, N = 4; Min: 7519.9 / Max: 7918.79; run MIN: 5695.75 / MAX: 7918.79)
    Prefer Cache : 7772.7 (SE +/- 79.65, N = 3; Min: 7624.29 / Max: 7897.04; run MIN: 5787.99 / MAX: 7897.04)
    Auto         : 7732.7 (SE +/- 58.84, N = 3; Min: 7653.14 / Max: 7847.58; run MIN: 5663.78 / MAX: 7847.58)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6, Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
    Prefer Freq  : 1.20 (SE +/- 0.01, N = 3; Min: 1.19 / Max: 1.21)
    Prefer Cache : 1.21 (SE +/- 0.00, N = 3; Min: 1.2 / Max: 1.21)
    Auto         : 1.20 (SE +/- 0.00, N = 3; Min: 1.19 / Max: 1.2)
    1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
    Prefer Freq  : 757983333 (SE +/- 135441.66, N = 3; Min: 757780000 / Max: 758240000)
    Prefer Cache : 751733333 (SE +/- 1271434.01, N = 3; Min: 749910000 / Max: 754180000)
    Auto         : 756130000 (SE +/- 813961.51, N = 3; Min: 755070000 / Max: 757730000)
    1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06, Test: NUMA (Bogo Ops/s, More Is Better)
    Prefer Freq  : 581.60 (SE +/- 4.86, N = 13; Min: 572.26 / Max: 639.3)
    Prefer Cache : 576.94 (SE +/- 3.65, N = 13; Min: 570.19 / Max: 620.33)
    Auto         : 578.01 (SE +/- 3.50, N = 14; Min: 570.49 / Max: 622.67)
    1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06, Test: Matrix Math (Bogo Ops/s, More Is Better)
    Prefer Freq  : 123819.50 (SE +/- 688.41, N = 3; Min: 123045.2 / Max: 125192.59)
    Prefer Cache : 122869.22 (SE +/- 79.62, N = 3; Min: 122719.33 / Max: 122990.74)
    Auto         : 122833.10 (SE +/- 105.15, N = 3; Min: 122716.48 / Max: 123042.97)
    1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta, Scene: Emily (Seconds, Fewer Is Better)
    Prefer Freq  : 145.59
    Prefer Cache : 145.15
    Auto         : 146.31

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
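The level/speed tradeoff being measured can be illustrated with a minimal sketch. Python's standard library has no Zstd binding, so zlib stands in here purely to show how higher compression levels trade time for ratio; the sample data is hypothetical, not silesia.tar:

```python
import time
import zlib

# Compressible stand-in for the benchmark's sample file (illustrative data)
data = b"the quick brown fox jumps over the lazy dog " * 20000

for level in (1, 3, 9):
    start = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    # Round-trip check mirrors what the decompression pass exercises
    assert zlib.decompress(packed) == data
    print(f"level {level}: {len(packed)} bytes in {elapsed:.4f}s")
```

Higher levels generally produce a smaller output at the cost of compression time, while decompression speed stays comparatively flat, which is why the result file reports decompression speeds that barely differ across levels.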

Zstd Compression 1.5.4, Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
    Prefer Freq  : 2193.4 (SE +/- 7.91, N = 3; Min: 2177.9 / Max: 2203.9)
    Prefer Cache : 2210.7 (SE +/- 3.70, N = 3; Min: 2203.3 / Max: 2215)
    Auto         : 2198.6 (SE +/- 2.23, N = 3; Min: 2195 / Max: 2202.7)
    1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6, Scene: Orange Juice - Acceleration: CPU (M samples/sec, More Is Better)
    Prefer Freq  : 7.92 (SE +/- 0.01, N = 3; Min: 7.91 / Max: 7.93; run MIN: 7.07 / MAX: 8.33)
    Prefer Cache : 7.86 (SE +/- 0.00, N = 3; Min: 7.86 / Max: 7.87; run MIN: 7.02 / MAX: 8.29)
    Auto         : 7.90 (SE +/- 0.02, N = 3; Min: 7.88 / Max: 7.93; run MIN: 7.04 / MAX: 8.37)

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1, Variant: Monero - Hash Count: 1M (H/s, More Is Better)
    Prefer Freq  : 16109.4 (SE +/- 52.59, N = 3; Min: 16032.1 / Max: 16209.8)
    Prefer Cache : 16231.9 (SE +/- 57.72, N = 3; Min: 16173.4 / Max: 16347.3)
    Auto         : 16134.4 (SE +/- 84.38, N = 3; Min: 16025.1 / Max: 16300.4)
    1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Finagle HTTP Requests (ms, Fewer Is Better)
    Prefer Freq  : 2080.4 (SE +/- 11.77, N = 3; Min: 2056.93 / Max: 2093.92; run MIN: 1915.29 / MAX: 2096.84)
    Prefer Cache : 2096.1 (SE +/- 10.69, N = 3; Min: 2075.08 / Max: 2109.87; run MIN: 1946.09 / MAX: 2151)
    Auto         : 2083.5 (SE +/- 12.79, N = 3; Min: 2067.15 / Max: 2108.72; run MIN: 1911.93 / MAX: 2147.25)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Wavelet Blur (Seconds, Fewer Is Better)
    Prefer Freq  : 39.22 (SE +/- 0.42, N = 3; Min: 38.4 / Max: 39.78)
    Prefer Cache : 39.51 (SE +/- 0.34, N = 3; Min: 39.09 / Max: 40.2)
    Auto         : 39.34 (SE +/- 0.39, N = 3; Min: 38.77 / Max: 40.08)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 1 - Decompression Speed (MB/s, More Is Better)
    Prefer Freq  : 19602.6 (SE +/- 23.55, N = 3; Min: 19560.4 / Max: 19641.8)
    Prefer Cache : 19460.1 (SE +/- 89.48, N = 3; Min: 19285.7 / Max: 19582)
    Auto         : 19572.9 (SE +/- 30.12, N = 3; Min: 19524.4 / Max: 19628.1)
    1. (CC) gcc options: -O3

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
    Prefer Freq  : 212.52 (SE +/- 0.30, N = 9; Min: 211.27 / Max: 214.39)
    Prefer Cache : 213.53 (SE +/- 0.53, N = 9; Min: 211.31 / Max: 216.06)
    Auto         : 214.07 (SE +/- 0.30, N = 9; Min: 212.69 / Max: 215.69)
    1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
    Prefer Freq  : 308.04 (SE +/- 0.30, N = 10; Min: 306.59 / Max: 309.6)
    Prefer Cache : 306.62 (SE +/- 0.72, N = 10; Min: 301.81 / Max: 309.12)
    Auto         : 305.83 (SE +/- 0.39, N = 10; Min: 303.18 / Max: 307.22)
    1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
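The fill workloads below write key-value pairs either in key order (Sequential Fill) or at random (Random Fill) and report operations per second. A minimal sketch of a sequential fill, using Python's sqlite3 as a stand-in store rather than RocksDB's API (key count and value size are illustrative):

```python
import sqlite3
import time

N = 50_000  # illustrative write count

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v BLOB)")

start = time.perf_counter()
with db:  # one transaction, batching the writes
    db.executemany(
        "INSERT INTO kv VALUES (?, ?)",
        # Zero-padded keys arrive in strictly ascending order
        ((f"{i:016d}", b"x" * 100) for i in range(N)),
    )
elapsed = time.perf_counter() - start
print(f"{N / elapsed:.0f} ops/s")
```

Sequential fills are typically faster than random ones because in-order keys append cleanly to the store's sorted structures, which is the contrast the two RocksDB tests in this file expose.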

RocksDB 7.9.2, Test: Random Fill (Op/s, More Is Better)
    Prefer Freq  : 1396024 (SE +/- 2340.10, N = 3; Min: 1391481 / Max: 1399269)
    Prefer Cache : 1388787 (SE +/- 3639.38, N = 3; Min: 1381586 / Max: 1393305)
    Auto         : 1398717 (SE +/- 1364.86, N = 3; Min: 1396002 / Max: 1400322)
    1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Crop (Seconds, Fewer Is Better)
    Prefer Freq  : 6.884 (SE +/- 0.014, N = 6; Min: 6.85 / Max: 6.94)
    Prefer Cache : 6.933 (SE +/- 0.019, N = 6; Min: 6.87 / Max: 6.99)
    Auto         : 6.890 (SE +/- 0.010, N = 6; Min: 6.86 / Max: 6.93)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
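The Forking stressor below repeatedly fork()s and reaps child processes, counting completed forks as "bogo ops" per second. A minimal POSIX-only sketch of the same idea (the fork count is illustrative, not stress-ng's actual loop):

```python
import os
import time

N = 200  # illustrative number of forks

start = time.perf_counter()
for _ in range(N):
    pid = os.fork()
    if pid == 0:
        os._exit(0)       # child exits immediately
    os.waitpid(pid, 0)    # parent reaps it, like the fork stressor does
elapsed = time.perf_counter() - start
print(f"{N / elapsed:.0f} forks/s")
```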

Stress-NG 0.14.06, Test: Forking (Bogo Ops/s, More Is Better)
    Prefer Freq  : 72995.04 (SE +/- 268.01, N = 3; Min: 72571.37 / Max: 73491.25)
    Prefer Cache : 73184.04 (SE +/- 174.31, N = 3; Min: 72838.85 / Max: 73398.91)
    Auto         : 73511.84 (SE +/- 262.93, N = 3; Min: 73179.23 / Max: 74030.88)
    1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides the "toktx" tool for converting/creating image textures in the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.

KTX-Software toktx 4.0, Settings: UASTC 4 + Zstd Compression 19 (Seconds, Fewer Is Better)
    Prefer Freq  : 126.56 (SE +/- 0.44, N = 3; Min: 125.97 / Max: 127.42)
    Prefer Cache : 127.45 (SE +/- 0.12, N = 3; Min: 127.23 / Max: 127.64)
    Auto         : 127.26 (SE +/- 0.49, N = 3; Min: 126.28 / Max: 127.75)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3, Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better)
    Prefer Freq  : 1089.89 (SE +/- 2.23, N = 3; Min: 1085.75 / Max: 1093.38; run MIN: 566.46 / MAX: 1284.73)
    Prefer Cache : 1093.40 (SE +/- 2.98, N = 3; Min: 1087.73 / Max: 1097.81; run MIN: 638.38 / MAX: 1255.16)
    Auto         : 1097.26 (SE +/- 0.42, N = 3; Min: 1096.43 / Max: 1097.79; run MIN: 580.94 / MAX: 1284.21)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
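The Windowed Gaussian detector timed below is one of NAB's simplest: each new point is scored by how improbable it is under a Gaussian fitted to a sliding window of recent values. A minimal sketch of that scoring idea (the window size and sample series are illustrative, not NAB's exact parameters):

```python
from collections import deque
from statistics import fmean, pstdev

def windowed_gaussian_scores(series, window=10):
    """Anomaly score per point: |z-score| against a sliding window of history."""
    history = deque(maxlen=window)
    scores = []
    for x in series:
        if len(history) >= 2:
            mu, sigma = fmean(history), pstdev(history)
            scores.append(abs(x - mu) / sigma if sigma else 0.0)
        else:
            scores.append(0.0)  # not enough history to fit a Gaussian yet
        history.append(x)
    return scores

scores = windowed_gaussian_scores([1.0, 1.1, 0.9, 1.0, 1.1, 9.0], window=4)
print(scores[-1] > max(scores[:-1]))  # prints True: the spike at 9.0 scores far higher
```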

Numenta Anomaly Benchmark 1.1, Detector: Windowed Gaussian (Seconds, Fewer Is Better)
    Prefer Freq  : 3.432 (SE +/- 0.013, N = 9; Min: 3.39 / Max: 3.51)
    Prefer Cache : 3.409 (SE +/- 0.010, N = 9; Min: 3.36 / Max: 3.47)
    Auto         : 3.414 (SE +/- 0.020, N = 9; Min: 3.35 / Max: 3.53)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile runs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
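The presets in these results map to astcenc's quality flags; a hedged sketch of an equivalent command line (file names and the 6x6 block size are illustrative):

```shell
# Compress an LDR PNG to ASTC with a 6x6 block size at the -thorough preset
astcenc -cl input.png output.astc 6x6 -thorough

# Decompress back to PNG to exercise the decode path
astcenc -dl output.astc roundtrip.png
```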

ASTC Encoder 4.0, Preset: Thorough (MT/s, More Is Better)
    Prefer Freq  : 16.15 (SE +/- 0.01, N = 3; Min: 16.14 / Max: 16.16)
    Prefer Cache : 16.12 (SE +/- 0.01, N = 3; Min: 16.1 / Max: 16.13)
    Auto         : 16.04 (SE +/- 0.01, N = 3; Min: 16.03 / Max: 16.06)
    1. (CXX) g++ options: -O3 -flto -pthread

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Random Forest (ms, Fewer Is Better)
    Prefer Freq  : 388.9 (SE +/- 0.54, N = 3; Min: 387.94 / Max: 389.79; run MIN: 352.2 / MAX: 453.6)
    Prefer Cache : 391.5 (SE +/- 1.83, N = 3; Min: 387.91 / Max: 393.69; run MIN: 353.06 / MAX: 470.48)
    Auto         : 391.5 (SE +/- 0.34, N = 3; Min: 391.19 / Max: 392.23; run MIN: 365 / MAX: 478.75)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Jython (msec, Fewer Is Better)
    Prefer Freq  : 2321 (SE +/- 3.91, N = 9; Min: 2301 / Max: 2341)
    Prefer Cache : 2314 (SE +/- 6.57, N = 9; Min: 2292 / Max: 2352)
    Auto         : 2329 (SE +/- 8.79, N = 9; Min: 2298 / Max: 2371)

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.4, Time To Compile (Seconds, Fewer Is Better)
    Prefer Freq  : 21.85 (SE +/- 0.04, N = 3; Min: 21.78 / Max: 21.9)
    Prefer Cache : 21.82 (SE +/- 0.08, N = 3; Min: 21.71 / Max: 21.97)
    Auto         : 21.96 (SE +/- 0.04, N = 3; Min: 21.89 / Max: 22.03)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: WASM collisionDetection - Browser: Google Chrome (ms, Fewer Is Better)
    Prefer Freq  : 228.64 (SE +/- 2.69, N = 15; Min: 217.88 / Max: 239.61)
    Prefer Cache : 229.09 (SE +/- 2.79, N = 15; Min: 217.88 / Max: 239.71)
    Auto         : 230.10 (SE +/- 2.67, N = 15; Min: 217.89 / Max: 239.63)
    1. chrome 110.0.5481.96

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4, Compression Level: 12 - Decompression Speed (MB/s, More Is Better)
    Prefer Freq  : 2469.8 (SE +/- 23.52, N = 3; Min: 2426.2 / Max: 2506.9)
    Prefer Cache : 2476.5 (SE +/- 10.59, N = 3; Min: 2457.5 / Max: 2494.1)
    Auto         : 2485.5 (SE +/- 14.86, N = 3; Min: 2469.8 / Max: 2515.2)
    1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2, Test: Random Fill Sync (Op/s, More Is Better)
    Prefer Freq  : 39478 (SE +/- 113.36, N = 3; Min: 39330 / Max: 39701)
    Prefer Cache : 39653 (SE +/- 84.16, N = 3; Min: 39530 / Max: 39814)
    Auto         : 39728 (SE +/- 41.20, N = 3; Min: 39646 / Max: 39776)
    1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
    Prefer Freq  : 14.41 (SE +/- 0.09, N = 15; Min: 14.15 / Max: 15.55; run MIN: 14.02 / MAX: 20.18)
    Prefer Cache : 14.35 (SE +/- 0.02, N = 3; Min: 14.31 / Max: 14.39; run MIN: 14.11 / MAX: 14.93)
    Auto         : 14.32 (SE +/- 0.04, N = 3; Min: 14.25 / Max: 14.37; run MIN: 14.06 / MAX: 19.84)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0, Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
    Prefer Freq  : 36.24 (SE +/- 0.08, N = 3; Min: 36.09 / Max: 36.37; run MIN: 35.81 / MAX: 36.91)
    Prefer Cache : 36.02 (SE +/- 0.05, N = 3; Min: 35.95 / Max: 36.11; run MIN: 35.7 / MAX: 36.65)
    Auto         : 36.03 (SE +/- 0.08, N = 3; Min: 35.87 / Max: 36.12; run MIN: 35.6 / MAX: 36.69)

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2, Test: Sequential Fill (Op/s, More Is Better)
    Prefer Freq  : 1450599 (SE +/- 3968.04, N = 3; Min: 1442663 / Max: 1454597)
    Prefer Cache : 1441815 (SE +/- 3546.36, N = 3; Min: 1436267 / Max: 1448416)
    Auto         : 1445499 (SE +/- 2647.98, N = 3; Min: 1440223 / Max: 1448532)
    1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience" and scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5, Sample Rate: 192000 - Buffer Size: 1024 (Render Ratio, More Is Better)
    Prefer Freq  : 3.654559 (SE +/- 0.021337, N = 3; Min: 3.61 / Max: 3.68)
    Prefer Cache : 3.637461 (SE +/- 0.006243, N = 3; Min: 3.63 / Max: 3.65)
    Auto         : 3.659483 (SE +/- 0.014082, N = 3; Min: 3.63 / Max: 3.67)
    1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, More Is Better)
  Prefer Freq:  435494933.33  (SE +/- 1311932.65, N = 3; Min: 432874300 / Max: 436918000)
  Prefer Cache: 438113300     (SE +/- 457867.41, N = 3; Min: 437304500 / Max: 438889600)
  Auto:         437586200     (SE +/- 337440.37, N = 3; Min: 437088300 / Max: 438229700)
  Compiled with: (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Malloc (Bogo Ops/s, More Is Better)
  Prefer Freq:  36014622.73  (SE +/- 10732.17, N = 3; Min: 35993327.94 / Max: 36027601.97)
  Prefer Cache: 36157266.33  (SE +/- 50781.26, N = 3; Min: 36068008.71 / Max: 36243859.62)
  Auto:         35942000.76  (SE +/- 130418.56, N = 3; Min: 35770048.52 / Max: 36197833.65)
  Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 8 - Decompression Speed (MB/s, More Is Better)
  Prefer Freq:  2423.1  (SE +/- 9.76, N = 3; Min: 2411.8 / Max: 2442.5)
  Prefer Cache: 2428.5  (SE +/- 13.16, N = 5; Min: 2386.1 / Max: 2456.1)
  Auto:         2414.1  (SE +/- 2.63, N = 3; Min: 2410 / Max: 2419)
  Compiled with: (CC) gcc options: -O3 -pthread -lz -llzma -llz4

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Summer Nature 4K (FPS, More Is Better)
  Prefer Freq:  399.60  (SE +/- 2.66, N = 5; Min: 395.01 / Max: 409.75)
  Prefer Cache: 401.81  (SE +/- 1.20, N = 5; Min: 399.75 / Max: 406.32)
  Auto:         399.46  (SE +/- 1.15, N = 5; Min: 395.53 / Max: 401.8)
  Compiled with: (CC) gcc options: -pthread -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  Prefer Freq:  1279.16  (SE +/- 12.75, N = 3; Min: 1258.42 / Max: 1302.39; MIN: 1247.77)
  Prefer Cache: 1286.51  (SE +/- 5.43, N = 3; Min: 1278.69 / Max: 1296.94; MIN: 1267.38)
  Auto:         1280.49  (SE +/- 3.31, N = 3; Min: 1273.93 / Max: 1284.54; MIN: 1262.86)
  Compiled with: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Octane - Browser: Google Chrome (Geometric Mean, More Is Better)
  Prefer Freq:  95543     (SE +/- 958.51, N = 6; Min: 91363 / Max: 97911)
  Prefer Cache: 95643.93  (SE +/- 916.53, N = 15; Min: 90881 / Max: 100143)
  Auto:         95099.07  (SE +/- 950.48, N = 15; Min: 89650 / Max: 101304)
  Browser: chrome 110.0.5481.96
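The Octane score is reported as a geometric mean over its sub-tests, and the result-file viewing options above likewise offer overall geometric means. A geometric mean damps the influence of any single outlier sub-score relative to an arithmetic mean; a minimal sketch with hypothetical sub-scores:

```python
from statistics import geometric_mean

# Hypothetical sub-test scores; the overall figure is their geometric mean.
scores = [50_000, 100_000, 200_000]
overall = geometric_mean(scores)
print(round(overall))  # the arithmetic mean of the same scores would be ~116667
```

Because the geometric mean multiplies rather than sums, doubling one sub-score raises the combined figure by the same factor regardless of that sub-score's absolute magnitude.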

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Rainbow Colors and Prism - Acceleration: CPU (M samples/sec, More Is Better)
  Prefer Freq:  17.58  (SE +/- 0.08, N = 5; Min: 17.44 / Max: 17.88; MIN: 15.92 / MAX: 18.06)
  Prefer Cache: 17.67  (SE +/- 0.08, N = 5; Min: 17.4 / Max: 17.85; MIN: 15.83 / MAX: 18.03)
  Auto:         17.57  (SE +/- 0.03, N = 5; Min: 17.49 / Max: 17.66; MIN: 15.92 / MAX: 17.78)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. This test runs the CPU-based multi-threaded SVT-AV1 encoder against a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  70.48  (SE +/- 0.12, N = 5; Min: 70.25 / Max: 70.84)
  Prefer Cache: 70.39  (SE +/- 0.17, N = 5; Min: 69.83 / Max: 70.79)
  Auto:         70.09  (SE +/- 0.21, N = 5; Min: 69.46 / Max: 70.73)
  Compiled with: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, Fewer Is Better)
  Prefer Freq:  15.26  (SE +/- 0.02, N = 4; Min: 15.23 / Max: 15.3)
  Prefer Cache: 15.35  (SE +/- 0.07, N = 4; Min: 15.2 / Max: 15.46)
  Auto:         15.34  (SE +/- 0.05, N = 4; Min: 15.25 / Max: 15.5)

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package is tested with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  Prefer Freq:  2.691  (SE +/- 0.003, N = 3; Min: 2.69 / Max: 2.7)
  Prefer Cache: 2.676  (SE +/- 0.004, N = 3; Min: 2.67 / Max: 2.68)
  Auto:         2.685  (SE +/- 0.005, N = 3; Min: 2.68 / Max: 2.69)
  Compiled with: (CXX) g++ options: -O3

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  71.78  (SE +/- 0.71, N = 15; Min: 63.23 / Max: 73.18)
  Prefer Cache: 71.79  (SE +/- 0.72, N = 15; Min: 62.96 / Max: 73.15)
  Auto:         71.39  (SE +/- 0.68, N = 15; Min: 63.5 / Max: 72.72)
  Compiled with: (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better)
  Prefer Freq:  11.85  (SE +/- 0.04, N = 3; Min: 11.77 / Max: 11.9)
  Prefer Cache: 11.82  (SE +/- 0.02, N = 3; Min: 11.79 / Max: 11.87)
  Auto:         11.79  (SE +/- 0.03, N = 3; Min: 11.75 / Max: 11.84)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  Prefer Freq:  52.25  (SE +/- 0.06, N = 3; Min: 52.14 / Max: 52.34)
  Prefer Cache: 52.37  (SE +/- 0.13, N = 3; Min: 52.15 / Max: 52.59)
  Auto:         52.54  (SE +/- 0.12, N = 3; Min: 52.34 / Max: 52.76)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  7.29  (SE +/- 0.01, N = 3; Min: 7.28 / Max: 7.31)
  Prefer Cache: 7.27  (SE +/- 0.02, N = 3; Min: 7.23 / Max: 7.31)
  Auto:         7.25  (SE +/- 0.01, N = 3; Min: 7.24 / Max: 7.26)
  Compiled with: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

GEGL

GEGL is the Generic Graphics Library, the library/framework used by GIMP and other applications such as GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Tile Glass (Seconds, Fewer Is Better)
  Prefer Freq:  20.27  (SE +/- 0.04, N = 3; Min: 20.21 / Max: 20.33)
  Prefer Cache: 20.38  (SE +/- 0.16, N = 3; Min: 20.07 / Max: 20.58)
  Auto:         20.27  (SE +/- 0.06, N = 3; Min: 20.19 / Max: 20.38)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed of the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, More Is Better)
  Prefer Freq:  1109.97  (SE +/- 1.01, N = 3; Min: 1108.03 / Max: 1111.45)
  Prefer Cache: 1116.02  (SE +/- 5.11, N = 3; Min: 1106.87 / Max: 1124.55)
  Auto:         1110.55  (SE +/- 3.00, N = 3; Min: 1105.97 / Max: 1116.2)
  Compiled with: (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Speedometer - Browser: Google Chrome (Runs Per Minute, More Is Better)
  Prefer Freq:  369.33  (SE +/- 4.91, N = 3; Min: 361 / Max: 378)
  Prefer Cache: 367     (SE +/- 2.08, N = 3; Min: 363 / Max: 370)
  Auto:         368     (SE +/- 2.65, N = 3; Min: 364 / Max: 373)
  Browser: chrome 110.0.5481.96

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Mutex (Bogo Ops/s, More Is Better)
  Prefer Freq:  11172285.00  (SE +/- 29801.71, N = 3; Min: 11114475.24 / Max: 11213757.65)
  Prefer Cache: 11230656.99  (SE +/- 82230.41, N = 3; Min: 11072978.22 / Max: 11349975.69)
  Auto:         11172291.29  (SE +/- 41091.60, N = 3; Min: 11092121.74 / Max: 11228034.62)
  Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better)
  Prefer Freq:  190659     (SE +/- 439.96, N = 3; Min: 190141 / Max: 191534)
  Prefer Cache: 189871     (SE +/- 298.57, N = 3; Min: 189292 / Max: 190287)
  Auto:         189671.33  (SE +/- 177.38, N = 3; Min: 189455 / Max: 190023)
  Compiled with: (CXX) g++ options: -lpthread -ldl -O2 -fPIC

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1 - Speed: 6 (Frames Per Second, More Is Better)
  Prefer Freq:  7.367  (SE +/- 0.056, N = 6; Min: 7.23 / Max: 7.53)
  Prefer Cache: 7.405  (SE +/- 0.013, N = 6; Min: 7.35 / Max: 7.43)
  Auto:         7.370  (SE +/- 0.038, N = 6; Min: 7.22 / Max: 7.5)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark Scalar (Items / Sec, More Is Better)
  Prefer Freq:  199     (SE +/- 0.00, N = 3; Min: 199 / Max: 199; MIN: 18 / MAX: 3749)
  Prefer Cache: 197.67  (SE +/- 0.33, N = 3; Min: 197 / Max: 198; MIN: 18 / MAX: 3736)
  Auto:         198.33  (SE +/- 0.67, N = 3; Min: 197 / Max: 199; MIN: 17 / MAX: 3749)

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.34 - VGR Performance Metric (More Is Better)
  Prefer Freq:  395945
  Prefer Cache: 396745
  Auto:         394766
  Compiled with: (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service, Scala, and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark Bayes (ms, Fewer Is Better)
  Prefer Freq:  786.3  (SE +/- 0.84, N = 3; Min: 784.88 / Max: 787.77; MIN: 582.5 / MAX: 787.77)
  Prefer Cache: 788.2  (SE +/- 3.20, N = 3; Min: 783.95 / Max: 794.49; MIN: 581.24 / MAX: 794.49)
  Auto:         790.2  (SE +/- 2.28, N = 3; Min: 787.71 / Max: 794.71; MIN: 583.89 / MAX: 794.71)

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 4.2.0 - Test: Masskrug - Acceleration: CPU-only (Seconds, Fewer Is Better)
  Prefer Freq:  2.636  (SE +/- 0.010, N = 9; Min: 2.59 / Max: 2.69)
  Prefer Cache: 2.649  (SE +/- 0.008, N = 9; Min: 2.61 / Max: 2.67)
  Auto:         2.643  (SE +/- 0.003, N = 9; Min: 2.62 / Max: 2.65)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: MMAP (Bogo Ops/s, More Is Better)
  Prefer Freq:  381.37  (SE +/- 0.31, N = 3; Min: 381.01 / Max: 381.98)
  Prefer Cache: 379.53  (SE +/- 0.19, N = 3; Min: 379.34 / Max: 379.9)
  Auto:         381.21  (SE +/- 0.60, N = 3; Min: 380.25 / Max: 382.32)
  Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  269.52  (SE +/- 1.82, N = 15; Min: 244.55 / Max: 273.74)
  Prefer Cache: 268.80  (SE +/- 1.51, N = 15; Min: 248.45 / Max: 272.84)
  Auto:         268.24  (SE +/- 2.12, N = 10; Min: 249.58 / Max: 272.07)
  Compiled with: (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  23592.85  (SE +/- 31.27, N = 3; Min: 23535.81 / Max: 23643.56)
  Prefer Cache: 23486.36  (SE +/- 118.07, N = 3; Min: 23251.21 / Max: 23622.65)
  Auto:         23579.67  (SE +/- 11.18, N = 3; Min: 23560.77 / Max: 23599.47)
  Compiled with: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. This test runs the CPU-based multi-threaded SVT-AV1 encoder against a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  14.47  (SE +/- 0.02, N = 4; Min: 14.43 / Max: 14.52)
  Prefer Cache: 14.40  (SE +/- 0.05, N = 4; Min: 14.3 / Max: 14.49)
  Auto:         14.42  (SE +/- 0.04, N = 4; Min: 14.3 / Max: 14.48)
  Compiled with: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format, run against a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  384.34  (SE +/- 1.24, N = 15; Min: 368.3 / Max: 388.32)
  Prefer Cache: 382.65  (SE +/- 1.35, N = 11; Min: 370.56 / Max: 386.66)
  Auto:         383.36  (SE +/- 1.44, N = 11; Min: 369.94 / Max: 387.45)
  Compiled with: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 8 - Compression Speed (MB/s, More Is Better)
  Prefer Freq:  1046.8  (SE +/- 6.16, N = 3; Min: 1036.4 / Max: 1057.7)
  Prefer Cache: 1046.4  (SE +/- 11.34, N = 5; Min: 1001.8 / Max: 1065)
  Auto:         1051.0  (SE +/- 5.17, N = 3; Min: 1045.1 / Max: 1061.3)
  Compiled with: (CC) gcc options: -O3 -pthread -lz -llzma -llz4
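Compression speed here is throughput over the uncompressed input: bytes processed divided by elapsed time. A trivial sketch of that calculation (the input size and timing below are hypothetical, and treating 1 MB as 10^6 bytes is an assumption about how the tool reports):

```python
def throughput_mb_s(input_bytes: int, seconds: float) -> float:
    """Uncompressed input bytes processed per second, expressed in MB/s
    (1 MB taken as 10**6 bytes here -- an assumption, not confirmed)."""
    return input_bytes / seconds / 1e6

# Hypothetical: 500 MB of input compressed in ~0.478 s lands around the
# ~1046 MB/s level-8 compression figures reported above.
print(round(throughput_mb_s(500_000_000, 0.478), 1))
```

Note that decompression speed (reported earlier in this file) is conventionally measured the same way, over the decompressed output size rather than the compressed input.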

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
  Prefer Freq:  34.62  (SE +/- 0.05, N = 3; Min: 34.52 / Max: 34.69; MIN: 34.35 / MAX: 35.31)
  Prefer Cache: 34.59  (SE +/- 0.02, N = 3; Min: 34.57 / Max: 34.62; MIN: 34.42 / MAX: 34.96)
  Auto:         34.47  (SE +/- 0.06, N = 3; Min: 34.37 / Max: 34.59; MIN: 34.23 / MAX: 34.9)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.6.1 - Speed: 1 (Frames Per Second, More Is Better)
  Prefer Freq:  1.179  (SE +/- 0.002, N = 3; Min: 1.18 / Max: 1.18)
  Prefer Cache: 1.181  (SE +/- 0.006, N = 3; Min: 1.17 / Max: 1.19)
  Auto:         1.176  (SE +/- 0.009, N = 3; Min: 1.16 / Max: 1.19)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
  Prefer Freq:  0.81379  (SE +/- 0.00066, N = 3; Min: 0.81 / Max: 0.81)
  Prefer Cache: 0.81594  (SE +/- 0.00028, N = 3; Min: 0.82 / Max: 0.82)
  Auto:         0.81721  (SE +/- 0.00047, N = 3; Min: 0.82 / Max: 0.82)
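NAMD reports days/ns, the wall-clock days needed to simulate one nanosecond of the system, so lower is better; many molecular dynamics packages, including GROMACS above, report the reciprocal ns/day instead. The conversion is a simple reciprocal:

```python
def days_per_ns_to_ns_per_day(days_per_ns: float) -> float:
    """Convert NAMD's days/ns metric to the ns/day form used by e.g. GROMACS."""
    return 1.0 / days_per_ns

# The Prefer Freq result above, 0.81379 days/ns, works out to ~1.23 ns/day:
print(round(days_per_ns_to_ns_per_day(0.81379), 3))
```

Because of the reciprocal, the ordering flips: the configuration with the lowest days/ns has the highest ns/day.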

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  Prefer Freq:  37587.67  (SE +/- 63.89, N = 3; Min: 37465 / Max: 37680)
  Prefer Cache: 37689.67  (SE +/- 97.48, N = 3; Min: 37579 / Max: 37884)
  Auto:         37534.67  (SE +/- 49.58, N = 3; Min: 37459 / Max: 37628)
  Compiled with: (CXX) g++ options: -O3 -lm -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  7.37  (SE +/- 0.03, N = 3; Min: 7.32 / Max: 7.42)
  Prefer Cache: 7.35  (SE +/- 0.02, N = 3; Min: 7.31 / Max: 7.37)
  Auto:         7.34  (SE +/- 0.02, N = 3; Min: 7.3 / Max: 7.37)
  Compiled with: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format, run against a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  22.73  (SE +/- 0.02, N = 3; Min: 22.69 / Max: 22.77)
  Prefer Cache: 22.64  (SE +/- 0.07, N = 3; Min: 22.51 / Max: 22.73)
  Auto:         22.66  (SE +/- 0.02, N = 3; Min: 22.61 / Max: 22.68)
  Compiled with: (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, Fewer Is Better)
  Prefer Freq:  2028.84  (SE +/- 6.39, N = 3; Min: 2020.55 / Max: 2041.4; MIN: 1972.23 / MAX: 2118.4)
  Prefer Cache: 2027.28  (SE +/- 2.74, N = 3; Min: 2021.86 / Max: 2030.67; MIN: 1978.04 / MAX: 2109.64)
  Auto:         2035.16  (SE +/- 5.47, N = 3; Min: 2025.61 / Max: 2044.56; MIN: 1988.46 / MAX: 2119.14)
  Compiled with: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

RocksDB

This is a benchmark of Meta/Facebook's RocksDB, an embeddable persistent key-value store for fast storage that is based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Update Random (Op/s, More Is Better)
  Prefer Freq:  950294  (SE +/- 293.65, N = 3; Min: 949716 / Max: 950672)
  Prefer Cache: 950786  (SE +/- 3396.18, N = 3; Min: 944024 / Max: 954725)
  Auto:         947132  (SE +/- 1979.12, N = 3; Min: 943934 / Max: 950751)
  Compiled with: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Sharpen (Iterations Per Minute, More Is Better)
  Prefer Freq:  261  (SE +/- 0.58, N = 3; Min: 260 / Max: 262)
  Prefer Cache: 261  (SE +/- 0.33, N = 3; Min: 260 / Max: 261)
  Auto:         260  (SE +/- 0.00, N = 3; Min: 260 / Max: 260)
  Compiled with: (CC) gcc options: -fopenmp -O2 -ljbig -lwebp -lwebpmux -lheif -lde265 -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lxml2 -lz -lzstd -lm -lpthread

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, More Is Better)
  Prefer Freq:  9066.29 (SE +/- 18.86, N = 7; Min: 8978.96 / Max: 9133.4)
  Prefer Cache: 9073.59 (SE +/- 24.38, N = 7; Min: 8947.88 / Max: 9137.29)
  Auto:         9039.15 (SE +/- 16.15, N = 7; Min: 8983.36 / Max: 9119.02)
  1. (CXX) g++ options: -O3 -fopenmp -lm -lmpi_cxx -lmpi

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
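
The OpenVINO entries in this file report both throughput (FPS) and average latency (ms). As a rough sketch of how such figures are derived — with a trivial dummy function standing in for an inference request — the bookkeeping looks like this:

```python
import time

def infer():
    # Hypothetical stand-in for one inference request.
    return sum(i * i for i in range(5_000))

N = 200
latencies = []
start = time.perf_counter()
for _ in range(N):
    t0 = time.perf_counter()
    infer()
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

fps = N / elapsed                    # throughput
avg_ms = 1000 * sum(latencies) / N   # average latency
print(f"{fps:.1f} FPS, {avg_ms:.3f} ms avg latency")
```

With a single serial stream like this, FPS is roughly 1000 / avg_ms; OpenVINO's benchmark app keeps many inference requests in flight at once, which is how throughput can be far higher than a single request's latency would suggest.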

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  40605.44 (SE +/- 18.59, N = 3; Min: 40581.55 / Max: 40642.06)
  Prefer Cache: 40512.39 (SE +/- 60.47, N = 3; Min: 40433.18 / Max: 40631.14)
  Auto:         40664.13 (SE +/- 26.82, N = 3; Min: 40617.19 / Max: 40710.09)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  24.09 (SE +/- 0.05, N = 3; Min: 23.99 / Max: 24.15)
  Prefer Cache: 24.18 (SE +/- 0.04, N = 3; Min: 24.13 / Max: 24.25)
  Auto:         24.18 (SE +/- 0.11, N = 3; Min: 24.04 / Max: 24.4)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  13.53 (SE +/- 0.02, N = 3; Min: 13.5 / Max: 13.57)
  Prefer Cache: 13.48 (SE +/- 0.01, N = 3; Min: 13.47 / Max: 13.51)
  Auto:         13.53 (SE +/- 0.01, N = 3; Min: 13.51 / Max: 13.55)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MT - Degridding (Million Grid Points Per Second, More Is Better)
  Prefer Freq:  2959.18 (SE +/- 11.63, N = 15; Min: 2884.29 / Max: 3006.42)
  Prefer Cache: 2970.04 (SE +/- 9.57, N = 15; Min: 2877.47 / Max: 3006.42)
  Auto:         2964.86 (SE +/- 12.65, N = 15; Min: 2860.08 / Max: 3004.3)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, More Is Better)
  Prefer Freq:  1.7012 (SE +/- 0.0015, N = 3; Min: 1.7 / Max: 1.7)
  Prefer Cache: 1.6976 (SE +/- 0.0022, N = 3; Min: 1.69 / Max: 1.7)
  Auto:         1.6950 (SE +/- 0.0015, N = 3; Min: 1.69 / Max: 1.7)
  1. (CXX) g++ options: -O3 -flto -pthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Read While Writing (Op/s, More Is Better)
  Prefer Freq:  4198454 (SE +/- 25035.49, N = 3; Min: 4173149 / Max: 4248524)
  Prefer Cache: 4212246 (SE +/- 24244.30, N = 3; Min: 4179982 / Max: 4259725)
  Auto:         4196944 (SE +/- 12110.14, N = 3; Min: 4173414 / Max: 4213681)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq:  1080.71 (SE +/- 2.74, N = 3; Min: 1075.73 / Max: 1085.19; per-request MIN: 591.34 / MAX: 1282.05)
  Prefer Cache: 1082.36 (SE +/- 2.11, N = 3; Min: 1079.79 / Max: 1086.54; per-request MIN: 781.51 / MAX: 1247.1)
  Auto:         1084.60 (SE +/- 3.67, N = 3; Min: 1077.96 / Max: 1090.61; per-request MIN: 963.74 / MAX: 1282.15)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Read Random Write Random (Op/s, More Is Better)
  Prefer Freq:  3315415 (SE +/- 3924.51, N = 3; Min: 3307677 / Max: 3320422)
  Prefer Cache: 3311766 (SE +/- 9373.61, N = 3; Min: 3298196 / Max: 3329753)
  Auto:         3323667 (SE +/- 3489.00, N = 3; Min: 3317011 / Max: 3328810)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  109.76 (SE +/- 0.77, N = 15; Min: 101.79 / Max: 111.5)
  Prefer Cache: 109.57 (SE +/- 0.75, N = 15; Min: 101.96 / Max: 111.08)
  Auto:         109.37 (SE +/- 0.76, N = 15; Min: 101.56 / Max: 111.28)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, More Is Better)
  Prefer Freq:  16.39 (SE +/- 0.09, N = 3; Min: 16.21 / Max: 16.52)
  Prefer Cache: 16.33 (SE +/- 0.10, N = 3; Min: 16.13 / Max: 16.45)
  Auto:         16.34 (SE +/- 0.09, N = 3; Min: 16.19 / Max: 16.5)
  1. (CXX) g++ options: -O3 -lm -ldl

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
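
7-Zip's integrated benchmark rates LZMA compression and decompression throughput. For a feel of the round trip being measured, here is a hedged sketch using Python's standard-library lzma module, which wraps the same family of algorithm (the benchmark itself uses 7-Zip's own synthetic data stream, not this):

```python
import lzma

# Compressible sample data; the real benchmark generates its own stream.
data = b"the quick brown fox jumps over the lazy dog\n" * 2_000

compressed = lzma.compress(data, preset=6)  # compression pass
restored = lzma.decompress(compressed)      # decompression pass

print(len(data), len(compressed), restored == data)
```

The MIPS figure 7-Zip reports is derived from how much data each pass processes per unit of time, normalized to an instruction-rate estimate.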

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
  Prefer Freq:  177024 (SE +/- 194.35, N = 3; Min: 176704 / Max: 177375)
  Prefer Cache: 176398 (SE +/- 304.82, N = 3; Min: 176076 / Max: 177007)
  Auto:         176616 (SE +/- 73.05, N = 3; Min: 176497 / Max: 176749)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Super Fast (Frames Per Second, More Is Better)
  Prefer Freq:  230.80 (SE +/- 0.21, N = 10; Min: 229.36 / Max: 231.56)
  Prefer Cache: 229.99 (SE +/- 0.18, N = 10; Min: 229.06 / Max: 230.71)
  Auto:         230.49 (SE +/- 0.15, N = 10; Min: 229.95 / Max: 231.34)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  5.82 (SE +/- 0.00, N = 3; Min: 5.82 / Max: 5.82)
  Prefer Cache: 5.80 (SE +/- 0.00, N = 3; Min: 5.79 / Max: 5.8)
  Auto:         5.80 (SE +/- 0.00, N = 3; Min: 5.8 / Max: 5.81)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better)
  Prefer Freq:  82.64 (SE +/- 0.09, N = 6; Min: 82.21 / Max: 82.84)
  Prefer Cache: 82.36 (SE +/- 0.09, N = 6; Min: 82.09 / Max: 82.65)
  Auto:         82.58 (SE +/- 0.11, N = 6; Min: 82.15 / Max: 82.85)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: Hogbom Clean OpenMP (Iterations Per Second, More Is Better)
  Prefer Freq:  549.45 (SE +/- 0.00, N = 5; Min: 549.45 / Max: 549.45)
  Prefer Cache: 550.08 (SE +/- 2.00, N = 5; Min: 543.48 / Max: 555.56)
  Auto:         551.31 (SE +/- 2.26, N = 5; Min: 543.48 / Max: 555.56)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
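
The sieve of Eratosthenes named above is easy to sketch. A straightforward, unoptimized Python version follows — a far cry from primesieve's cache-tuned segmented implementation, but the same idea:

```python
from math import isqrt

def sieve(limit):
    """Return all primes <= limit via the sieve of Eratosthenes."""
    if limit < 2:
        return []
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, isqrt(limit) + 1):
        if is_prime[p]:
            # Cross off multiples of p, starting at p*p.
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i, flag in enumerate(is_prime) if flag]

print(len(sieve(100)))  # prints 25
```

Primesieve's segmented approach processes the range in chunks sized to fit the L1/L2 caches — which is exactly why this benchmark is sensitive to the cache-versus-frequency scheduling modes compared here.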

Primesieve 8.0 - Length: 1e12 (Seconds, Fewer Is Better)
  Prefer Freq:  6.845 (SE +/- 0.007, N = 6; Min: 6.82 / Max: 6.87)
  Prefer Cache: 6.868 (SE +/- 0.004, N = 6; Min: 6.86 / Max: 6.89)
  Auto:         6.862 (SE +/- 0.005, N = 6; Min: 6.84 / Max: 6.88)
  1. (CXX) g++ options: -O3

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq:  5.99 (SE +/- 0.01, N = 3; Min: 5.98 / Max: 6.01; per-request MIN: 3.14 / MAX: 13.64)
  Prefer Cache: 5.98 (SE +/- 0.00, N = 3; Min: 5.98 / Max: 5.99; per-request MIN: 3.07 / MAX: 15.39)
  Auto:         6.00 (SE +/- 0.01, N = 3; Min: 5.99 / Max: 6.01; per-request MIN: 3.08 / MAX: 13.24)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq:  6.10 (SE +/- 0.00, N = 3; Min: 6.09 / Max: 6.1; per-request MIN: 3.29 / MAX: 13.39)
  Prefer Cache: 6.12 (SE +/- 0.00, N = 3; Min: 6.11 / Max: 6.12; per-request MIN: 3.21 / MAX: 13.65)
  Auto:         6.12 (SE +/- 0.01, N = 3; Min: 6.11 / Max: 6.13; per-request MIN: 3.2 / MAX: 14.83)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better)
  Prefer Freq:  96.86
  Prefer Cache: 96.59
  Auto:         96.91

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better)
  Prefer Freq:  34.86 (SE +/- 0.10, N = 3; Min: 34.66 / Max: 34.99)
  Prefer Cache: 34.94 (SE +/- 0.08, N = 3; Min: 34.79 / Max: 35.03)
  Auto:         34.83 (SE +/- 0.11, N = 3; Min: 34.69 / Max: 35.05)
  1. RawTherapee, version 5.9, command line.

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  Prefer Freq:  191.66 (SE +/- 0.45, N = 4; Min: 190.68 / Max: 192.64; per-run MIN: 189.14 / MAX: 202.5)
  Prefer Cache: 192.25 (SE +/- 1.59, N = 9; Min: 188.9 / Max: 202.78; per-run MIN: 185.61 / MAX: 215.47)
  Auto:         192.28 (SE +/- 0.08, N = 4; Min: 192.03 / Max: 192.39; per-run MIN: 190.51 / MAX: 200.26)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  126.63 (SE +/- 0.18, N = 7; Min: 125.9 / Max: 127.36)
  Prefer Cache: 126.34 (SE +/- 0.11, N = 7; Min: 125.95 / Max: 126.78)
  Auto:         126.22 (SE +/- 0.17, N = 7; Min: 125.47 / Max: 126.73)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OpenVINO

This is a test of Intel's OpenVINO toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  Prefer Freq:  2621.90 (SE +/- 1.52, N = 3; Min: 2619.81 / Max: 2624.86)
  Prefer Cache: 2613.72 (SE +/- 1.72, N = 3; Min: 2610.91 / Max: 2616.85)
  Auto:         2613.42 (SE +/- 3.36, N = 3; Min: 2607.11 / Max: 2618.58)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed of the CPU for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, More Is Better)
  Prefer Freq:  1102.64 (SE +/- 0.99, N = 3; Min: 1101 / Max: 1104.42)
  Prefer Cache: 1105.25 (SE +/- 3.26, N = 3; Min: 1101.6 / Max: 1111.75)
  Auto:         1101.71 (SE +/- 2.10, N = 3; Min: 1097.54 / Max: 1104.23)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  459.41 (SE +/- 0.43, N = 12; Min: 455.26 / Max: 461.28)
  Prefer Cache: 457.94 (SE +/- 0.64, N = 12; Min: 454.49 / Max: 461.05)
  Auto:         458.64 (SE +/- 0.50, N = 12; Min: 455.51 / Max: 461.95)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K input options for measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Prefer Freq:  113.42 (SE +/- 0.26, N = 7; Min: 112.37 / Max: 114.29)
  Prefer Cache: 113.59 (SE +/- 0.22, N = 7; Min: 112.92 / Max: 114.7)
  Auto:         113.23 (SE +/- 0.17, N = 7; Min: 112.67 / Max: 114.05)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
  Prefer Freq:  1627.87 (SE +/- 8.09, N = 3; Min: 1619.47 / Max: 1644.05)
  Prefer Cache: 1622.75 (SE +/- 1.70, N = 3; Min: 1620.07 / Max: 1625.91)
  Auto:         1626.40 (SE +/- 6.58, N = 3; Min: 1619.63 / Max: 1639.55)

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 7.9.2 - Test: Random Read (Op/s, More Is Better)
  Prefer Freq:  147740017 (SE +/- 260132.99, N = 3; Min: 147323351 / Max: 148218164)
  Prefer Cache: 147760109 (SE +/- 515512.16, N = 3; Min: 147032125 / Max: 148756389)
  Auto:         147296131 (SE +/- 134841.05, N = 3; Min: 147113196 / Max: 147559201)
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
  Prefer Freq:  827.99 (SE +/- 1.23, N = 5; Min: 825.23 / Max: 832.23)
  Prefer Cache: 830.28 (SE +/- 0.54, N = 5; Min: 829.34 / Max: 831.66)
  Auto:         830.58 (SE +/- 1.52, N = 5; Min: 825.07 / Max: 834.17)
  1. (CC) gcc options: -pthread -lm

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
  Prefer Freq:  45.92 (SE +/- 0.03, N = 4; Min: 45.86 / Max: 45.96)
  Prefer Cache: 45.78 (SE +/- 0.04, N = 4; Min: 45.72 / Max: 45.9)
  Auto:         45.84 (SE +/- 0.05, N = 4; Min: 45.71 / Max: 45.95)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
  Prefer Freq:  8.98056 (SE +/- 0.00141, N = 3; Min: 8.98 / Max: 8.98)
  Prefer Cache: 8.98545 (SE +/- 0.00821, N = 3; Min: 8.97 / Max: 9)
  Auto:         8.95807 (SE +/- 0.01032, N = 3; Min: 8.94 / Max: 8.97)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Chimera 1080p (FPS, More Is Better)
  Prefer Freq:  915.11 (SE +/- 0.73, N = 5; Min: 913.32 / Max: 916.99)
  Prefer Cache: 912.38 (SE +/- 0.55, N = 5; Min: 910.78 / Max: 913.66)
  Auto:         913.51 (SE +/- 0.76, N = 5; Min: 911 / Max: 915.71)
  1. (CC) gcc options: -pthread -lm

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Medium (MT/s, More Is Better)
  Prefer Freq:  129.53 (SE +/- 0.06, N = 7; Min: 129.18 / Max: 129.73)
  Prefer Cache: 129.16 (SE +/- 0.05, N = 7; Min: 128.93 / Max: 129.26)
  Auto:         129.15 (SE +/- 0.04, N = 7; Min: 128.98 / Max: 129.23)
  1. (CXX) g++ options: -O3 -flto -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.4 - Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better)
  Prefer Freq:  1012.9 (SE +/- 1.70, N = 3; Min: 1011 / Max: 1016.3)
  Prefer Cache: 1012.0 (SE +/- 4.68, N = 3; Min: 1005 / Max: 1020.9)
  Auto:         1015.0 (SE +/- 5.38, N = 3; Min: 1004.4 / Max: 1021.8)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed of the CPU for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
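
Assuming the "Triple SHA-256" algorithm name denotes three chained applications of SHA-256 (an assumption on our part; cpuminer-opt's actual kernel is hand-vectorized), the hash at the core of the workload below can be sketched with Python's hashlib:

```python
import hashlib

def triple_sha256(data: bytes) -> bytes:
    """Apply SHA-256 three times: sha256(sha256(sha256(data)))."""
    for _ in range(3):
        data = hashlib.sha256(data).digest()
    return data

# A miner would vary a nonce in the header until the digest met a
# difficulty target; the header bytes here are purely illustrative.
digest = triple_sha256(b"block header bytes" + (0).to_bytes(4, "little"))
print(digest.hex())
```

The kH/s figures below count how many such hash evaluations the CPU completes per second.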

Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, More Is Better)
  Prefer Freq:  426480 (SE +/- 1105.18, N = 3; Min: 425340 / Max: 428690)
  Prefer Cache: 425550 (SE +/- 260.58, N = 3; Min: 425260 / Max: 426070)
  Auto:         425237 (SE +/- 126.80, N = 3; Min: 425100 / Max: 425490)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Vector Math (Bogo Ops/s, More Is Better)
  Prefer Freq:  138137.06 (SE +/- 73.54, N = 3; Min: 138007.51 / Max: 138262.13)
  Prefer Cache: 137735.11 (SE +/- 118.20, N = 3; Min: 137538.82 / Max: 137947.33)
  Auto:         138021.66 (SE +/- 98.70, N = 3; Min: 137855.12 / Max: 138196.72)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq:  5.298 (SE +/- 0.016, N = 3; Min: 5.27 / Max: 5.32)
  Prefer Cache: 5.311 (SE +/- 0.018, N = 3; Min: 5.29 / Max: 5.35)
  Auto:         5.296 (SE +/- 0.009, N = 3; Min: 5.28 / Max: 5.31)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better):
  Prefer Freq:  235.28 (SE +/- 0.84, N = 3; min 233.72 / max 236.58)
  Prefer Cache: 235.72 (SE +/- 0.12, N = 3; min 235.53 / max 235.95)
  Auto:         235.95 (SE +/- 0.40, N = 3; min 235.18 / max 236.51)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (ms, fewer is better):
  Prefer Freq:  589.71 (SE +/- 0.84, N = 3; min 588.03 / max 590.71; per-inference MIN 329.37 / MAX 617.43)
  Prefer Cache: 591.14 (SE +/- 0.31, N = 3; min 590.67 / max 591.72; per-inference MIN 290.4 / MAX 616.78)
  Auto:         589.51 (SE +/- 0.52, N = 3; min 588.83 / max 590.53; per-inference MIN 368.88 / MAX 615.41)
  (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
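Sysbench's CPU sub-test generates its "events" by verifying prime numbers up to a configurable limit via trial division. A rough Python sketch of that style of workload (this is an illustration of the approach, not Sysbench's actual LuaJIT/C code, and the `limit` value is arbitrary):

```python
def prime_events(limit):
    # Count primes in [3, limit] by trial division up to sqrt(n) --
    # roughly the per-event work sysbench's CPU test performs.
    count = 0
    for n in range(3, limit + 1):
        t = 2
        while t * t <= n:
            if n % t == 0:
                break
            t += 1
        else:
            count += 1
    return count
```

Each completed pass over the range counts as one event, so "Events Per Second" scales with both clock speed and core count.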

Sysbench 1.0.20 - Test: CPU (Events Per Second, more is better):
  Prefer Freq:  107729.78 (SE +/- 32.58, N = 3; min 107683.21 / max 107792.54)
  Prefer Cache: 107434.06 (SE +/- 67.91, N = 3; min 107336.86 / max 107564.81)
  Auto:         107446.97 (SE +/- 81.18, N = 3; min 107324.45 / max 107600.5)
  (CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds, fewer is better):
  Prefer Freq:  17769.5 (SE +/- 12.93, N = 3; min 17748.4 / max 17793)
  Prefer Cache: 17748.5 (SE +/- 15.43, N = 3; min 17732 / max 17779.3)
  Auto:         17797.3 (SE +/- 16.88, N = 3; min 17764.3 / max 17819.9)

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
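The sieve of Eratosthenes that Primesieve is built around can be sketched in a few lines (Primesieve itself uses a heavily optimized segmented variant tuned to fit the L1/L2 caches, which is why it stresses cache performance):

```python
def sieve(limit):
    # Classic sieve of Eratosthenes: cross off multiples of each prime,
    # starting from p*p since smaller multiples were already crossed off.
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]
```

A flat boolean array like this exceeds cache capacity long before 1e13; segmenting the sieve into cache-sized windows is what makes the real implementation cache-bound rather than memory-bound.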

Primesieve 8.0 - Length: 1e13 (Seconds, fewer is better):
  Prefer Freq:  82.93 (SE +/- 0.06, N = 3; min 82.86 / max 83.05)
  Prefer Cache: 83.07 (SE +/- 0.01, N = 3; min 83.04 / max 83.09)
  Auto:         83.16 (SE +/- 0.05, N = 3; min 83.06 / max 83.24)
  (CXX) g++ options: -O3

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, more is better):
  Prefer Freq:  19658.1 (SE +/- 18.99, N = 3; min 19629.4 / max 19694)
  Prefer Cache: 19605.4 (SE +/- 7.50, N = 3; min 19590.6 / max 19614.8)
  Auto:         19626.1 (SE +/- 24.03, N = 3; min 19589 / max 19671.1)
  (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better):
  Prefer Freq:  488.91 (SE +/- 0.59, N = 3; min 488.32 / max 490.1)
  Prefer Cache: 489.98 (SE +/- 0.55, N = 3; min 489.16 / max 491.03)
  Auto:         490.19 (SE +/- 0.17, N = 3; min 489.98 / max 490.52)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, more is better):
  Prefer Freq:  1931 (SE +/- 2.96, N = 3; min 1927 / max 1937)
  Prefer Cache: 1926 (SE +/- 6.23, N = 3; min 1914 / max 1934)
  Auto:         1928 (SE +/- 22.10, N = 3; min 1897 / max 1971)
  (CXX) g++ options: -flto -pthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better):
  Prefer Freq:  135.63 (SE +/- 0.31, N = 3; min 135.01 / max 135.97)
  Prefer Cache: 135.51 (SE +/- 0.23, N = 3; min 135.12 / max 135.9)
  Auto:         135.28 (SE +/- 0.15, N = 3; min 135.08 / max 135.56)
  (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better):
  Prefer Freq:  58.93 (SE +/- 0.14, N = 3; min 58.77 / max 59.2; per-inference MIN 41.49 / MAX 69.51)
  Prefer Cache: 58.99 (SE +/- 0.10, N = 3; min 58.82 / max 59.17; per-inference MIN 29.03 / MAX 69.75)
  Auto:         59.08 (SE +/- 0.06, N = 3; min 58.97 / max 59.17; per-inference MIN 27.39 / MAX 72.82)
  (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
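The N-queens search that m-queens parallelizes can be sketched with a simple backtracking counter (m-queens itself is C++ with OpenMP splitting the top-level branches across threads; this Python version only illustrates the algorithm):

```python
def count_nqueens(n):
    # Count placements of n queens on an n x n board so that no two
    # share a column or diagonal; rows are implicit in recursion depth.
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1
        total = 0
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue
            total += place(row + 1, cols | {col},
                           diag1 | {row - col}, diag2 | {row + col})
        return total
    return place(0, frozenset(), frozenset(), frozenset())
```

Because each top-level branch is independent, the search parallelizes almost perfectly, which is why this test scales well with core count.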

m-queens 1.2 - Time To Solve (Seconds, fewer is better):
  Prefer Freq:  28.48 (SE +/- 0.03, N = 3; min 28.44 / max 28.52)
  Prefer Cache: 28.55 (SE +/- 0.03, N = 3; min 28.5 / max 28.6)
  Auto:         28.55 (SE +/- 0.02, N = 3; min 28.52 / max 28.59)
  (CXX) g++ options: -fopenmp -O2 -march=native

Appleseed

Appleseed is an open-source, physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, fewer is better):
  Prefer Freq:  84.73
  Prefer Cache: 84.79
  Auto:         84.58

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, more is better):
  Prefer Freq:  20.74 (SE +/- 0.00, N = 3; min 20.74 / max 20.75)
  Prefer Cache: 20.69 (SE +/- 0.00, N = 3; min 20.68 / max 20.69)
  Auto:         20.71 (SE +/- 0.01, N = 3; min 20.68 / max 20.72)
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better):
  Prefer Freq:  306.23 (SE +/- 0.10, N = 3; min 306.05 / max 306.38; per-inference MIN 293.51 / MAX 313.39)
  Prefer Cache: 305.71 (SE +/- 0.48, N = 3; min 304.86 / max 306.51; per-inference MIN 264.82 / MAX 314.53)
  Auto:         305.51 (SE +/- 0.44, N = 3; min 304.65 / max 306.09; per-inference MIN 290.82 / MAX 316.32)
  (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Prefer Freq:  771.61 (SE +/- 1.28, N = 13; min 764.14 / max 782.31)
  Prefer Cache: 773.41 (SE +/- 1.73, N = 13; min 765.28 / max 784.63)
  Auto:         772.34 (SE +/- 3.12, N = 13; min 744.03 / max 783.38)
  (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. The sample scene used is the Teapot scene ray-traced to 8K x 8K with 32 samples. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99.2 - Total Time (Seconds, fewer is better):
  Prefer Freq:  58.96 (SE +/- 0.12, N = 3; min 58.76 / max 59.16)
  Prefer Cache: 59.10 (SE +/- 0.03, N = 3; min 59.03 / max 59.14)
  Auto:         58.99 (SE +/- 0.07, N = 3; min 58.86 / max 59.12)
  (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.32 - Test: unsharp-mask (Seconds, fewer is better):
  Prefer Freq:  13.13 (SE +/- 0.05, N = 4; min 13.05 / max 13.27)
  Prefer Cache: 13.14 (SE +/- 0.05, N = 4; min 13.02 / max 13.23)
  Auto:         13.11 (SE +/- 0.03, N = 4; min 13.04 / max 13.18)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, more is better):
  Prefer Freq:  79.32 (SE +/- 0.02, N = 6; min 79.24 / max 79.39)
  Prefer Cache: 79.19 (SE +/- 0.06, N = 6; min 78.97 / max 79.43)
  Auto:         79.14 (SE +/- 0.20, N = 6; min 78.88 / max 80.13)
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Atomic (Bogo Ops/s, more is better):
  Prefer Freq:  212527.91 (SE +/- 147.40, N = 3; min 212243.57 / max 212737.49)
  Prefer Cache: 212454.76 (SE +/- 84.51, N = 3; min 212320.94 / max 212611.07)
  Auto:         212936.16 (SE +/- 178.50, N = 3; min 212694.94 / max 213284.7)
  (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Prefer Freq:  0.634357 (SE +/- 0.001721, N = 9; MIN 0.61)
  Prefer Cache: 0.635767 (SE +/- 0.001156, N = 9; MIN 0.62)
  Auto:         0.634455 (SE +/- 0.001435, N = 9; MIN 0.61)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from older systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.
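The "Render Ratio" reported for the Stargate tests is, as I understand the metric, how much faster than real time the project renders: the duration of the rendered audio divided by the wall time the render took. A trivial illustrative computation (the 60-second project length is a hypothetical example, not the actual test content):

```python
def render_ratio(audio_seconds, render_seconds):
    # Ratio > 1.0 means the DAW renders faster than real-time playback.
    return audio_seconds / render_seconds

# e.g. a hypothetical 60-second project rendered in 7.75 s of wall time
ratio = render_ratio(60.0, 7.75)
```

So a ratio around 7.7, as in the 1024-buffer result below, would mean roughly 7.7 seconds of audio rendered per second of wall time.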

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 1024 (Render Ratio, more is better):
  Prefer Freq:  7.743260 (SE +/- 0.035806, N = 3; min 7.67 / max 7.78)
  Prefer Cache: 7.726396 (SE +/- 0.016609, N = 3; min 7.69 / max 7.75)
  Auto:         7.726738 (SE +/- 0.032079, N = 3; min 7.69 / max 7.79)
  (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: LuxCore Benchmark - Acceleration: CPU (M samples/sec, more is better):
  Prefer Freq:  4.78 (SE +/- 0.01, N = 3; min 4.77 / max 4.79; MIN 2.12 / MAX 5.4)
  Prefer Cache: 4.78 (SE +/- 0.02, N = 3; min 4.75 / max 4.8; MIN 2.1 / MAX 5.41)
  Auto:         4.77 (SE +/- 0.00, N = 3; min 4.76 / max 4.77; MIN 2.09 / MAX 5.39)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better):
  Prefer Freq:  1333.24 (SE +/- 1.63, N = 3; min 1330.44 / max 1336.09)
  Prefer Cache: 1335.29 (SE +/- 0.83, N = 3; min 1333.69 / max 1336.5)
  Auto:         1332.50 (SE +/- 1.06, N = 3; min 1330.59 / max 1334.27)
  (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some previous ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, more is better):
  Prefer Freq:  11495.9 (SE +/- 80.38, N = 6; min 11094 / max 11576.3)
  Prefer Cache: 11495.9 (SE +/- 80.38, N = 6; min 11094 / max 11576.3)
  Auto:         11519.9 (SE +/- 171.09, N = 7; min 11094 / max 12102.5)
  (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better):
  Prefer Freq:  67.48 (SE +/- 0.05, N = 3; min 67.43 / max 67.58)
  Prefer Cache: 67.62 (SE +/- 0.07, N = 3; min 67.49 / max 67.74)
  Auto:         67.61 (SE +/- 0.12, N = 3; min 67.43 / max 67.83)

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
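The filtering workload at the heart of this benchmark (note the "Filter Length: 57" parameter below) is FIR filtering: each output sample is a dot product of the most recent inputs with a fixed coefficient vector. A minimal direct-form sketch in Python (liquid-dsp itself is SIMD-optimized C; the moving-average taps here are just an example):

```python
def fir_filter(samples, taps):
    # Direct-form FIR: each output is the dot product of the most recent
    # len(taps) inputs with the coefficient vector.
    out = []
    history = [0.0] * len(taps)
    for x in samples:
        history = [x] + history[:-1]
        out.append(sum(h * t for h, t in zip(history, taps)))
    return out
```

A 57-tap filter thus costs 57 multiply-accumulates per sample, which is why throughput is reported in samples/s and scales with vector-unit width and thread count.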

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  Prefer Freq:  1488633333 (SE +/- 2628265.17, N = 3; min 1484000000 / max 1493100000)
  Prefer Cache: 1485666667 (SE +/- 1675642.50, N = 3; min 1482500000 / max 1488200000)
  Auto:         1487700000 (SE +/- 971253.49, N = 3; min 1486400000 / max 1489600000)
  (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/ao/real_time (Items Per Second, more is better):
  Prefer Freq:  7.57299 (SE +/- 0.00336, N = 3; min 7.57 / max 7.58)
  Prefer Cache: 7.56189 (SE +/- 0.00154, N = 3; min 7.56 / max 7.56)
  Auto:         7.57692 (SE +/- 0.00120, N = 3; min 7.57 / max 7.58)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance of the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, more is better):
  Prefer Freq:  136613 (SE +/- 78.60, N = 3; min 136460 / max 136720)
  Prefer Cache: 136783 (SE +/- 290.08, N = 3; min 136440 / max 137360)
  Auto:         136513 (SE +/- 27.28, N = 3; min 136460 / max 136550)
  (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better):
  Prefer Freq:  26.09 (SE +/- 0.02, N = 3; min 26.06 / max 26.12)
  Prefer Cache: 26.11 (SE +/- 0.06, N = 3; min 26.02 / max 26.22)
  Auto:         26.14 (SE +/- 0.04, N = 3; min 26.09 / max 26.21)
  (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format, encoding a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Prefer Freq:  591.01 (SE +/- 1.08, N = 12; min 581.4 / max 594.65)
  Prefer Cache: 591.30 (SE +/- 1.13, N = 12; min 585.37 / max 600)
  Auto:         592.13 (SE +/- 1.15, N = 12; min 582.52 / max 598.21)
  (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, more is better):
  Prefer Freq:  21.19 (SE +/- 0.00, N = 3; min 21.19 / max 21.2)
  Prefer Cache: 21.16 (SE +/- 0.01, N = 3; min 21.14 / max 21.17)
  Auto:         21.15 (SE +/- 0.02, N = 3; min 21.11 / max 21.18)
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from older systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 480000 - Buffer Size: 512 (Render Ratio, more is better):
  Prefer Freq:  7.195516 (SE +/- 0.049732, N = 3; min 7.12 / max 7.29)
  Prefer Cache: 7.208599 (SE +/- 0.018306, N = 3; min 7.17 / max 7.24)
  Auto:         7.196401 (SE +/- 0.071291, N = 3; min 7.12 / max 7.34)
  (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5.02 - Mode: CPU (vsamples, more is better):
  Prefer Freq:  31238 (SE +/- 97.34, N = 3; min 31093 / max 31423)
  Prefer Cache: 31280 (SE +/- 72.19, N = 3; min 31193 / max 31423)
  Auto:         31224 (SE +/- 58.09, N = 3; min 31126 / max 31327)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better):
  Prefer Freq:  167.94 (SE +/- 0.14, N = 3; min 167.76 / max 168.22)
  Prefer Cache: 168.24 (SE +/- 0.10, N = 3; min 168.05 / max 168.38)
  Auto:         168.20 (SE +/- 0.11, N = 3; min 168.03 / max 168.42)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance of the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, more is better):
  Prefer Freq:  641.11 (SE +/- 0.23, N = 3; min 640.65 / max 641.37)
  Prefer Cache: 640.64 (SE +/- 0.56, N = 3; min 639.54 / max 641.34)
  Auto:         639.97 (SE +/- 0.42, N = 3; min 639.24 / max 640.69)
  (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, more is better):
  Prefer Freq:  262887 (SE +/- 231.40, N = 3; min 262430 / max 263180)
  Prefer Cache: 262420 (SE +/- 25.17, N = 3; min 262390 / max 262470)
  Auto:         262787 (SE +/- 104.14, N = 3; min 262600 / max 262960)
  (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Prefer Freq:  28.29 (SE +/- 0.05, N = 3; min 28.21 / max 28.37)
  Prefer Cache: 28.34 (SE +/- 0.05, N = 3; min 28.26 / max 28.44)
  Auto:         28.32 (SE +/- 0.01, N = 3; min 28.31 / max 28.34)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.1 - Video Input: Summer Nature 1080p (FPS, More Is Better)
  Prefer Freq: 1407.91 (SE +/- 1.15, N = 10; Min 1402.75 / Max 1412.86)
  Prefer Cache: 1406.81 (SE +/- 1.44, N = 10; Min 1401.97 / Max 1416.87)
  Auto: 1409.23 (SE +/- 1.39, N = 10; Min 1400.89 / Max 1415.08)
  1. (CC) gcc options: -pthread -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq: 174.93 (SE +/- 0.34, N = 8; Min 173.86 / Max 176.68)
  Prefer Cache: 174.96 (SE +/- 0.10, N = 8; Min 174.32 / Max 175.18)
  Auto: 174.66 (SE +/- 0.36, N = 8; Min 173.66 / Max 176.68)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
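The MB/s figure is just bytes processed divided by wall time. A minimal sketch of that measurement, using zlib from the Python standard library as a stand-in (the benchmark itself links against liblz4; the function name and sample payload are illustrative):

```python
import time
import zlib

def compression_speed(data, level=1):
    """Return (throughput in MB/s, compressed size) at the given level.

    zlib stands in for LZ4 here; the timing approach is the same
    either way: bytes in divided by seconds elapsed.
    """
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return len(data) / elapsed / 1e6, len(compressed)

# Highly compressible sample payload; a real ISO is far more mixed.
sample = b"phoronix" * 500_000  # ~4 MB
mbps, out_size = compression_speed(sample, level=1)
print(f"level 1: {mbps:.0f} MB/s, {len(sample)} -> {out_size} bytes")
```

Level 1 trades ratio for speed, which is why the compression-speed numbers below sit in the multi-GB/s range on 32 threads of real LZ4.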

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, More Is Better)
  Prefer Freq: 17314.40 (SE +/- 99.20, N = 3; Min 17126.55 / Max 17463.63)
  Prefer Cache: 17319.67 (SE +/- 79.02, N = 3; Min 17168.9 / Max 17436.11)
  Auto: 17290.98 (SE +/- 78.27, N = 3; Min 17190.25 / Max 17445.12)
  1. (CC) gcc options: -O3

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better)
  Prefer Freq: 5.434 (SE +/- 0.009, N = 3; Min 5.42 / Max 5.45)
  Prefer Cache: 5.425 (SE +/- 0.010, N = 3; Min 5.41 / Max 5.45)
  Auto: 5.428 (SE +/- 0.011, N = 3; Min 5.41 / Max 5.44)

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, Fewer Is Better)
  Prefer Freq: 49.37 (SE +/- 0.10, N = 3; Min 49.2 / Max 49.56)
  Prefer Cache: 49.45 (SE +/- 0.05, N = 3; Min 49.34 / Max 49.5)
  Auto: 49.38 (SE +/- 0.02, N = 3; Min 49.35 / Max 49.42)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14.06 - Test: Crypto (Bogo Ops/s, More Is Better)
  Prefer Freq: 43758.76 (SE +/- 33.31, N = 3; Min 43705.09 / Max 43819.79)
  Prefer Cache: 43716.13 (SE +/- 82.33, N = 3; Min 43551.7 / Max 43805.96)
  Auto: 43690.40 (SE +/- 19.05, N = 3; Min 43652.32 / Max 43710.45)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, More Is Better)
  Prefer Freq: 79.84 (SE +/- 0.07, N = 6; Min 79.6 / Max 80.04)
  Prefer Cache: 79.79 (SE +/- 0.06, N = 6; Min 79.57 / Max 79.96)
  Auto: 79.72 (SE +/- 0.12, N = 6; Min 79.49 / Max 80.26)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
  Prefer Freq: 1108.32 (SE +/- 0.75, N = 3; Min 1106.99 / Max 1109.58)
  Prefer Cache: 1109.84 (SE +/- 2.07, N = 3; Min 1105.69 / Max 1112.02)
  Auto: 1108.20 (SE +/- 1.48, N = 3; Min 1105.24 / Max 1109.76)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, More Is Better)
  Prefer Freq: 291953 (SE +/- 957.29, N = 3; Min 290040 / Max 292970)
  Prefer Cache: 291933 (SE +/- 356.85, N = 3; Min 291220 / Max 292310)
  Auto: 291547 (SE +/- 551.67, N = 3; Min 290490 / Max 292350)
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq: 118.00 (SE +/- 0.22, N = 7; Min 116.99 / Max 118.55)
  Prefer Cache: 117.97 (SE +/- 0.23, N = 7; Min 117.3 / Max 118.74)
  Auto: 117.84 (SE +/- 0.16, N = 7; Min 117.19 / Max 118.61)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better)
  Prefer Freq: 173.58 (SE +/- 0.11, N = 9; Min 173.19 / Max 174.06)
  Prefer Cache: 173.35 (SE +/- 0.11, N = 9; Min 172.86 / Max 173.92)
  Auto: 173.45 (SE +/- 0.10, N = 9; Min 173.16 / Max 174.16)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
  Prefer Freq: 281.74 (SE +/- 0.31, N = 11; Min 280.09 / Max 283.51)
  Prefer Cache: 281.56 (SE +/- 0.34, N = 11; Min 279.87 / Max 283.02)
  Auto: 281.37 (SE +/- 0.28, N = 11; Min 279.95 / Max 282.93)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, Fewer Is Better)
  Prefer Freq: 252.07 (SE +/- 0.26, N = 3; Min 251.67 / Max 252.56)
  Prefer Cache: 252.32 (SE +/- 0.39, N = 3; Min 251.79 / Max 253.09)
  Auto: 252.37 (SE +/- 0.14, N = 3; Min 252.1 / Max 252.58)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, More Is Better)
  Prefer Freq: 60.47 (SE +/- 0.06, N = 5; Min 60.28 / Max 60.63)
  Prefer Cache: 60.40 (SE +/- 0.04, N = 5; Min 60.3 / Max 60.53)
  Auto: 60.46 (SE +/- 0.04, N = 5; Min 60.36 / Max 60.6)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, Fewer Is Better)
  Prefer Freq: 55.29 (SE +/- 0.42, N = 3; Min 54.56 / Max 56.02)
  Prefer Cache: 55.35 (SE +/- 0.31, N = 3; Min 54.73 / Max 55.67)
  Auto: 55.29 (SE +/- 0.37, N = 3; Min 54.63 / Max 55.91)

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, Fewer Is Better)
  Prefer Freq: 70.41 (SE +/- 0.02, N = 3; Min 70.38 / Max 70.45)
  Prefer Cache: 70.34 (SE +/- 0.15, N = 3; Min 70.06 / Max 70.56)
  Auto: 70.38 (SE +/- 0.09, N = 3; Min 70.2 / Max 70.51)
  1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm -lreadline

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Kraken - Browser: Google Chrome (ms, Fewer Is Better)
  Prefer Freq: 372.8 (SE +/- 3.94, N = 15; Min 345.3 / Max 390.3)
  Prefer Cache: 373.2 (SE +/- 3.24, N = 3; Min 369.1 / Max 379.6)
  Auto: 373.2 (SE +/- 3.39, N = 15; Min 344.4 / Max 388.1)
  1. chrome 110.0.5481.96

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
  Prefer Freq: 39.76 (SE +/- 0.02, N = 4; Min 39.72 / Max 39.8; MIN: 39.42 / MAX: 40.83)
  Prefer Cache: 39.78 (SE +/- 0.02, N = 4; Min 39.74 / Max 39.82; MIN: 39.43 / MAX: 40.87)
  Auto: 39.74 (SE +/- 0.06, N = 4; Min 39.62 / Max 39.87; MIN: 39.35 / MAX: 40.85)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Fast (MT/s, More Is Better)
  Prefer Freq: 368.10 (SE +/- 0.25, N = 5; Min 367.69 / Max 369.04)
  Prefer Cache: 368.25 (SE +/- 0.29, N = 5; Min 367.44 / Max 368.92)
  Auto: 367.89 (SE +/- 0.23, N = 5; Min 367.23 / Max 368.35)
  1. (CXX) g++ options: -O3 -flto -pthread

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.
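The FM de-emphasis filter exercised below is, at its core, a one-pole IIR low-pass. A toy Python version for illustration only (LuaRadio's actual blocks are LuaJIT code wired into a flow graph; the function name and defaults here are hypothetical):

```python
import math

def deemphasis(samples, tau=75e-6, fs=48_000):
    """One-pole IIR de-emphasis: y[n] = y[n-1] + a * (x[n] - y[n-1]).

    tau is the de-emphasis time constant (75 microseconds for
    North American FM broadcast); fs is the sample rate in Hz.
    """
    a = 1.0 - math.exp(-1.0 / (fs * tau))
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)   # first-order low-pass update
        out.append(y)
    return out

# Step input: the output charges up toward 1.0 over a few tau.
step = deemphasis([1.0] * 48_000)
print(f"after 1 s of samples: {step[-1]:.4f}")
```

The benchmark's MiB/s metric is how many samples per second a block like this can push through, which is why a JIT-compiled implementation matters.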

LuaRadio 0.9.1 - Test: FM Deemphasis Filter (MiB/s, More Is Better)
  Prefer Freq: 527.9 (SE +/- 2.13, N = 7; Min 518.6 / Max 534)
  Prefer Cache: 527.7 (SE +/- 3.13, N = 5; Min 515.7 / Max 532.2)
  Auto: 527.5 (SE +/- 4.15, N = 3; Min 519.4 / Max 533)

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1 - Input: Carbon Nanotube (Seconds, Fewer Is Better)
  Prefer Freq: 126.82 (SE +/- 0.10, N = 3; Min 126.69 / Max 127.03)
  Prefer Cache: 126.81 (SE +/- 0.14, N = 3; Min 126.55 / Max 127.02)
  Auto: 126.88 (SE +/- 0.14, N = 3; Min 126.64 / Max 127.14)
  1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

N-Queens

This is a test of the OpenMP version of a test that solves the N-queens problem. The board problem size is 18. Learn more via the OpenBenchmarking.org test page.
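For illustration, the same problem at a small board size can be solved with a standard bitmask backtracking search (a sequential Python sketch, not the benchmark's parallel OpenMP C code):

```python
def count_nqueens(n):
    """Count N-queens solutions with bitmask backtracking.

    Each bit tracks an attacked column or diagonal; a free square
    is one not covered by any of the three masks.
    """
    full = (1 << n) - 1

    def solve(cols, diag1, diag2):
        if cols == full:
            return 1
        total = 0
        free = full & ~(cols | diag1 | diag2)
        while free:
            bit = free & -free          # lowest free square
            free ^= bit
            total += solve(cols | bit,
                           ((diag1 | bit) << 1) & full,
                           (diag2 | bit) >> 1)
        return total

    return solve(0, 0, 0)

print(count_nqueens(8))  # 92 solutions on an 8x8 board
```

The benchmark's board size of 18 makes the search tree large enough that OpenMP parallelism across first-row placements pays off.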

N-Queens 1.0 - Elapsed Time (Seconds, Fewer Is Better)
  Prefer Freq: 5.983 (SE +/- 0.003, N = 7; Min 5.97 / Max 5.99)
  Prefer Cache: 5.984 (SE +/- 0.004, N = 7; Min 5.97 / Max 6)
  Auto: 5.981 (SE +/- 0.005, N = 7; Min 5.97 / Max 6.01)
  1. (CC) gcc options: -static -fopenmp -O3 -march=native

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
  Prefer Freq: 3988 (SE +/- 7.55, N = 3; Min 3973 / Max 3997)
  Prefer Cache: 3990 (SE +/- 8.89, N = 3; Min 3977 / Max 4007)
  Auto: 3988 (SE +/- 3.51, N = 3; Min 3981 / Max 3992)
  1. (CXX) g++ options: -O3 -lm -ldl

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
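The conjugate gradient iteration at the heart of HPCG can be sketched densely in a few lines of Python (HPCG itself runs a preconditioned CG on a large sparse 3D grid problem; the small matrix below is a textbook example chosen for illustration):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite A (lists of lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]            # residual r = b - A @ x, with x = 0
    p = r[:]            # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print(x)  # converges to [1/11, 7/11], roughly [0.0909, 0.6364]
```

The GFLOP/s that HPCG reports is dominated by the sparse matrix-vector products (the `Ap` step above), which is why it is far more memory-bound than HPL/HPCC.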

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
  Prefer Freq: 8.33197 (SE +/- 0.00077, N = 3; Min 8.33 / Max 8.33)
  Prefer Cache: 8.32830 (SE +/- 0.01712, N = 3; Min 8.3 / Max 8.35)
  Auto: 8.32885 (SE +/- 0.00551, N = 3; Min 8.32 / Max 8.34)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
  Prefer Freq: 138.43 (SE +/- 0.13, N = 3; Min 138.19 / Max 138.63)
  Prefer Cache: 138.45 (SE +/- 0.10, N = 3; Min 138.26 / Max 138.56)
  Auto: 138.42 (SE +/- 0.07, N = 3; Min 138.28 / Max 138.53)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq: 101.09 (SE +/- 0.10, N = 6; Min 100.67 / Max 101.32)
  Prefer Cache: 101.10 (SE +/- 0.21, N = 6; Min 100.38 / Max 101.57)
  Auto: 101.10 (SE +/- 0.12, N = 6; Min 100.69 / Max 101.51)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Maze Solver - Browser: Google Chrome (Seconds, Fewer Is Better)
  Prefer Freq: 3.7 (SE +/- 0.00, N = 4; Min 3.7 / Max 3.7)
  Prefer Cache: 3.7 (SE +/- 0.00, N = 4; Min 3.7 / Max 3.7)
  Auto: 3.7 (SE +/- 0.00, N = 4; Min 3.7 / Max 3.7)
  1. chrome 110.0.5481.96

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.6 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Prefer Freq: 0.41 (SE +/- 0.00, N = 3; Min 0.41 / Max 0.41)
  Prefer Cache: 0.41 (SE +/- 0.00, N = 3; Min 0.41 / Max 0.41)
  Auto: 0.41 (SE +/- 0.00, N = 3; Min 0.41 / Max 0.41)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq: 0.67 (SE +/- 0.00, N = 3; Min 0.67 / Max 0.67; MIN: 0.34 / MAX: 8.81)
  Prefer Cache: 0.67 (SE +/- 0.00, N = 3; Min 0.67 / Max 0.68; MIN: 0.37 / MAX: 8.32)
  Auto: 0.67 (SE +/- 0.00, N = 3; Min 0.67 / Max 0.67; MIN: 0.37 / MAX: 8.39)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
  Prefer Freq: 0.39 (SE +/- 0.00, N = 3; Min 0.39 / Max 0.39; MIN: 0.22 / MAX: 7.76)
  Prefer Cache: 0.39 (SE +/- 0.00, N = 3; Min 0.39 / Max 0.39; MIN: 0.23 / MAX: 9.35)
  Auto: 0.39 (SE +/- 0.00, N = 3; Min 0.39 / Max 0.39; MIN: 0.23 / MAX: 7.84)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

srsRAN

srsRAN 22.04.1 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  Prefer Freq: Min 21.3 / Avg 51.9 / Max 57.5
  Prefer Cache: Min 19.0 / Avg 40.9 / Max 57.4
  Auto: Min 19.0 / Avg 38.2 / Max 40.9

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s Per Watt, More Is Better)
  Prefer Freq: 4.936 / Prefer Cache: 5.451 / Auto: 5.632

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, More Is Better)
  Prefer Freq: 256.0 (SE +/- 0.99, N = 3; Min 254 / Max 257.1)
  Prefer Cache: 223.1 (SE +/- 3.93, N = 15; Min 212.5 / Max 253)
  Auto: 215.3 (SE +/- 1.33, N = 3; Min 213.6 / Max 217.9)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

srsRAN 22.04.1 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  Prefer Freq: Min 18.9 / Avg 35.6 / Max 42.0
  Prefer Cache: Min 10.6 / Avg 37.4 / Max 57.9
  Auto: Min 19.8 / Avg 46.5 / Max 58.0

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s Per Watt, More Is Better)
  Prefer Freq: 6.068 / Prefer Cache: 5.895 / Auto: 5.482

srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, More Is Better)
  Prefer Freq: 215.9 (SE +/- 1.29, N = 5; Min 212.8 / Max 219.4)
  Prefer Cache: 220.7 (SE +/- 3.75, N = 15; Min 212.1 / Max 255.7)
  Auto: 254.8 (SE +/- 0.89, N = 5; Min 252.6 / Max 256.9)
  1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -ldl -lpthread -lm

Dolfyn

Dolfyn 0.527 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  Prefer Freq: Min 17.5 / Avg 42.7 / Max 59.9
  Prefer Cache: Min 19.7 / Avg 42.7 / Max 59.9
  Auto: Min 16.8 / Avg 42.6 / Max 59.9

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, Fewer Is Better)
  Prefer Freq: 11.00 (SE +/- 0.20, N = 15; Min 10.18 / Max 11.92)
  Prefer Cache: 11.09 (SE +/- 0.20, N = 15; Min 10.31 / Max 11.96)
  Auto: 11.12 (SE +/- 0.19, N = 15; Min 10.24 / Max 11.93)

Stress-NG

Stress-NG 0.14.06 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  Prefer Freq: Min 19.8 / Avg 55.6 / Max 62.1
  Prefer Cache: Min 19.4 / Avg 56.3 / Max 63.3
  Auto: Min 18.9 / Avg 56.6 / Max 63.0

Stress-NG 0.14.06 - Test: IO_uring (Bogo Ops/s Per Watt, More Is Better)
  Prefer Freq: 530.42 / Prefer Cache: 559.19 / Auto: 575.56

Stress-NG 0.14.06 - Test: IO_uring (Bogo Ops/s, More Is Better)
  Prefer Freq: 29496.76 (SE +/- 283.23, N = 12; Min 26897.82 / Max 31276.89)
  Prefer Cache: 31486.82 (SE +/- 592.21, N = 14; Min 26166.76 / Max 33875.27)
  Auto: 32571.40 (SE +/- 645.93, N = 15; Min 28136.72 / Max 38716.83)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Test: x86_64 RdRand

Auto: The test run did not produce a result. E: stress-ng: error: [982014] No stress workers invoked (one or more were unsupported)

Prefer Cache: The test run did not produce a result. E: stress-ng: error: [943105] No stress workers invoked (one or more were unsupported)

Prefer Freq: The test run did not produce a result. E: stress-ng: error: [939716] No stress workers invoked (one or more were unsupported)

Stress-NG 0.14.06 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  Prefer Freq: Min 20.6 / Avg 61.5 / Max 89.9
  Prefer Cache: Min 20.7 / Avg 69.4 / Max 89.0
  Auto: Min 17.0 / Avg 69.5 / Max 88.8

Stress-NG 0.14.06 - Test: CPU Cache (Bogo Ops/s Per Watt, More Is Better)
  Prefer Freq: 0.519 / Prefer Cache: 2.607 / Auto: 2.638

Stress-NG 0.14.06 - Test: CPU Cache (Bogo Ops/s, More Is Better)
  Prefer Freq: 31.91 (SE +/- 0.31, N = 6; Min 30.82 / Max 32.59)
  Prefer Cache: 181.06 (SE +/- 1.74, N = 15; Min 171.79 / Max 198.11)
  Auto: 183.32 (SE +/- 3.49, N = 15; Min 168.28 / Max 224.44)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

Stress-NG 0.14.06 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  Prefer Freq: Min 21.1 / Avg 84.2 / Max 94.8
  Prefer Cache: Min 20.4 / Avg 84.5 / Max 104.2
  Auto: Min 18.5 / Avg 84.1 / Max 96.0

Stress-NG 0.14.06 - Test: System V Message Passing (Bogo Ops/s Per Watt, More Is Better)
  Prefer Freq: 297618.27 / Prefer Cache: 307502.13 / Auto: 302192.38

Stress-NG 0.14.06 - Test: System V Message Passing (Bogo Ops/s, More Is Better)
  Prefer Freq: 25052017.73 (SE +/- 15666.97, N = 3; Min 25025852.55 / Max 25080030.15)
  Prefer Cache: 25978492.77 (SE +/- 518902.00, N = 15; Min 24923838.67 / Max 30420911.82)
  Auto: 25423648.14 (SE +/- 221720.67, N = 7; Min 24969753.1 / Max 26330489.33)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread

DeepSpeech

DeepSpeech 0.6 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  Prefer Freq: Min 21.1 / Avg 43.0 / Max 58.4
  Prefer Cache: Min 20.2 / Avg 42.8 / Max 50.6
  Auto: Min 18.4 / Avg 50.8 / Max 61.8

DeepSpeech 0.6 - Acceleration: CPU (Seconds, Fewer Is Better)
  Prefer Freq: 35.20 (SE +/- 0.40, N = 3; Min 34.55 / Max 35.92)
  Prefer Cache: 35.32 (SE +/- 0.33, N = 3; Min 34.68 / Max 35.78)
  Auto: 38.57 (SE +/- 0.61, N = 15; Min 33.72 / Max 40.67)

Cpuminer-Opt

Cpuminer-Opt 3.20.3 - CPU Power Consumption Monitor (Watts, fewer is better)
  Prefer Freq:  Min 20.8 / Avg 94.8 / Max 108.6
  Prefer Cache: Min 17.5 / Avg 94.9 / Max 107.7
  Auto:         Min 22.0 / Avg 95.9 / Max 107.9

Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin (kH/s per Watt, more is better)
  Prefer Freq:  40.80
  Prefer Cache: 39.85
  Auto:         40.07

Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin (kH/s, more is better)
  Prefer Freq:  3869.37 (SE +/- 137.18, N = 15; Min 3119.44 / Max 4560.72)
  Prefer Cache: 3781.07 (SE +/- 45.48, N = 15; Min 3375.88 / Max 4063.33)
  Auto:         3841.47 (SE +/- 55.34, N = 3; Min 3782.16 / Max 3952.05)
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Himeno Benchmark

Himeno Benchmark 3.0 - CPU Power Consumption Monitor (Watts, fewer is better)
  Prefer Freq:  Min 17.0 / Avg 40.9 / Max 44.7
  Prefer Cache: Min 16.8 / Avg 51.9 / Max 56.5
  Auto:         Min 15.7 / Avg 50.1 / Max 56.6

Himeno Benchmark 3.0 - Poisson Pressure Solver (MFLOPS per Watt, more is better)
  Prefer Freq:  113.46
  Prefer Cache: 99.43
  Auto:         106.87

Himeno Benchmark 3.0 - Poisson Pressure Solver (MFLOPS, more is better)
  Prefer Freq:  4638.29 (SE +/- 83.07, N = 15; Min 4161.95 / Max 5055.63)
  Prefer Cache: 5163.44 (SE +/- 120.69, N = 15; Min 4533.32 / Max 5888.94)
  Auto:         5350.25 (SE +/- 140.79, N = 15; Min 4682.76 / Max 6576.31)
1. (CC) gcc options: -O3 -mavx2

DaCapo Benchmark

DaCapo Benchmark 9.12-MR1 - CPU Power Consumption Monitor (Watts, fewer is better)
  Prefer Freq:  Min 19.6 / Avg 37.3 / Max 59.6
  Prefer Cache: Min 17.6 / Avg 37.2 / Max 60.4
  Auto:         Min 18.7 / Avg 37.3 / Max 61.0

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, fewer is better)
  Prefer Freq:  1638 (SE +/- 36.66, N = 20; Min 1358 / Max 1934)
  Prefer Cache: 1593 (SE +/- 31.69, N = 20; Min 1386 / Max 1814)
  Auto:         1632 (SE +/- 32.29, N = 20; Min 1378 / Max 1882)

Renaissance

Renaissance 0.14 - CPU Power Consumption Monitor (Watts, fewer is better)
  Prefer Freq:  Min 19.9 / Avg 50.0 / Max 90.4
  Prefer Cache: Min 20.1 / Avg 50.3 / Max 90.8
  Auto:         Min 15.9 / Avg 49.9 / Max 91.7

Renaissance 0.14 - Test: Scala Dotty (ms, fewer is better)
  Prefer Freq:  470.7 (SE +/- 7.93, N = 15; Min 434.4 / Max 522.64; MIN 360.24 / MAX 753.62)
  Prefer Cache: 521.1 (SE +/- 1.94, N = 3; Min 519.07 / Max 525.01; MIN 368.91 / MAX 746.18)
  Auto:         475.2 (SE +/- 9.27, N = 15; Min 427.47 / Max 525.69; MIN 358.08 / MAX 752.1)

Selenium

Selenium - CPU Power Consumption Monitor (Watts, fewer is better)
  Prefer Freq:  Min 19.6 / Avg 27.6 / Max 48.9
  Prefer Cache: Min 18.2 / Avg 27.5 / Max 46.9
  Auto:         Min 19.9 / Avg 27.7 / Max 51.6

Selenium - Benchmark: WASM imageConvolute, Browser: Google Chrome (ms, fewer is better)
  Prefer Freq:  18.64 (SE +/- 0.37, N = 15; Min 16.89 / Max 20.16)
  Prefer Cache: 19.33 (SE +/- 0.30, N = 15; Min 16.99 / Max 20.14)
  Auto:         18.84 (SE +/- 0.38, N = 15; Min 16.88 / Max 20.32)
1. chrome 110.0.5481.96

Google Draco

Google Draco 1.5.0 - CPU Power Consumption Monitor (Watts, fewer is better)
  Prefer Freq:  Min 18.2 / Avg 32.2 / Max 43.3
  Prefer Cache: Min 18.6 / Avg 39.0 / Max 56.6
  Auto:         Min 19.3 / Avg 32.9 / Max 55.8

Google Draco 1.5.0 - Model: Church Facade (ms, fewer is better)
  Prefer Freq:  3623 (SE +/- 2.19, N = 8; Min 3610 / Max 3628)
  Prefer Cache: 4517 (SE +/- 6.68, N = 7; Min 4500 / Max 4537)
  Auto:         3677 (SE +/- 57.92, N = 15; Min 3603 / Max 4486)
1. (CXX) g++ options: -O3

SVT-AV1

SVT-AV1 1.4 - CPU Power Consumption Monitor (Watts, fewer is better)
  Prefer Freq:  Min 18.9 / Avg 50.8 / Max 97.8
  Prefer Cache: Min 19.9 / Avg 51.4 / Max 98.0
  Auto:         Min 19.2 / Avg 51.0 / Max 97.0

SVT-AV1 1.4 - Encoder Mode: Preset 13, Input: Bosphorus 4K (Frames Per Second per Watt, more is better)
  Prefer Freq:  4.131
  Prefer Cache: 4.084
  Auto:         4.140

SVT-AV1 1.4 - Encoder Mode: Preset 13, Input: Bosphorus 4K (Frames Per Second, more is better)
  Prefer Freq:  209.98 (SE +/- 3.40, N = 15; Min 174.83 / Max 217.39)
  Prefer Cache: 209.86 (SE +/- 3.44, N = 15; Min 176.67 / Max 217.2)
  Auto:         211.00 (SE +/- 3.51, N = 15; Min 175.9 / Max 217.65)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

oneDNN 3.0 - CPU Power Consumption Monitor (Watts, fewer is better)
  Prefer Freq:  Min 20.1 / Avg 77.4 / Max 124.4
  Prefer Cache: Min 15.3 / Avg 75.2 / Max 123.4
  Auto:         Min 10.9 / Avg 77.3 / Max 124.7

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer, Data Type: u8s8f32, Engine: CPU (ms, fewer is better)
  Prefer Freq:  0.148585 (SE +/- 0.001025, N = 13; Min 0.14 / Max 0.16; MIN 0.13)
  Prefer Cache: 0.170031 (SE +/- 0.002974, N = 15; Min 0.15 / Max 0.2; MIN 0.14)
  Auto:         0.148820 (SE +/- 0.002416, N = 12; Min 0.14 / Max 0.17; MIN 0.13)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

ONNX Runtime

ONNX Runtime 1.14 - CPU Power Consumption Monitor (Watts, fewer is better)
  Prefer Freq:  Min 17.6 / Avg 67.4 / Max 76.7
  Prefer Cache: Min 21.6 / Avg 67.8 / Max 70.3
  Auto:         Min 18.5 / Avg 67.2 / Max 76.5

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8, Device: CPU, Executor: Standard (Inference Time Cost in ms, fewer is better)
  Prefer Freq:  14.39 (SE +/- 0.43, N = 12; Min 13.39 / Max 17.57)
  Prefer Cache: 13.42 (SE +/- 0.06, N = 3; Min 13.3 / Max 13.49)
  Auto:         14.70 (SE +/- 0.51, N = 15; Min 13.27 / Max 18)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8, Device: CPU, Executor: Standard (Inferences Per Second, more is better)
  Prefer Freq:  70.09 (SE +/- 1.82, N = 12; Min 56.91 / Max 74.69)
  Prefer Cache: 74.51 (SE +/- 0.35, N = 3; Min 74.11 / Max 75.2)
  Auto:         69.08 (SE +/- 2.15, N = 15; Min 55.55 / Max 75.38)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8, Device: CPU, Executor: Standard (Inference Time Cost in ms, fewer is better)
  Prefer Freq:  2.00100 (SE +/- 0.00189, N = 3; Min 2 / Max 2)
  Prefer Cache: 2.00833 (SE +/- 0.00651, N = 3; Min 2 / Max 2.02)
  Auto:         2.11966 (SE +/- 0.04888, N = 15; Min 1.98 / Max 2.53)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8, Device: CPU, Executor: Standard (Inferences Per Second, more is better)
  Prefer Freq:  499.70 (SE +/- 0.47, N = 3; Min 498.89 / Max 500.52)
  Prefer Cache: 497.88 (SE +/- 1.61, N = 3; Min 494.71 / Max 499.94)
  Auto:         474.94 (SE +/- 9.98, N = 15; Min 395.89 / Max 503.77)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ArcFace ResNet-100, Device: CPU, Executor: Standard (Inference Time Cost in ms, fewer is better)
  Prefer Freq:  22.98 (SE +/- 0.52, N = 12; Min 21.57 / Max 26.12)
  Prefer Cache: 24.14 (SE +/- 0.98, N = 15; Min 21.52 / Max 30.7)
  Auto:         23.65 (SE +/- 0.72, N = 15; Min 21.59 / Max 30.66)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ArcFace ResNet-100, Device: CPU, Executor: Standard (Inferences Per Second, more is better)
  Prefer Freq:  43.74 (SE +/- 0.93, N = 12; Min 38.28 / Max 46.35)
  Prefer Cache: 42.27 (SE +/- 1.48, N = 15; Min 32.57 / Max 46.46)
  Auto:         42.76 (SE +/- 1.11, N = 15; Min 32.61 / Max 46.32)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: bertsquad-12, Device: CPU, Executor: Standard (Inference Time Cost in ms, fewer is better)
  Prefer Freq:  55.59 (SE +/- 2.10, N = 15; Min 49.03 / Max 68.82)
  Prefer Cache: 58.06 (SE +/- 2.61, N = 15; Min 49.04 / Max 70.11)
  Auto:         59.64 (SE +/- 2.18, N = 15; Min 49.39 / Max 68.44)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: bertsquad-12, Device: CPU, Executor: Standard (Inferences Per Second, more is better)
  Prefer Freq:  18.32 (SE +/- 0.63, N = 15; Min 14.53 / Max 20.39)
  Prefer Cache: 17.69 (SE +/- 0.75, N = 15; Min 14.26 / Max 20.39)
  Auto:         17.10 (SE +/- 0.65, N = 15; Min 14.61 / Max 20.25)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: super-resolution-10, Device: CPU, Executor: Standard (Inference Time Cost in ms, fewer is better)
  Prefer Freq:  5.53549 (SE +/- 0.29232, N = 12; Min 4.7 / Max 7.11)
  Prefer Cache: 5.55023 (SE +/- 0.21808, N = 15; Min 4.71 / Max 6.72)
  Auto:         5.52425 (SE +/- 0.23907, N = 15; Min 4.69 / Max 7.15)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: super-resolution-10, Device: CPU, Executor: Standard (Inferences Per Second, more is better)
  Prefer Freq:  185.95 (SE +/- 9.16, N = 12; Min 140.55 / Max 212.97)
  Prefer Cache: 184.02 (SE +/- 7.02, N = 15; Min 148.88 / Max 212.43)
  Auto:         185.54 (SE +/- 7.51, N = 15; Min 139.91 / Max 213.06)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: fcn-resnet101-11, Device: CPU, Executor: Standard (Inference Time Cost in ms, fewer is better)
  Prefer Freq:  337.22 (SE +/- 19.31, N = 15; Min 276.35 / Max 469.59)
  Prefer Cache: 297.82 (SE +/- 15.84, N = 12; Min 277.23 / Max 470.93)
  Auto:         346.46 (SE +/- 20.18, N = 15; Min 276.66 / Max 473.82)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: fcn-resnet101-11, Device: CPU, Executor: Standard (Inferences Per Second, more is better)
  Prefer Freq:  3.08702 (SE +/- 0.15247, N = 15; Min 2.13 / Max 3.62)
  Prefer Cache: 3.42808 (SE +/- 0.12057, N = 12; Min 2.12 / Max 3.61)
  Auto:         3.01355 (SE +/- 0.15740, N = 15; Min 2.11 / Max 3.61)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: yolov4, Device: CPU, Executor: Standard (Inference Time Cost in ms, fewer is better)
  Prefer Freq:  82.10 (SE +/- 0.73, N = 3; Min 80.65 / Max 82.88)
  Prefer Cache: 94.13 (SE +/- 3.28, N = 15; Min 82.6 / Max 113.58)
  Auto:         81.70 (SE +/- 0.82, N = 5; Min 78.51 / Max 82.84)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: yolov4, Device: CPU, Executor: Standard (Inferences Per Second, more is better)
  Prefer Freq:  12.18 (SE +/- 0.11, N = 3; Min 12.06 / Max 12.4)
  Prefer Cache: 10.79 (SE +/- 0.35, N = 15; Min 8.8 / Max 12.11)
  Auto:         12.24 (SE +/- 0.13, N = 5; Min 12.07 / Max 12.74)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt
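With this many result tables, comparing the three scheduling modes is easier when each result is normalized against a baseline. A small sketch using the yolov4 inferences-per-second averages, with Prefer Freq as an arbitrarily chosen baseline (the normalization approach is illustrative, not the exact method OpenBenchmarking.org uses for its overall means):

```python
# Normalize each mode's result against a baseline mode so that
# higher-is-better results read as relative speedups.
# Values are the yolov4 Inferences Per Second averages from the table above.
results = {"Prefer Freq": 12.18, "Prefer Cache": 10.79, "Auto": 12.24}
baseline = results["Prefer Freq"]

normalized = {mode: value / baseline for mode, value in results.items()}
for mode, rel in normalized.items():
    print(f"{mode}: {rel:.3f}x relative to Prefer Freq")
```

For this test the normalized view makes the spread obvious: Prefer Cache trails the baseline by roughly 11% while Auto is essentially tied with Prefer Freq.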

438 Results Shown

oneDNN
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 256-QAM
  OFDM_Test
  4G PHY_DL_Test 100 PRB MIMO 64-QAM
KTX-Software toktx
srsRAN
simdjson
PyBench
srsRAN
KTX-Software toktx
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 256-QAM
  4G PHY_DL_Test 100 PRB MIMO 64-QAM
simdjson
Google Draco
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 256-QAM
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM
Radiance Benchmark
simdjson:
  TopTweet
  DistinctUserID
PHPBench
Pennant
QuantLib
GNU Radio
simdjson
ASKAP
Numpy Benchmark
LZ4 Compression
Zstd Compression
LAME MP3 Encoding
Numenta Anomaly Benchmark
libavif avifenc
ACES DGEMM
AOM AV1
WebP Image Encode
oneDNN
Zstd Compression
GNU Radio
Numenta Anomaly Benchmark
AOM AV1
OpenFOAM:
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
SQLite Speedtest
Selenium
RNNoise
Numenta Anomaly Benchmark
GraphicsMagick
GNU Radio
GIMP
WebP Image Encode
Stress-NG
GNU Radio
Timed LLVM Compilation
libavif avifenc
VP9 libvpx Encoding
ClickHouse:
  100M Rows Hits Dataset, Second Run
  100M Rows Hits Dataset, Third Run
OpenVKL
LuaRadio
KTX-Software toktx
ONNX Runtime
WebP Image Encode
Zstd Compression
AOM AV1:
  Speed 10 Realtime - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 1080p
OSPRay Studio
NCNN
Unpacking Firefox
WebP Image Encode
GraphicsMagick
TNN
Darktable
OSPRay Studio
NCNN
Unpacking The Linux Kernel
PyHPC Benchmarks
GEGL
OSPRay Studio:
  2 - 1080p - 32 - Path Tracer
  3 - 4K - 32 - Path Tracer
LZ4 Compression
Zstd Compression
LuxCoreRender
Xcompact3d Incompact3d
OSPRay Studio
Pennant
Selenium
OSPRay Studio
Node.js V8 Web Tooling Benchmark
GEGL
OSPRay Studio
NCNN
OSPRay Studio
LZ4 Compression
Zstd Compression
Cpuminer-Opt
Stargate Digital Audio Workstation
rav1e
GNU Radio
ClickHouse
libavif avifenc
NCNN
GEGL
NCNN
libjpeg-turbo tjbench
NCNN
rav1e
oneDNN
NCNN
GEGL
OpenEMS
GraphicsMagick
libavif avifenc
Radiance Benchmark
OSPRay Studio
Numenta Anomaly Benchmark
LuaRadio
Node.js Express HTTP Load Test
OSPRay Studio
Natron
NCNN
LeelaChessZero
OpenVINO
Zstd Compression
PyHPC Benchmarks
NCNN
OpenVINO
GIMP
GraphicsMagick
Stress-NG
GEGL
Gcrypt Library
NCNN:
  CPU - regnety_400m
  CPU - mnasnet
GNU Octave Benchmark
Renaissance
LZ4 Compression
NCNN
libavif avifenc
AOM AV1
Renaissance
Xcompact3d Incompact3d
Cpuminer-Opt
Stargate Digital Audio Workstation
Renaissance:
  Apache Spark PageRank
  Genetic Algorithm Using Jenetics + Futures
Stress-NG
VP9 libvpx Encoding
Stress-NG:
  Glibc C String Functions
  Memory Copying
GIMP
Zstd Compression
OSPRay
Cpuminer-Opt
Zstd Compression
PyHPC Benchmarks
Stress-NG
oneDNN
WireGuard + Linux Networking Stack Stress Test
GNU Radio
LAMMPS Molecular Dynamics Simulator
OpenVINO
LuaRadio
GraphicsMagick
PyHPC Benchmarks
WebP Image Encode
VP9 libvpx Encoding
Selenium
GraphicsMagick
Stress-NG
GEGL
VP9 libvpx Encoding
Stress-NG
AOM AV1
Liquid-DSP
OSPRay
AOM AV1
SVT-AV1
x265
Timed Linux Kernel Compilation
OpenVINO
SVT-AV1
NCNN
Timed Mesa Compilation
OpenVINO
NCNN:
  CPU - resnet50
  CPU - shufflenet-v2
Crafty
AOM AV1
LuxCoreRender
Stargate Digital Audio Workstation
Sysbench
Darktable
SVT-VP9
Stress-NG
Zstd Compression
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
Embree
Renaissance
TNN
ASKAP
Stargate Digital Audio Workstation
Numenta Anomaly Benchmark
AOM AV1
OpenVINO
Stargate Digital Audio Workstation
OSPRay
Darktable
NCNN
AOM AV1
Timed Linux Kernel Compilation
Renaissance
AOM AV1
Liquid-DSP
Stress-NG:
  NUMA
  Matrix Math
Appleseed
Zstd Compression
LuxCoreRender
Xmrig
Renaissance
GEGL
LZ4 Compression
SVT-AV1
SVT-HEVC
RocksDB
GEGL
Stress-NG
KTX-Software toktx
OpenVINO
Numenta Anomaly Benchmark
ASTC Encoder
Renaissance
DaCapo Benchmark
Timed FFmpeg Compilation
Selenium
Zstd Compression
RocksDB
NCNN
Embree
RocksDB
Stargate Digital Audio Workstation
Algebraic Multi-Grid Benchmark
Stress-NG
Zstd Compression
dav1d
oneDNN
Selenium
LuxCoreRender
SVT-AV1
Timed MPlayer Compilation
GROMACS
x264
IndigoBench
Blender
OpenVINO
GEGL
Cpuminer-Opt
Selenium
Stress-NG
7-Zip Compression
rav1e
OpenVKL
BRL-CAD
Renaissance
Darktable
Stress-NG
x264
OpenVINO
SVT-AV1
SVT-VP9
Zstd Compression
Embree
rav1e
NAMD
OSPRay Studio
OpenVINO
SVT-HEVC
TNN
RocksDB
GraphicsMagick
LULESH
OpenVINO
AOM AV1
OpenVINO
ASKAP
ASTC Encoder
RocksDB
OpenVINO
RocksDB
SVT-VP9
LAMMPS Molecular Dynamics Simulator
7-Zip Compression
Kvazaar
SVT-HEVC
Kvazaar
ASKAP
Primesieve
OpenVINO:
  Weld Porosity Detection FP16 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
Appleseed
RawTherapee
TNN
SVT-VP9
OpenVINO
Cpuminer-Opt
SVT-VP9
x265
TensorFlow Lite
RocksDB
dav1d
Kvazaar
OSPRay
dav1d
ASTC Encoder
Zstd Compression
Cpuminer-Opt
Stress-NG
SVT-AV1
OSPRay
OpenVINO
Sysbench
TensorFlow Lite
Primesieve
Xmrig
Blender
LeelaChessZero
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
m-queens
Appleseed
Kvazaar
OpenVINO
SVT-AV1
Tachyon
GIMP
Kvazaar
Stress-NG
oneDNN
Stargate Digital Audio Workstation
LuxCoreRender
OpenVINO
ASKAP
Blender
Liquid-DSP
OSPRay
Cpuminer-Opt
OpenVINO
SVT-HEVC
Kvazaar
Stargate Digital Audio Workstation
Chaos Group V-RAY
Blender
Cpuminer-Opt:
  scrypt
  Skeincoin
AOM AV1
dav1d
SVT-HEVC
LZ4 Compression
IndigoBench
Timed Godot Game Engine Compilation
Stress-NG
Kvazaar
TensorFlow Lite
Cpuminer-Opt
SVT-VP9
Kvazaar:
  Bosphorus 1080p - Very Fast
  Bosphorus 1080p - Ultra Fast
Timed LLVM Compilation
Kvazaar
Build2
Timed MrBayes Analysis
Selenium
Embree
ASTC Encoder
LuaRadio
GPAW
N-Queens
OSPRay Studio
High Performance Conjugate Gradient
Blender
SVT-HEVC
Selenium
AOM AV1
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
srsRAN:
  CPU Power Consumption Monitor
  4G PHY_DL_Test 100 PRB MIMO 256-QAM
srsRAN
srsRAN:
  CPU Power Consumption Monitor
  4G PHY_DL_Test 100 PRB SISO 64-QAM
srsRAN
Dolfyn
Dolfyn
Stress-NG:
  CPU Power Consumption Monitor
  IO_uring
Stress-NG
Stress-NG:
  CPU Power Consumption Monitor
  CPU Cache
Stress-NG
Stress-NG:
  CPU Power Consumption Monitor
  System V Message Passing
Stress-NG
DeepSpeech
DeepSpeech
Cpuminer-Opt:
  CPU Power Consumption Monitor
  Garlicoin
Cpuminer-Opt
Himeno Benchmark:
  CPU Power Consumption Monitor
  Poisson Pressure Solver
Himeno Benchmark
DaCapo Benchmark
DaCapo Benchmark
Renaissance
Renaissance
Selenium
Selenium
Google Draco
Google Draco
SVT-AV1:
  CPU Power Consumption Monitor
  Preset 13 - Bosphorus 4K
SVT-AV1
oneDNN
oneDNN
ONNX Runtime
ONNX Runtime:
  Faster R-CNN R-50-FPN-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet50 v1-12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ArcFace ResNet-100 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  bertsquad-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  super-resolution-10 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  fcn-resnet101-11 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  yolov4 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second