raptor lake extra

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2210201-PTS-RAPTORLA85
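
If you are new to the Phoronix Test Suite, the usual flow is to browse the available test profiles and then run a comparison against a public result ID; for example (the first command simply lists what is available):

  phoronix-test-suite list-available-tests
  phoronix-test-suite benchmark 2210201-PTS-RAPTORLA85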

This result file includes tests from the following suites / categories:

Audio Encoding 4 Tests
AV1 3 Tests
Chess Test Suite 4 Tests
Timed Code Compilation 9 Tests
C/C++ Compiler Tests 19 Tests
CPU Massive 29 Tests
Creator Workloads 30 Tests
Cryptography 2 Tests
Database Test Suite 3 Tests
Encoding 11 Tests
Game Development 3 Tests
HPC - High Performance Computing 17 Tests
Imaging 6 Tests
Java 3 Tests
Common Kernel Benchmarks 2 Tests
Machine Learning 12 Tests
Molecular Dynamics 2 Tests
MPI Benchmarks 2 Tests
Multi-Core 35 Tests
Node.js + NPM Tests 2 Tests
NVIDIA GPU Compute 4 Tests
Intel oneAPI 4 Tests
OpenMPI Tests 3 Tests
Productivity 2 Tests
Programmer / Developer System Benchmarks 14 Tests
Python 2 Tests
Raytracing 4 Tests
Renderers 8 Tests
Scientific Computing 4 Tests
Server 7 Tests
Server CPU Tests 20 Tests
Single-Threaded 7 Tests
Video Encoding 7 Tests
Common Workstation Benchmarks 3 Tests


Run Management

Result Identifier | Date Run | Test Duration
13600K A | October 16 2022 | 8 Hours, 56 Minutes
i5-13600K | October 17 2022 | 2 Hours, 43 Minutes
13900K | October 17 2022 | 8 Hours, 19 Minutes
13900K R | October 18 2022 | 6 Hours, 1 Minute



raptor lake extra — System Configurations

13600K A / i5-13600K:
Processor: Intel Core i5-13600K @ 5.10GHz (14 Cores / 20 Threads)
Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1720 BIOS)
Chipset: Intel Device 7aa7
Memory: 32GB
Disk: 2000GB Samsung SSD 980 PRO 2TB
Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
Audio: Intel Device 7ad0
Monitor: ASUS VP28U
Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 22.04
Kernel: 6.0.0-060000rc1daily20220820-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
Vulkan: 1.3.224
Compiler: GCC 12.0.1 20220319
File-System: ext4
Screen Resolution: 3840x2160

13900K / 13900K R (differences from the above):
Processor: Intel Core i9-13900K @ 4.00GHz (24 Cores / 32 Threads)
Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (2004 BIOS)
Disk: 2000GB Samsung SSD 980 PRO 2TB + 2000GB

OpenBenchmarking.org details:
Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details:
- 13600K A: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x107
- i5-13600K: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x107
- 13900K: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x108
- 13900K R: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x108
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

[Result Overview (Phoronix Test Suite): relative performance of 13600K A, i5-13600K, 13900K, and 13900K R, normalized 100% to 192%, across SVT-HEVC, toyBrot Fractal Generator, OpenRadioss, QuadRay, Embree, Timed MrBayes Analysis, OSPRay, LeelaChessZero, JPEG XL Decoding libjxl, x264, SVT-VP9, Timed LLVM Compilation, Stockfish, x265, LAMMPS Molecular Dynamics Simulator, Java Gradle Build, SVT-AV1, DaCapo Benchmark, TSCP, asmFish, libavif avifenc, Coremark, Zstd Compression, NAMD, Timed Godot Game Engine Compilation, JPEG XL libjxl, Node.js Express HTTP Load Test, Timed Mesa Compilation, Timed Linux Kernel Compilation, AOM AV1, and Renaissance.]

[Side-by-side results table: per-test values for 13600K A, i5-13600K, 13900K, and 13900K R — presented individually in the charts below. OpenBenchmarking.org]

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 — Model: Cell Phone Drop Test (Seconds, Fewer Is Better)
13600K A: 14.08 | i5-13600K: 107.65 | 13900K: 104.57 | 13900K R: 99.24

Perl Benchmarks

A Perl benchmark suite that can be used to compare the relative speed of different versions of Perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks — Test: Interpreter (Seconds, Fewer Is Better)
13600K A: 0.00280923 | 13900K: 0.00051403 | 13900K R: 0.00052042

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
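
As a rough illustration of what this test measures (average inference time), a minimal Python sketch with the TensorFlow Lite interpreter; the model filename is hypothetical, a float32 input model is assumed, and this is not the PTS harness itself:

  import time
  import numpy as np
  import tensorflow as tf

  # Load a .tflite model (hypothetical file) and allocate its tensors.
  interpreter = tf.lite.Interpreter(model_path="nasnet_mobile.tflite")
  interpreter.allocate_tensors()
  inp = interpreter.get_input_details()[0]

  # Time repeated inferences on random input of the right shape.
  data = np.random.random_sample(tuple(inp["shape"])).astype(np.float32)
  runs = 50
  start = time.perf_counter()
  for _ in range(runs):
      interpreter.set_tensor(inp["index"], data)
      interpreter.invoke()
  print(f"avg inference: {(time.perf_counter() - start) / runs * 1e6:.0f} microseconds")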

TensorFlow Lite 2022-05-18 — Model: NASNet Mobile (Microseconds, Fewer Is Better)
13600K A: 196543.0 | 13900K: 61183.5 | 13900K R: 246599.0

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
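
The Standard vs. Parallel executor split seen in the ONNX results corresponds to ONNX Runtime's execution-mode setting; a minimal Python sketch (model file hypothetical, dynamic dimensions naively pinned to 1):

  import numpy as np
  import onnxruntime as ort

  opts = ort.SessionOptions()
  # ORT_SEQUENTIAL is the "Standard" executor; ORT_PARALLEL executes graph nodes in parallel.
  opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL
  sess = ort.InferenceSession("arcface-resnet100.onnx", opts,
                              providers=["CPUExecutionProvider"])

  inp = sess.get_inputs()[0]
  shape = [d if isinstance(d, int) else 1 for d in inp.shape]
  out = sess.run(None, {inp.name: np.random.rand(*shape).astype(np.float32)})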

ONNX Runtime 1.11 — Model: ArcFace ResNet-100, Device: CPU, Executor: Standard (Inferences Per Minute, More Is Better)
13600K A: 1779 | 13900K: 534
(CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

TensorFlow Lite


TensorFlow Lite 2022-05-18 — Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
13600K A: 259280 | 13900K: 327922 | 13900K R: 591010

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
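
NCNN is typically driven from C++, but there is also a Python binding (pip package ncnn); a loose sketch of loading a model and extracting one output blob — the file names, blob names, and exact Mat construction are all assumptions here, not taken from the PTS test:

  import numpy as np
  import ncnn

  net = ncnn.Net()
  net.load_param("model.param")  # hypothetical model files
  net.load_model("model.bin")

  ex = net.create_extractor()
  ex.input("data", ncnn.Mat(np.random.rand(224, 224, 3).astype(np.float32)))
  ret, out = ex.extract("output")  # returns (status, ncnn.Mat)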

NCNN 20220729 — Target: CPU, Model: efficientnet-b0 (ms, Fewer Is Better)
13600K A: 3.97 (MIN: 3.83 / MAX: 4.8) | 13900K: 4.09 (MIN: 4.04 / MAX: 4.64) | 13900K R: 8.56 (MIN: 3.85 / MAX: 1157.46)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.8.1 — Test: Server Rack, Acceleration: CPU-only (Seconds, Fewer Is Better)
13600K A: 0.139 | 13900K: 0.235 | 13900K R: 0.121

NCNN


NCNN 20220729 — Target: CPU, Model: resnet18 (ms, Fewer Is Better)
13600K A: 5.89 (MIN: 5.68 / MAX: 6.8) | 13900K: 6.03 (MIN: 5.96 / MAX: 7.7) | 13900K R: 11.43 (MIN: 6.05 / MAX: 1070.41)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
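
Sysbench's CPU workload computes prime numbers and reports events per second; a representative invocation (thread count matched here to the i5-13600K's 20 threads) would be:

  sysbench cpu --threads=20 run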

Sysbench 1.0.20 — Test: CPU (Events Per Second, More Is Better)
13600K A: 58743.01 | 13900K: 105916.06
(CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 — Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
13600K A: 191.37 | 13900K: 341.34 | 13900K R: 339.51

Neural Magic DeepSparse 1.1 — Model: CV Detection, YOLOv5s COCO, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
13600K A: 129.15 | 13900K: 228.34 | 13900K R: 228.57

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.
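
ToyBrot itself is C++; as a language-neutral illustration of the escape-time kernel that each threading backend parallelizes over pixels, a tiny Python sketch (grid size and bounds arbitrary):

  def mandelbrot(width=256, height=256, max_iter=64):
      # Escape-time iteration z = z*z + c over a grid of complex points.
      image = []
      for y in range(height):
          row = []
          for x in range(width):
              c = complex(-2.0 + 3.0 * x / width, -1.5 + 3.0 * y / height)
              z, n = 0j, 0
              while abs(z) <= 2.0 and n < max_iter:
                  z = z * z + c
                  n += 1
              row.append(n)  # iteration count becomes the pixel value
          image.append(row)
      return image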

toyBrot Fractal Generator 2020-11-18 — Implementation: TBB (ms, Fewer Is Better)
13600K A: 26907 | i5-13600K: 25598 | 13900K: 15253 | 13900K R: 16915
(CXX) g++ options: -O3 -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
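
The "built-in benchmarking support" is OpenVINO's bundled benchmark_app tool; a representative invocation (model file hypothetical) looks like:

  benchmark_app -m person-detection.xml -d CPU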

OpenVINO 2022.2.dev — Model: Weld Porosity Detection FP16-INT8, Device: CPU (ms, Fewer Is Better)
13600K A: 12.47 (MIN: 9.36 / MAX: 29.42) | 13900K: 21.92 (MIN: 12.55 / MAX: 43.63)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 — Model: NLP Document Classification, oBERT base uncased on IMDB, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
13600K A: 811.90 | 13900K: 1420.52 | 13900K R: 1410.88

Neural Magic DeepSparse 1.1 — Model: NLP Token Classification, BERT base uncased conll2003, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
13600K A: 813.25 | 13900K: 1420.81 | 13900K R: 1414.79

OpenVINO


OpenVINO 2022.2.dev — Model: Vehicle Detection FP16, Device: CPU (ms, Fewer Is Better)
13600K A: 15.57 (MIN: 11.78 / MAX: 28.06) | 13900K: 27.19 (MIN: 14.86 / MAX: 50.77)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 — Model: CV Classification, ResNet-50 ImageNet, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
13600K A: 64.71 | 13900K: 111.47 | 13900K R: 111.28

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile is building the OpenMP / CPU threaded version for processor benchmarking and not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 — Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
13600K A: 3.814 (MIN: 3.74 / MAX: 10.16) | 13900K: 2.218 (MIN: 2.18 / MAX: 5.03) | 13900K R: 2.325 (MIN: 2.13 / MAX: 26.22)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

toyBrot Fractal Generator


toyBrot Fractal Generator 2020-11-18 — Implementation: OpenMP (ms, Fewer Is Better)
13600K A: 27832 | i5-13600K: 27822 | 13900K: 16504 | 13900K R: 16531
(CXX) g++ options: -O3 -lpthread

toyBrot Fractal Generator 2020-11-18 — Implementation: C++ Threads (ms, Fewer Is Better)
13600K A: 25488 | i5-13600K: 25427 | 13900K: 15293 | 13900K R: 15256
(CXX) g++ options: -O3 -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 — Model: NLP Text Classification, BERT base uncased SST2, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
13600K A: 192.12 | 13900K: 317.81 | 13900K R: 320.19

toyBrot Fractal Generator


toyBrot Fractal Generator 2020-11-18 — Implementation: C++ Tasks (ms, Fewer Is Better)
13600K A: 25721 | i5-13600K: 25596 | 13900K: 15539 | 13900K R: 15541
(CXX) g++ options: -O3 -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
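
For a feel of the knobs these results vary (compression level and multi-threading), a small sketch with the python-zstandard bindings, which wrap the same libzstd; long-mode window settings are omitted here:

  import zstandard as zstd

  # Sample payload per the test description; any large file works for a rough check.
  data = open("FreeBSD-12.2-RELEASE-amd64-memstick.img", "rb").read()

  cctx = zstd.ZstdCompressor(level=3, threads=-1)  # threads=-1: use all logical CPUs
  compressed = cctx.compress(data)
  assert zstd.ZstdDecompressor().decompress(compressed) == data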

Zstd Compression 1.5.0 — Compression Level: 3, Long Mode, Compression Speed (MB/s, More Is Better)
13600K A: 938.9 | i5-13600K: 1044.7 | 13900K: 1531.2 | 13900K R: 1548.5
(CC) gcc options: -O3 -pthread -lz -llzma

OpenVINO


OpenVINO 2022.2.dev — Model: Weld Porosity Detection FP16, Device: CPU (ms, Fewer Is Better)
13600K A: 47.46 (MIN: 24.39 / MAX: 65.06) | 13900K: 77.00 (MIN: 43.7 / MAX: 99.09)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

NCNN


NCNN 20220729 — Target: CPU, Model: regnety_400m (ms, Fewer Is Better)
13600K A: 7.14 (MIN: 6.93 / MAX: 7.88) | 13900K: 8.01 (MIN: 7.87 / MAX: 9.45) | 13900K R: 11.55 (MIN: 7.99 / MAX: 760.89)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO


OpenVINO 2022.2.dev — Model: Machine Translation EN To DE FP16, Device: CPU (ms, Fewer Is Better)
13600K A: 112.40 (MIN: 91.63 / MAX: 160.94) | 13900K: 177.35 (MIN: 136.29 / MAX: 238.95)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev — Model: Age Gender Recognition Retail 0013 FP16, Device: CPU (ms, Fewer Is Better)
13600K A: 1.50 (MIN: 1.1 / MAX: 3.03) | 13900K: 2.36 (MIN: 1.3 / MAX: 4)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev — Model: Face Detection FP16-INT8, Device: CPU (ms, Fewer Is Better)
13600K A: 387.90 (MIN: 290.34 / MAX: 782.61) | 13900K: 592.96 (MIN: 334.92 / MAX: 1081.53)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev — Model: Person Detection FP32, Device: CPU (ms, Fewer Is Better)
13600K A: 2045.68 (MIN: 1719.02 / MAX: 2706.1) | 13900K: 3020.10 (MIN: 2516.37 / MAX: 3733.4)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 — Scene: 5, Resolution: 4K (FPS, More Is Better)
13600K A: 0.47 | i5-13600K: 0.47 | 13900K: 0.68 | 13900K R: 0.68
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Mobile Neural Network


Mobile Neural Network 2.1 — Model: MobileNetV2_224 (ms, Fewer Is Better)
13600K A: 2.023 (MIN: 1.98 / MAX: 7.95) | 13900K: 2.924 (MIN: 2.82 / MAX: 5.77) | 13900K R: 2.861 (MIN: 2.75 / MAX: 26.67)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenVINO


OpenVINO 2022.2.dev — Model: Person Vehicle Bike Detection FP16, Device: CPU (ms, Fewer Is Better)
13600K A: 9.36 (MIN: 7.85 / MAX: 16.55) | 13900K: 13.51 (MIN: 9.52 / MAX: 25.42)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

ONNX Runtime


ONNX Runtime 1.11 — Model: ArcFace ResNet-100, Device: CPU, Executor: Parallel (Inferences Per Minute, More Is Better)
13600K A: 345 | 13900K: 497
(CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

spaCy

The spaCy library is an open-source, Python-based solution for advanced natural language processing (NLP) and a leading library in that space. This test profile times spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
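
A minimal sketch of the pipeline being timed below (the en_core_web_trf model must first be downloaded, e.g. via python -m spacy download en_core_web_trf):

  import spacy

  nlp = spacy.load("en_core_web_trf")  # transformer pipeline used by this test
  doc = nlp("The quick brown fox jumps over the lazy dog.")
  print([(token.text, token.pos_) for token in doc])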

spaCy 3.4.1 — Model: en_core_web_trf (tokens/sec, More Is Better)
13600K A: 1557 | 13900K: 2239 | 13900K R: 2236

OpenVINO


OpenVINO 2022.2.dev — Model: Person Detection FP16, Device: CPU (ms, Fewer Is Better)
13600K A: 2047.62 (MIN: 1754.76 / MAX: 2684.24) | 13900K: 2927.80 (MIN: 2449.49 / MAX: 3630.7)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 — Model: NLP Text Classification, DistilBERT mnli, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
13600K A: 96.68 | 13900K: 137.78 | 13900K R: 137.70

Mobile Neural Network


Mobile Neural Network 2.1 — Model: nasnet (ms, Fewer Is Better)
13600K A: 7.052 (MIN: 6.87 / MAX: 13.4) | 13900K: 9.579 (MIN: 9.06 / MAX: 12.8) | 13900K R: 10.007 (MIN: 9.22 / MAX: 33.96)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 — Binary: Pathtracer, Model: Asian Dragon Obj (Frames Per Second, More Is Better)
13600K A: 18.40 (MIN: 17.47 / MAX: 18.88) | i5-13600K: 18.75 (MIN: 17.66 / MAX: 19.42) | 13900K: 26.03 (MIN: 23.96 / MAX: 28.42) | 13900K R: 26.08 (MIN: 24.12 / MAX: 28.46)

Embree 3.13 — Binary: Pathtracer, Model: Asian Dragon (Frames Per Second, More Is Better)
13600K A: 20.43 (MIN: 19.38 / MAX: 20.88) | i5-13600K: 20.18 (MIN: 19.07 / MAX: 20.82) | 13900K: 28.37 (MIN: 25.94 / MAX: 30.73) | 13900K R: 28.34 (MIN: 26.01 / MAX: 30.55)

OpenVINO


OpenVINO 2022.2.dev — Model: Age Gender Recognition Retail 0013 FP16-INT8, Device: CPU (ms, Fewer Is Better)
13600K A: 0.67 (MIN: 0.51 / MAX: 3.31) | 13900K: 0.94 (MIN: 0.52 / MAX: 3.79)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

QuadRay


QuadRay 2022.05.25 — Scene: 5, Resolution: 1080p (FPS, More Is Better)
13600K A: 1.93 | i5-13600K: 1.92 | 13900K: 2.68 | 13900K R: 2.68
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

OpenVINO


OpenVINO 2022.2.dev — Model: Vehicle Detection FP16-INT8, Device: CPU (ms, Fewer Is Better)
13600K A: 7.98 (MIN: 6.36 / MAX: 17.98) | 13900K: 11.06 (MIN: 6.09 / MAX: 56.79)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
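
The python_startup result below times how quickly a bare interpreter launches and exits; conceptually it reduces to something like this (pyperformance's own harness is more careful about warm-up and statistics):

  import subprocess, sys, time

  runs = 20
  start = time.perf_counter()
  for _ in range(runs):
      subprocess.run([sys.executable, "-c", "pass"], check=True)
  print(f"{(time.perf_counter() - start) / runs * 1000:.2f} ms per startup")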

PyPerformance 1.0.0 — Benchmark: python_startup (Milliseconds, Fewer Is Better)
13600K A: 6.83 | 13900K: 4.94

NCNN


NCNN 20220729 — Target: CPU, Model: vision_transformer (ms, Fewer Is Better)
13600K A: 168.17 (MIN: 162.92 / MAX: 453.72) | 13900K: 122.10 (MIN: 120.07 / MAX: 169.61) | 13900K R: 127.57 (MIN: 120.18 / MAX: 251.2)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Embree


Embree 3.13 — Binary: Pathtracer ISPC, Model: Asian Dragon Obj (Frames Per Second, More Is Better)
13600K A: 20.09 (MIN: 18.73 / MAX: 20.73) | i5-13600K: 20.59 (MIN: 19.51 / MAX: 21.33) | 13900K: 27.34 (MIN: 25.47 / MAX: 29.51) | 13900K R: 27.50 (MIN: 25.51 / MAX: 29.58)

QuadRay


QuadRay 2022.05.25 — Scene: 1, Resolution: 4K (FPS, More Is Better)
13600K A: 6.73 | i5-13600K: 6.79 | 13900K: 9.16 | 13900K R: 9.16
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.
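
The tf_cnn_benchmarks harness reports images/sec for a given model and batch size; a loose Keras-based approximation of that throughput measurement (synthetic data, random weights; AlexNet is not in keras.applications, so ResNet50 stands in here):

  import time
  import numpy as np
  import tensorflow as tf

  model = tf.keras.applications.ResNet50(weights=None)  # random weights; only speed matters
  batch = np.random.rand(64, 224, 224, 3).astype(np.float32)

  model.predict(batch, verbose=0)  # warm-up
  runs = 10
  start = time.perf_counter()
  for _ in range(runs):
      model.predict(batch, verbose=0)
  print(f"{runs * len(batch) / (time.perf_counter() - start):.1f} images/sec")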

TensorFlow 2.10 — Device: CPU, Batch Size: 64, Model: AlexNet (images/sec, More Is Better)
13600K A: 150.15 | 13900K: 204.28 | 13900K R: 204.24

QuadRay


QuadRay 2022.05.25 — Scene: 2, Resolution: 4K (FPS, More Is Better)
13600K A: 1.98 | i5-13600K: 1.97 | 13900K: 2.68 | 13900K R: 2.67
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 — Camera: 2, Resolution: 1080p, Samples Per Pixel: 32, Renderer: Path Tracer (ms, Fewer Is Better)
13600K A: 63586 | 13900K: 47147 | 13900K R: 46746
(CXX) g++ options: -O3 -ldl

Embree


Embree 3.13 — Binary: Pathtracer ISPC, Model: Asian Dragon (Frames Per Second, More Is Better)
13600K A: 23.09 (MIN: 21.7 / MAX: 23.76) | i5-13600K: 22.63 (MIN: 21.12 / MAX: 23.45) | 13900K: 30.51 (MIN: 28.47 / MAX: 32.55) | 13900K R: 30.64 (MIN: 28.62 / MAX: 32.96)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
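
Renaissance ships as a single JAR and individual workloads are selected by name; the Finagle HTTP test below would be invoked along the lines of (the JAR filename varies by release):

  java -jar renaissance-gpl-0.14.0.jar finagle-http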

Renaissance 0.14 — Test: Finagle HTTP Requests (ms, Fewer Is Better)
13600K A: 1895.8 (MIN: 1759.76 / MAX: 2097.29) | i5-13600K: 1918.5 (MIN: 1787.45 / MAX: 2020.03) | 13900K: 2518.1 (MIN: 2345.07 / MAX: 2554.71) | 13900K R: 2561.8 (MIN: 2392.59 / MAX: 2600.24)

Embree


Embree 3.13 — Binary: Pathtracer, Model: Crown (Frames Per Second, More Is Better)
i5-13600K: 17.19 (MIN: 16.19 / MAX: 18.01) | 13900K: 23.22 (MIN: 22.43 / MAX: 24.28) | 13900K R: 22.90 (MIN: 22.12 / MAX: 24.12)

Binary: Pathtracer - Model: Crown

13600K A: The test quit with a non-zero exit status.

OSPRay Studio


OSPRay Studio 0.11 — Camera: 1, Resolution: 1080p, Samples Per Pixel: 32, Renderer: Path Tracer (ms, Fewer Is Better)
13600K A: 62193 | 13900K: 46333 | 13900K R: 46218
(CXX) g++ options: -O3 -ldl

QuadRay


QuadRay 2022.05.25 — Scene: 3, Resolution: 4K (FPS, More Is Better)
13600K A: 1.64 | i5-13600K: 1.64 | 13900K: 2.20 | 13900K R: 2.19
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

OSPRay Studio


OSPRay Studio 0.11 — Camera: 3, Resolution: 1080p, Samples Per Pixel: 32, Renderer: Path Tracer (ms, Fewer Is Better)
13600K A: 73582 | 13900K: 54855 | 13900K R: 54973
(CXX) g++ options: -O3 -ldl

TensorFlow


TensorFlow 2.10 — Device: CPU, Batch Size: 32, Model: AlexNet (images/sec, More Is Better)
13600K A: 133.92 | 13900K: 179.09 | 13900K R: 179.44

QuadRay


QuadRay 2022.05.25 — Scene: 3, Resolution: 1080p (FPS, More Is Better)
13600K A: 6.41 | i5-13600K: 6.38 | 13900K: 8.51 | 13900K R: 8.48
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 — Backend: BLAS (Nodes Per Second, More Is Better)
13600K A: 1073 | i5-13600K: 1087 | 13900K: 815 | 13900K R: 861
(CXX) g++ options: -flto -pthread

QuadRay


QuadRay 2022.05.25 — Scene: 1, Resolution: 1080p (FPS, More Is Better)
13600K A: 26.15 | i5-13600K: 25.96 | 13900K: 34.54 | 13900K R: 34.37
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 — Encoder Mode: Speed 6 Realtime, Input: Bosphorus 1080p (Frames Per Second, More Is Better)
13600K A: 70.49 | i5-13600K: 71.99 | 13900K: 93.76 | 13900K R: 87.81
(CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 — Primate Phylogeny Analysis (Seconds, Fewer Is Better)
13600K A: 111.98 | i5-13600K: 112.37 | 13900K: 146.87 | 13900K R: 148.88
(CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm -lreadline

Mobile Neural Network


Mobile Neural Network 2.1 — Model: inception-v3 (ms, Fewer Is Better)
13600K A: 20.55 (MIN: 19.85 / MAX: 33.62) | 13900K: 26.28 (MIN: 25.74 / MAX: 34.85) | 13900K R: 27.20 (MIN: 25.48 / MAX: 84.52)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

QuadRay


QuadRay 2022.05.25 — Scene: 2, Resolution: 1080p (FPS, More Is Better)
13600K A: 7.65 | i5-13600K: 7.64 | 13900K: 10.09 | 13900K R: 10.06
(CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

ONNX Runtime


ONNX Runtime 1.11 — Model: bertsquad-12, Device: CPU, Executor: Standard (Inferences Per Minute, More Is Better)
13600K A: 972 | 13900K: 738
(CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Embree


Embree 3.13 — Binary: Pathtracer ISPC, Model: Crown (Frames Per Second, More Is Better)
13600K A: 18.39 (MIN: 17.16 / MAX: 19.46) | i5-13600K: 18.65 (MIN: 17.42 / MAX: 19.66) | 13900K: 24.15 (MIN: 23.21 / MAX: 25.4) | 13900K R: 23.88 (MIN: 22.95 / MAX: 25.18)

NCNN


NCNN 20220729 — Target: CPU, Model: FastestDet (ms, Fewer Is Better)
13600K A: 2.96 (MIN: 2.88 / MAX: 3.35) | 13900K: 3.84 (MIN: 3.79 / MAX: 5.75) | 13900K R: 2.93 (MIN: 2.88 / MAX: 3.62)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.
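
The libjxl reference tools pair an encoder and a decoder; the decode path measured below corresponds to invocations of the form (filenames illustrative):

  cjxl input.png image.jxl -q 90   # encode side (covered by pts/jpegxl)
  djxl image.jxl decoded.png       # decode to PNG (this test)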

JPEG XL Decoding libjxl 0.7 — CPU Threads: All (MP/s, More Is Better)
13600K A: 326.03 | i5-13600K: 330.95 | 13900K: 426.96 | 13900K R: 424.09

TensorFlow


TensorFlow 2.10 — Device: CPU, Batch Size: 16, Model: GoogLeNet (images/sec, More Is Better)
13600K A: 70.89 | 13900K: 92.66 | 13900K R: 92.71

OpenVINO


OpenVINO 2022.2.dev — Model: Face Detection FP16, Device: CPU (FPS, More Is Better)
13600K A: 3.18 | 13900K: 4.13
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

TensorFlow


TensorFlow 2.10 — Device: CPU, Batch Size: 256, Model: AlexNet (images/sec, More Is Better)
13600K A: 170.89 | 13900K: 221.30 | 13900K R: 220.57

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 2204
  13900K: 1707
  13900K R: 1723
1. (CXX) g++ options: -O3 -ldl

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Update Random (Op/s, More Is Better)
  13600K A: 536712
  13900K: 690788
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  13600K A: 2.87 (MIN: 2.78 / MAX: 3.5)
  13900K: 2.67 (MIN: 2.63 / MAX: 3.05)
  13900K R: 2.23 (MIN: 2.19 / MAX: 2.92)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 7382
  13900K: 5777
  13900K R: 5740
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 29951
  13900K: 23338
  13900K R: 23676
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 35032
  13900K: 27345
  13900K R: 27523
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 1869
  13900K: 1476
  13900K R: 1459
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 8608
  13900K: 6836
  13900K R: 6733
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 241113
  13900K: 188642
  13900K R: 190671
1. (CXX) g++ options: -O3 -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better)
  13600K A: 24.58
  13900K: 31.36
  13900K R: 31.28

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Fast (MT/s, More Is Better)
  13600K A: 205.19
  13900K: 261.48
  13900K R: 259.70
1. (CXX) g++ options: -O3 -flto -pthread
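The Fast/Medium/Thorough/Exhaustive presets in these results map to astcenc's quality flags; a small hedged wrapper timing each preset from Python (binary name, block size, and image files are placeholders):

    import subprocess
    import time

    for preset in ("-fast", "-medium", "-thorough", "-exhaustive"):
        start = time.perf_counter()
        subprocess.run(["astcenc", "-cl", "input.png", "out.astc",
                        "6x6", preset], check=True)
        print(preset, f"{time.perf_counter() - start:.2f}s")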

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, More Is Better)
  13600K A: 70.82
  13900K: 90.18
  13900K R: 89.89

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 281383
  13900K: 222988
  13900K R: 221224
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 1836
  13900K: 1458
  13900K R: 1447
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 7232
  13900K: 5766
  13900K R: 5705
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 142647
  13900K: 112986
  13900K R: 112658
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 29216
  13900K: 23080
  13900K R: 23193
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 122275
  13900K: 96621
  13900K R: 96762
1. (CXX) g++ options: -O3 -ldl

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.8.1 - Test: Masskrug - Acceleration: CPU-only (Seconds, Fewer Is Better)
  13600K A: 2.896
  13900K: 2.289
  13900K R: 2.298

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
  13600K A: 55.3
  i5-13600K: 54.6
  13900K: 68.9
  13900K R: 67.9
1. (CC) gcc options: -O3 -pthread -lz -llzma
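A rough Python equivalent of this compression-speed measurement using the python-zstandard bindings; the image path is a placeholder, and the PTS profile uses the zstd tooling rather than these bindings:

    import time
    import zstandard as zstd

    data = open("FreeBSD-12.2-RELEASE-amd64-memstick.img", "rb").read()
    cctx = zstd.ZstdCompressor(level=19)
    start = time.perf_counter()
    compressed = cctx.compress(data)
    elapsed = time.perf_counter() - start
    print(f"{len(data) / elapsed / 1e6:.1f} MB/s, "
          f"ratio {len(data) / len(compressed):.2f}")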

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  13600K A: 279.48
  i5-13600K: 278.80
  13900K: 351.31
  13900K R: 349.37
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 236110
  13900K: 189602
  13900K R: 187779
1. (CXX) g++ options: -O3 -ldl

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, More Is Better)
  13600K A: 1.0302
  13900K: 1.2845
  13900K R: 1.2915
1. (CXX) g++ options: -O3 -flto -pthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, More Is Better)
  13600K A: 1362
  13900K: 1707

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  13600K A: 119685
  13900K: 95889
  13900K R: 95632
1. (CXX) g++ options: -O3 -ldl

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  13600K A: 4.133 (MIN: 4.06 / MAX: 9.73)
  13900K: 5.165 (MIN: 5 / MAX: 8.1)
  13900K R: 5.158 (MIN: 4.89 / MAX: 30.5)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  13600K A: 28.52 (MIN: 26.42 / MAX: 321.9)
  13900K: 22.86 (MIN: 22.49 / MAX: 25.47)
  13900K R: 22.89 (MIN: 22.33 / MAX: 40.66)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better)
  13600K A: 1544.52 (MIN: 1481.77 / MAX: 1650.78)
  13900K: 1926.16 (MIN: 1766.06 / MAX: 2069.19)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, More Is Better)
  13600K A: 24.15
  13900K: 30.01
  13900K R: 29.82

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  13600K A: 332.97
  i5-13600K: 341.60
  13900K: 413.76
  13900K R: 384.75
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  13600K A: 187.20
  i5-13600K: 184.31
  13900K: 153.07
  13900K R: 150.65
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
  13600K A: 24.85
  i5-13600K: 25.68
  13900K: 30.83
  13900K R: 30.83
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.7 (k/s, More Is Better)
  13600K A: 40912.36
  13900K: 50724.20
  13900K R: 50505.52
1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, More Is Better)
  13600K A: 3672
  13900K: 4550

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read While Writing (Op/s, More Is Better)
  13600K A: 2989367
  13900K: 3700703
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
  13600K A: 3.67901
  i5-13600K: 3.63550
  13900K: 4.48781
  13900K R: 4.48167

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, More Is Better)
  13600K A: 111.70
  13900K: 137.79
  13900K R: 137.61

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
  13600K A: 6.79439
  i5-13600K: 6.77171
  13900K: 8.35096
  13900K R: 8.25976

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, More Is Better)
  13600K A: 186.11
  13900K: 229.33
  13900K R: 227.80

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, More Is Better)
  13600K A: 2310
  13900K: 2843

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
  13600K A: 4.80927
  i5-13600K: 4.79907
  13900K: 5.87049
  13900K R: 5.90448

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 8.0 - Length: 1e12 (Seconds, Fewer Is Better)
  13600K A: 16.74
  13900K: 13.61
  13900K R: 13.69
1. (CXX) g++ options: -O3
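For the idea behind the benchmark, a plain (unsegmented) sieve of Eratosthenes; primesieve itself uses a far more cache-friendly segmented sieve with wheel factorization, which is exactly why it stresses L1/L2 behaviour:

    def count_primes(limit: int) -> int:
        # One byte per candidate; 1 = assumed prime until crossed off.
        is_prime = bytearray([1]) * (limit + 1)
        is_prime[0:2] = b"\x00\x00"
        p = 2
        while p * p <= limit:
            if is_prime[p]:
                # Cross off every multiple of p starting at p*p.
                is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
            p += 1
        return sum(is_prime)

    print(count_primes(10**6))  # 78498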

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8 - Time To Compile (Seconds, Fewer Is Better)
  13600K A: 386.51
  13900K: 314.44
  13900K R: 315.73

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship (FPS, More Is Better)
  13600K A: 4.8
  13900K: 5.9

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, More Is Better)
  13600K A: 167.98
  i5-13600K: 167.00
  13900K: 205.25
  13900K R: 204.81

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Medium (MT/s, More Is Better)
  13600K A: 76.96
  13900K: 94.59
  13900K R: 93.94
1. (CXX) g++ options: -O3 -flto -pthread

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, Fewer Is Better)
  13600K A: 0.963 (MIN: 0.94 / MAX: 1.39)
  13900K: 1.181 (MIN: 1.12 / MAX: 3.08)
  13900K R: 1.183 (MIN: 1.11 / MAX: 17.98)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better)
  13600K A: 6.82369
  i5-13600K: 6.82012
  13900K: 8.36571
  13900K R: 8.32049

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  13600K A: 600
  13900K: 490
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds, Fewer Is Better)
  13600K A: 26166.3
  13900K: 27731.2
  13900K R: 31966.2
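The average-inference-time metric here can be approximated with the TFLite Python interpreter as sketched below; the .tflite path and iteration count are placeholders:

    import time
    import numpy as np
    import tensorflow as tf

    interp = tf.lite.Interpreter(model_path="inception_v4.tflite")
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    interp.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))

    times = []
    for _ in range(20):
        start = time.perf_counter()
        interp.invoke()
        times.append((time.perf_counter() - start) * 1e6)
    print(f"{sum(times) / len(times):.1f} microseconds average")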

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Thorough (MT/s, More Is Better)
  13600K A: 10.13
  13900K: 12.37
  13900K R: 12.23
1. (CXX) g++ options: -O3 -flto -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8 - Compression Speed (MB/s, More Is Better)
  13600K A: 1182.9
  i5-13600K: 1148.2
  13900K: 1401.2
  13900K R: 1219.4
1. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  13600K A: 320.21
  i5-13600K: 313.58
  13900K: 382.22
  13900K R: 378.33
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Blender

Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
  13600K A: 886.55
  13900K: 728.20

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
  13600K A: 3.58917
  i5-13600K: 3.54717
  13900K: 4.31212
  13900K R: 4.31689

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  13600K A: 2.32 (MIN: 2.26 / MAX: 2.82)
  13900K: 2.41 (MIN: 2.36 / MAX: 3.64)
  13900K R: 2.82 (MIN: 2.79 / MAX: 3.45)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  13600K A: 178.37
  i5-13600K: 179.96
  13900K: 150.28
  13900K R: 148.08
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
  13600K A: 20877.33
  13900K: 25289.52
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, Fewer Is Better)
  13600K A: 173.55
  i5-13600K: 173.63
  13900K: 143.63
  13900K R: 145.19

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
  13600K A: 2420.75
  13900K: 2080.66
  13900K R: 2007.81

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
  13600K A: 44.28
  i5-13600K: 44.76
  13900K: 53.38
  13900K R: 51.52
1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  13600K A: 135.50
  i5-13600K: 135.26
  13900K: 158.14
  13900K R: 131.20
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, More Is Better)
  13600K A: 24.46
  13900K: 29.46
  13900K R: 29.42

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  13600K A: 72.26
  13900K: 87.03
  13900K R: 86.86

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, Fewer Is Better)
  13600K A: 17.66 (MIN: 17.37 / MAX: 23.46)
  13900K: 21.22 (MIN: 20.23 / MAX: 32.68)
  13900K R: 21.26 (MIN: 20.75 / MAX: 82.89)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  13600K A: 100.80
  i5-13600K: 102.29
  13900K: 118.33
  13900K R: 98.54
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better)
  13600K A: 2019
  i5-13600K: 2124
  13900K: 1769
  13900K R: 1848

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: SqueezeNet (Microseconds, Fewer Is Better)
  13600K A: 1901.72
  13900K: 2100.04
  13900K R: 2282.79

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  13600K A: 190.53
  i5-13600K: 188.71
  13900K: 225.84
  13900K R: 225.40
1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

Blender

Blender 3.3 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
  13600K A: 223.66
  13900K: 187.97
  13900K R: 187.65

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, More Is Better)
  13600K A: 74.19
  13900K: 88.36
  13900K R: 88.22

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, Fewer Is Better)
  13600K A: 494.88
  i5-13600K: 495.03
  13900K: 418.47
  13900K R: 415.74

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  13600K A: 10574
  13900K: 8887
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  13600K A: 16.78
  13900K: 14.13
  13900K R: 14.20

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  13600K A: 59.58
  13900K: 70.77
  13900K R: 70.40

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6 (Seconds, Fewer Is Better)
  13600K A: 4.854
  i5-13600K: 4.963
  13900K: 4.183
  13900K R: 4.285
1. (CXX) g++ options: -O3 -fPIC -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  13600K A: 445.10
  i5-13600K: 438.92
  13900K: 520.38
  13900K R: 517.69
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.8.1 - Test: Server Room - Acceleration: CPU-only (Seconds, Fewer Is Better)
  13600K A: 2.134
  13900K: 1.807
  13900K R: 1.803

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Unix Makefiles (Seconds, Fewer Is Better)
  13600K A: 410.84
  i5-13600K: 426.47
  13900K: 361.30
  13900K R: 362.95

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  13600K A: 150.46
  i5-13600K: 150.29
  13900K: 130.31
  13900K R: 127.65
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  13600K A: 739
  13900K: 870
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better)
  13600K A: 2040
  i5-13600K: 2035
  13900K: 1738
  13900K R: 1792

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  13600K A: 2.59 (MIN: 2.49 / MAX: 3.35)
  13900K: 2.26 (MIN: 2.21 / MAX: 3.53)
  13900K R: 2.21 (MIN: 2.16 / MAX: 2.99)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 0 (Seconds, Fewer Is Better)
  13600K A: 101.86
  i5-13600K: 100.49
  13900K: 87.07
  13900K R: 87.79
1. (CXX) g++ options: -O3 -fPIC -lm

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  13600K A: 313.96
  i5-13600K: 319.60
  13900K: 367.25
  13900K R: 361.31
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
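The preset numbers in these SVT-AV1 results correspond to SvtAv1EncApp's --preset flag; a hedged sketch of timing one encode from Python (the input file and output name are placeholders, and the profile's exact arguments may differ):

    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(["SvtAv1EncApp", "-i", "Bosphorus_1080p.y4m",
                    "--preset", "10", "-b", "out.ivf"], check=True)
    print(f"{time.perf_counter() - start:.2f}s")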

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 15 - Total Time (Nodes Per Second, More Is Better)
  13600K A: 40974724
  i5-13600K: 39889963
  13900K: 46604368
  13900K R: 40134926
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver
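Stockfish reports nodes per second over the UCI protocol, which a small driver can read directly; the binary name, thread count, and search depth below are illustrative:

    import subprocess

    sf = subprocess.Popen(["stockfish"], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, text=True)

    def send(cmd):
        sf.stdin.write(cmd + "\n")
        sf.stdin.flush()

    send("uci")
    send("setoption name Threads value 32")
    send("position startpos")
    send("go depth 25")
    nps = "n/a"
    for line in sf.stdout:
        if line.startswith("info") and " nps " in line:
            nps = line.split(" nps ")[1].split()[0]
        if line.startswith("bestmove"):
            break
    send("quit")
    print("nodes per second:", nps)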

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  13600K A: 54.69
  i5-13600K: 54.46
  13900K: 63.61
  13900K R: 62.88
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better)
  13600K A: 2.645
  13900K: 3.075

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, More Is Better)
  13600K A: 25.15
  13900K: 29.23
  13900K R: 29.18

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better)
  13600K A: 6675.8 (MIN: 5051.51)
  i5-13600K: 6688.6 (MIN: 5113.28 / MAX: 6688.65)
  13900K: 7757.2 (MIN: 6001.17 / MAX: 7757.23)
  13900K R: 7625.1 (MIN: 5855.12)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  13600K A: 11.91 (MIN: 11.62 / MAX: 12.51)
  13900K: 10.25 (MIN: 10.14 / MAX: 11.58)
  13900K R: 10.40 (MIN: 10.2 / MAX: 20.06)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  13600K A: 106.42
  i5-13600K: 108.30
  13900K: 123.34
  13900K R: 123.16
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  13600K A: 163.68
  i5-13600K: 160.51
  13900K: 185.83
  13900K R: 183.49
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  13600K A: 625.88
  13900K: 722.55
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, More Is Better)
  13600K A: 9.825
  i5-13600K: 9.802
  13900K: 11.193
  13900K R: 11.312
1. (CXX) g++ options: -O3 -lm -ldl
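LAMMPS can be driven from its Python module, which is one way such ns/day figures are produced; the input deck name below is a placeholder, not the test profile's actual 20k-atoms deck:

    from lammps import lammps

    lmp = lammps()
    lmp.file("in.20k-atoms")   # placeholder input deck
    lmp.command("run 100")     # ns/day is reported in the LAMMPS log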

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, Fewer Is Better)
  13600K A: 392.84
  i5-13600K: 396.82
  13900K: 344.19
  13900K R: 346.59

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  13600K A: 83.90
  i5-13600K: 84.51
  13900K: 96.68
  13900K R: 96.52
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, More Is Better)
  13600K A: 9.673
  i5-13600K: 9.729
  13900K: 11.126
  13900K R: 11.139
1. (CXX) g++ options: -O3 -lm -ldl

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  13600K A: 555.60
  i5-13600K: 548.57
  13900K: 631.43
  13900K R: 618.47
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, Fewer Is Better)
  13600K A: 2.932 (MIN: 2.86 / MAX: 3.73)
  13900K: 3.373 (MIN: 3.27 / MAX: 5.96)
  13900K R: 3.354 (MIN: 3.18 / MAX: 27.66)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better)
  13600K A: 215.54 (MIN: 17.58 / MAX: 15000)
  13900K: 247.91 (MIN: 19.5 / MAX: 30000)
  13900K R: 244.03 (MIN: 19.19 / MAX: 12000)
1. ClickHouse server version 22.5.4.19 (official build).
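A hedged example of timing a single query from the same 100M-row web-analytics dataset with the clickhouse-driver package; the host and query are assumptions, and the PTS profile runs ClickHouse's own benchmark script rather than this client:

    import time
    from clickhouse_driver import Client

    client = Client(host="localhost")
    start = time.perf_counter()
    client.execute("SELECT COUNT(DISTINCT UserID) FROM hits")
    print(f"{time.perf_counter() - start:.3f}s")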

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better)
  13600K A: 1173.1 (MIN: 1080.27 / MAX: 1197.68)
  i5-13600K: 1226.9 (MIN: 1201.44 / MAX: 1243.4)
  13900K: 1066.8 (MIN: 1025.04 / MAX: 1095.71)
  13900K R: 1078.5 (MIN: 1048.78 / MAX: 1097.69)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  13600K A: 67.54
  i5-13600K: 67.45
  13900K: 75.22
  13900K R: 77.52
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, Fewer Is Better)
  13600K A: 1764.51 (MIN: 1705.47 / MAX: 1870.65)
  13900K: 1543.70 (MIN: 1496.06 / MAX: 1636.76)
  13900K R: 1551.22 (MIN: 1496.08 / MAX: 1635.13)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  13600K A: 66.61
  i5-13600K: 66.16
  13900K: 58.98
  13900K R: 58.43
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  13600K A: 213.14
  i5-13600K: 212.62
  13900K: 241.94
  13900K R: 241.35
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  13600K A: 39.29
  i5-13600K: 39.41
  13900K: 44.68
  13900K R: 39.70
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  13600K A: 145.76
  i5-13600K: 142.67
  13900K: 162.19
  13900K R: 160.80
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Blender

Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
  13600K A: 113.68
  13900K: 100.24
  13900K R: 100.11

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 8.0 - Length: 1e13 (Seconds, Fewer Is Better)
  13600K A: 190.76
  13900K: 168.05
  13900K R: 168.74
1. (CXX) g++ options: -O3

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  13600K A: 180.97 (MIN: 168.23 / MAX: 207.33)
  13900K: 159.65 (MIN: 152.69 / MAX: 177.88)
  13900K R: 159.98 (MIN: 153.41 / MAX: 175.92)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  13600K A: 45.18
  13900K: 39.86
  13900K R: 40.09

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  13600K A: 22.13
  13900K: 25.08
  13900K R: 24.94

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
  13600K A: 37.49
  13900K: 33.16
  13900K R: 33.54
1. (CC) gcc options: -O2 -lz
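Not speedtest1 itself, but a tiny sqlite3 sketch in the same spirit, timing a batch of inserts plus a scan against an in-memory database:

    import sqlite3
    import time

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (a INTEGER, b TEXT)")
    start = time.perf_counter()
    con.executemany("INSERT INTO t VALUES (?, ?)",
                    ((i, f"row {i}") for i in range(100_000)))
    con.commit()
    con.execute("SELECT COUNT(*) FROM t WHERE a % 7 = 0").fetchone()
    print(f"{time.perf_counter() - start:.3f}s")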

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 2 (Seconds, Fewer Is Better)
  13600K A: 47.88
  i5-13600K: 48.07
  13900K: 42.53
  13900K R: 42.80
1. (CXX) g++ options: -O3 -fPIC -lm

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build - Gradle Build: Reactor (Seconds, Fewer Is Better)
  13600K A: 155.85
  i5-13600K: 163.26
  13900K: 153.60
  13900K R: 144.46

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
  13600K A: 143.16 (MIN: 140.38 / MAX: 147.39)
  13900K: 127.00 (MIN: 125.33 / MAX: 130.32)
  13900K R: 129.18 (MIN: 125 / MAX: 136.73)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, More Is Better)
  13600K A: 78.38
  13900K: 88.34
  13900K R: 88.10

Blender

Blender 3.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
  13600K A: 271.14
  13900K: 240.62

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, Fewer Is Better)
  13600K A: 24.0
  13900K: 21.3
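pyperformance runs each workload through pyperf; a self-contained pyperf sketch of the same mechanism, timing a toy string-formatting statement rather than the real django_template workload:

    import pyperf

    runner = pyperf.Runner()
    runner.timeit("format_template",
                  stmt="'<tr><td>%s</td></tr>' % row",
                  setup="row = 'x' * 16")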

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, Fewer Is Better)
  i5-13600K: 2059
  13900K: 1960
  13900K R: 1828

Java Test: Tradesoap

13600K A: The test quit with a non-zero exit status. E: # A fatal error has been detected by the Java Runtime Environment:

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better)
  13600K A: 39.39
  13900K: 35.19
  13900K R: 35.00
1. RawTherapee, version 5.8, command line.

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better)
  13600K A: 1422451
  13900K: 1600324

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better)
  13600K A: 216
  13900K: 192

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  13600K A: 7364
  13900K: 6548
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better)
  13600K A: 128
  13900K: 114

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)
  13600K A: 56.1
  13900K: 50.0

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, Fewer Is Better)
  13600K A: 43.93
  13900K: 49.00
  13900K R: 49.29
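A hedged sketch of the DeepSpeech 0.6 Python API; the model and audio paths are placeholders, the beam width is arbitrary, and the real profile times a roughly three-minute recording:

    import wave
    import numpy as np
    from deepspeech import Model

    ds = Model("output_graph.pbmm", 500)  # placeholder model path, beam width
    with wave.open("recording.wav") as w:
        audio = np.frombuffer(w.readframes(w.getnframes()), np.int16)
    print(ds.stt(audio))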

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  13600K A: 2.39
  13900K: 2.68
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, Fewer Is Better)
  13600K A: 232
  13900K: 207

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Leonardo Phone Case Slim (Seconds, Fewer Is Better)
  13600K A: 9.813
  13900K: 8.758
  13900K R: 8.826
1. OpenSCAD version 2021.01

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, Fewer Is Better)
  13600K A: 5.407
  13900K: 4.898
  13900K R: 4.837
1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm
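Timing the lame CLI on a WAV file is essentially what this profile does; a minimal wrapper, where the file names and bitrate are placeholders:

    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(["lame", "-b", "320", "input.wav", "output.mp3"],
                   check=True)
    print(f"{time.perf_counter() - start:.3f}s")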

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Mini-ITX Case (Seconds, Fewer Is Better)
  13600K A: 24.55
  13900K: 21.99
  13900K R: 22.05
1. OpenSCAD version 2021.01

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance (Nodes Per Second, More Is Better)
  13600K A: 1974114
  i5-13600K: 1974114
  13900K: 2203112
  13900K R: 2194334
1. (CC) gcc options: -O3 -march=native

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better)
  13600K A: 9.72
  13900K: 8.71

Perl Benchmarks

This is a Perl benchmark suite that can be used to compare the relative speed of different versions of Perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks - Test: Pod2html (Seconds, Fewer Is Better)
  13600K A: 0.06179302
  13900K: 0.05538045
  13900K R: 0.05562468

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
  13600K A: 41.47 (MIN: 41.1 / MAX: 42.38)
  13900K: 37.35 (MIN: 36.75 / MAX: 38.65)
  13900K R: 37.19 (MIN: 36.9 / MAX: 38.34)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, Fewer Is Better): 13600K A: 11.92, 13900K: 10.69. 1. (CXX) g++ options: -rdynamic

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program if available and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Projector Mount Swivel (Seconds, Fewer Is Better): 13600K A: 4.547, 13900K: 4.078, 13900K R: 4.080. 1. OpenSCAD version 2021.01

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better): 13600K A: 90.4, 13900K: 81.1

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better): 13600K A: 4.916, 13900K: 4.500, 13900K R: 4.413. 1. (CXX) g++ options: -fvisibility=hidden -logg -lm
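
As with the other audio encoders in this file, the measurement is a single timed CLI encode; a minimal sketch using opus-tools (sample filename illustrative):

    # Hedged sketch: timed WAV-to-Opus encode with opusenc from opus-tools.
    time opusenc sample.wav sample.opus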

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, Fewer Is Better): 13600K A: 1450.81, 13900K: 1396.39, 13900K R: 1553.51

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, Fewer Is Better): 13600K A: 262.82, i5-13600K: 254.41, 13900K: 241.69, 13900K R: 236.27

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better): 13600K A: 86.96, i5-13600K: 84.44, 13900K: 79.48, 13900K R: 78.23. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, More Is Better): 13600K A: 1877, i5-13600K: 1809, 13900K: 1711, 13900K R: 1689. 1. (CXX) g++ options: -flto -pthread

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better): 13600K A: 47844093, i5-13600K: 49053594, 13900K: 52613577, 13900K R: 53144036

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better): 13600K A: 51.4, 13900K: 46.3

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better): 13600K A: 645300.06, i5-13600K: 642432.68, 13900K: 709677.42, 13900K R: 712804.56. 1. (CC) gcc options: -O2 -lrt" -lrt

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better): 13600K A: 519, 13900K: 468

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program if available and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Retro Car (Seconds, Fewer Is Better): 13600K A: 2.524, 13900K: 2.278, 13900K R: 2.276. 1. OpenSCAD version 2021.01

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: blazeface (ms, Fewer Is Better): 13600K A: 1.01 (MIN: 0.98 / MAX: 1.34), 13900K: 1.12 (MIN: 1.09 / MAX: 1.41), 13900K R: 1.10 (MIN: 1.07 / MAX: 1.41). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better): 13600K A: 533.42, 13900K: 591.22. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better): 13600K A: 13.4, 13900K: 12.1

Blender

Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better): 13600K A: 78.65, 13900K: 71.12, 13900K R: 71.17

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better): 13600K A: 233.95 (MIN: 19.19 / MAX: 20000), 13900K: 258.67 (MIN: 20.42 / MAX: 20000), 13900K R: 256.04 (MIN: 19.45 / MAX: 12000). 1. ClickHouse server version 22.5.4.19 (official build).
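
ClickHouse's performance-test recommendations revolve around replaying a fixed query set against a server with the dataset already loaded. A hedged sketch of that style of invocation, assuming a running server and an illustrative queries file:

    # Hedged sketch: replay saved queries through clickhouse-benchmark, which
    # reports per-query latency statistics against the running server.
    clickhouse-benchmark < queries.sql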

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds, Fewer Is Better): 13600K A: 244.53, 13900K: 221.17

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better): 13600K A: 235.86 (MIN: 18.93 / MAX: 30000), 13900K: 258.53 (MIN: 19.7 / MAX: 15000), 13900K R: 260.69 (MIN: 19.6 / MAX: 30000). 1. ClickHouse server version 22.5.4.19 (official build).

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program if available and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Pistol (Seconds, Fewer Is Better): 13600K A: 55.59, 13900K: 50.41, 13900K R: 50.34. 1. OpenSCAD version 2021.01

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.8.1 - Test: Boat - Acceleration: CPU-only (Seconds, Fewer Is Better): 13600K A: 2.875, 13900K: 2.605, 13900K R: 2.621

spaCy

The spaCy library is an open-source, Python-based solution for advanced natural language processing (NLP) and a leading solution in that space. This test profile times spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, More Is Better): 13600K A: 18828, 13900K: 20762, 13900K R: 20568

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better): 13600K A: 172, 13900K: 156

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds, Fewer Is Better): 13600K A: 161.93, i5-13600K: 160.06, 13900K: 147.02, 13900K R: 148.98

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better): 13600K A: 7.342, i5-13600K: 7.280, 13900K: 6.921, 13900K R: 6.669. 1. (CXX) g++ options: -O3 -fPIC -lm
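
The speed and lossless settings used by this profile map directly onto avifenc's command line; a hedged sketch with an illustrative input file:

    # Hedged sketch: JPEG to lossless AVIF at encoder speed 6, mirroring the
    # "Encoder Speed: 6, Lossless" configuration above.
    time avifenc -s 6 --lossless input.jpg output.avif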

FLAC Audio Encoding

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, Fewer Is Better): 13600K A: 12.62, 13900K: 11.46, 13900K R: 11.48. 1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, More Is Better): 13600K A: 80.83, 13900K: 88.89, 13900K R: 88.69

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better): 13600K A: 66.5, 13900K: 60.5

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 3 - Compression Speed (MB/s, More Is Better): 13600K A: 5727.6, i5-13600K: 5893.1, 13900K: 6287.2, 13900K R: 6245.4. 1. (CC) gcc options: -O3 -pthread -lz -llzma
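
zstd ships a built-in benchmark mode that measures compression and decompression speed in-memory, much like this profile; a minimal sketch at level 3:

    # Hedged sketch: zstd's benchmark mode at level 3 with all cores (-T0),
    # reporting MB/s for both compression and decompression of the input file.
    zstd -b3 -T0 FreeBSD-12.2-RELEASE-amd64-memstick.img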

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better): 13600K A: 84.58, i5-13600K: 82.86, 13900K: 78.47, 13900K R: 77.17. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better): 13600K A: 51.6, 13900K: 47.1

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 (MP/s, More Is Better): 13600K A: 62.52, i5-13600K: 63.16, 13900K: 68.39, 13900K R: 68.10

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better): 13600K A: 10.11 (MIN: 9.79 / MAX: 10.85), 13900K: 9.60 (MIN: 9.46 / MAX: 10.87), 13900K R: 10.50 (MIN: 8.72 / MAX: 420.85). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better): 13600K A: 320.78, 13900K: 293.60. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): 13600K A: 14.76, i5-13600K: 14.67, 13900K: 15.89, 13900K R: 16.02. 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, More Is Better): 13600K A: 11.35, i5-13600K: 11.45, 13900K: 12.39, 13900K R: 12.39. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better): 13600K A: 9303.66, 13900K: 10149.22. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better): 13600K A: 132.88, 13900K: 121.88

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, More Is Better): 13600K A: 11.66, i5-13600K: 11.75, 13900K: 12.67, 13900K R: 12.71. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
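
The libjxl encode configurations here correspond to simple invocations of the reference cjxl encoder; a hedged sketch of a quality-80 PNG encode (input filename illustrative):

    # Hedged sketch: timed PNG to JPEG XL encode at quality 80 with cjxl.
    time cjxl -q 80 input.png output.jxl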

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, Fewer Is Better): 13600K A: 419.9 (MIN: 381.62 / MAX: 498.68), i5-13600K: 421.8 (MIN: 383.07 / MAX: 499.39), 13900K: 390.4 (MIN: 361.83 / MAX: 476.17), 13900K R: 387.1 (MIN: 360.05 / MAX: 452.67)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): 13600K A: 16.88, i5-13600K: 16.65, 13900K: 18.13, 13900K R: 18.14. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 (MP/s, More Is Better): 13600K A: 11.51, i5-13600K: 11.59, 13900K: 12.54, 13900K R: 12.53. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 90 (MP/s, More Is Better): 13600K A: 11.18, i5-13600K: 11.26, 13900K: 12.14, 13900K R: 12.17. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better): 13600K A: 0.90061, i5-13600K: 0.89989, 13900K: 0.82764, 13900K R: 0.83002

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better): 13600K A: 2.39, 13900K: 2.60. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): 13600K A: 9.06, i5-13600K: 9.02, 13900K: 9.81, 13900K R: 9.77. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, Fewer Is Better): 13600K A: 8.16 (MIN: 7.91 / MAX: 8.94), 13900K: 8.12 (MIN: 8.02 / MAX: 11.48), 13900K R: 7.51 (MIN: 7.2 / MAX: 46.15). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better): 13600K A: 51.0, i5-13600K: 49.8, 13900K: 47.0, 13900K R: 47.5. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, More Is Better): 13600K A: 114117881, 13900K: 105171919. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. The sample scene used is the Teapot scene ray-traced to 8K x 8K with 32 samples. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99.2 - Total Time (Seconds, Fewer Is Better): 13600K A: 81.36, 13900K: 87.91, 13900K R: 88.24. 1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: googlenet (ms, Fewer Is Better): 13600K A: 7.65 (MIN: 7.36 / MAX: 15.72), 13900K: 7.51 (MIN: 7.42 / MAX: 9.14), 13900K R: 7.06 (MIN: 6.92 / MAX: 8.31). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better): 13600K A: 21.31, 13900K: 23.07, 13900K R: 22.58

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, Fewer Is Better): 13600K A: 76.29, i5-13600K: 76.60, 13900K: 70.76, 13900K R: 72.42

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better): 13600K A: 81.84, i5-13600K: 81.56, 13900K: 88.26, 13900K R: 86.34. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
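
The x265 test is a CPU encode of a sample clip through the x265 CLI; a hedged sketch, with an illustrative Y4M input name:

    # Hedged sketch: timed H.265/HEVC software encode of a 1080p Y4M clip.
    time x265 --input Bosphorus_1920x1080.y4m --output out.hevc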

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark ALS (ms, Fewer Is Better): 13600K A: 2118.3 (MIN: 2027.89 / MAX: 2219.93), i5-13600K: 2104.0 (MIN: 2042.77 / MAX: 2160.8), 13900K: 2240.8 (MIN: 2179.9 / MAX: 2313), 13900K R: 2271.9 (MIN: 2209.75 / MAX: 2350.07)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: alexnet (ms, Fewer Is Better): 13600K A: 4.45 (MIN: 4.32 / MAX: 5.23), 13900K: 4.63 (MIN: 4.56 / MAX: 7.22), 13900K R: 4.80 (MIN: 4.72 / MAX: 6.19). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13600K A: 33.53, 13900K: 31.11, 13900K R: 31.15

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13600K A: 29.82, 13900K: 32.14, 13900K R: 32.10

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty (ms, Fewer Is Better): 13600K A: 449.5 (MIN: 382.14 / MAX: 928.71), i5-13600K: 457.4 (MIN: 386.83 / MAX: 876.54), 13900K: 424.7 (MIN: 357.3 / MAX: 748.16), 13900K R: 427.2 (MIN: 357.58 / MAX: 745.84)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 4K (Frames Per Second, More Is Better): 13600K A: 3.76, i5-13600K: 3.73, 13900K: 4.01. 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Tuning: 1 - Input: Bosphorus 4K

13900K R: The test quit with a non-zero exit status. E: height not found in y4m header

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, Fewer Is Better): 13600K A: 195.23, 13900K: 181.98, 13900K R: 182.80
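
The optimized configuration being timed corresponds to a configure-and-make cycle roughly like the following sketch, run from a CPython source tree:

    # Hedged sketch: release-style CPython build with PGO and LTO enabled,
    # approximating the "PGO + LTO Optimized" configuration above.
    ./configure --enable-optimizations --with-lto
    time make -j"$(nproc)"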

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better): 13600K A: 0.30, i5-13600K: 0.29, 13900K: 0.31, 13900K R: 0.31. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): 13600K A: 21.30, i5-13600K: 21.34, 13900K: 22.74, 13900K R: 22.67. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: In-Memory Database Shootout (ms, Fewer Is Better): 13600K A: 2076.3 (MIN: 1870.25 / MAX: 2410.84), i5-13600K: 2134.6 (MIN: 1899.08 / MAX: 2195.05), 13900K: 2001.4 (MIN: 1805.63 / MAX: 2241.18), 13900K R: 2011.3 (MIN: 1818.96 / MAX: 2312.63)

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, Fewer Is Better): 13600K A: 125.70, 13900K: 133.91

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): 13600K A: 7.151, i5-13600K: 7.145, 13900K: 7.607, 13900K R: 7.597. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark Bayes (ms, Fewer Is Better): 13600K A: 896.8 (MIN: 656.87 / MAX: 896.82), i5-13600K: 901.6 (MIN: 652.2 / MAX: 901.64), 13900K: 854.3 (MIN: 627.97 / MAX: 854.34), 13900K R: 847.0 (MIN: 629.37 / MAX: 847.01)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Build: defconfig (Seconds, Fewer Is Better): 13600K A: 61.89, i5-13600K: 61.53, 13900K: 58.31, 13900K R: 58.25
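
The defconfig build being timed is essentially the stock kernel build flow; a minimal sketch from inside a kernel source tree:

    # Hedged sketch: timed defconfig kernel build using all available cores.
    make defconfig
    time make -j"$(nproc)"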

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, More Is Better): 13600K A: 0.96, i5-13600K: 0.96, 13900K: 1.02, 13900K R: 1.02. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): 13600K A: 54.11, i5-13600K: 53.87, 13900K: 57.16, 13900K R: 57.14. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test (Requests Per Second, More Is Better): 13600K A: 15871, i5-13600K: 15771, 13900K: 16721, 13900K R: 16328

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better): 13600K A: 4931, 13900K: 4653. 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, Fewer Is Better): 13600K A: 31.32, i5-13600K: 31.77, 13900K: 30.01, 13900K R: 30.39

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better): 13600K A: 294.41, 13900K: 311.28. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better): 13600K A: 1231.0, i5-13600K: 1236.7, 13900K: 1296.8, 13900K R: 1292.5. 1. (CC) gcc options: -O3 -pthread -lz -llzma

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, More Is Better): 13600K A: 0.96, i5-13600K: 0.95, 13900K: 1.00, 13900K R: 1.00. 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13600K A: 81.33, 13900K: 77.99, 13900K R: 77.29

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13600K A: 12.29, 13900K: 12.82, 13900K R: 12.93

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, otherwise on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: rotate (Seconds, Fewer Is Better): 13600K A: 10.462, 13900K: 9.943, 13900K R: 10.080

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better): 13600K A: 7.541, 13900K: 7.932

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better): 13600K A: 97, 13900K: 102. 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, Fewer Is Better): 13600K A: 45.58, 13900K: 43.35, 13900K R: 43.62

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s, More Is Better): 13600K A: 4721.1, i5-13600K: 4708.5, 13900K: 4930.7, 13900K R: 4907.1. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read Random Write Random (Op/s, More Is Better): 13600K A: 2621070, 13900K: 2739061. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): 13600K A: 0.90, i5-13600K: 0.89, 13900K: 0.93, 13900K R: 0.93. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better): 13600K A: 2.426, i5-13600K: 2.436, 13900K: 2.535, 13900K R: 2.514. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better): 13600K A: 13.31 (MIN: 12.98 / MAX: 13.95), 13900K: 12.90 (MIN: 12.74 / MAX: 14.22), 13900K R: 12.74 (MIN: 12.47 / MAX: 30.93). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, Fewer Is Better): 13600K A: 1785, i5-13600K: 1778, 13900K: 1710, 13900K R: 1774
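
DaCapo workloads are launched straight from the suite's jar; a hedged sketch for the Tradebeans result above (the jar filename follows the usual upstream naming and may differ locally):

    # Hedged sketch: run the tradebeans workload from the DaCapo 9.12-MR1 jar.
    java -jar dacapo-9.12-MR1-bach.jar tradebeans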

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): 13600K A: 36.53, 13900K: 35.00, 13900K R: 35.11

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better): 13600K A: 12.87, 13900K: 13.41. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better): 13600K A: 507, 13900K: 528. 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 2.3 - Time To Compile (Seconds, Fewer Is Better): 13600K A: 39.97, 13900K: 38.42, 13900K R: 38.80. 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): 13600K A: 2.83 (MIN: 2.74 / MAX: 3.62), 13900K: 2.88 (MIN: 2.84 / MAX: 3.3), 13900K R: 2.77 (MIN: 2.73 / MAX: 4.07). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better): 13600K A: 4849.1, i5-13600K: 4847.6, 13900K: 5032.8, 13900K R: 5010.5. 1. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better): 13600K A: 94.98, i5-13600K: 96.44, 13900K: 96.73, 13900K R: 98.59. 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Build Configuration: Default (Seconds, Fewer Is Better): 13600K A: 14.89, 13900K: 14.36, 13900K R: 14.71

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better): 13600K A: 5157.4, i5-13600K: 5159.3, 13900K: 5349.2, 13900K R: 5321.9. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank (ms, Fewer Is Better): 13600K A: 2152.0 (MIN: 1948.57 / MAX: 2198.38), i5-13600K: 2158.3 (MIN: 1928.09 / MAX: 2193.86), 13900K: 2088.6 (MIN: 1926.76 / MAX: 2153.25), 13900K R: 2082.1 (MIN: 1917.66 / MAX: 2144)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): 13600K A: 54.18, 13900K: 52.40, 13900K R: 52.31

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better): 13600K A: 89, 13900K: 86. 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better): 13600K A: 4.003, i5-13600K: 4.048, 13900K: 3.913, 13900K R: 3.984. 1. (CXX) g++ options: -O3 -fPIC -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): 13600K A: 36.38, 13900K: 37.63, 13900K R: 37.42

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8 - Decompression Speed (MB/s, More Is Better): 13600K A: 5408.0, i5-13600K: 5406.4, 13900K: 5589.6, 13900K R: 5583.4. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, Fewer Is Better): 13600K A: 7979.4 (MIN: 7979.39 / MAX: 8676.9), i5-13600K: 7807.4 (MAX: 8535.5), 13900K: 7757.3 (MAX: 8498.32), 13900K R: 7738.7 (MAX: 8461.7)

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, otherwise on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: auto-levels (Seconds, Fewer Is Better): 13600K A: 11.00, 13900K: 10.74, 13900K R: 10.67

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better): 13600K A: 5021, 13900K: 4875. 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: RAM / Memory (MiB/sec, More Is Better): 13600K A: 21115.29, 13900K: 20515.63. 1. (CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm
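
The RAM / Memory figures come from sysbench's built-in memory sub-test; a minimal sketch (the thread count is illustrative):

    # Hedged sketch: sysbench memory sub-test reporting throughput in MiB/sec.
    sysbench memory --threads=20 run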

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better): 13600K A: 5756.9, i5-13600K: 5746.5, 13900K: 5910.0, 13900K R: 5887.5. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Decompression Speed (MB/s, More Is Better): 13600K A: 5519.2, i5-13600K: 5520.5, 13900K: 5673.2, 13900K R: 5664.4. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, Fewer Is Better): 13600K A: 4396.7 (MAX: 5845.73), i5-13600K: 4473.7 (MAX: 5993.22), 13900K: 4406.8 (MIN: 4406.77 / MAX: 6186.68), 13900K R: 4358.2 (MIN: 4358.19 / MAX: 6116.36)

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better): 13600K A: 30.60, 13900K: 31.25, 13900K R: 31.32

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better): 13600K A: 1116.36, 13900K: 1092.36. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better): 13600K A: 14.36, 13900K: 14.06, 13900K R: 14.23

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, otherwise on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: resize (Seconds, Fewer Is Better): 13600K A: 13.78, 13900K: 13.58, 13900K R: 13.49

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better): 13600K A: 44.39, 13900K: 45.07. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, otherwise on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: unsharp-mask (Seconds, Fewer Is Better): 13600K A: 12.51, 13900K: 12.35, 13900K R: 12.33

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Build: allmodconfig (Seconds, Fewer Is Better): 13600K A: 705.56, i5-13600K: 708.18, 13900K: 698.37, 13900K R: 699.22

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13600K A: 8.1978, 13900K: 8.0876, 13900K R: 8.1464

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13600K A: 121.98, 13900K: 123.64, 13900K R: 122.75

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13900K: 38.43, 13900K R: 38.87

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13900K: 26.02, 13900K R: 25.72

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

13600K A: The test quit with a non-zero exit status.

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 6.4.0 (Seconds, Fewer Is Better): 13600K A: 4.995, 13900K: 4.967, 13900K R: 5.020

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): 13600K A: 8.4939, 13900K: 8.4134, 13900K R: 8.4755

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13600K A: 8.1309, 13900K: 8.0636, 13900K R: 8.0998

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13600K A: 122.99, 13900K: 124.01, 13900K R: 123.46

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 25.0 - Time To Compile (Seconds, Fewer Is Better): 13900K: 63.53, 13900K R: 63.18

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): 13600K A: 8.4456, 13900K: 8.4076, 13900K R: 8.4399

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): 13600K A: 107.82, 13900K: 107.38, 13900K R: 107.73

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Batch Size: 512 - Model: ResNet-50

13600K A: The test quit with a non-zero exit status.

13900K: The test quit with a non-zero exit status.

13900K R: The test quit with a non-zero exit status.

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

13600K A: The test quit with a non-zero exit status. E: # A fatal error has been detected by the Java Runtime Environment:

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

Java Test: Eclipse

13600K A: The test quit with a non-zero exit status.

i5-13600K: The test quit with a non-zero exit status.

13900K: The test quit with a non-zero exit status.

13900K R: The test quit with a non-zero exit status.

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

Hash: MeowHash x86_64 AES-NI

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: t1ha0_aes_avx2 x86_64

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: FarmHash32 x86_64 AVX

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: t1ha2_atonce

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: FarmHash128

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: fasthash32

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: Spooky32

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: SHA3-256

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: wyhash

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found
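Every SMHasher sub-test failed with the same message: the profile's /bin/sh wrapper (./smhasher, line 3) tries to execute a ./SMHasher binary that is not there, which points to the benchmark failing to build rather than to any per-hash problem. A rough manual build and run, assuming the rurban/smhasher upstream (the PTS profile may build a different snapshot):

  git clone https://github.com/rurban/smhasher.git
  cd smhasher && mkdir -p build && cd build
  cmake .. && make                 # should produce the SMHasher binary
  ./SMHasher --test=Speed wyhash   # speed-test one hash, e.g. wyhash

If the cmake step fails here too, the same toolchain issue is probably what broke the PTS-managed build.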

336 Results Shown

OpenRadioss
Perl Benchmarks
TensorFlow Lite
ONNX Runtime
TensorFlow Lite
NCNN
Darktable
NCNN
Sysbench
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream
toyBrot Fractal Generator
OpenVINO
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
OpenVINO
Neural Magic DeepSparse
Mobile Neural Network
toyBrot Fractal Generator:
  OpenMP
  C++ Threads
Neural Magic DeepSparse
toyBrot Fractal Generator
Zstd Compression
OpenVINO
NCNN
OpenVINO:
  Machine Translation EN To DE FP16 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
  Face Detection FP16-INT8 - CPU
  Person Detection FP32 - CPU
QuadRay
Mobile Neural Network
OpenVINO
ONNX Runtime
spaCy
OpenVINO
Neural Magic DeepSparse
Mobile Neural Network
Embree:
  Pathtracer - Asian Dragon Obj
  Pathtracer - Asian Dragon
OpenVINO
QuadRay
OpenVINO
PyPerformance
NCNN
Embree
QuadRay
TensorFlow
QuadRay
OSPRay Studio
Embree
Renaissance
Embree
OSPRay Studio
QuadRay
OSPRay Studio
TensorFlow
QuadRay
LeelaChessZero
QuadRay
AOM AV1
Timed MrBayes Analysis
Mobile Neural Network
QuadRay
ONNX Runtime
Embree
NCNN
JPEG XL Decoding libjxl
TensorFlow
OpenVINO
TensorFlow
OSPRay Studio
Facebook RocksDB
NCNN
OSPRay Studio:
  2 - 4K - 1 - Path Tracer
  2 - 1080p - 16 - Path Tracer
  3 - 1080p - 16 - Path Tracer
  2 - 1080p - 1 - Path Tracer
  3 - 4K - 1 - Path Tracer
  2 - 4K - 32 - Path Tracer
TensorFlow
ASTC Encoder
TensorFlow
OSPRay Studio:
  3 - 4K - 32 - Path Tracer
  1 - 1080p - 1 - Path Tracer
  1 - 4K - 1 - Path Tracer
  3 - 4K - 16 - Path Tracer
  1 - 1080p - 16 - Path Tracer
  2 - 4K - 16 - Path Tracer
Darktable
Zstd Compression
SVT-VP9
OSPRay Studio
ASTC Encoder
AI Benchmark Alpha
OSPRay Studio
Mobile Neural Network
NCNN
OpenVINO
TensorFlow
SVT-VP9
AOM AV1
x265
Aircrack-ng
AI Benchmark Alpha
Facebook RocksDB
OSPRay
TensorFlow
OSPRay
TensorFlow
AI Benchmark Alpha
OSPRay
Primesieve
Timed Node.js Compilation
Natron
OSPRay
ASTC Encoder
Mobile Neural Network
OSPRay
ONNX Runtime
TensorFlow Lite
ASTC Encoder
Zstd Compression
SVT-VP9
Blender
OSPRay
NCNN
AOM AV1
OpenVINO
OpenRadioss
TensorFlow Lite
x264
SVT-HEVC
TensorFlow
Neural Magic DeepSparse
Mobile Neural Network
SVT-VP9
DaCapo Benchmark
TensorFlow Lite
x264
Blender
TensorFlow
OpenRadioss
ONNX Runtime
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
libavif avifenc
SVT-HEVC
Darktable
Timed LLVM Compilation
AOM AV1
ONNX Runtime
DaCapo Benchmark
NCNN
libavif avifenc
SVT-AV1
Stockfish
SVT-AV1
IndigoBench
TensorFlow
Renaissance
NCNN
SVT-AV1:
  Preset 10 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
OpenVINO
LAMMPS Molecular Dynamics Simulator
Timed LLVM Compilation
SVT-VP9
LAMMPS Molecular Dynamics Simulator
SVT-AV1
Mobile Neural Network
ClickHouse
Renaissance
SVT-HEVC
TNN
AOM AV1
SVT-HEVC
AOM AV1
SVT-AV1
Blender
Primesieve
TNN
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
SQLite Speedtest
libavif avifenc
Java Gradle Build
TNN
TensorFlow
Blender
PyPerformance
DaCapo Benchmark
RawTherapee
PHPBench
PyPerformance
ONNX Runtime
PyPerformance:
  go
  crypto_pyaes
DeepSpeech
OpenVINO
PyPerformance
OpenSCAD
LAME MP3 Encoding
OpenSCAD
TSCP
PyPerformance
Perl Benchmarks
TNN
WavPack Audio Encoding
OpenSCAD
PyPerformance
Opus Codec Encoding
TensorFlow Lite
OpenRadioss
AOM AV1
LeelaChessZero
asmFish
PyPerformance
Coremark
PyBench
OpenSCAD
NCNN
OpenVINO
PyPerformance
Blender
ClickHouse
Appleseed
ClickHouse
OpenSCAD
Darktable
spaCy
PyPerformance
OpenRadioss
libavif avifenc
FLAC Audio Encoding
TensorFlow
PyPerformance
Zstd Compression
AOM AV1
PyPerformance
JPEG XL Decoding libjxl
NCNN
OpenVINO
SVT-HEVC
JPEG XL libjxl
OpenVINO
Appleseed
JPEG XL libjxl
Renaissance
AOM AV1
JPEG XL libjxl:
  PNG - 90
  JPEG - 90
NAMD
OpenVINO
AOM AV1
NCNN
Zstd Compression
Facebook RocksDB
Tachyon
NCNN
Node.js V8 Web Tooling Benchmark
Timed Godot Game Engine Compilation
x265
Renaissance
NCNN
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
Renaissance
SVT-HEVC
Timed CPython Compilation
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 4K
  Speed 4 Two-Pass - Bosphorus 1080p
Renaissance
Appleseed
SVT-AV1
Renaissance
Timed Linux Kernel Compilation
JPEG XL libjxl
AOM AV1
Node.js Express HTTP Load Test
ONNX Runtime
Timed Mesa Compilation
OpenVINO
Zstd Compression
JPEG XL libjxl
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
GIMP
IndigoBench
ONNX Runtime
Timed PHP Compilation
Zstd Compression
Facebook RocksDB
AOM AV1
SVT-AV1
NCNN
DaCapo Benchmark
Neural Magic DeepSparse
OpenVINO
ONNX Runtime
Timed Wasmer Compilation
NCNN
Zstd Compression
SVT-VP9
Timed CPython Compilation
Zstd Compression
Renaissance
Neural Magic DeepSparse
ONNX Runtime
libavif avifenc
Neural Magic DeepSparse
Zstd Compression
Renaissance
GIMP
ONNX Runtime
Sysbench
Zstd Compression:
  8, Long Mode - Decompression Speed
  3, Long Mode - Decompression Speed
Renaissance
Y-Cruncher
OpenVINO
Y-Cruncher
GIMP
OpenVINO
GIMP
Timed Linux Kernel Compilation
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    ms/batch
    items/sec
GNU Octave Benchmark
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
Timed Erlang/OTP Compilation
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream