raptor lake extra

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2210201-PTS-RAPTORLA85

Tests in this result file span the following categories: Audio Encoding (4 tests), AV1 (3), Chess Test Suite (4), Timed Code Compilation (9), C/C++ Compiler Tests (19), CPU Massive (29), Creator Workloads (30), Cryptography (2), Database Test Suite (3), Encoding (11), Game Development (3), HPC - High Performance Computing (17), Imaging (6), Java (3), Common Kernel Benchmarks (2), Machine Learning (12), Molecular Dynamics (2), MPI Benchmarks (2), Multi-Core (35), Node.js + NPM Tests (2), NVIDIA GPU Compute (4), Intel oneAPI (4), OpenMPI Tests (3), Productivity (2), Programmer / Developer System Benchmarks (14), Python (2), Raytracing (4), Renderers (8), Scientific Computing (4), Server (7), Server CPU Tests (20), Single-Threaded (7), Video Encoding (7), and Common Workstation Benchmarks (3).


Test runs in this file:
  13600K A:  October 16 2022, test duration 8 Hours, 56 Minutes
  i5-13600K: October 17 2022, test duration 2 Hours, 43 Minutes
  13900K:    October 17 2022, test duration 8 Hours, 19 Minutes
  13900K R:  October 18 2022, test duration 6 Hours, 1 Minute



raptor lake extra - System Details

13600K A / i5-13600K:
  Processor: Intel Core i5-13600K @ 5.10GHz (14 Cores / 20 Threads)
  Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1720 BIOS)
  Disk: 2000GB Samsung SSD 980 PRO 2TB

13900K / 13900K R:
  Processor: Intel Core i9-13900K @ 4.00GHz (24 Cores / 32 Threads)
  Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (2004 BIOS)
  Disk: 2000GB Samsung SSD 980 PRO 2TB + 2000GB

Common to all runs:
  Chipset: Intel Device 7aa7
  Memory: 32GB
  Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
  Audio: Intel Device 7ad0
  Monitor: ASUS VP28U
  Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS: Ubuntu 22.04
  Kernel: 6.0.0-060000rc1daily20220820-generic (x86_64)
  Desktop: GNOME Shell 42.2
  Display Server: X Server 1.21.1.3 + Wayland
  OpenGL: 4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
  Vulkan: 1.3.224
  Compiler: GCC 12.0.1 20220319
  File-System: ext4
  Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details:
  13600K A:  Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x107
  i5-13600K: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x107
  13900K:    Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x108
  13900K R:  Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x108
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; normalized results spanning 100% to 192%) across 13600K A, i5-13600K, 13900K, and 13900K R. Tests shown, ordered by spread: SVT-HEVC, toyBrot Fractal Generator, OpenRadioss, QuadRay, Embree, Timed MrBayes Analysis, OSPRay, LeelaChessZero, JPEG XL Decoding libjxl, x264, SVT-VP9, Timed LLVM Compilation, Stockfish, x265, LAMMPS Molecular Dynamics Simulator, Java Gradle Build, SVT-AV1, DaCapo Benchmark, TSCP, asmFish, libavif avifenc, Coremark, Zstd Compression, NAMD, Timed Godot Game Engine Compilation, JPEG XL libjxl, Node.js Express HTTP Load Test, Timed Mesa Compilation, Timed Linux Kernel Compilation, AOM AV1, Renaissance.
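The overview normalizes each configuration's result per test, and the Phoronix Test Suite can condense such a set into a single figure with its overall geometric mean option. As a reference for how that aggregate is formed (the sample scores below are made up for illustration, not taken from this result file):

```python
import math

def geometric_mean(values):
    """nth root of the product of n values, computed in log space for stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical normalized scores (1.0 = baseline system) across three tests:
scores = [1.23, 1.46, 1.92]
overall = geometric_mean(scores)  # single cross-test figure of merit
```

Unlike an arithmetic mean, the geometric mean is not dominated by tests with large absolute values, which is why it is the conventional aggregate for normalized benchmark ratios.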

[Detailed results table omitted: the raw per-test values for every benchmark in this file across the 13600K A, i5-13600K, 13900K, and 13900K R runs were fused together during extraction and are unreadable here; the recoverable per-test results follow below.]

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss, which was open-sourced in 2022. This finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13, Model: Cell Phone Drop Test (Seconds, Fewer Is Better; OpenBenchmarking.org):
  i5-13600K: 107.65
  13900K R:  99.24
  13900K:    104.57
  13600K A:  14.08

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks, Test: Interpreter (Seconds, Fewer Is Better):
  13900K R:  0.00052042
  13900K:    0.00051403
  13600K A:  0.00280923

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: NASNet Mobile (Microseconds, Fewer Is Better):
  13900K R:  246599.0
  13900K:    61183.5
  13600K A:  196543.0

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: ArcFace ResNet-100, Device: CPU, Executor: Standard (Inferences Per Minute, More Is Better):
  13900K:    534
  13600K A:  1779
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: Inception ResNet V2 (Microseconds, Fewer Is Better):
  13900K R:  591010
  13900K:    327922
  13600K A:  259280

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU, Model: efficientnet-b0 (ms, Fewer Is Better):
  13900K R:  8.56 (MIN: 3.85 / MAX: 1157.46)
  13900K:    4.09 (MIN: 4.04 / MAX: 4.64)
  13600K A:  3.97 (MIN: 3.83 / MAX: 4.81)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.8.1, Test: Server Rack, Acceleration: CPU-only (Seconds, Fewer Is Better):
  13900K R:  0.121
  13900K:    0.235
  13600K A:  0.139

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU, Model: resnet18 (ms, Fewer Is Better):
  13900K R:  11.43 (MIN: 6.05 / MAX: 1070.41)
  13900K:    6.03 (MIN: 5.96 / MAX: 7.7)
  13600K A:  5.89 (MIN: 5.68 / MAX: 6.81)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20, Test: CPU (Events Per Second, More Is Better):
  13900K:    105916.06
  13600K A:  58743.01
  1. (CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm
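Sysbench's "Events Per Second" figure is simply the number of fixed work units completed divided by elapsed wall time; for its CPU test each event is a prime-verification loop. A rough single-threaded Python sketch of the same measurement idea (the workload and parameters here are illustrative, not sysbench's actual LuaJIT implementation):

```python
import time

def is_prime(n):
    """Trial division, the kind of fixed CPU work a benchmark event can use."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def cpu_events_per_second(duration=0.5, max_prime=2000):
    """Run as many fixed work units as fit in `duration` seconds; report the rate."""
    events = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        sum(1 for n in range(2, max_prime) if is_prime(n))  # one "event"
        events += 1
    elapsed = time.perf_counter() - start
    return events / elapsed
```

The real tool additionally runs this across many threads and aggregates, which is why core count and clocks both move the score.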

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  13900K R:  339.51
  13900K:    341.34
  13600K A:  191.37

Neural Magic DeepSparse 1.1, Model: CV Detection, YOLOv5s COCO, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  13900K R:  228.57
  13900K:    228.34
  13600K A:  129.15

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18, Implementation: TBB (ms, Fewer Is Better):
  i5-13600K: 25598
  13900K R:  16915
  13900K:    15253
  13600K A:  26907
  1. (CXX) g++ options: -O3 -lpthread
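All toyBrot backends render the same image and differ only in how rows of the escape-time computation are distributed across workers. A simplified Python sketch of that structure, with ThreadPoolExecutor standing in for the C++ threads/OpenMP/TBB backends (note that CPython threads will not actually speed up this CPU-bound loop; the point is the row-splitting structure):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def escape_time(cx, cy, max_iter=256):
    """Mandelbrot escape-time: iterations until |z| > 2, or max_iter if bounded."""
    zx = zy = 0.0
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i
    return max_iter

def render_row(y, width=64, height=64):
    """One image row over the region x in [-2, 1], y in [-1.5, 1.5]."""
    cy = -1.5 + 3.0 * y / height
    return [escape_time(-2.0 + 3.0 * x / width, cy) for x in range(width)]

def render(width=64, height=64, workers=4):
    # Rows are independent, so they split naturally across workers.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        image = list(pool.map(partial(render_row, width=width, height=height),
                              range(height)))
    return image, (time.perf_counter() - start) * 1000.0  # ms, like toyBrot
```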

OpenVINO

This is a test of the Intel OpenVINO toolkit for neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Weld Porosity Detection FP16-INT8, Device: CPU (ms, Fewer Is Better):
  13900K:    21.92 (MIN: 12.55 / MAX: 43.63)
  13600K A:  12.47 (MIN: 9.36 / MAX: 29.42)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  13900K R:  1410.88
  13900K:    1420.52
  13600K A:  811.90

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  13900K R:  1414.79
  13900K:    1420.81
  13600K A:  813.25

OpenVINO

This is a test of the Intel OpenVINO toolkit for neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Vehicle Detection FP16, Device: CPU (ms, Fewer Is Better):
  13900K:    27.19 (MIN: 14.86 / MAX: 50.77)
  13600K A:  15.57 (MIN: 11.78 / MAX: 28.06)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: CV Classification, ResNet-50 ImageNet, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  13900K R:  111.28
  13900K:    111.47
  13600K A:  64.71

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile is building the OpenMP / CPU threaded version for processor benchmarking and not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1, Model: mobilenet-v1-1.0 (ms, Fewer Is Better):
  13900K R:  2.325 (MIN: 2.13 / MAX: 26.22)
  13900K:    2.218 (MIN: 2.18 / MAX: 5.03)
  13600K A:  3.814 (MIN: 3.74 / MAX: 10.16)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18, Implementation: OpenMP (ms, Fewer Is Better):
  i5-13600K: 27822
  13900K R:  16531
  13900K:    16504
  13600K A:  27832
  1. (CXX) g++ options: -O3 -lpthread

toyBrot Fractal Generator 2020-11-18, Implementation: C++ Threads (ms, Fewer Is Better):
  i5-13600K: 25427
  13900K R:  15256
  13900K:    15293
  13600K A:  25488
  1. (CXX) g++ options: -O3 -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, BERT base uncased SST2, Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better):
  13900K R:  320.19
  13900K:    317.81
  13600K A:  192.12

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18 (ms, Fewer Is Better)
Implementation: C++ Tasks
  i5-13600K: 25596
  13900K R: 15541
  13900K: 15539
  13600K A: 25721
1. (CXX) g++ options: -O3 -lpthread
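The workload behind all three implementations is the same embarrassingly parallel loop: hand each row of the fractal image to a worker and count escape iterations per pixel. A minimal Python sketch of that idea (illustrative only; the real benchmark renders a much larger image in native code, and the function names here are invented for the example):

```python
from concurrent.futures import ThreadPoolExecutor

def mandelbrot_row(y, width=200, height=200, max_iter=100):
    """Escape-iteration counts for one row of the image."""
    row = []
    for x in range(width):
        c = complex(3.5 * x / width - 2.5, 2.0 * y / height - 1.0)
        z = 0j
        n = 0
        while abs(z) <= 2.0 and n < max_iter:
            z = z * z + c
            n += 1
        row.append(n)
    return row

def render(height=200):
    # One task per row, mirroring the C++ threads/tasks variants.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(mandelbrot_row, range(height)))

image = render()
```

The per-row decomposition is why the OpenMP, C++ threads, and C++ tasks variants above land so close together: the work divides cleanly and scales with core count.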

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 (MB/s, More Is Better)
Compression Level: 3, Long Mode - Compression Speed
  i5-13600K: 1044.7
  13900K R: 1548.5
  13900K: 1531.2
  13600K A: 938.9
1. (CC) gcc options: -O3 -pthread -lz -llzma
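The compression-speed figures above are simply bytes processed divided by wall time. A rough sketch of that measurement in Python, using zlib as a loud stand-in since CPython ships no zstd bindings (the real test compresses a FreeBSD disk image with zstd itself):

```python
import os
import time
import zlib

def compression_mb_s(data, level):
    """Compression speed in MB/s: input bytes over elapsed seconds."""
    start = time.perf_counter()
    zlib.compress(data, level)
    return len(data) / (time.perf_counter() - start) / 1e6

# Stand-in payload; the real benchmark uses FreeBSD-12.2-RELEASE-amd64-memstick.img.
payload = os.urandom(2 * 1024 * 1024)
for level in (1, 6, 9):
    print(f"level {level}: {compression_mb_s(payload, level):.1f} MB/s")
```

The same shape of measurement applies at any compression level; higher levels trade MB/s for ratio, which is why the Level 19 results later in this article are an order of magnitude slower than Level 3.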

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev (ms, Fewer Is Better)
Model: Weld Porosity Detection FP16 - Device: CPU
  13900K: 77.00 (MIN: 43.7 / MAX: 99.09)
  13600K A: 47.46 (MIN: 24.39 / MAX: 65.06)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 (ms, Fewer Is Better)
Target: CPU - Model: regnety_400m
  13900K R: 11.55 (MIN: 7.99 / MAX: 760.89)
  13900K: 8.01 (MIN: 7.87 / MAX: 9.45)
  13600K A: 7.14 (MIN: 6.93 / MAX: 7.88)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev (ms, Fewer Is Better)
Model: Machine Translation EN To DE FP16 - Device: CPU
  13900K: 177.35 (MIN: 136.29 / MAX: 238.95)
  13600K A: 112.40 (MIN: 91.63 / MAX: 160.94)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev (ms, Fewer Is Better)
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
  13900K: 2.36 (MIN: 1.3 / MAX: 4)
  13600K A: 1.50 (MIN: 1.1 / MAX: 3.03)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev (ms, Fewer Is Better)
Model: Face Detection FP16-INT8 - Device: CPU
  13900K: 592.96 (MIN: 334.92 / MAX: 1081.53)
  13600K A: 387.90 (MIN: 290.34 / MAX: 782.61)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev (ms, Fewer Is Better)
Model: Person Detection FP32 - Device: CPU
  13900K: 3020.10 (MIN: 2516.37 / MAX: 3733.4)
  13600K A: 2045.68 (MIN: 1719.02 / MAX: 2706.1)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 (FPS, More Is Better)
Scene: 5 - Resolution: 4K
  i5-13600K: 0.47
  13900K R: 0.68
  13900K: 0.68
  13600K A: 0.47
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 (ms, Fewer Is Better)
Model: MobileNetV2_224
  13900K R: 2.861 (MIN: 2.75 / MAX: 26.67)
  13900K: 2.924 (MIN: 2.82 / MAX: 5.77)
  13600K A: 2.023 (MIN: 1.98 / MAX: 7.95)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev (ms, Fewer Is Better)
Model: Person Vehicle Bike Detection FP16 - Device: CPU
  13900K: 13.51 (MIN: 9.52 / MAX: 25.42)
  13600K A: 9.36 (MIN: 7.85 / MAX: 16.55)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 (Inferences Per Minute, More Is Better)
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
  13900K: 497
  13600K A: 345
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
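"Inferences Per Minute" is a throughput rate: completed inference calls scaled to a 60-second window. A generic harness sketching that metric (the lambda below is a stand-in workload, not a real ONNX Runtime session, whose model loading is out of scope here):

```python
import time

def inferences_per_minute(infer, warmup=3, duration_s=1.0):
    """Run `infer` repeatedly for `duration_s` seconds and scale the
    completed-call count to a per-minute rate."""
    for _ in range(warmup):
        infer()  # warm caches before timing
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        infer()
        count += 1
    elapsed = time.perf_counter() - start
    return count * 60.0 / elapsed

# Stand-in workload in place of an ONNX Runtime session's run() call.
rate = inferences_per_minute(lambda: sum(i * i for i in range(1000)))
print(f"{rate:.0f} inferences per minute")
```

Because the metric is a rate over a fixed window, short-lived clock boosts can inflate it less than a single timed run would; the benchmark's repeated calls average that out.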

spaCy

spaCy is a leading open-source Python library for advanced natural language processing (NLP). This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 (tokens/sec, More Is Better)
Model: en_core_web_trf
  13900K R: 2236
  13900K: 2239
  13600K A: 1557

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev (ms, Fewer Is Better)
Model: Person Detection FP16 - Device: CPU
  13900K: 2927.80 (MIN: 2449.49 / MAX: 3630.7)
  13600K A: 2047.62 (MIN: 1754.76 / MAX: 2684.24)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
  13900K R: 137.70
  13900K: 137.78
  13600K A: 96.68

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 (ms, Fewer Is Better)
Model: nasnet
  13900K R: 10.007 (MIN: 9.22 / MAX: 33.96)
  13900K: 9.579 (MIN: 9.06 / MAX: 12.8)
  13600K A: 7.052 (MIN: 6.87 / MAX: 13.4)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 (Frames Per Second, More Is Better)
Binary: Pathtracer - Model: Asian Dragon Obj
  i5-13600K: 18.75 (MIN: 17.66 / MAX: 19.42)
  13900K R: 26.08 (MIN: 24.12 / MAX: 28.46)
  13900K: 26.03 (MIN: 23.96 / MAX: 28.42)
  13600K A: 18.40 (MIN: 17.47 / MAX: 18.88)

Embree 3.13 (Frames Per Second, More Is Better)
Binary: Pathtracer - Model: Asian Dragon
  i5-13600K: 20.18 (MIN: 19.07 / MAX: 20.82)
  13900K R: 28.34 (MIN: 26.01 / MAX: 30.55)
  13900K: 28.37 (MIN: 25.94 / MAX: 30.73)
  13600K A: 20.43 (MIN: 19.38 / MAX: 20.88)
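At the core of a path tracer like the one benchmarked here are primitive intersection kernels, which Embree vectorizes across SIMD lanes. A scalar sketch of the simplest such kernel, ray-sphere intersection (this is just the underlying math, not Embree's API):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Smallest positive t with origin + t*direction on the sphere,
    or None if the ray misses. Solves the quadratic |o + t*d - c|^2 = r^2."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

hit = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)  # hits at t = 4
```

A renderer runs billions of such tests per frame, which is why the FPS numbers track SIMD width and core count so closely.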

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev (ms, Fewer Is Better)
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
  13900K: 0.94 (MIN: 0.52 / MAX: 3.79)
  13600K A: 0.67 (MIN: 0.51 / MAX: 3.31)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 (FPS, More Is Better)
Scene: 5 - Resolution: 1080p
  i5-13600K: 1.92
  13900K R: 2.68
  13900K: 2.68
  13600K A: 1.93
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev (ms, Fewer Is Better)
Model: Vehicle Detection FP16-INT8 - Device: CPU
  13900K: 11.06 (MIN: 6.09 / MAX: 56.79)
  13600K A: 7.98 (MIN: 6.36 / MAX: 17.98)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 (Milliseconds, Fewer Is Better)
Benchmark: python_startup
  13900K: 4.94
  13600K A: 6.83
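The python_startup benchmark measures how long a bare interpreter takes to launch and exit. A simplified sketch of the idea using only the standard library (PyPerformance's own harness applies more careful statistics and calibration):

```python
import statistics
import subprocess
import sys
import time

def python_startup_ms(runs=5):
    """Median wall time, in ms, to launch `python -c pass` and exit."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([sys.executable, "-c", "pass"],
                       check=True, stdout=subprocess.DEVNULL)
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

print(f"median startup: {python_startup_ms():.2f} ms")
```

Startup time is dominated by single-threaded work (exec, import machinery), which is why this test tracks per-core clocks rather than core count.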

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 (ms, Fewer Is Better)
Target: CPU - Model: vision_transformer
  13900K R: 127.57 (MIN: 120.18 / MAX: 251.2)
  13900K: 122.10 (MIN: 120.07 / MAX: 169.61)
  13600K A: 168.17 (MIN: 162.92 / MAX: 453.72)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 (Frames Per Second, More Is Better)
Binary: Pathtracer ISPC - Model: Asian Dragon Obj
  i5-13600K: 20.59 (MIN: 19.51 / MAX: 21.33)
  13900K R: 27.50 (MIN: 25.51 / MAX: 29.58)
  13900K: 27.34 (MIN: 25.47 / MAX: 29.51)
  13600K A: 20.09 (MIN: 18.73 / MAX: 20.73)

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 (FPS, More Is Better)
Scene: 1 - Resolution: 4K
  i5-13600K: 6.79
  13900K R: 9.16
  13900K: 9.16
  13600K A: 6.73
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 (images/sec, More Is Better)
Device: CPU - Batch Size: 64 - Model: AlexNet
  13900K R: 204.24
  13900K: 204.28
  13600K A: 150.15

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 (FPS, More Is Better)
Scene: 2 - Resolution: 4K
  i5-13600K: 1.97
  13900K R: 2.67
  13900K: 2.68
  13600K A: 1.98
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer
  13900K R: 46746
  13900K: 47147
  13600K A: 63586
1. (CXX) g++ options: -O3 -ldl

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 (Frames Per Second, More Is Better)
Binary: Pathtracer ISPC - Model: Asian Dragon
  i5-13600K: 22.63 (MIN: 21.12 / MAX: 23.45)
  13900K R: 30.64 (MIN: 28.62 / MAX: 32.96)
  13900K: 30.51 (MIN: 28.47 / MAX: 32.55)
  13600K A: 23.09 (MIN: 21.7 / MAX: 23.76)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 (ms, Fewer Is Better)
Test: Finagle HTTP Requests
  i5-13600K: 1918.5 (MIN: 1787.45 / MAX: 2020.03)
  13900K R: 2561.8 (MIN: 2392.59 / MAX: 2600.24)
  13900K: 2518.1 (MIN: 2345.07 / MAX: 2554.71)
  13600K A: 1895.8 (MIN: 1759.76 / MAX: 2097.29)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 (Frames Per Second, More Is Better)
Binary: Pathtracer - Model: Crown
  i5-13600K: 17.19 (MIN: 16.19 / MAX: 18.01)
  13900K R: 22.90 (MIN: 22.12 / MAX: 24.12)
  13900K: 23.22 (MIN: 22.43 / MAX: 24.28)

Binary: Pathtracer - Model: Crown

13600K A: The test quit with a non-zero exit status.

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer
  13900K R: 46218
  13900K: 46333
  13600K A: 62193
1. (CXX) g++ options: -O3 -ldl

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 (FPS, More Is Better)
Scene: 3 - Resolution: 4K
  i5-13600K: 1.64
  13900K R: 2.19
  13900K: 2.20
  13600K A: 1.64
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer
  13900K R: 54973
  13900K: 54855
  13600K A: 73582
1. (CXX) g++ options: -O3 -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 (images/sec, More Is Better)
Device: CPU - Batch Size: 32 - Model: AlexNet
  13900K R: 179.44
  13900K: 179.09
  13600K A: 133.92

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 (FPS, More Is Better)
Scene: 3 - Resolution: 1080p
  i5-13600K: 6.38
  13900K R: 8.48
  13900K: 8.51
  13600K A: 6.41
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 (Nodes Per Second, More Is Better)
Backend: BLAS
  i5-13600K: 1087
  13900K R: 861
  13900K: 815
  13600K A: 1073
1. (CXX) g++ options: -flto -pthread

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 (FPS, More Is Better)
Scene: 1 - Resolution: 1080p
  i5-13600K: 25.96
  13900K R: 34.37
  13900K: 34.54
  13600K A: 26.15
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 (Frames Per Second, More Is Better)
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p
  i5-13600K: 71.99
  13900K R: 87.81
  13900K: 93.76
  13600K A: 70.49
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 (Seconds, Fewer Is Better)
Primate Phylogeny Analysis
  i5-13600K: 112.37
  13900K R: 148.88
  13900K: 146.87
  13600K A: 111.98
1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm -lreadline

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 (ms, Fewer Is Better)
Model: inception-v3
  13900K R: 27.20 (MIN: 25.48 / MAX: 84.52)
  13900K: 26.28 (MIN: 25.74 / MAX: 34.85)
  13600K A: 20.55 (MIN: 19.85 / MAX: 33.62)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 (FPS, More Is Better)
Scene: 2 - Resolution: 1080p
  i5-13600K: 7.64
  13900K R: 10.06
  13900K: 10.09
  13600K A: 7.65
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 (Inferences Per Minute, More Is Better)
Model: bertsquad-12 - Device: CPU - Executor: Standard
  13900K: 738
  13600K A: 972
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 (Frames Per Second, More Is Better)
Binary: Pathtracer ISPC - Model: Crown
  i5-13600K: 18.65 (MIN: 17.42 / MAX: 19.66)
  13900K R: 23.88 (MIN: 22.95 / MAX: 25.18)
  13900K: 24.15 (MIN: 23.21 / MAX: 25.4)
  13600K A: 18.39 (MIN: 17.16 / MAX: 19.46)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 (ms, Fewer Is Better)
Target: CPU - Model: FastestDet
  13900K R: 2.93 (MIN: 2.88 / MAX: 3.62)
  13900K: 3.84 (MIN: 3.79 / MAX: 5.75)
  13600K A: 2.96 (MIN: 2.88 / MAX: 3.35)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile measures JPEG XL decode performance to a PNG output file; the pts/jpexl test covers encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 (MP/s, More Is Better)
CPU Threads: All
  i5-13600K: 330.95
  13900K R: 424.09
  13900K: 426.96
  13600K A: 326.03

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 (images/sec, More Is Better)
Device: CPU - Batch Size: 16 - Model: GoogLeNet
  13900K R: 92.71
  13900K: 92.66
  13600K A: 70.89

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev (FPS, More Is Better)
Model: Face Detection FP16 - Device: CPU
  13900K: 4.13
  13600K A: 3.18
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 (images/sec, More Is Better)
Device: CPU - Batch Size: 256 - Model: AlexNet
  13900K R: 220.57
  13900K: 221.30
  13600K A: 170.89

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer
  13900K R: 1723
  13900K: 1707
  13600K A: 2204
1. (CXX) g++ options: -O3 -ldl

Facebook RocksDB

Facebook RocksDB 7.5.3 (Op/s, More Is Better)
Test: Update Random
  13900K: 690788
  13600K A: 536712
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 (ms, Fewer Is Better)
Target: CPU - Model: mnasnet
  13900K R: 2.23 (MIN: 2.19 / MAX: 2.92)
  13900K: 2.67 (MIN: 2.63 / MAX: 3.05)
  13600K A: 2.87 (MIN: 2.78 / MAX: 3.5)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer
  13900K R: 5740
  13900K: 5777
  13600K A: 7382
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer
  13900K R: 23676
  13900K: 23338
  13600K A: 29951
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer
  13900K R: 27523
  13900K: 27345
  13600K A: 35032
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer
  13900K R: 1459
  13900K: 1476
  13600K A: 1869
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer
  13900K R: 6733
  13900K: 6836
  13600K A: 8608
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer
  13900K R: 190671
  13900K: 188642
  13600K A: 241113
1. (CXX) g++ options: -O3 -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 (images/sec, More Is Better)
Device: CPU - Batch Size: 16 - Model: ResNet-50
  13900K R: 31.28
  13900K: 31.36
  13600K A: 24.58

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 (MT/s, More Is Better)
Preset: Fast
  13900K R: 259.70
  13900K: 261.48
  13600K A: 205.19
1. (CXX) g++ options: -O3 -flto -pthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 (images/sec, More Is Better)
Device: CPU - Batch Size: 32 - Model: GoogLeNet
  13900K R: 89.89
  13900K: 90.18
  13600K A: 70.82

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer
  13900K R: 221224
  13900K: 222988
  13600K A: 281383
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer
  13900K R: 1447
  13900K: 1458
  13600K A: 1836
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer
  13900K R: 5705
  13900K: 5766
  13600K A: 7232
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer
  13900K R: 112658
  13900K: 112986
  13600K A: 142647
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer
  13900K R: 23193
  13900K: 23080
  13600K A: 29216
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 (ms, Fewer Is Better)
Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer
  13900K R: 96762
  13900K: 96621
  13600K A: 122275
1. (CXX) g++ options: -O3 -ldl

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.8.1 (Seconds, Fewer Is Better)
Test: Masskrug - Acceleration: CPU-only
  13900K R: 2.298
  13900K: 2.289
  13600K A: 2.896

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 (MB/s, More Is Better)
Compression Level: 19 - Compression Speed
  i5-13600K: 54.6
  13900K R: 67.9
  13900K: 68.9
  13600K A: 55.3
1. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (frames per second, more is better):
  i5-13600K: 278.80 | 13900K R: 349.37 | 13900K: 351.31 | 13600K A: 279.48
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better):
  13900K R: 187779 | 13900K: 189602 | 13600K A: 236110
  1. (CXX) g++ options: -O3 -ldl

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0, Preset: Exhaustive (MT/s, more is better):
  13900K R: 1.2915 | 13900K: 1.2845 | 13600K A: 1.0302
  1. (CXX) g++ options: -O3 -flto -pthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device Inference Score (score, more is better):
  13900K: 1707 | 13600K A: 1362

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11, Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better):
  13900K R: 95632 | 13900K: 95889 | 13600K A: 119685
  1. (CXX) g++ options: -O3 -ldl

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1, Model: SqueezeNetV1.0 (ms, fewer is better):
  13900K R: 5.158 (min 4.89 / max 30.5) | 13900K: 5.165 (min 5 / max 8.1) | 13600K A: 4.133 (min 4.06 / max 9.73)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU - Model: vgg16 (ms, fewer is better):
  13900K R: 22.89 (min 22.33 / max 40.66) | 13900K: 22.86 (min 22.49 / max 25.47) | 13600K A: 28.52 (min 26.42 / max 321.9)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network workloads, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Face Detection FP16 - Device: CPU (ms, fewer is better):
  13900K: 1926.16 (min 1766.06 / max 2069.19) | 13600K A: 1544.52 (min 1481.77 / max 1650.78)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, more is better):
  13900K R: 29.82 | 13900K: 30.01 | 13600K A: 24.15

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (frames per second, more is better):
  i5-13600K: 341.60 | 13900K R: 384.75 | 13900K: 413.76 | 13600K A: 332.97
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (frames per second, more is better):
  i5-13600K: 184.31 | 13900K R: 150.65 | 13900K: 153.07 | 13600K A: 187.20
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 4K (frames per second, more is better):
  i5-13600K: 25.68 | 13900K R: 30.83 | 13900K: 30.83 | 13600K A: 24.85
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.7 (k/s, more is better):
  13900K R: 50505.52 | 13900K: 50724.20 | 13600K A: 40912.36
  1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device AI Score (score, more is better):
  13900K: 4550 | 13600K A: 3672

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Read While Writing (Op/s, more is better):
  13900K: 3700703 | 13600K A: 2989367
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: gravity_spheres_volume/dim_512/ao/real_time (items per second, more is better):
  i5-13600K: 3.63550 | 13900K R: 4.48167 | 13900K: 4.48781 | 13600K A: 3.67901

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, more is better):
  13900K R: 137.61 | 13900K: 137.79 | 13600K A: 111.70

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: particle_volume/scivis/real_time (items per second, more is better):
  i5-13600K: 6.77171 | 13900K R: 8.25976 | 13900K: 8.35096 | 13600K A: 6.79439

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, more is better):
  13900K R: 227.80 | 13900K: 229.33 | 13600K A: 186.11

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device Training Score (score, more is better):
  13900K: 2843 | 13600K A: 2310

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (items per second, more is better):
  i5-13600K: 4.79907 | 13900K R: 5.90448 | 13900K: 5.87049 | 13600K A: 4.80927

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 8.0, Length: 1e12 (seconds, fewer is better):
  13900K R: 13.69 | 13900K: 13.61 | 13600K A: 16.74
  1. (CXX) g++ options: -O3
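The sieve of Eratosthenes that Primesieve implements (in a heavily optimized, cache-segmented form) can be sketched in a few lines of Python; this naive version only illustrates the algorithm, not Primesieve's cache-friendly performance.

```python
def sieve(limit: int) -> list[int]:
    # Flag array: is_prime[n] stays 1 until n is crossed off as composite.
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross off multiples of p starting at p*p; smaller multiples
            # were already crossed off by smaller primes.
            count = len(range(p * p, limit + 1, p))
            is_prime[p * p :: p] = bytes(count)
    return [n for n, flag in enumerate(is_prime) if flag]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Primesieve's advantage over a flat array like this is segmentation: it sieves in blocks sized to fit L1/L2 cache, which is exactly why this test is described as primarily a cache benchmark.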

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8, Time To Compile (seconds, fewer is better):
  13900K R: 315.73 | 13900K: 314.44 | 13600K A: 386.51

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3, Input: Spaceship (FPS, more is better):
  13900K: 5.9 | 13600K A: 4.8

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: particle_volume/pathtracer/real_time (items per second, more is better):
  i5-13600K: 167.00 | 13900K R: 204.81 | 13900K: 205.25 | 13600K A: 167.98

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0, Preset: Medium (MT/s, more is better):
  13900K R: 93.94 | 13900K: 94.59 | 13600K A: 76.96
  1. (CXX) g++ options: -O3 -flto -pthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1, Model: mobilenetV3 (ms, fewer is better):
  13900K R: 1.183 (min 1.11 / max 17.98) | 13900K: 1.181 (min 1.12 / max 3.08) | 13600K A: 0.963 (min 0.94 / max 1.39)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: particle_volume/ao/real_time (items per second, more is better):
  i5-13600K: 6.82012 | 13900K R: 8.32049 | 13900K: 8.36571 | 13600K A: 6.82369
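Turning two bars from a result like the one above into a percentage lead is simple arithmetic; the sketch below uses the particle_volume/ao items-per-second values from this page (for a "fewer is better" metric the arguments would be swapped so the lower time is the numerator's basis).

```python
def pct_lead(faster: float, slower: float) -> float:
    """Percentage lead of `faster` over `slower` for a more-is-better metric."""
    return (faster / slower - 1.0) * 100.0

# Values from the OSPRay particle_volume/ao/real_time result above.
i9_13900k = 8.36571    # items per second
i5_13600k_a = 6.82369  # items per second
print(f"13900K leads 13600K A by {pct_lead(i9_13900k, i5_13600k_a):.1f}%")
```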

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: yolov4 - Device: CPU - Executor: Standard (inferences per minute, more is better):
  13900K: 490 | 13600K A: 600
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: Inception V4 (microseconds, fewer is better):
  13900K R: 31966.2 | 13900K: 27731.2 | 13600K A: 26166.3

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0, Preset: Thorough (MT/s, more is better):
  13900K R: 12.23 | 13900K: 12.37 | 13600K A: 10.13
  1. (CXX) g++ options: -O3 -flto -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0, Compression Level: 8 - Compression Speed (MB/s, more is better):
  i5-13600K: 1148.2 | 13900K R: 1219.4 | 13900K: 1401.2 | 13600K A: 1182.9
  1. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: VMAF Optimized - Input: Bosphorus 1080p (frames per second, more is better):
  i5-13600K: 313.58 | 13900K R: 378.33 | 13900K: 382.22 | 13600K A: 320.21
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Blender

Blender 3.3, Blend File: Barbershop - Compute: CPU-Only (seconds, fewer is better):
  13900K: 728.20 | 13600K A: 886.55

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (items per second, more is better):
  i5-13600K: 3.54717 | 13900K R: 4.31689 | 13900K: 4.31212 | 13600K A: 3.58917

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU - Model: shufflenet-v2 (ms, fewer is better):
  13900K R: 2.82 (min 2.79 / max 3.45) | 13900K: 2.41 (min 2.36 / max 3.64) | 13600K A: 2.32 (min 2.26 / max 2.82)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (frames per second, more is better):
  i5-13600K: 179.96 | 13900K R: 148.08 | 13900K: 150.28 | 13600K A: 178.37
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network workloads, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better):
  13900K: 25289.52 | 13600K A: 20877.33
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis, based on Altair Radioss and open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13, Model: Rubber O-Ring Seal Installation (seconds, fewer is better):
  i5-13600K: 173.63 | 13900K R: 145.19 | 13900K: 143.63 | 13600K A: 173.55

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: Mobilenet Quant (microseconds, fewer is better):
  13900K R: 2007.81 | 13900K: 2080.66 | 13600K A: 2420.75

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22, Video Input: Bosphorus 4K (frames per second, more is better):
  i5-13600K: 44.76 | 13900K R: 51.52 | 13900K: 53.38 | 13600K A: 44.28
  1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 4K (frames per second, more is better):
  i5-13600K: 135.26 | 13900K R: 131.20 | 13900K: 158.14 | 13600K A: 135.50
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, more is better):
  13900K R: 29.42 | 13900K: 29.46 | 13600K A: 24.46

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  13900K R: 86.86 | 13900K: 87.03 | 13600K A: 72.26

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1, Model: resnet-v2-50 (ms, fewer is better):
  13900K R: 21.26 (min 20.75 / max 82.89) | 13900K: 21.22 (min 20.23 / max 32.68) | 13600K A: 17.66 (min 17.37 / max 23.46)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (frames per second, more is better):
  i5-13600K: 102.29 | 13900K R: 98.54 | 13900K: 118.33 | 13600K A: 100.80
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: H2 (msec, fewer is better):
  i5-13600K: 2124 | 13900K R: 1848 | 13900K: 1769 | 13600K A: 2019

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: SqueezeNet (microseconds, fewer is better):
  13900K R: 2282.79 | 13900K: 2100.04 | 13600K A: 1901.72

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22, Video Input: Bosphorus 1080p (frames per second, more is better):
  i5-13600K: 188.71 | 13900K R: 225.40 | 13900K: 225.84 | 13600K A: 190.53
  1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

Blender

Blender 3.3, Blend File: Classroom - Compute: CPU-Only (seconds, fewer is better):
  13900K R: 187.65 | 13900K: 187.97 | 13600K A: 223.66

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, more is better):
  13900K R: 88.22 | 13900K: 88.36 | 13600K A: 74.19

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis, based on Altair Radioss and open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13, Model: INIVOL and Fluid Structure Interaction Drop Container (seconds, fewer is better):
  i5-13600K: 495.03 | 13900K R: 415.74 | 13900K: 418.47 | 13600K A: 494.88

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: GPT-2 - Device: CPU - Executor: Standard (inferences per minute, more is better):
  13900K: 8887 | 13600K A: 10574
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  13900K R: 14.20 | 13900K: 14.13 | 13600K A: 16.78

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better):
  13900K R: 70.40 | 13900K: 70.77 | 13600K A: 59.58

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 6 (seconds, fewer is better):
  i5-13600K: 4.963 | 13900K R: 4.285 | 13900K: 4.183 | 13600K A: 4.854
  1. (CXX) g++ options: -O3 -fPIC -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 1080p (frames per second, more is better):
  i5-13600K: 438.92 | 13900K R: 517.69 | 13900K: 520.38 | 13600K A: 445.10
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.8.1, Test: Server Room - Acceleration: CPU-only (seconds, fewer is better):
  13900K R: 1.803 | 13900K: 1.807 | 13600K A: 2.134

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0, Build System: Unix Makefiles (seconds, fewer is better):
  i5-13600K: 426.47 | 13900K R: 362.95 | 13900K: 361.30 | 13600K A: 410.84

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (frames per second, more is better):
  i5-13600K: 150.29 | 13900K R: 127.65 | 13900K: 130.31 | 13600K A: 150.46
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: bertsquad-12 - Device: CPU - Executor: Parallel (inferences per minute, more is better):
  13900K: 870 | 13600K A: 739
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Jython (msec, fewer is better):
  i5-13600K: 2035 | 13900K R: 1792 | 13900K: 1738 | 13600K A: 2040

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better):
  13900K R: 2.21 (min 2.16 / max 2.99) | 13900K: 2.26 (min 2.21 / max 3.53) | 13600K A: 2.59 (min 2.49 / max 3.35)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 0 (seconds, fewer is better):
  i5-13600K: 100.49 | 13900K R: 87.79 | 13900K: 87.07 | 13600K A: 101.86
  1. (CXX) g++ options: -O3 -fPIC -lm

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 10 - Input: Bosphorus 1080p (frames per second, more is better):
  i5-13600K: 319.60 | 13900K R: 361.31 | 13900K: 367.25 | 13600K A: 313.96
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 15, Total Time (nodes per second, more is better):
  i5-13600K: 39889963 | 13900K R: 40134926 | 13900K: 46604368 | 13600K A: 40974724
  1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 8 - Input: Bosphorus 4K (frames per second, more is better):
  i5-13600K: 54.46 | 13900K R: 62.88 | 13900K: 63.61 | 13600K A: 54.69
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4, Acceleration: CPU - Scene: Bedroom (M samples/s, more is better):
  13900K: 3.075 | 13600K A: 2.645

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, more is better):
  13900K R: 29.18 | 13900K: 29.23 | 13600K A: 25.15

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better):
  i5-13600K: 6688.6 (min 5113.28 / max 6688.65) | 13900K R: 7625.1 (min 5855.12) | 13900K: 7757.2 (min 6001.17 / max 7757.23) | 13600K A: 6675.8 (min 5051.51)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU - Model: resnet50 (ms, fewer is better):
  13900K R: 10.40 (min 10.2 / max 20.06) | 13900K: 10.25 (min 10.14 / max 11.58) | 13600K A: 11.91 (min 11.62 / max 12.51)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

SVT-AV1

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.2Encoder Mode: Preset 10 - Input: Bosphorus 4Ki5-13600K13900K R13900K13600K A306090120150108.30123.16123.34106.421. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  i5-13600K: 160.51
  13900K R: 183.49
  13900K: 185.83
  13600K A: 163.68
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  13900K: 722.55
  13600K A: 625.88
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, More Is Better)
  i5-13600K: 9.802
  13900K R: 11.312
  13900K: 11.193
  13600K A: 9.825
1. (CXX) g++ options: -O3 -lm -ldl

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, Fewer Is Better)
  i5-13600K: 396.82
  13900K R: 346.59
  13900K: 344.19
  13600K A: 392.84

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  i5-13600K: 84.51
  13900K R: 96.52
  13900K: 96.68
  13600K A: 83.90
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, More Is Better)
  i5-13600K: 9.729
  13900K R: 11.139
  13900K: 11.126
  13600K A: 9.673
1. (CXX) g++ options: -O3 -lm -ldl

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  i5-13600K: 548.57
  13900K R: 618.47
  13900K: 631.43
  13600K A: 555.60
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, Fewer Is Better)
  13900K R: 3.354 (MIN: 3.18 / MAX: 27.66)
  13900K: 3.373 (MIN: 3.27 / MAX: 5.96)
  13600K A: 2.932 (MIN: 2.86 / MAX: 3.73)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is queries per minute, computed as the geometric mean across all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better)
  13900K R: 244.03 (MIN: 19.19 / MAX: 12000)
  13900K: 247.91 (MIN: 19.5 / MAX: 30000)
  13600K A: 215.54 (MIN: 17.58 / MAX: 15000)
1. ClickHouse server version 22.5.4.19 (official build).
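
As a quick illustration of the geometric-mean aggregation used for these ClickHouse results, here is a minimal sketch with made-up per-query times (not taken from this result file): each query's wall-clock time is converted to a queries-per-minute rate, and the geometric mean of those rates is reported, which damps the influence of a single very slow query.

```python
import math

def geo_mean_qpm(query_times_s):
    """Geometric mean of per-query throughput, in queries per minute.

    query_times_s: per-query wall-clock times in seconds (hypothetical values).
    """
    # Convert each query's time into a rate in queries/minute.
    qpm = [60.0 / t for t in query_times_s]
    # Geometric mean via the mean of logs, for numerical stability.
    return math.exp(sum(math.log(x) for x in qpm) / len(qpm))

# Four hypothetical queries; the 8-second outlier pulls the geo mean
# down far less than it would pull down a per-minute arithmetic mean.
print(round(geo_mean_qpm([0.05, 0.2, 1.0, 8.0]), 2))
```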

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms, Fewer Is Better)
  i5-13600K: 1226.9 (MIN: 1201.44 / MAX: 1243.4)
  13900K R: 1078.5 (MIN: 1048.78 / MAX: 1097.69)
  13900K: 1066.8 (MIN: 1025.04 / MAX: 1095.71)
  13600K A: 1173.1 (MIN: 1080.27 / MAX: 1197.68)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  i5-13600K: 67.45
  13900K R: 77.52
  13900K: 75.22
  13600K A: 67.54
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, Fewer Is Better)
  13900K R: 1551.22 (MIN: 1496.08 / MAX: 1635.13)
  13900K: 1543.70 (MIN: 1496.06 / MAX: 1636.76)
  13600K A: 1764.51 (MIN: 1705.47 / MAX: 1870.65)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  i5-13600K: 66.16
  13900K R: 58.43
  13900K: 58.98
  13600K A: 66.61
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  i5-13600K: 212.62
  13900K R: 241.35
  13900K: 241.94
  13600K A: 213.14
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  i5-13600K: 39.41
  13900K R: 39.70
  13900K: 44.68
  13600K A: 39.29
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  i5-13600K: 142.67
  13900K R: 160.80
  13900K: 162.19
  13600K A: 145.76
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Blender

Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
  13900K R: 100.11
  13900K: 100.24
  13600K A: 113.68

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 8.0 - Length: 1e13 (Seconds, Fewer Is Better)
  13900K R: 168.74
  13900K: 168.05
  13600K A: 190.76
1. (CXX) g++ options: -O3
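
For readers unfamiliar with the algorithm being benchmarked, here is a minimal, unsegmented sieve of Eratosthenes sketch. Primesieve's actual implementation is a heavily optimized segmented, bit-packed sieve tuned for L1/L2 cache residency, which this toy version does not attempt to reproduce.

```python
def count_primes(limit):
    """Count primes <= limit with a basic sieve of Eratosthenes."""
    if limit < 2:
        return 0
    # One byte per candidate; 1 = assumed prime until crossed off.
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    p = 2
    while p * p <= limit:
        if is_prime[p]:
            # Cross off multiples of p, starting at p*p (smaller
            # multiples were already removed by smaller primes).
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = 0
        p += 1
    return sum(is_prime)

print(count_primes(100))  # 25 primes up to 100
```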

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  13900K R: 159.98 (MIN: 153.41 / MAX: 175.92)
  13900K: 159.65 (MIN: 152.69 / MAX: 177.88)
  13600K A: 180.97 (MIN: 168.23 / MAX: 207.33)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  13900K R: 40.09
  13900K: 39.86
  13600K A: 45.18

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  13900K R: 24.94
  13900K: 25.08
  13600K A: 22.13

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
  13900K R: 33.54
  13900K: 33.16
  13600K A: 37.49
1. (CC) gcc options: -O2 -lz

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 2 (Seconds, Fewer Is Better)
  i5-13600K: 48.07
  13900K R: 42.80
  13900K: 42.53
  13600K A: 47.88
1. (CXX) g++ options: -O3 -fPIC -lm

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build - Gradle Build: Reactor (Seconds, Fewer Is Better)
  i5-13600K: 163.26
  13900K R: 144.46
  13900K: 153.60
  13600K A: 155.85

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
  13900K R: 129.18 (MIN: 125 / MAX: 136.73)
  13900K: 127.00 (MIN: 125.33 / MAX: 130.32)
  13600K A: 143.16 (MIN: 140.38 / MAX: 147.39)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, More Is Better)
  13900K R: 88.10
  13900K: 88.34
  13600K A: 78.38

Blender

Blender 3.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
  13900K: 240.62
  13600K A: 271.14

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, Fewer Is Better)
  13900K: 21.3
  13600K A: 24.0

DaCapo Benchmark

This test runs the DaCapo Benchmarks, written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, Fewer Is Better)
  i5-13600K: 2059
  13900K R: 1828
  13900K: 1960

Java Test: Tradesoap

13600K A: The test quit with a non-zero exit status. E: # A fatal error has been detected by the Java Runtime Environment:

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better)
  13900K R: 35.00
  13900K: 35.19
  13600K A: 39.39
1. RawTherapee, version 5.8, command line.

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to benchmark various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better)
  13900K: 1600324
  13600K A: 1422451

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better)
  13900K: 192
  13600K A: 216

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  13900K: 6548
  13600K A: 7364
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better)
  13900K: 114
  13600K A: 128

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)
  13900K: 50.0
  13600K A: 56.1

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, Fewer Is Better)
  13900K R: 49.29
  13900K: 49.00
  13600K A: 43.93

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  13900K: 2.68
  13600K A: 2.39
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, Fewer Is Better)
  13900K: 207
  13600K A: 232

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program where available and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Leonardo Phone Case Slim (Seconds, Fewer Is Better)
  13900K R: 8.826
  13900K: 8.758
  13600K A: 9.813
1. OpenSCAD version 2021.01

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, Fewer Is Better)
  13900K R: 4.837
  13900K: 4.898
  13600K A: 5.407
1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program where available and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Mini-ITX Case (Seconds, Fewer Is Better)
  13900K R: 22.05
  13900K: 21.99
  13600K A: 24.55
1. OpenSCAD version 2021.01

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance (Nodes Per Second, More Is Better)
  i5-13600K: 1974114
  13900K R: 2194334
  13900K: 2203112
  13600K A: 1974114
1. (CC) gcc options: -O3 -march=native

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better)
  13900K: 8.71
  13600K A: 9.72

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks - Test: Pod2html (Seconds, Fewer Is Better)
  13900K R: 0.05562468
  13900K: 0.05538045
  13600K A: 0.06179302

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
  13900K R: 37.19 (MIN: 36.9 / MAX: 38.34)
  13900K: 37.35 (MIN: 36.75 / MAX: 38.65)
  13600K A: 41.47 (MIN: 41.1 / MAX: 42.38)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, Fewer Is Better)
  13900K: 10.69
  13600K A: 11.92
1. (CXX) g++ options: -rdynamic

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program where available and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Projector Mount Swivel (Seconds, Fewer Is Better)
  13900K R: 4.080
  13900K: 4.078
  13600K A: 4.547
1. OpenSCAD version 2021.01

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better)
  13900K: 81.1
  13600K A: 90.4

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)
  13900K R: 4.413
  13900K: 4.500
  13600K A: 4.916
1. (CXX) g++ options: -fvisibility=hidden -logg -lm

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
  13900K R: 1553.51
  13900K: 1396.39
  13600K A: 1450.81

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, Fewer Is Better)
  i5-13600K: 254.41
  13900K R: 236.27
  13900K: 241.69
  13600K A: 262.82

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  i5-13600K: 84.44
  13900K R: 78.23
  13900K: 79.48
  13600K A: 86.96
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, More Is Better)
  i5-13600K: 1809
  13900K R: 1689
  13900K: 1711
  13600K A: 1877
1. (CXX) g++ options: -flto -pthread

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better)
  i5-13600K: 49053594
  13900K R: 53144036
  13900K: 52613577
  13600K A: 47844093

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better)
  13900K: 46.3
  13600K A: 51.4

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
  i5-13600K: 642432.68
  13900K R: 712804.56
  13900K: 709677.42
  13600K A: 645300.06
1. (CC) gcc options: -O2 -lrt" -lrt

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
  13900K: 468
  13600K A: 519

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program where available and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Retro Car (Seconds, Fewer Is Better)
  13900K R: 2.276
  13900K: 2.278
  13600K A: 2.524
1. OpenSCAD version 2021.01

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  13900K R: 1.10 (MIN: 1.07 / MAX: 1.41)
  13900K: 1.12 (MIN: 1.09 / MAX: 1.41)
  13600K A: 1.01 (MIN: 0.98 / MAX: 1.34)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
  13900K: 591.22
  13600K A: 533.42
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better)
  13900K: 12.1
  13600K A: 13.4

Blender

Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  13900K R: 71.17
  13900K: 71.12
  13600K A: 78.65

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is queries per minute, computed as the geometric mean across all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better)
  13900K R: 256.04 (MIN: 19.45 / MAX: 12000)
  13900K: 258.67 (MIN: 20.42 / MAX: 20000)
  13600K A: 233.95 (MIN: 19.19 / MAX: 20000)
1. ClickHouse server version 22.5.4.19 (official build).

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds, Fewer Is Better)
  13900K: 221.17
  13600K A: 244.53

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is queries per minute, computed as the geometric mean across all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better)
  13900K R: 260.69 (MIN: 19.6 / MAX: 30000)
  13900K: 258.53 (MIN: 19.7 / MAX: 15000)
  13600K A: 235.86 (MIN: 18.93 / MAX: 30000)
1. ClickHouse server version 22.5.4.19 (official build).

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program where available and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Pistol (Seconds, Fewer Is Better)
  13900K R: 50.34
  13900K: 50.41
  13600K A: 55.59
1. OpenSCAD version 2021.01

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.8.1 - Test: Boat - Acceleration: CPU-only (Seconds, Fewer Is Better)
  13900K R: 2.621
  13900K: 2.605
  13600K A: 2.875

spaCy

The spaCy library is an open-source, Python-based solution for advanced neural language processing (NLP). This test profile times the spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, More Is Better)
  13900K R: 20568
  13900K: 20762
  13600K A: 18828

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better)
  13900K: 156
  13600K A: 172

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds, Fewer Is Better)
  i5-13600K: 160.06
  13900K R: 148.98
  13900K: 147.02
  13600K A: 161.93

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better)
  i5-13600K: 7.280
  13900K R: 6.669
  13900K: 6.921
  13600K A: 7.342
1. (CXX) g++ options: -O3 -fPIC -lm

FLAC Audio Encoding

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, Fewer Is Better)
  13900K R: 11.48
  13900K: 11.46
  13600K A: 12.62
1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, More Is Better)
  13900K R: 88.69
  13900K: 88.89
  13600K A: 80.83

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better)
  13900K: 60.5
  13600K A: 66.5

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
  i5-13600K: 5893.1
  13900K R: 6245.4
  13900K: 6287.2
  13600K A: 5727.6
1. (CC) gcc options: -O3 -pthread -lz -llzma
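
The compression-speed figures above are throughput: input bytes consumed per second. A minimal sketch of that arithmetic follows, using Python's zlib as a stand-in codec (Zstd itself is not in the Python standard library); only the MB/s calculation, not the codec, reflects how these results are expressed.

```python
import time
import zlib

def compression_mb_per_s(data, level=3):
    """Throughput of one compression pass, in MB/s of *input* consumed.

    Uses zlib as a stand-in codec; the benchmark itself uses Zstd.
    """
    start = time.perf_counter()
    zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    # MB/s is defined on the uncompressed input size, not the output size.
    return len(data) / elapsed / 1e6

sample = b"phoronix" * 500_000  # ~4 MB of highly compressible input
print(f"{compression_mb_per_s(sample):.1f} MB/s")
```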

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  i5-13600K: 82.86
  13900K R: 77.17
  13900K: 78.47
  13600K A: 84.58
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better)
  13900K: 47.1
  13600K A: 51.6

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 (MP/s, More Is Better)
  i5-13600K: 63.16
  13900K R: 68.10
  13900K: 68.39
  13600K A: 62.52

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
  13900K R: 10.50 (MIN: 8.72 / MAX: 420.85)
  13900K: 9.60 (MIN: 9.46 / MAX: 10.87)
  13600K A: 10.11 (MIN: 9.79 / MAX: 10.85)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better)
13900K: 293.60 | 13600K A: 320.78
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
i5-13600K: 14.67 | 13900K R: 16.02 | 13900K: 15.89 | 13600K A: 14.76
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, More Is Better)
i5-13600K: 11.45 | 13900K R: 12.39 | 13900K: 12.39 | 13600K A: 11.35
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
13900K: 10149.22 | 13600K A: 9303.66
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better)
13900K: 121.88 | 13600K A: 132.88

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, More Is Better)
i5-13600K: 11.75 | 13900K R: 12.71 | 13900K: 12.67 | 13600K A: 11.66
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, Fewer Is Better)
i5-13600K: 421.8 (MIN 383.07 / MAX 499.39) | 13900K R: 387.1 (MIN 360.05 / MAX 452.67) | 13900K: 390.4 (MIN 361.83 / MAX 476.17) | 13600K A: 419.9 (MIN 381.62 / MAX 498.68)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
i5-13600K: 16.65 | 13900K R: 18.14 | 13900K: 18.13 | 13600K A: 16.88
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 (MP/s, More Is Better)
i5-13600K: 11.59 | 13900K R: 12.53 | 13900K: 12.54 | 13600K A: 11.51
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 90 (MP/s, More Is Better)
i5-13600K: 11.26 | 13900K R: 12.17 | 13900K: 12.14 | 13600K A: 11.18
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
i5-13600K: 0.89989 | 13900K R: 0.83002 | 13900K: 0.82764 | 13600K A: 0.90061
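NAMD's days/ns metric is the number of days of wall time needed to simulate one nanosecond, so lower is better. Inverting it gives the more familiar ns/day figure; this small sketch reuses the four results above:

```python
def days_per_ns_to_ns_per_day(days_per_ns: float) -> float:
    """Invert NAMD's days/ns metric into ns of simulated time per day of wall time."""
    return 1.0 / days_per_ns

# The results above; lower days/ns (higher ns/day) is faster.
results = {"13900K": 0.82764, "13900K R": 0.83002,
           "i5-13600K": 0.89989, "13600K A": 0.90061}
for name, d in results.items():
    print(f"{name}: {days_per_ns_to_ns_per_day(d):.3f} ns/day")
```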

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
13900K: 2.60 | 13600K A: 2.39
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
i5-13600K: 9.02 | 13900K R: 9.77 | 13900K: 9.81 | 13600K A: 9.06
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
13900K R: 7.51 (MIN 7.2 / MAX 46.15) | 13900K: 8.12 (MIN 8.02 / MAX 11.48) | 13600K A: 8.16 (MIN 7.91 / MAX 8.94)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better)
i5-13600K: 49.8 | 13900K R: 47.5 | 13900K: 47.0 | 13600K A: 51.0
1. (CC) gcc options: -O3 -pthread -lz -llzma

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, More Is Better)
13900K: 105171919 | 13600K A: 114117881
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Tachyon

This is a test of Tachyon, a threaded parallel ray-tracing system, measuring the time to ray-trace a sample scene. The sample scene is the Teapot scene, ray-traced to 8K x 8K with 32 samples. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99.2 - Total Time (Seconds, Fewer Is Better)
13900K R: 88.24 | 13900K: 87.91 | 13600K A: 81.36
1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
13900K R: 7.06 (MIN 6.92 / MAX 8.31) | 13900K: 7.51 (MIN 7.42 / MAX 9.14) | 13600K A: 7.65 (MIN 7.36 / MAX 15.72)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better)
13900K R: 22.58 | 13900K: 23.07 | 13600K A: 21.31

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine that is built here using the SCons build system, targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, Fewer Is Better)
i5-13600K: 76.60 | 13900K R: 72.42 | 13900K: 70.76 | 13600K A: 76.29

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
i5-13600K: 81.56 | 13900K R: 86.34 | 13900K: 88.26 | 13600K A: 81.84
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark ALS (ms, Fewer Is Better)
i5-13600K: 2104.0 (MIN 2042.77 / MAX 2160.8) | 13900K R: 2271.9 (MIN 2209.75 / MAX 2350.07) | 13900K: 2240.8 (MIN 2179.9 / MAX 2313) | 13600K A: 2118.3 (MIN 2027.89 / MAX 2219.93)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
13900K R: 4.80 (MIN 4.72 / MAX 6.19) | 13900K: 4.63 (MIN 4.56 / MAX 7.22) | 13600K A: 4.45 (MIN 4.32 / MAX 5.23)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
13900K R: 31.15 | 13900K: 31.11 | 13600K A: 33.53

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
13900K R: 32.10 | 13900K: 32.14 | 13600K A: 29.82

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty (ms, Fewer Is Better)
i5-13600K: 457.4 (MIN 386.83 / MAX 876.54) | 13900K R: 427.2 (MIN 357.58 / MAX 745.84) | 13900K: 424.7 (MIN 357.3 / MAX 748.16) | 13600K A: 449.5 (MIN 382.14 / MAX 928.71)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
i5-13600K: 3.73 | 13900K: 4.01 | 13600K A: 3.76
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Tuning: 1 - Input: Bosphorus 4K

13900K R: The test quit with a non-zero exit status. E: height not found in y4m header

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, Fewer Is Better)
13900K R: 182.80 | 13900K: 181.98 | 13600K A: 195.23

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
i5-13600K: 0.29 | 13900K R: 0.31 | 13900K: 0.31 | 13600K A: 0.30
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
i5-13600K: 21.34 | 13900K R: 22.67 | 13900K: 22.74 | 13600K A: 21.30
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: In-Memory Database Shootout (ms, Fewer Is Better)
i5-13600K: 2134.6 (MIN 1899.08 / MAX 2195.05) | 13900K R: 2011.3 (MIN 1818.96 / MAX 2312.63) | 13900K: 2001.4 (MIN 1805.63 / MAX 2241.18) | 13600K A: 2076.3 (MIN 1870.25 / MAX 2410.84)

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, Fewer Is Better)
13900K: 133.91 | 13600K A: 125.70

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
i5-13600K: 7.145 | 13900K R: 7.597 | 13900K: 7.607 | 13600K A: 7.151
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark Bayes (ms, Fewer Is Better)
i5-13600K: 901.6 (MIN 652.2 / MAX 901.64) | 13900K R: 847.0 (MIN 629.37 / MAX 847.01) | 13900K: 854.3 (MIN 627.97 / MAX 854.34) | 13600K A: 896.8 (MIN 656.87 / MAX 896.82)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Build: defconfig (Seconds, Fewer Is Better)
i5-13600K: 61.53 | 13900K R: 58.25 | 13900K: 58.31 | 13600K A: 61.89

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, More Is Better)
i5-13600K: 0.96 | 13900K R: 1.02 | 13900K: 1.02 | 13600K A: 0.96
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
i5-13600K: 53.87 | 13900K R: 57.14 | 13900K: 57.16 | 13600K A: 54.11
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test (Requests Per Second, More Is Better)
i5-13600K: 15771 | 13900K R: 16328 | 13900K: 16721 | 13600K A: 15871

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
13900K: 4653 | 13600K A: 4931
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, Fewer Is Better)
i5-13600K: 31.77 | 13900K R: 30.39 | 13900K: 30.01 | 13600K A: 31.32

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better)
13900K: 311.28 | 13600K A: 294.41
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better)
i5-13600K: 1236.7 | 13900K R: 1292.5 | 13900K: 1296.8 | 13600K A: 1231.0
1. (CC) gcc options: -O3 -pthread -lz -llzma

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, More Is Better)
i5-13600K: 0.95 | 13900K R: 1.00 | 13900K: 1.00 | 13600K A: 0.96
1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
13900K R: 77.29 | 13900K: 77.99 | 13600K A: 81.33

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
13900K R: 12.93 | 13900K: 12.82 | 13600K A: 12.29

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: rotate (Seconds, Fewer Is Better)
13900K R: 10.080 | 13900K: 9.943 | 13600K A: 10.462

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better)
13900K: 7.932 | 13600K A: 7.541

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
13900K: 102 | 13600K A: 97
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, Fewer Is Better)
13900K R: 43.62 | 13900K: 43.35 | 13600K A: 45.58

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
i5-13600K: 4708.5 | 13900K R: 4907.1 | 13900K: 4930.7 | 13600K A: 4721.1
1. (CC) gcc options: -O3 -pthread -lz -llzma

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read Random Write Random (Op/s, More Is Better)
13900K: 2739061 | 13600K A: 2621070
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
i5-13600K: 0.89 | 13900K R: 0.93 | 13900K: 0.93 | 13600K A: 0.90
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
i5-13600K: 2.436 | 13900K R: 2.514 | 13900K: 2.535 | 13600K A: 2.426
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
13900K R: 12.74 (MIN 12.47 / MAX 30.93) | 13900K: 12.90 (MIN 12.74 / MAX 14.22) | 13600K A: 13.31 (MIN 12.98 / MAX 13.95)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, Fewer Is Better)
i5-13600K: 1778 | 13900K R: 1774 | 13900K: 1710 | 13600K A: 1785

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
13900K R: 35.11 | 13900K: 35.00 | 13600K A: 36.53

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
13900K: 13.41 | 13600K A: 12.87
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
13900K: 528 | 13600K A: 507
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 2.3 - Time To Compile (Seconds, Fewer Is Better)
13900K R: 38.80 | 13900K: 38.42 | 13600K A: 39.97
1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
13900K R: 2.77 (MIN 2.73 / MAX 4.07) | 13900K: 2.88 (MIN 2.84 / MAX 3.3) | 13600K A: 2.83 (MIN 2.74 / MAX 3.62)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
i5-13600K: 4847.6 | 13900K R: 5010.5 | 13900K: 5032.8 | 13600K A: 4849.1
1. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better)
i5-13600K: 96.44 | 13900K R: 98.59 | 13900K: 96.73 | 13600K A: 94.98
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Build Configuration: Default (Seconds, Fewer Is Better)
13900K R: 14.71 | 13900K: 14.36 | 13600K A: 14.89

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
i5-13600K: 5159.3 | 13900K R: 5321.9 | 13900K: 5349.2 | 13600K A: 5157.4
1. (CC) gcc options: -O3 -pthread -lz -llzma

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank (ms, Fewer Is Better)
i5-13600K: 2158.3 (MIN 1928.09 / MAX 2193.86) | 13900K R: 2082.1 (MIN 1917.66 / MAX 2144) | 13900K: 2088.6 (MIN 1926.76 / MAX 2153.25) | 13600K A: 2152.0 (MIN 1948.57 / MAX 2198.38)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
13900K R: 52.31 | 13900K: 52.40 | 13600K A: 54.18

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
13900K: 86 | 13600K A: 89
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better)
i5-13600K: 4.048 | 13900K R: 3.984 | 13900K: 3.913 | 13600K A: 4.003
1. (CXX) g++ options: -O3 -fPIC -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
13900K R: 37.42 | 13900K: 37.63 | 13600K A: 36.38

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8 - Decompression Speed (MB/s, More Is Better)
i5-13600K: 5406.4 | 13900K R: 5583.4 | 13900K: 5589.6 | 13600K A: 5408.0
1. (CC) gcc options: -O3 -pthread -lz -llzma

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, Fewer Is Better)
i5-13600K: 7807.4 (MAX 8535.5) | 13900K R: 7738.7 (MAX 8461.7) | 13900K: 7757.3 (MAX 8498.32) | 13600K A: 7979.4 (MIN 7979.39 / MAX 8676.9)

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: auto-levels (Seconds, Fewer Is Better)
13900K R: 10.67 | 13900K: 10.74 | 13600K A: 11.00

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
13900K: 4875 | 13600K A: 5021
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: RAM / Memory (MiB/sec, More Is Better)
13900K: 20515.63 | 13600K A: 21115.29
1. (CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm
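Sysbench's memory result is total bytes moved divided by elapsed wall time, reported in MiB/sec. A small sketch of that arithmetic (the 100 GiB / 5.0 s figures are hypothetical, chosen to land in the same range as the results above):

```python
def mib_per_sec(bytes_transferred: int, elapsed_s: float) -> float:
    """Sysbench-style memory throughput: bytes moved over wall time, in MiB/sec."""
    return bytes_transferred / (1024 ** 2) / elapsed_s

# Hypothetical run: 100 GiB transferred in 5.0 seconds -> 20480 MiB/sec.
print(f"{mib_per_sec(100 * 1024**3, 5.0):.0f} MiB/sec")
```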

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0, Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better)
  i5-13600K: 5746.5
  13900K R: 5887.5
  13900K: 5910.0
  13600K A: 5756.9
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0, Compression Level: 3, Long Mode - Decompression Speed (MB/s, more is better)
  i5-13600K: 5520.5
  13900K R: 5664.4
  13900K: 5673.2
  13600K A: 5519.2
  1. (CC) gcc options: -O3 -pthread -lz -llzma
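The decompression-speed figures above come down to compressing a sample file once, then timing how fast it inflates. A minimal sketch of that pattern, using Python's stdlib zlib as a stand-in since zstd bindings are not in the standard library (the buffer contents and sizes here are illustrative, not the FreeBSD image the profile uses):

```python
import time
import zlib

# Stand-in sample input (~5.6 MB); the real profile uses a FreeBSD disk image.
data = b"FreeBSD disk image stand-in " * 200_000
compressed = zlib.compress(data, level=8)  # zstd's level/long-mode flags are analogous

start = time.perf_counter()
restored = zlib.decompress(compressed)
elapsed = time.perf_counter() - start

assert restored == data  # round-trip must be lossless
mb_per_sec = len(data) / (1024 * 1024) / elapsed
print(f"decompressed {len(data) / 1e6:.1f} MB at {mb_per_sec:.0f} MB/s")
```

The real test repeats the timed region many times and reports an average, which is why the result file also carries MIN/MAX spreads for some tests.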

Renaissance

Renaissance is a suite of benchmarks designed to exercise the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Savina Reactors.IO (ms, fewer is better)
  i5-13600K: 4473.7 (MAX: 5993.22)
  13900K R: 4358.2 (MIN: 4358.19 / MAX: 6116.36)
  13900K: 4406.8 (MIN: 4406.77 / MAX: 6186.68)
  13600K A: 4396.7 (MAX: 5845.73)

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513, Pi Digits To Calculate: 1B (Seconds, fewer is better)
  13900K R: 31.32
  13900K: 31.25
  13600K A: 30.60
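Y-Cruncher's own series implementations are far more sophisticated, but the kind of work it benchmarks can be illustrated with the Chudnovsky series, where each term contributes roughly 14 correct digits of Pi. A toy sketch in Python (not Y-Cruncher's actual algorithm):

```python
from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    """Compute Pi to the given number of decimal digits via the Chudnovsky series."""
    getcontext().prec = digits + 10  # guard digits for intermediate rounding
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(L)
    for i in range(1, digits // 14 + 2):  # ~14.18 digits per term
        M = M * (K**3 - 16 * K) // i**3   # exact integer recurrence
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return str(C / S)[: digits + 2]  # "3." plus the requested digits

print(chudnovsky_pi(30))
```

Scaling this naive approach to the 1B-digit run above would be hopeless; Y-Cruncher's advantage comes from binary splitting, fast multiplication, and heavy multi-threading, which is what makes it a useful CPU and memory benchmark.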

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better)
  13900K: 1092.36
  13600K A: 1116.36
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513, Pi Digits To Calculate: 500M (Seconds, fewer is better)
  13900K R: 14.23
  13900K: 14.06
  13600K A: 14.36

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30, Test: resize (Seconds, fewer is better)
  13900K R: 13.49
  13900K: 13.58
  13600K A: 13.78

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better)
  13900K: 45.07
  13600K A: 44.39
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30, Test: unsharp-mask (Seconds, fewer is better)
  13900K R: 12.33
  13900K: 12.35
  13600K A: 12.51

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig for building all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18, Build: allmodconfig (Seconds, fewer is better)
  i5-13600K: 708.18
  13900K R: 699.22
  13900K: 698.37
  13600K A: 705.56
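At their core, the timed-compilation profiles measure wall-clock time around a single build command. A minimal sketch of such a harness; the trivial command here is a self-contained stand-in for the real `make allmodconfig && make -j$(nproc)` the kernel profile drives:

```python
import subprocess
import sys
import time

# Stand-in for the real build command so the sketch runs anywhere.
cmd = [sys.executable, "-c", "pass"]

start = time.perf_counter()
subprocess.run(cmd, check=True)  # check=True surfaces a non-zero exit status
elapsed = time.perf_counter() - start
print(f"build finished in {elapsed:.2f} seconds")
```

The `check=True` detail matters: the Phoronix Test Suite likewise treats a non-zero exit status as a failed run, which is why several entries below report "The test quit with a non-zero exit status" instead of a time.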

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better)
  13900K R: 8.1464
  13900K: 8.0876
  13600K A: 8.1978

Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  13900K R: 122.75
  13900K: 123.64
  13600K A: 121.98
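The paired items/sec and ms/batch graphs report the same measurement from two directions: assuming a batch size of 1, as is typical for the synchronous single-stream scenario, throughput is simply the reciprocal of latency. Checking against the 13900K R numbers reported above:

```python
# items/sec = 1000 / ms_per_batch for a batch size of 1.
ms_per_batch = 122.75
items_per_sec = 1000 / ms_per_batch
print(f"{items_per_sec:.4f} items/sec")  # ~8.1466, matching the reported
# 8.1464 to within the 2-decimal rounding of the ms/batch figure.
```

This is why the two graphs for each model always rank the systems in mirror-image order.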

Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  13900K R: 38.87
  13900K: 38.43

Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, more is better)
  13900K R: 25.72
  13900K: 26.02

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

13600K A: The test quit with a non-zero exit status.

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 6.4.0 (Seconds, fewer is better)
  13900K R: 5.020
  13900K: 4.967
  13600K A: 4.995

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  13900K R: 8.4755
  13900K: 8.4134
  13600K A: 8.4939

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better)
  13900K R: 8.0998
  13900K: 8.0636
  13600K A: 8.1309

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  13900K R: 123.46
  13900K: 124.01
  13600K A: 122.99

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 25.0, Time To Compile (Seconds, fewer is better)
  13900K R: 63.18
  13900K: 63.53

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  13900K R: 8.4399
  13900K: 8.4076
  13600K A: 8.4456

Neural Magic DeepSparse 1.1, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  13900K R: 107.73
  13900K: 107.38
  13600K A: 107.82

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Batch Size: 512 - Model: ResNet-50

13600K A: The test quit with a non-zero exit status.

13900K: The test quit with a non-zero exit status.

13900K R: The test quit with a non-zero exit status.

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

13600K A: The test quit with a non-zero exit status. E: # A fatal error has been detected by the Java Runtime Environment:

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

Java Test: Eclipse

13600K A: The test quit with a non-zero exit status.

i5-13600K: The test quit with a non-zero exit status.

13900K: The test quit with a non-zero exit status.

13900K R: The test quit with a non-zero exit status.

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.
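SMHasher's bulk-speed tests report how many bytes per second a hash can chew through on a large buffer. All runs below failed to launch, but the shape of such a measurement can be sketched with SHA3-256 from Python's hashlib (one of the hashes in this suite); the buffer size is illustrative:

```python
import hashlib
import time

buf = b"\x00" * (16 * 1024 * 1024)  # 16 MiB of input

start = time.perf_counter()
digest = hashlib.sha3_256(buf).digest()
elapsed = time.perf_counter() - start

assert len(digest) == 32  # SHA3-256 produces a 256-bit digest
print(f"SHA3-256: {len(buf) / (1024 * 1024) / elapsed:.0f} MiB/s")
```

Non-cryptographic hashes such as wyhash or t1ha run orders of magnitude faster than SHA3, which is why SMHasher groups them by family in its reports.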

Hash: MeowHash x86_64 AES-NI

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: t1ha0_aes_avx2 x86_64

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: FarmHash32 x86_64 AVX

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: t1ha2_atonce

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: FarmHash128

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: fasthash32

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: Spooky32

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: SHA3-256

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: wyhash

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

336 Results Shown

OpenRadioss
Perl Benchmarks
TensorFlow Lite
ONNX Runtime
TensorFlow Lite
NCNN
Darktable
NCNN
Sysbench
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream
toyBrot Fractal Generator
OpenVINO
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
OpenVINO
Neural Magic DeepSparse
Mobile Neural Network
toyBrot Fractal Generator:
  OpenMP
  C++ Threads
Neural Magic DeepSparse
toyBrot Fractal Generator
Zstd Compression
OpenVINO
NCNN
OpenVINO:
  Machine Translation EN To DE FP16 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
  Face Detection FP16-INT8 - CPU
  Person Detection FP32 - CPU
QuadRay
Mobile Neural Network
OpenVINO
ONNX Runtime
spaCy
OpenVINO
Neural Magic DeepSparse
Mobile Neural Network
Embree:
  Pathtracer - Asian Dragon Obj
  Pathtracer - Asian Dragon
OpenVINO
QuadRay
OpenVINO
PyPerformance
NCNN
Embree
QuadRay
TensorFlow
QuadRay
OSPRay Studio
Embree
Renaissance
Embree
OSPRay Studio
QuadRay
OSPRay Studio
TensorFlow
QuadRay
LeelaChessZero
QuadRay
AOM AV1
Timed MrBayes Analysis
Mobile Neural Network
QuadRay
ONNX Runtime
Embree
NCNN
JPEG XL Decoding libjxl
TensorFlow
OpenVINO
TensorFlow
OSPRay Studio
Facebook RocksDB
NCNN
OSPRay Studio:
  2 - 4K - 1 - Path Tracer
  2 - 1080p - 16 - Path Tracer
  3 - 1080p - 16 - Path Tracer
  2 - 1080p - 1 - Path Tracer
  3 - 4K - 1 - Path Tracer
  2 - 4K - 32 - Path Tracer
TensorFlow
ASTC Encoder
TensorFlow
OSPRay Studio:
  3 - 4K - 32 - Path Tracer
  1 - 1080p - 1 - Path Tracer
  1 - 4K - 1 - Path Tracer
  3 - 4K - 16 - Path Tracer
  1 - 1080p - 16 - Path Tracer
  2 - 4K - 16 - Path Tracer
Darktable
Zstd Compression
SVT-VP9
OSPRay Studio
ASTC Encoder
AI Benchmark Alpha
OSPRay Studio
Mobile Neural Network
NCNN
OpenVINO
TensorFlow
SVT-VP9
AOM AV1
x265
Aircrack-ng
AI Benchmark Alpha
Facebook RocksDB
OSPRay
TensorFlow
OSPRay
TensorFlow
AI Benchmark Alpha
OSPRay
Primesieve
Timed Node.js Compilation
Natron
OSPRay
ASTC Encoder
Mobile Neural Network
OSPRay
ONNX Runtime
TensorFlow Lite
ASTC Encoder
Zstd Compression
SVT-VP9
Blender
OSPRay
NCNN
AOM AV1
OpenVINO
OpenRadioss
TensorFlow Lite
x264
SVT-HEVC
TensorFlow
Neural Magic DeepSparse
Mobile Neural Network
SVT-VP9
DaCapo Benchmark
TensorFlow Lite
x264
Blender
TensorFlow
OpenRadioss
ONNX Runtime
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
libavif avifenc
SVT-HEVC
Darktable
Timed LLVM Compilation
AOM AV1
ONNX Runtime
DaCapo Benchmark
NCNN
libavif avifenc
SVT-AV1
Stockfish
SVT-AV1
IndigoBench
TensorFlow
Renaissance
NCNN
SVT-AV1:
  Preset 10 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
OpenVINO
LAMMPS Molecular Dynamics Simulator
Timed LLVM Compilation
SVT-VP9
LAMMPS Molecular Dynamics Simulator
SVT-AV1
Mobile Neural Network
ClickHouse
Renaissance
SVT-HEVC
TNN
AOM AV1
SVT-HEVC
AOM AV1
SVT-AV1
Blender
Primesieve
TNN
Neural Magic DeepSparse:
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
SQLite Speedtest
libavif avifenc
Java Gradle Build
TNN
TensorFlow
Blender
PyPerformance
DaCapo Benchmark
RawTherapee
PHPBench
PyPerformance
ONNX Runtime
PyPerformance:
  go
  crypto_pyaes
DeepSpeech
OpenVINO
PyPerformance
OpenSCAD
LAME MP3 Encoding
OpenSCAD
TSCP
PyPerformance
Perl Benchmarks
TNN
WavPack Audio Encoding
OpenSCAD
PyPerformance
Opus Codec Encoding
TensorFlow Lite
OpenRadioss
AOM AV1
LeelaChessZero
asmFish
PyPerformance
Coremark
PyBench
OpenSCAD
NCNN
OpenVINO
PyPerformance
Blender
ClickHouse
Appleseed
ClickHouse
OpenSCAD
Darktable
spaCy
PyPerformance
OpenRadioss
libavif avifenc
FLAC Audio Encoding
TensorFlow
PyPerformance
Zstd Compression
AOM AV1
PyPerformance
JPEG XL Decoding libjxl
NCNN
OpenVINO
SVT-HEVC
JPEG XL libjxl
OpenVINO
Appleseed
JPEG XL libjxl
Renaissance
AOM AV1
JPEG XL libjxl:
  PNG - 90
  JPEG - 90
NAMD
OpenVINO
AOM AV1
NCNN
Zstd Compression
Facebook RocksDB
Tachyon
NCNN
Node.js V8 Web Tooling Benchmark
Timed Godot Game Engine Compilation
x265
Renaissance
NCNN
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
Renaissance
SVT-HEVC
Timed CPython Compilation
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 4K
  Speed 4 Two-Pass - Bosphorus 1080p
Renaissance
Appleseed
SVT-AV1
Renaissance
Timed Linux Kernel Compilation
JPEG XL libjxl
AOM AV1
Node.js Express HTTP Load Test
ONNX Runtime
Timed Mesa Compilation
OpenVINO
Zstd Compression
JPEG XL libjxl
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
GIMP
IndigoBench
ONNX Runtime
Timed PHP Compilation
Zstd Compression
Facebook RocksDB
AOM AV1
SVT-AV1
NCNN
DaCapo Benchmark
Neural Magic DeepSparse
OpenVINO
ONNX Runtime
Timed Wasmer Compilation
NCNN
Zstd Compression
SVT-VP9
Timed CPython Compilation
Zstd Compression
Renaissance
Neural Magic DeepSparse
ONNX Runtime
libavif avifenc
Neural Magic DeepSparse
Zstd Compression
Renaissance
GIMP
ONNX Runtime
Sysbench
Zstd Compression:
  8, Long Mode - Decompression Speed
  3, Long Mode - Decompression Speed
Renaissance
Y-Cruncher
OpenVINO
Y-Cruncher
GIMP
OpenVINO
GIMP
Timed Linux Kernel Compilation
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    ms/batch
    items/sec
GNU Octave Benchmark
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream
Timed Erlang/OTP Compilation
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream