raptor lake extra

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2210201-PTS-RAPTORLA85

Test categories represented in this result file:

Audio Encoding: 4 tests
AV1: 3 tests
Chess Test Suite: 4 tests
Timed Code Compilation: 9 tests
C/C++ Compiler Tests: 19 tests
CPU Massive: 29 tests
Creator Workloads: 30 tests
Cryptography: 2 tests
Database Test Suite: 3 tests
Encoding: 11 tests
Game Development: 3 tests
HPC - High Performance Computing: 17 tests
Imaging: 6 tests
Java: 3 tests
Common Kernel Benchmarks: 2 tests
Machine Learning: 12 tests
Molecular Dynamics: 2 tests
MPI Benchmarks: 2 tests
Multi-Core: 35 tests
Node.js + NPM Tests: 2 tests
NVIDIA GPU Compute: 4 tests
Intel oneAPI: 4 tests
OpenMPI Tests: 3 tests
Productivity: 2 tests
Programmer / Developer System Benchmarks: 14 tests
Python: 2 tests
Raytracing: 4 tests
Renderers: 8 tests
Scientific Computing: 4 tests
Server: 7 tests
Server CPU Tests: 20 tests
Single-Threaded: 7 tests
Video Encoding: 7 tests
Common Workstation Benchmarks: 3 tests


Runs in this result file:

  Identifier   Date               Test Duration
  13600K A     October 16 2022    8 Hours, 56 Minutes
  i5-13600K    October 17 2022    2 Hours, 43 Minutes
  13900K       October 17 2022    8 Hours, 19 Minutes
  13900K R     October 18 2022    6 Hours, 1 Minute
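Summing the per-run durations above gives the total wall-clock time spent benchmarking for this article. A minimal sketch of that arithmetic, with the durations taken straight from the run list:

```python
# Total benchmarking time across the four runs.
# Durations (hours, minutes) are taken from the run list above.
durations = {
    "13600K A":  (8, 56),
    "i5-13600K": (2, 43),
    "13900K":    (8, 19),
    "13900K R":  (6, 1),
}

total_minutes = sum(h * 60 + m for h, m in durations.values())
print(f"{total_minutes // 60} hours, {total_minutes % 60} minutes")  # 25 hours, 59 minutes
```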


System details for the four runs:

  Common to all runs:
    Chipset: Intel Device 7aa7
    Memory: 32GB
    Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
    Audio: Intel Device 7ad0
    Monitor: ASUS VP28U
    Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
    OS: Ubuntu 22.04
    Kernel: 6.0.0-060000rc1daily20220820-generic (x86_64)
    Desktop: GNOME Shell 42.2
    Display Server: X Server 1.21.1.3 + Wayland
    OpenGL: 4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
    Vulkan: 1.3.224
    Compiler: GCC 12.0.1 20220319
    File-System: ext4
    Screen Resolution: 3840x2160

  13600K A and i5-13600K runs:
    Processor: Intel Core i5-13600K @ 5.10GHz (14 Cores / 20 Threads)
    Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1720 BIOS)
    Disk: 2000GB Samsung SSD 980 PRO 2TB

  13900K and 13900K R runs:
    Processor: Intel Core i9-13900K @ 4.00GHz (24 Cores / 32 Threads)
    Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (2004 BIOS)
    Disk: 2000GB Samsung SSD 980 PRO 2TB + 2000GB

  Kernel Details: Transparent Huge Pages: madvise
  Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor Details:
    13600K A: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x107
    i5-13600K: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x107
    13900K: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x108
    13900K R: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x108
  Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
  Python Details: Python 3.10.4
  Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (relative performance across 13600K A, i5-13600K, 13900K, and 13900K R; 100% to roughly 192% scale), covering: SVT-HEVC, toyBrot Fractal Generator, OpenRadioss, QuadRay, Embree, Timed MrBayes Analysis, OSPRay, LeelaChessZero, JPEG XL Decoding libjxl, x264, SVT-VP9, Timed LLVM Compilation, Stockfish, x265, LAMMPS Molecular Dynamics Simulator, Java Gradle Build, SVT-AV1, DaCapo Benchmark, TSCP, asmFish, libavif avifenc, Coremark, Zstd Compression, NAMD, Timed Godot Game Engine Compilation, JPEG XL libjxl, Node.js Express HTTP Load Test, Timed Mesa Compilation, Timed Linux Kernel Compilation, AOM AV1, Renaissance.
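The overview percentages are relative performance figures aggregated across tests; the Phoronix Test Suite's geometric-mean view works on the same principle. A minimal sketch of that aggregation (not PTS's own code), using two results from this file to compare the 13900K run against 13600K A:

```python
from math import prod

# Per-test performance ratios of 13900K relative to 13600K A,
# using two results from this file. For lower-is-better metrics
# (seconds), the ratio is inverted so > 1.0 always means faster.
ratios = [
    29.23 / 25.15,    # TensorFlow 2.10, CPU, batch 256, ResNet-50 (images/sec, higher is better)
    392.84 / 344.19,  # Timed LLVM Compilation 13.0, Ninja (seconds, lower is better; inverted)
]

# Geometric mean of the ratios gives the overall relative performance.
geomean = prod(ratios) ** (1 / len(ratios))
print(f"13900K relative to 13600K A over these two tests: {geomean:.1%}")
```

Here the geometric mean lands around 115%, i.e. the i9-13900K is roughly 15% faster than the i5-13600K over just these two tests; the full overview above aggregates far more results.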


TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, more is better)
  13900K R: 29.18
  13900K: 29.23
  13600K A: 25.15

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device AI Score (score, more is better)
  13900K: 4550
  13600K A: 3672

AI Benchmark Alpha 0.1.2, Device Training Score (score, more is better)
  13900K: 2843
  13600K A: 2310

AI Benchmark Alpha 0.1.2, Device Inference Score (score, more is better)
  13900K: 1707
  13600K A: 1362

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022, Model: 20k Atoms (ns/day, more is better)
  i5-13600K: 9.802
  13900K R: 11.312
  13900K: 11.193
  13600K A: 9.825
  (CXX) g++ options: -O3 -lm -ldl

Blender

Blender 3.3, Blend File: Barbershop - Compute: CPU-Only (seconds, fewer is better)
  13900K: 728.20
  13600K A: 886.55

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture under test, or alternatively an allmodconfig build that enables all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18, Build: allmodconfig (seconds, fewer is better)
  i5-13600K: 708.18
  13900K R: 699.22
  13900K: 698.37
  13600K A: 705.56

TensorFlow


TensorFlow 2.10, Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, more is better)
  13900K R: 88.69
  13900K: 88.89
  13600K A: 80.83

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta, Scene: Emily (seconds, fewer is better)
  13900K: 221.17
  13600K A: 244.53

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13, Model: INIVOL and Fluid Structure Interaction Drop Container (seconds, fewer is better)
  i5-13600K: 495.03
  13900K R: 415.74
  13900K: 418.47
  13600K A: 494.88

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0, Build System: Unix Makefiles (seconds, fewer is better)
  i5-13600K: 426.47
  13900K R: 362.95
  13900K: 361.30
  13600K A: 410.84

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: particle_volume/pathtracer/real_time (items per second, more is better)
  i5-13600K: 167.00
  13900K R: 204.81
  13900K: 205.25
  13600K A: 167.98

Timed LLVM Compilation


Timed LLVM Compilation 13.0, Build System: Ninja (seconds, fewer is better)
  i5-13600K: 396.82
  13900K R: 346.59
  13900K: 344.19
  13600K A: 392.84

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28, Backend: BLAS (nodes per second, more is better)
  i5-13600K: 1087
  13900K R: 861
  13900K: 815
  13600K A: 1073
  (CXX) g++ options: -flto -pthread

LeelaChessZero 0.28, Backend: Eigen (nodes per second, more is better)
  i5-13600K: 1809
  13900K R: 1689
  13900K: 1711
  13600K A: 1877
  (CXX) g++ options: -flto -pthread

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8, Time To Compile (seconds, fewer is better)
  13900K R: 315.73
  13900K: 314.44
  13600K A: 386.51

OSPRay


OSPRay 2.10, Benchmark: particle_volume/scivis/real_time (items per second, more is better)
  i5-13600K: 6.77171
  13900K R: 8.25976
  13900K: 8.35096
  13600K A: 6.79439

TensorFlow


TensorFlow 2.10, Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, more is better)
  13900K R: 88.10
  13900K: 88.34
  13600K A: 78.38

TensorFlow 2.10, Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, more is better)
  13900K R: 227.80
  13900K: 229.33
  13600K A: 186.11

Appleseed


Appleseed 2.0 Beta, Scene: Disney Material (seconds, fewer is better)
  13900K: 133.91
  13600K A: 125.70

TensorFlow


TensorFlow 2.10, Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, more is better)
  13900K R: 29.42
  13900K: 29.46
  13600K A: 24.46

Blender

Blender 3.3, Blend File: Pabellon Barcelona - Compute: CPU-Only (seconds, fewer is better)
  13900K: 240.62
  13600K A: 271.14

Appleseed


Appleseed 2.0 Beta, Scene: Material Tester (seconds, fewer is better)
  13900K: 121.88
  13600K A: 132.88

OpenRadioss


OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (seconds, fewer is better): i5-13600K: 254.41, 13900K R: 236.27, 13900K: 241.69, 13600K A: 262.82

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, more is better): i5-13600K: 0.95, 13900K R: 1.00, 13900K: 1.00, 13600K A: 0.96

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 221224, 13900K: 222988, 13600K A: 281383

JPEG XL libjxl

JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, more is better): i5-13600K: 0.96, 13900K R: 1.02, 13900K: 1.02, 13600K A: 0.96

OSPRay Studio

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 190671, 13900K: 188642, 13600K A: 241113

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 187779, 13900K: 189602, 13600K A: 236110

Blender

Blender 3.3 - Blend File: Classroom - Compute: CPU-Only (seconds, fewer is better): 13900K R: 187.65, 13900K: 187.97, 13600K A: 223.66

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/ao/real_time (items per second, more is better): i5-13600K: 6.82012, 13900K R: 8.32049, 13900K: 8.36571, 13600K A: 6.82369

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (seconds, fewer is better): 13900K R: 182.80, 13900K: 181.98, 13600K A: 195.23

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 8.0 - Length: 1e13 (seconds, fewer is better): 13900K R: 168.74, 13900K: 168.05, 13600K A: 190.76
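For context on what this test exercises, the core algorithm Primesieve accelerates is the sieve of Eratosthenes. Below is a minimal, unsegmented Python sketch for illustration only; primesieve's actual implementation is a heavily cache-tuned, segmented C++ sieve, which is why the test stresses L1/L2 cache performance rather than raw arithmetic:

```python
def sieve(limit):
    """Count primes <= limit with a classic sieve of Eratosthenes."""
    if limit < 2:
        return 0
    composite = bytearray(limit + 1)  # 0 = potentially prime, 1 = composite
    for p in range(2, int(limit ** 0.5) + 1):
        if not composite[p]:
            # Mark every multiple of p starting at p*p as composite.
            composite[p * p::p] = b"\x01" * len(composite[p * p::p])
    return sum(1 for n in range(2, limit + 1) if not composite[n])

print(sieve(100))  # 25 primes below 100
```

A segmented sieve processes the range in chunks that fit in cache, which is the key difference between this sketch and a production sieve working up to 1e13.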

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, spanning workloads from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, fewer is better): i5-13600K: 7807.4, 13900K R: 7738.7, 13900K: 7757.3, 13600K A: 7979.4

OpenRadioss

OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (seconds, fewer is better): i5-13600K: 173.63, 13900K R: 145.19, 13900K: 143.63, 13600K A: 173.55

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 4K (frames per second, more is better): i5-13600K: 3.73, 13900K: 4.01, 13600K A: 3.76

13900K R: The test quit with a non-zero exit status (E: height not found in y4m header).

OpenRadioss

OpenRadioss 2022.10.13 - Model: Bumper Beam (seconds, fewer is better): i5-13600K: 160.06, 13900K R: 148.98, 13900K: 147.02, 13600K A: 161.93

OSPRay Studio

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 5705, 13900K: 5766, 13600K A: 7232

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 5740, 13900K: 5777, 13600K A: 7382

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea of the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build - Gradle Build: Reactor (seconds, fewer is better): i5-13600K: 163.26, 13900K R: 144.46, 13900K: 153.60, 13600K A: 155.85

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: inception-v3 (ms, fewer is better): 13900K R: 27.20, 13900K: 26.28, 13600K A: 20.55

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, fewer is better): 13900K R: 2.325, 13900K: 2.218, 13600K A: 3.814

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, fewer is better): 13900K R: 2.861, 13900K: 2.924, 13600K A: 2.023

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, fewer is better): 13900K R: 5.158, 13900K: 5.165, 13600K A: 4.133

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better): 13900K R: 21.26, 13900K: 21.22, 13600K A: 17.66

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better): 13900K R: 3.354, 13900K: 3.373, 13600K A: 2.932

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better): 13900K R: 1.183, 13900K: 1.181, 13600K A: 0.963

Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better): 13900K R: 10.007, 13900K: 9.579, 13600K A: 7.052

OSPRay Studio

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 54973, 13900K: 54855, 13600K A: 73582

Renaissance

Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better): i5-13600K: 6688.6, 13900K R: 7625.1, 13900K: 7757.2, 13600K A: 6675.8

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, more is better): 13900K R: 220.57, 13900K: 221.30, 13600K A: 170.89

OSPRay Studio

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 6733, 13900K: 6836, 13600K A: 8608

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (nodes/second, more is better): i5-13600K: 49053594, 13900K R: 53144036, 13900K: 52613577, 13600K A: 47844093

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (seconds, fewer is better): i5-13600K: 112.37, 13900K R: 148.88, 13900K: 146.87, 13600K A: 111.98

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, more is better): 13900K R: 29.82, 13900K: 30.01, 13600K A: 24.15

OSPRay Studio

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 112658, 13900K: 112986, 13600K A: 142647

OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 46746, 13900K: 47147, 13600K A: 63586

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (inferences per minute, more is better): 13900K: 102, 13600K A: 97

OSPRay Studio

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 46218, 13900K: 46333, 13600K A: 62193

ONNX Runtime

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (inferences per minute, more is better): 13900K: 86, 13600K A: 89

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (inferences per minute, more is better): 13900K: 497, 13600K A: 345

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel (inferences per minute, more is better): 13900K: 6548, 13600K A: 7364

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (inferences per minute, more is better): 13900K: 870, 13600K A: 739

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (inferences per minute, more is better): 13900K: 534, 13600K A: 1779

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel (inferences per minute, more is better): 13900K: 528, 13600K A: 507

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (inferences per minute, more is better): 13900K: 8887, 13600K A: 10574

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (inferences per minute, more is better): 13900K: 738, 13600K A: 972

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (inferences per minute, more is better): 13900K: 490, 13600K A: 600

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (inferences per minute, more is better): 13900K: 4875, 13600K A: 5021

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (inferences per minute, more is better): 13900K: 4653, 13600K A: 4931

OSPRay Studio

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 27523, 13900K: 27345, 13600K A: 35032

OSPRay

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (items per second, more is better): i5-13600K: 3.54717, 13900K R: 4.31689, 13900K: 4.31212, 13600K A: 3.58917

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (items per second, more is better): i5-13600K: 3.63550, 13900K R: 4.48167, 13900K: 4.48781, 13600K A: 3.67901

OSPRay Studio

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 1723, 13900K: 1707, 13600K A: 2204

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better): 13900K R: 1551.22, 13900K: 1543.70, 13600K A: 1764.51

OSPRay Studio

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 1447, 13900K: 1458, 13600K A: 1836

OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 1459, 13900K: 1476, 13600K A: 1869

OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 23676, 13900K: 23338, 13600K A: 29951

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 96762, 13900K: 96621, 13600K A: 122275

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 23193, 13900K: 23080, 13600K A: 29216

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): 13900K R: 95632, 13900K: 95889, 13600K A: 119685

OSPRay

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (items per second, more is better): i5-13600K: 4.79907, 13900K R: 5.90448, 13900K: 5.87049, 13600K A: 4.80927

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, fewer is better): 13900K R: 2.93, 13900K: 3.84, 13600K A: 2.96

NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, fewer is better): 13900K R: 127.57, 13900K: 122.10, 13600K A: 168.17

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, fewer is better): 13900K R: 11.55, 13900K: 8.01, 13600K A: 7.14

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better): 13900K R: 10.50, 13900K: 9.60, 13600K A: 10.11

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, fewer is better): 13900K R: 12.74, 13900K: 12.90, 13600K A: 13.31

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, fewer is better): 13900K R: 10.40, 13900K: 10.25, 13600K A: 11.91

NCNN 20220729 - Target: CPU - Model: alexnet (ms, fewer is better): 13900K R: 4.80, 13900K: 4.63, 13600K A: 4.45

NCNN 20220729 - Target: CPU - Model: resnet18 (ms, fewer is better): 13900K R: 11.43, 13900K: 6.03, 13600K A: 5.89

NCNN 20220729 - Target: CPU - Model: vgg16 (ms, fewer is better): 13900K R: 22.89, 13900K: 22.86, 13600K A: 28.52

NCNN 20220729 - Target: CPU - Model: googlenet (ms, fewer is better): 13900K R: 7.06, 13900K: 7.51, 13600K A: 7.65

NCNN 20220729 - Target: CPU - Model: blazeface (ms, fewer is better): 13900K R: 1.10, 13900K: 1.12, 13600K A: 1.01

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better): 13900K R: 8.56, 13900K: 4.09, 13600K A: 3.97

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better): 13900K R: 2.23, 13900K: 2.67, 13600K A: 2.87

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better): 13900K R: 2.82, 13900K: 2.41, 13600K A: 2.32

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): 13900K R: 2.21, 13900K: 2.26, 13600K A: 2.59

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): 13900K R: 2.77, 13900K: 2.88, 13600K A: 2.83

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, fewer is better): 13900K R: 7.51, 13900K: 8.12, 13600K A: 8.16

Blender

Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (seconds, fewer is better): 13900K R: 100.11, 13900K: 100.24, 13600K A: 113.68

JPEG XL libjxl

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, more is better): i5-13600K: 11.45, 13900K R: 12.39, 13900K: 12.39, 13600K A: 11.35

JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, more is better): i5-13600K: 11.75, 13900K R: 12.71, 13900K: 12.67, 13600K A: 11.66

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 0 (seconds, fewer is better): i5-13600K: 100.49, 13900K R: 87.79, 13900K: 87.07, 13600K A: 101.86

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup (milliseconds, fewer is better): 13900K: 4.94, 13600K A: 6.83
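The python_startup benchmark measures how long a fresh CPython interpreter takes to launch and exit, a workload dominated by single-threaded performance. A rough stdlib-only analogue (not pyperformance's actual harness, which uses more careful calibration):

```python
import subprocess
import sys
import time

def startup_ms(runs=5):
    """Time how long a fresh interpreter takes to start, run `pass`, and exit."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run([sys.executable, "-c", "pass"], check=True)
        samples.append((time.perf_counter() - t0) * 1000.0)
    return min(samples)  # the minimum filters out scheduler noise

print(f"python startup: {startup_ms():.2f} ms")
```

Taking the minimum of several runs is a common way to approximate the noise-free cost of a short operation like interpreter startup.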

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: CPU (events per second, more is better): 13900K: 105916.06, 13600K A: 58743.01
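Sysbench's CPU test reports events per second, where each event verifies primes up to a configurable limit by trial division. A simplified Python sketch of one such event (illustrative only; sysbench itself is C with a LuaJIT driver, and its exact loop differs):

```python
import math

def cpu_event(max_prime=10000):
    """One simplified 'event': trial-division primality checks for
    every integer from 3 up to max_prime; returns the count of primes found."""
    found = 0
    for c in range(3, max_prime + 1):
        for i in range(2, math.isqrt(c) + 1):
            if c % i == 0:
                break
        else:  # no divisor found: c is prime
            found += 1
    return found

print(cpu_event(100))  # 24 primes between 3 and 100
```

Because every event does the same fixed amount of arithmetic, the events/sec figure scales almost linearly with core count and clock speed, which is why the 13900K roughly doubles the 13600K A here.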

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, more is better): 13900K R: 88.22, 13900K: 88.36, 13600K A: 74.19

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. The sample scene used is the Teapot scene ray-traced to 8K x 8K with 32 samples. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99.2 - Total Time (Seconds, fewer is better):
    13900K R: 88.24, 13900K: 87.91, 13600K A: 81.36
    1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test (Seconds, fewer is better):
    i5-13600K: 107.65, 13900K R: 99.24, 13900K: 104.57, 13600K A: 14.08

JPEG XL libjxl

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 90 (MP/s, more is better):
    i5-13600K: 11.26, 13900K R: 12.17, 13900K: 12.14, 13600K A: 11.18
    1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
    i5-13600K: 9.02, 13900K R: 9.77, 13900K: 9.81, 13600K A: 9.06
    1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

JPEG XL libjxl

JPEG XL libjxl 0.7 - Input: PNG - Quality: 90 (MP/s, more is better):
    i5-13600K: 11.59, 13900K R: 12.53, 13900K: 12.54, 13600K A: 11.51
    1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Stockfish

This is a test of Stockfish, an advanced open-source C++ chess engine that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 15 - Total Time (Nodes Per Second, more is better):
    i5-13600K: 39889963, 13900K R: 40134926, 13900K: 46604368, 13600K A: 40974724
    1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

Blender

Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better):
    13900K R: 71.17, 13900K: 71.12, 13600K A: 78.65

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, fewer is better):
    i5-13600K: 76.60, 13900K R: 72.42, 13900K: 70.76, 13600K A: 76.29

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean; more is better):
    13900K R: 256.04 (MIN: 19.45 / MAX: 12000), 13900K: 258.67 (MIN: 20.42 / MAX: 20000), 13600K A: 233.95 (MIN: 19.19 / MAX: 20000)
    1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean; more is better):
    13900K R: 260.69 (MIN: 19.6 / MAX: 30000), 13900K: 258.53 (MIN: 19.7 / MAX: 15000), 13600K A: 235.86 (MIN: 18.93 / MAX: 30000)
    1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean; more is better):
    13900K R: 244.03 (MIN: 19.19 / MAX: 12000), 13900K: 247.91 (MIN: 19.5 / MAX: 30000), 13600K A: 215.54 (MIN: 17.58 / MAX: 15000)
    1. ClickHouse server version 22.5.4.19 (official build).
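As the ClickHouse description notes, the reported value is a geometric mean across all queries. The geometric mean is the n-th root of the product of n values, which keeps one extremely fast or slow query from dominating the summary. A quick sketch with illustrative per-query figures (the actual per-query numbers behind the results above are not part of this export):

```python
from statistics import geometric_mean

# Illustrative per-query throughput figures in queries per minute.
per_query_qpm = [120.0, 480.0, 240.0]

# Cube root of 120 * 480 * 240 -- a skew-resistant summary value.
print(geometric_mean(per_query_qpm))
```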

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark ALS (ms, fewer is better):
    i5-13600K: 2104.0 (MIN: 2042.77 / MAX: 2160.8), 13900K R: 2271.9 (MIN: 2209.75 / MAX: 2350.07), 13900K: 2240.8 (MIN: 2179.9 / MAX: 2313), 13600K A: 2118.3 (MIN: 2027.89 / MAX: 2219.93)

Renaissance 0.14 - Test: Apache Spark PageRank (ms, fewer is better):
    i5-13600K: 2158.3 (MIN: 1928.09 / MAX: 2193.86), 13900K R: 2082.1 (MIN: 1917.66 / MAX: 2144), 13900K: 2088.6 (MIN: 1926.76 / MAX: 2153.25), 13600K A: 2152.0 (MIN: 1948.57 / MAX: 2198.38)

AOM AV1

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
    i5-13600K: 0.29, 13900K R: 0.31, 13900K: 0.31, 13600K A: 0.30
    1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, fewer is better):
    13900K: 3020.10 (MIN: 2516.37 / MAX: 3733.4), 13600K A: 2045.68 (MIN: 1719.02 / MAX: 2706.1)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, more is better):
    13900K: 2.60, 13600K A: 2.39
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, fewer is better):
    13900K: 2927.80 (MIN: 2449.49 / MAX: 3630.7), 13600K A: 2047.62 (MIN: 1754.76 / MAX: 2684.24)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, more is better):
    13900K: 2.68, 13600K A: 2.39
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
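The latency and FPS pairs above can look inconsistent at first glance (a roughly two-second average latency yet about 2.4 FPS). OpenVINO's benchmark keeps multiple inference requests in flight, so throughput multiplied by per-request latency gives the implied concurrency. A rough sketch using the 13600K A FP32 numbers; the stream count is inferred, not reported in this file:

```python
def implied_concurrency(fps: float, latency_ms: float) -> float:
    """Number of in-flight requests implied by throughput x per-request latency."""
    return fps * latency_ms / 1000.0

# 13600K A, Person Detection FP32: 2.39 FPS at 2045.68 ms average latency,
# suggesting roughly five concurrent inference requests.
print(round(implied_concurrency(2.39, 2045.68), 1))
```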

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better):
    i5-13600K: 2.436, 13900K R: 2.514, 13900K: 2.535, 13600K A: 2.426
    1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: NASNet Mobile (Microseconds, fewer is better):
    13900K R: 246599.0, 13900K: 61183.5, 13600K A: 196543.0

OpenVINO

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, fewer is better):
    13900K: 1926.16 (MIN: 1766.06 / MAX: 2069.19), 13600K A: 1544.52 (MIN: 1481.77 / MAX: 1650.78)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, more is better):
    13900K: 4.13, 13600K A: 3.18
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Perl Benchmarks

This is a Perl benchmark suite that can be used to compare the relative speed of different versions of Perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks - Test: Pod2html (Seconds, fewer is better):
    13900K R: 0.05562468, 13900K: 0.05538045, 13600K A: 0.06179302

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, more is better):
    13900K R: 31.28, 13900K: 31.36, 13600K A: 24.58

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 25.0 - Time To Compile (Seconds, fewer is better):
    13900K R: 63.18, 13900K: 63.53

TensorFlow Lite

TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2 (Microseconds, fewer is better):
    13900K R: 591010, 13900K: 327922, 13600K A: 259280

OpenVINO

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better):
    13900K: 592.96 (MIN: 334.92 / MAX: 1081.53), 13600K A: 387.90 (MIN: 290.34 / MAX: 782.61)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better):
    13900K: 13.41, 13600K A: 12.87
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better):
    i5-13600K: 0.89989, 13900K R: 0.83002, 13900K: 0.82764, 13600K A: 0.90061
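NAMD reports days/ns, i.e. how many days of wall-clock time one nanosecond of simulated time costs; its reciprocal is the more familiar ns/day figure. A quick conversion using the values above:

```python
def ns_per_day(days_per_ns: float) -> float:
    """Convert NAMD's days/ns metric to nanoseconds simulated per day."""
    return 1.0 / days_per_ns

# Values from the ATPase simulation results above.
for name, d in {"13900K": 0.82764, "13600K A": 0.90061}.items():
    print(f"{name}: {ns_per_day(d):.3f} ns/day")
```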

OpenVINO

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better):
    13900K: 177.35 (MIN: 136.29 / MAX: 238.95), 13600K A: 112.40 (MIN: 91.63 / MAX: 160.94)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better):
    13900K: 45.07, 13600K A: 44.39
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

TensorFlow Lite

TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds, fewer is better):
    13900K R: 31966.2, 13900K: 27731.2, 13600K A: 26166.3

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, more is better):
    13900K: 3.075, 13600K A: 2.645

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, more is better):
    13900K: 7.932, 13600K A: 7.541

TensorFlow Lite

TensorFlow Lite 2022-05-18 - Model: SqueezeNet (Microseconds, fewer is better):
    13900K R: 2282.79, 13900K: 2100.04, 13600K A: 1901.72

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, fewer is better):
    13900K R: 1553.51, 13900K: 1396.39, 13600K A: 1450.81

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, fewer is better):
    13900K R: 2007.81, 13900K: 2080.66, 13600K A: 2420.75

OpenVINO

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better):
    13900K: 13.51 (MIN: 9.52 / MAX: 25.42), 13600K A: 9.36 (MIN: 7.85 / MAX: 16.55)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better):
    13900K: 591.22, 13600K A: 533.42
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better):
    13900K: 77.00 (MIN: 43.7 / MAX: 99.09), 13600K A: 47.46 (MIN: 24.39 / MAX: 65.06)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better):
    13900K: 311.28, 13600K A: 294.41
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better):
    13900K: 11.06 (MIN: 6.09 / MAX: 56.79), 13600K A: 7.98 (MIN: 6.36 / MAX: 17.98)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better):
    13900K: 722.55, 13600K A: 625.88
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better):
    13900K: 27.19 (MIN: 14.86 / MAX: 50.77), 13600K A: 15.57 (MIN: 11.78 / MAX: 28.06)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better):
    13900K: 293.60, 13600K A: 320.78
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better):
    13900K: 21.92 (MIN: 12.55 / MAX: 43.63), 13600K A: 12.47 (MIN: 9.36 / MAX: 29.42)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better):
    13900K: 1092.36, 13600K A: 1116.36
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better):
    13900K: 0.94 (MIN: 0.52 / MAX: 3.79), 13600K A: 0.67 (MIN: 0.51 / MAX: 3.31)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better):
    13900K: 25289.52, 13600K A: 20877.33
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read Random Write Random (Op/s, more is better):
    13900K: 2739061, 13600K A: 2621070
    1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB 7.5.3 - Test: Update Random (Op/s, more is better):
    13900K: 690788, 13600K A: 536712
    1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better):
    13900K: 2.36 (MIN: 1.3 / MAX: 4), 13600K A: 1.50 (MIN: 1.1 / MAX: 3.03)
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better):
    13900K: 10149.22, 13600K A: 9303.66
    1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Facebook RocksDB

Facebook RocksDB 7.5.3 - Test: Read While Writing (Op/s, more is better):
    13900K: 3700703, 13600K A: 2989367
    1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, more is better):
    13900K: 105171919, 13600K A: 114117881
    1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build that enables all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Build: defconfig (Seconds, fewer is better):
    i5-13600K: 61.53, 13900K R: 58.25, 13900K: 58.31, 13600K A: 61.89

Renaissance

Renaissance 0.14 - Test: Savina Reactors.IO (ms, fewer is better):
    i5-13600K: 4473.7 (MAX: 5993.22), 13900K R: 4358.2 (MIN: 4358.19 / MAX: 6116.36), 13900K: 4406.8 (MIN: 4406.77 / MAX: 6186.68), 13600K A: 4396.7 (MAX: 5845.73)

Renaissance 0.14 - Test: Genetic Algorithm Using Jenetics + Futures (ms, fewer is better):
    i5-13600K: 1226.9 (MIN: 1201.44 / MAX: 1243.4), 13900K R: 1078.5 (MIN: 1048.78 / MAX: 1097.69), 13900K: 1066.8 (MIN: 1025.04 / MAX: 1095.71), 13600K A: 1173.1 (MIN: 1080.27 / MAX: 1197.68)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
    13900K R: 1410.88, 13900K: 1420.52, 13600K A: 811.90

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
    13900K R: 8.4755, 13900K: 8.4134, 13600K A: 8.4939

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
    13900K R: 1414.79, 13900K: 1420.81, 13600K A: 813.25

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
    13900K R: 8.4399, 13900K: 8.4076, 13600K A: 8.4456

Perl Benchmarks

Perl Benchmarks - Test: Interpreter (Seconds, fewer is better):
    13900K R: 0.00052042, 13900K: 0.00051403, 13600K A: 0.00280923

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
    13900K R: 320.19, 13900K: 317.81, 13600K A: 192.12

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
    13900K R: 37.42, 13900K: 37.63, 13600K A: 36.38

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program where available and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Pistol (Seconds, fewer is better):
    13900K R: 50.34, 13900K: 50.41, 13600K A: 55.59
    1. OpenSCAD version 2021.01

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better):
    13900K R: 22.58, 13900K: 23.07, 13600K A: 21.31

spaCy

spaCy is an open-source Python library for advanced natural language processing (NLP). This test profile times spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec, more is better):
    13900K R: 2236, 13900K: 2239, 13600K A: 1557

spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, more is better):
    13900K R: 20568, 13900K: 20762, 13600K A: 18828

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
    13900K R: 339.51, 13900K: 341.34, 13600K A: 191.37

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
    13900K R: 35.11, 13900K: 35.00, 13600K A: 36.53

libavif avifenc

libavif avifenc 0.11 - Encoder Speed: 2 (Seconds, fewer is better):
    i5-13600K: 48.07, 13900K R: 42.80, 13900K: 42.53, 13600K A: 47.88
    1. (CXX) g++ options: -O3 -fPIC -lm

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, more is better):
    13900K R: 89.89, 13900K: 90.18, 13600K A: 70.82

AOM AV1

AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
    i5-13600K: 16.65, 13900K R: 18.14, 13900K: 18.13, 13600K A: 16.88
    1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, fewer is better):
    13900K R: 43.62, 13900K: 43.35, 13600K A: 45.58

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, more is better):
    13900K R: 1.2915, 13900K: 1.2845, 13600K A: 1.0302
    1. (CXX) g++ options: -O3 -flto -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
    13900K R: 137.70, 13900K: 137.78, 13600K A: 96.68

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
    13900K R: 86.86, 13900K: 87.03, 13600K A: 72.26

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
    13900K R: 122.75, 13900K: 123.64, 13600K A: 121.98

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better):
    13900K R: 8.1464, 13900K: 8.0876, 13600K A: 8.1978

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
    13900K R: 123.46, 13900K: 124.01, 13600K A: 122.99

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better):
    13900K R: 8.0998, 13900K: 8.0636, 13600K A: 8.1309

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
    13900K R: 31.15, 13900K: 31.11, 13600K A: 33.53

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better):
    13900K R: 32.10, 13900K: 32.14, 13600K A: 29.82
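For the synchronous single-stream scenarios above, items/sec is effectively the reciprocal of the per-batch latency, since one item is processed at a time. Checking that against the oBERT document-classification numbers:

```python
def items_per_sec(ms_per_batch: float) -> float:
    """Single-stream throughput implied by per-batch latency in milliseconds."""
    return 1000.0 / ms_per_batch

# 13900K R, oBERT base uncased on IMDB, synchronous single-stream:
# 122.75 ms/batch is reported alongside 8.1464 items/sec, and the
# reciprocal lands within rounding of that figure.
print(round(items_per_sec(122.75), 4))
```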

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test profile covers encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 (MP/s, more is better):
    i5-13600K: 63.16, 13900K R: 68.10, 13900K: 68.39, 13600K A: 62.52

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
    13900K R: 38.87, 13900K: 38.43

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, fewer is better):
    13900K R: 49.29, 13900K: 49.00, 13600K A: 43.93
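Since the description notes a roughly three-minute recording, these times can also be read as a real-time factor: processing time divided by audio duration, where below 1.0 means faster than real time. A sketch, with the 180-second audio length being an assumption derived from "roughly three minute" rather than a value in this file:

```python
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """Real-time factor: < 1.0 means transcription runs faster than playback."""
    return processing_seconds / audio_seconds

# 13600K A processed the clip in 43.93 s; audio length assumed ~180 s.
rtf = real_time_factor(43.93, 180.0)
print(f"{rtf:.2f}x real time")
```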

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better):
    i5-13600K: 4847.6, 13900K R: 5010.5, 13900K: 5032.8, 13600K A: 4849.1
    1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better):
    i5-13600K: 49.8, 13900K R: 47.5, 13900K: 47.0, 13600K A: 51.0
    1. (CC) gcc options: -O3 -pthread -lz -llzma
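The level 19 numbers above show the usual trade-off: compression in the tens of MB/s while decompression stays in the GB/s range. Depending on the Python version, Zstandard bindings may not be available in the standard library, so as a stand-in the same effort-versus-output idea can be sketched with the stdlib lzma module (the data and presets here are illustrative, not related to the FreeBSD image used by the benchmark):

```python
import lzma

data = b"FreeBSD disk image stand-in " * 4096  # highly compressible sample

fast = lzma.compress(data, preset=0)  # low effort
slow = lzma.compress(data, preset=9)  # high effort, typically smaller output

# Round-trip integrity check, then compare sizes to see the effort trade-off.
assert lzma.decompress(slow) == data
print(len(data), len(fast), len(slow))
```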

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, more is better):
    13900K R: 204.24, 13900K: 204.28, 13600K A: 150.15

Timed Erlang/OTP Compilation

13600K A: The test quit with a non-zero exit status. E: # A fatal error has been detected by the Java Runtime Environment:

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): i5-13600K: 14.67 | 13900K R: 16.02 | 13900K: 15.89 | 13600K A: 14.76. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): 13900K R: 228.57 | 13900K: 228.34 | 13600K A: 129.15

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): 13900K R: 52.31 | 13900K: 52.40 | 13600K A: 54.18

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 2.3 - Time To Compile (Seconds, Fewer Is Better): 13900K R: 38.80 | 13900K: 38.42 | 13600K A: 39.97. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better): i5-13600K: 18.75 (min 17.66 / max 19.42) | 13900K R: 26.08 (min 24.12 / max 28.46) | 13900K: 26.03 (min 23.96 / max 28.42) | 13600K A: 18.40 (min 17.47 / max 18.88)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): 13900K R: 111.28 | 13900K: 111.47 | 13600K A: 64.71

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): 13900K R: 107.73 | 13900K: 107.38 | 13600K A: 107.82

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13900K R: 14.20 | 13900K: 14.13 | 13600K A: 16.78

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13900K R: 70.40 | 13900K: 70.77 | 13600K A: 59.58

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, Fewer Is Better): 13900K: 207 | 13600K A: 232
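Each PyPerformance result above is the mean time of a small Python workload reported in milliseconds. A minimal sketch of that idea using only the stdlib timeit module (the real suite adds warmup runs, calibration, and separate processes; the workload here is a made-up stand-in, not a PyPerformance benchmark body):

```python
import timeit

def workload() -> int:
    # made-up stand-in for a benchmark body such as nbody or raytrace
    return sum(i * i for i in range(10_000))

# run the workload 100 times per sample, take 5 samples, keep the best
samples = timeit.repeat(workload, number=100, repeat=5)
best_ms = min(samples) / 100 * 1_000  # per-call time in milliseconds
print(f"best of 5 samples: {best_ms:.3f} ms per call")
```

Taking the best of several samples filters out interference from other processes, which is why benchmark harnesses repeat each measurement.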

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better): 13900K R: 35.00 | 13900K: 35.19 | 13600K A: 39.39. RawTherapee, version 5.8, command line.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13900K R: 24.94 | 13900K: 25.08 | 13600K A: 22.13

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13900K R: 40.09 | 13900K: 39.86 | 13600K A: 45.18

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: In-Memory Database Shootout (ms, Fewer Is Better): i5-13600K: 2134.6 (min 1899.08 / max 2195.05) | 13900K R: 2011.3 (min 1818.96 / max 2312.63) | 13900K: 2001.4 (min 1805.63 / max 2241.18) | 13600K A: 2076.3 (min 1870.25 / max 2410.84)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): 13900K R: 12.93 | 13900K: 12.82 | 13600K A: 12.29

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13900K R: 77.29 | 13900K: 77.99 | 13600K A: 81.33

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, Fewer Is Better): 13900K: 21.3 | 13600K A: 24.0

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better): i5-13600K: 20.59 (min 19.51 / max 21.33) | 13900K R: 27.50 (min 25.51 / max 29.58) | 13900K: 27.34 (min 25.47 / max 29.51) | 13600K A: 20.09 (min 18.73 / max 20.73)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s, More Is Better): i5-13600K: 4708.5 | 13900K R: 4907.1 | 13900K: 4930.7 | 13600K A: 4721.1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, More Is Better): i5-13600K: 54.6 | 13900K R: 67.9 | 13900K: 68.9 | 13600K A: 55.3. (CC) gcc options: -O3 -pthread -lz -llzma

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): 13900K R: 25.72 | 13900K: 26.02

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

13600K A: The test quit with a non-zero exit status.

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better): i5-13600K: 5159.3 | 13900K R: 5321.9 | 13900K: 5349.2 | 13600K A: 5157.4. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 3 - Compression Speed (MB/s, More Is Better): i5-13600K: 5893.1 | 13900K R: 6245.4 | 13900K: 6287.2 | 13600K A: 5727.6. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Decompression Speed (MB/s, More Is Better): i5-13600K: 5520.5 | 13900K R: 5664.4 | 13900K: 5673.2 | 13600K A: 5519.2. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Compression Speed (MB/s, More Is Better): i5-13600K: 1044.7 | 13900K R: 1548.5 | 13900K: 1531.2 | 13600K A: 938.9. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better): i5-13600K: 5746.5 | 13900K R: 5887.5 | 13900K: 5910.0 | 13600K A: 5756.9. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better): i5-13600K: 1236.7 | 13900K R: 1292.5 | 13900K: 1296.8 | 13600K A: 1231.0. (CC) gcc options: -O3 -pthread -lz -llzma

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better): 13900K R: 33.54 | 13900K: 33.16 | 13600K A: 37.49. (CC) gcc options: -O2 -lz
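speedtest1 runs a fixed mix of SQLite workloads scaled by the problem size. This hedged sketch shows only the general shape of such a test using the stdlib sqlite3 module: timed bulk inserts wrapped in a single transaction, as speedtest1 does (the table name, schema, and row count here are invented for illustration).

```python
import sqlite3
import time

# in-memory database so only CPU/SQLite work is measured, not disk I/O
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b TEXT)")

start = time.perf_counter()
with con:  # one transaction around the whole batch
    con.executemany(
        "INSERT INTO t VALUES (?, ?)",
        ((i, f"row-{i}") for i in range(100_000)),
    )
elapsed = time.perf_counter() - start

(count,) = con.execute("SELECT COUNT(*) FROM t").fetchone()
print(f"inserted {count} rows in {elapsed:.3f}s")
```

Batching inside one transaction matters: committing per-row would make fsync/journal overhead dominate instead of the CPU work this benchmark targets.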

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty (ms, Fewer Is Better): i5-13600K: 457.4 (min 386.83 / max 876.54) | 13900K R: 427.2 (min 357.58 / max 745.84) | 13900K: 424.7 (min 357.3 / max 748.16) | 13600K A: 449.5 (min 382.14 / max 928.71)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8 - Decompression Speed (MB/s, More Is Better): i5-13600K: 5406.4 | 13900K R: 5583.4 | 13900K: 5589.6 | 13600K A: 5408.0. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 8 - Compression Speed (MB/s, More Is Better): i5-13600K: 1148.2 | 13900K R: 1219.4 | 13900K: 1401.2 | 13600K A: 1182.9. (CC) gcc options: -O3 -pthread -lz -llzma

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): i5-13600K: 21.34 | 13900K R: 22.67 | 13900K: 22.74 | 13600K A: 21.30. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 1B (Seconds, Fewer Is Better): 13900K R: 31.32 | 13900K: 31.25 | 13600K A: 30.60
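Y-Cruncher's actual algorithms (Chudnovsky-style series with large-integer arithmetic) are well beyond a short sketch, but the task itself, computing Pi to a requested number of digits, can be illustrated with Machin's formula pi = 16*arctan(1/5) - 4*arctan(1/239) and the stdlib decimal module:

```python
from decimal import Decimal, getcontext

def atan_inv(x: int, digits: int) -> Decimal:
    """arctan(1/x) via its Taylor series, good to roughly `digits` places."""
    eps = Decimal(10) ** -(digits + 5)
    inv_x2 = Decimal(1) / (x * x)
    power = Decimal(1) / x  # (1/x)**(2k+1), starting at k = 0
    total = Decimal(0)
    k = 0
    while True:
        term = power / (2 * k + 1)
        total += term if k % 2 == 0 else -term
        if term < eps:
            break
        power *= inv_x2
        k += 1
    return total

def pi_digits(digits: int) -> str:
    getcontext().prec = digits + 10  # guard digits against rounding error
    pi = 16 * atan_inv(5, digits) - 4 * atan_inv(239, digits)
    return str(pi)[: digits + 2]  # "3." plus the requested digits

print(pi_digits(50))
```

This naive approach scales poorly; the point of Y-Cruncher is that reaching a billion digits in ~31 seconds requires fast series, FFT-based multiplication, and heavy threading.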

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better): 13900K: 81.1 | 13600K A: 90.4

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better): i5-13600K: 18.65 (min 17.42 / max 19.66) | 13900K R: 23.88 (min 22.95 / max 25.18) | 13900K: 24.15 (min 23.21 / max 25.4) | 13600K A: 18.39 (min 17.16 / max 19.46)

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, Fewer Is Better): i5-13600K: 31.77 | 13900K R: 30.39 | 13900K: 30.01 | 13600K A: 31.32

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.7 (k/s, More Is Better): 13900K R: 50505.52 | 13900K: 50724.20 | 13600K A: 40912.36. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better): 13900K: 156 | 13600K A: 172

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better): i5-13600K: 20.18 (min 19.07 / max 20.82) | 13900K R: 28.34 (min 26.01 / max 30.55) | 13900K: 28.37 (min 25.94 / max 30.73) | 13600K A: 20.43 (min 19.38 / max 20.88)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, Fewer Is Better): i5-13600K: 1918.5 (min 1787.45 / max 2020.03) | 13900K R: 2561.8 (min 2392.59 / max 2600.24) | 13900K: 2518.1 (min 2345.07 / max 2554.71) | 13600K A: 1895.8 (min 1759.76 / max 2097.29)

Renaissance 0.14 - Test: Apache Spark Bayes (ms, Fewer Is Better): i5-13600K: 901.6 (min 652.2 / max 901.64) | 13900K R: 847.0 (min 629.37 / max 847.01) | 13900K: 854.3 (min 627.97 / max 854.34) | 13600K A: 896.8 (min 656.87 / max 896.82)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better): 13900K: 8.71 | 13600K A: 9.72

Coremark

This is a test of EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better): i5-13600K: 642432.68 | 13900K R: 712804.56 | 13900K: 709677.42 | 13600K A: 645300.06. (CC) gcc options: -O2 -lrt" -lrt

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better): 13900K: 50.0 | 13600K A: 56.1

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better): 13900K: 114 | 13600K A: 128

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better): i5-13600K: 22.63 (min 21.12 / max 23.45) | 13900K R: 30.64 (min 28.62 / max 32.96) | 13900K: 30.51 (min 28.47 / max 32.55) | 13600K A: 23.09 (min 21.7 / max 23.76)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better): 13900K: 46.3 | 13600K A: 51.4

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better): 13900K: 47.1 | 13600K A: 51.6

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Thorough (MT/s, More Is Better): 13900K R: 12.23 | 13900K: 12.37 | 13600K A: 10.13. (CXX) g++ options: -O3 -flto -pthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better): i5-13600K: 17.19 (min 16.19 / max 18.01) | 13900K R: 22.90 (min 22.12 / max 24.12) | 13900K: 23.22 (min 22.43 / max 24.28)

Binary: Pathtracer - Model: Crown

13600K A: The test quit with a non-zero exit status.

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, More Is Better): 13900K R: 179.44 | 13900K: 179.09 | 13600K A: 133.92

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program where available and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Mini-ITX Case (Seconds, Fewer Is Better): 13900K R: 22.05 | 13900K: 21.99 | 13600K A: 24.55. OpenSCAD version 2021.01

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better): 13900K: 192 | 13600K A: 216

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better): 13900K: 12.1 | 13600K A: 13.4

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better): 13900K R: 92.71 | 13900K: 92.66 | 13600K A: 70.89

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): i5-13600K: 7.145 | 13900K R: 7.597 | 13900K: 7.607 | 13600K A: 7.151. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18 - Implementation: OpenMP (ms, Fewer Is Better): i5-13600K: 27822 | 13900K R: 16531 | 13900K: 16504 | 13600K A: 27832. (CXX) g++ options: -O3 -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): i5-13600K: 0.89 | 13900K R: 0.93 | 13900K: 0.93 | 13600K A: 0.90. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, Fewer Is Better): i5-13600K: 421.8 (min 383.07 / max 499.39) | 13900K R: 387.1 (min 360.05 / max 452.67) | 13900K: 390.4 (min 361.83 / max 476.17) | 13600K A: 419.9 (min 381.62 / max 498.68)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better): 13900K: 60.5 | 13600K A: 66.5

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better): i5-13600K: 25.68 | 13900K R: 30.83 | 13900K: 30.83 | 13600K A: 24.85. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18 - Implementation: TBB (ms, Fewer Is Better): i5-13600K: 25598 | 13900K R: 16915 | 13900K: 15253 | 13600K A: 26907. (CXX) g++ options: -O3 -lpthread

toyBrot Fractal Generator 2020-11-18 - Implementation: C++ Tasks (ms, Fewer Is Better): i5-13600K: 25596 | 13900K R: 15541 | 13900K: 15539 | 13600K A: 25721. (CXX) g++ options: -O3 -lpthread

toyBrot Fractal Generator 2020-11-18 - Implementation: C++ Threads (ms, Fewer Is Better): i5-13600K: 25427 | 13900K R: 15256 | 13900K: 15293 | 13600K A: 25488. (CXX) g++ options: -O3 -lpthread
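toyBrot's point is comparing threading backends, but the per-pixel work is the same in each: iterate z -> z^2 + c and count steps until |z| escapes. A tiny single-threaded Python sketch of that inner loop (the grid size and escape limit here are arbitrary; the real generator parallelises the image across threads/tasks):

```python
def mandelbrot_iters(c: complex, limit: int = 100) -> int:
    """Iterations of z -> z*z + c before |z| > 2, capped at `limit`."""
    z = 0j
    for n in range(limit):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return limit  # never escaped: treat the point as inside the set

# render a coarse ASCII view; '#' marks points that never escaped
rows = []
for y in range(-10, 11):
    rows.append("".join(
        "#" if mandelbrot_iters(complex(x / 10, y / 10)) == 100 else "."
        for x in range(-20, 11)
    ))
print("\n".join(rows))
```

Because every pixel is independent, the workload is embarrassingly parallel, which is why it makes a clean comparison of OpenMP, TBB, std::thread, and task-based backends.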

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship (FPS, More Is Better): 13900K: 5.9 | 13600K A: 4.8

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 3 - Resolution: 4K (FPS, More Is Better): i5-13600K: 1.64 | 13900K R: 2.19 | 13900K: 2.20 | 13600K A: 1.64. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 5 - Resolution: 4K (FPS, More Is Better): i5-13600K: 0.47 | 13900K R: 0.68 | 13900K: 0.68 | 13600K A: 0.47. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 2 - Resolution: 4K (FPS, More Is Better): i5-13600K: 1.97 | 13900K R: 2.67 | 13900K: 2.68 | 13600K A: 1.98. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 1 - Resolution: 4K (FPS, More Is Better): i5-13600K: 6.79 | 13900K R: 9.16 | 13900K: 9.16 | 13600K A: 6.73. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, Fewer Is Better): i5-13600K: 2059 | 13900K R: 1828 | 13900K: 1960

Java Test: Tradesoap

13600K A: The test quit with a non-zero exit status. E: # A fatal error has been detected by the Java Runtime Environment:

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 5 - Resolution: 1080p (FPS, More Is Better): i5-13600K: 1.92 | 13900K R: 2.68 | 13900K: 2.68 | 13600K A: 1.93. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 2 - Resolution: 1080p (FPS, More Is Better): i5-13600K: 7.64 | 13900K R: 10.06 | 13900K: 10.09 | 13600K A: 7.65. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 3 - Resolution: 1080p (FPS, More Is Better): i5-13600K: 6.38 | 13900K R: 8.48 | 13900K: 8.51 | 13600K A: 6.41. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 1 - Resolution: 1080p (FPS, More Is Better): i5-13600K: 25.96 | 13900K R: 34.37 | 13900K: 34.54 | 13600K A: 26.15. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: All (MP/s, More Is Better): i5-13600K: 330.95 | 13900K R: 424.09 | 13900K: 426.96 | 13600K A: 326.03

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Fast (MT/s, More Is Better): 13900K R: 259.70 | 13900K: 261.48 | 13600K A: 205.19. (CXX) g++ options: -O3 -flto -pthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, More Is Better): 13900K R: 137.61 | 13900K: 137.79 | 13600K A: 111.70

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better): i5-13600K: 39.41 | 13900K R: 39.70 | 13900K: 44.68 | 13600K A: 39.29. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better): 13900K R: 14.23 | 13900K: 14.06 | 13600K A: 14.36

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better): i5-13600K: 53.87 | 13900K R: 57.14 | 13900K: 57.16 | 13600K A: 54.11. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 8.0 - Length: 1e12 (Seconds, Fewer Is Better): 13900K R: 13.69 | 13900K: 13.61 | 13600K A: 16.74. (CXX) g++ options: -O3
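Primesieve's segmented, cache-tuned sieve is what makes it an L1/L2 benchmark, but the underlying algorithm is the classic sieve of Eratosthenes. A plain-Python sketch of that core idea (a limit of 1e12 as in the test above is far out of reach for this naive version):

```python
def sieve(limit: int) -> list[int]:
    """Return all primes <= limit via the sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # cross off multiples from p*p upward; smaller multiples
            # were already crossed off by smaller primes
            is_prime[p * p :: p] = bytes(len(range(p * p, limit + 1, p)))
    return [n for n, flag in enumerate(is_prime) if flag]

primes = sieve(100)
print(len(primes), primes[:10])  # 25 primes up to 100
```

A segmented sieve processes the range in cache-sized chunks instead of one huge array, which is why Primesieve stresses L1/L2 cache rather than main memory.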

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Build Configuration: Default (Seconds, Fewer Is Better): 13900K R: 14.71 | 13900K: 14.36 | 13600K A: 14.89

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program where available, while on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: resize (Seconds, Fewer Is Better): 13900K R: 13.49 | 13900K: 13.58 | 13600K A: 13.78

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better): 13900K: 1600324 | 13600K A: 1422451

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better): i5-13600K: 44.76 | 13900K R: 51.52 | 13900K: 53.38 | 13600K A: 44.28. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program where available, while on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: unsharp-mask (Seconds, Fewer Is Better): 13900K R: 12.33 | 13900K: 12.35 | 13600K A: 12.51

FLAC Audio Encoding

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, Fewer Is Better): 13900K R: 11.48 | 13900K: 11.46 | 13600K A: 12.62. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better): 13900K R: 159.98 (min 153.41 / max 175.92) | 13900K: 159.65 (min 152.69 / max 177.88) | 13600K A: 180.97 (min 168.23 / max 207.33). (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better): 13900K: 468 | 13600K A: 519

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, Fewer Is Better): 13900K: 10.69 | 13600K A: 11.92. (CXX) g++ options: -rdynamic

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better): i5-13600K: 54.46 | 13900K R: 62.88 | 13900K: 63.61 | 13600K A: 54.69. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program where available, while on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: auto-levels (Seconds, Fewer Is Better): 13900K R: 10.67 | 13900K: 10.74 | 13600K A: 11.00

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (frames per second, more is better): i5-13600K: 66.16, 13900K R: 58.43, 13900K: 58.98, 13600K A: 66.61. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30, Test: rotate (seconds, fewer is better): 13900K R: 10.080, 13900K: 9.943, 13600K A: 10.462

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better): 13900K R: 129.18 (min 125 / max 136.73), 13900K: 127.00 (min 125.33 / max 130.32), 13600K A: 143.16 (min 140.38 / max 147.39). 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also provides pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Batch Size: 512 - Model: ResNet-50

13600K A: The test quit with a non-zero exit status.

13900K: The test quit with a non-zero exit status.

13900K R: The test quit with a non-zero exit status.

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD, Render: Leonardo Phone Case Slim (seconds, fewer is better): 13900K R: 8.826, 13900K: 8.758, 13600K A: 9.813. 1. OpenSCAD version 2021.01

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 7 - Input: Bosphorus 4K (frames per second, more is better): i5-13600K: 67.45, 13900K R: 77.52, 13900K: 75.22, 13600K A: 67.54. 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0, Preset: Medium (MT/s, more is better): 13900K R: 93.94, 13900K: 94.59, 13600K A: 76.96. 1. (CXX) g++ options: -O3 -flto -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (frames per second, more is better): i5-13600K: 82.86, 13900K R: 77.17, 13900K: 78.47, 13600K A: 84.58. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.5, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (frames per second, more is better): i5-13600K: 84.44, 13900K R: 78.23, 13900K: 79.48, 13600K A: 86.96. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.5, Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 71.99, 13900K R: 87.81, 13900K: 93.76, 13600K A: 70.49. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: Visual Quality Optimized - Input: Bosphorus 4K (frames per second, more is better): i5-13600K: 84.51, 13900K R: 96.52, 13900K: 96.68, 13600K A: 83.90. 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K input options, measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 81.56, 13900K R: 86.34, 13900K: 88.26, 13600K A: 81.84. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: VMAF Optimized - Input: Bosphorus 4K (frames per second, more is better): i5-13600K: 96.44, 13900K R: 98.59, 13900K: 96.73, 13600K A: 94.98. 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 6, Lossless (seconds, fewer is better): i5-13600K: 7.280, 13900K R: 6.669, 13900K: 6.921, 13600K A: 7.342. 1. (CXX) g++ options: -O3 -fPIC -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (frames per second, more is better): i5-13600K: 102.29, 13900K R: 98.54, 13900K: 118.33, 13600K A: 100.80. 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test (requests per second, more is better): i5-13600K: 15771, 13900K R: 16328, 13900K: 16721, 13600K A: 15871

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 10 - Input: Bosphorus 4K (frames per second, more is better): i5-13600K: 108.30, 13900K R: 123.16, 13900K: 123.34, 13600K A: 106.42. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Tradebeans (msec, fewer is better): i5-13600K: 1778, 13900K R: 1774, 13900K: 1710, 13600K A: 1785

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100, WAV To MP3 (seconds, fewer is better): 13900K R: 4.837, 13900K: 4.898, 13600K A: 5.407. 1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 6.4.0 (seconds, fewer is better): 13900K R: 5.020, 13900K: 4.967, 13600K A: 4.995

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20, Test: RAM / Memory (MiB/sec, more is better): 13900K: 20515.63, 13600K A: 21115.29. 1. (CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm
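The MiB/sec figure in the memory sub-test is simply bytes moved divided by elapsed wall-clock time. A rough single-threaded Python sketch of that calculation follows; it is not Sysbench's actual implementation (which is multi-threaded C driven by LuaJIT), and the block and total sizes here are arbitrary illustrative values.

```python
import time

BLOCK_MIB = 1            # size of each write, in MiB
TOTAL_MIB = 256          # total data to move

block = 1024 * 1024 * BLOCK_MIB
buf = bytearray(block)
chunk = b"\xff" * block

start = time.perf_counter()
for _ in range(TOTAL_MIB // BLOCK_MIB):
    buf[:] = chunk       # one 1 MiB memory write per iteration
elapsed = time.perf_counter() - start

# Throughput = data moved / time taken, matching the table's unit.
throughput = TOTAL_MIB / elapsed
print(f"{throughput:.2f} MiB/sec")
```

A Python loop like this will report far lower numbers than Sysbench's figures above, since interpreter overhead dominates; the point is only to show how the metric is derived.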

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 4K (frames per second, more is better): i5-13600K: 135.26, 13900K R: 131.20, 13900K: 158.14, 13600K A: 135.50. 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1, WAV To Opus Encode (seconds, fewer is better): 13900K R: 4.413, 13900K: 4.500, 13600K A: 4.916. 1. (CXX) g++ options: -fvisibility=hidden -logg -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 6 (seconds, fewer is better): i5-13600K: 4.963, 13900K R: 4.285, 13900K: 4.183, 13600K A: 4.854. 1. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 150.29, 13900K R: 127.65, 13900K: 130.31, 13600K A: 150.46. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD, Render: Projector Mount Swivel (seconds, fewer is better): 13900K R: 4.080, 13900K: 4.078, 13600K A: 4.547. 1. OpenSCAD version 2021.01

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 8 - Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 142.67, 13900K R: 160.80, 13900K: 162.19, 13600K A: 145.76. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.2, Encoder Mode: Preset 12 - Input: Bosphorus 4K (frames per second, more is better): i5-13600K: 160.51, 13900K R: 183.49, 13900K: 185.83, 13600K A: 163.68. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: H2 (msec, fewer is better): i5-13600K: 2124, 13900K R: 1848, 13900K: 1769, 13600K A: 2019

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 10, Lossless (seconds, fewer is better): i5-13600K: 4.048, 13900K R: 3.984, 13900K: 3.913, 13600K A: 4.003. 1. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 179.96, 13900K R: 148.08, 13900K: 150.28, 13600K A: 178.37. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.5, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 184.31, 13900K R: 150.65, 13900K: 153.07, 13600K A: 187.20. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program; on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.8.1, Test: Boat - Acceleration: CPU-only (seconds, fewer is better): 13900K R: 2.621, 13900K: 2.605, 13600K A: 2.875

Darktable 3.8.1, Test: Masskrug - Acceleration: CPU-only (seconds, fewer is better): 13900K R: 2.298, 13900K: 2.289, 13600K A: 2.896

Darktable 3.8.1, Test: Server Room - Acceleration: CPU-only (seconds, fewer is better): 13900K R: 1.803, 13900K: 1.807, 13600K A: 2.134

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22, Video Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 188.71, 13900K R: 225.40, 13900K: 225.84, 13600K A: 190.53. 1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 7 - Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 212.62, 13900K R: 241.35, 13900K: 241.94, 13600K A: 213.14. 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Jython (msec, fewer is better): i5-13600K: 2035, 13900K R: 1792, 13900K: 1738, 13600K A: 2040

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: SqueezeNet v2 (ms, fewer is better): 13900K R: 37.19 (min 36.9 / max 38.34), 13900K: 37.35 (min 36.75 / max 38.65), 13600K A: 41.47 (min 41.1 / max 42.38). 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD, Render: Retro Car (seconds, fewer is better): 13900K R: 2.276, 13900K: 2.278, 13600K A: 2.524. 1. OpenSCAD version 2021.01

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022, Model: Rhodopsin Protein (ns/day, more is better): i5-13600K: 9.729, 13900K R: 11.139, 13900K: 11.126, 13600K A: 9.673. 1. (CXX) g++ options: -O3 -lm -ldl

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 278.80, 13900K R: 349.37, 13900K: 351.31, 13600K A: 279.48. 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 10 - Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 319.60, 13900K R: 361.31, 13900K: 367.25, 13600K A: 313.96. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: VMAF Optimized - Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 313.58, 13900K R: 378.33, 13900K: 382.22, 13600K A: 320.21. 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 341.60, 13900K R: 384.75, 13900K: 413.76, 13600K A: 332.97. 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 438.92, 13900K R: 517.69, 13900K: 520.38, 13600K A: 445.10. 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-AV1

SVT-AV1 1.2, Encoder Mode: Preset 12 - Input: Bosphorus 1080p (frames per second, more is better): i5-13600K: 548.57, 13900K R: 618.47, 13900K: 631.43, 13600K A: 555.60. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program; on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.8.1, Test: Server Rack - Acceleration: CPU-only (seconds, fewer is better): 13900K R: 0.121, 13900K: 0.235, 13600K A: 0.139

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81, AI Chess Performance (nodes per second, more is better): i5-13600K: 1974114, 13900K R: 2194334, 13900K: 2203112, 13600K A: 1974114. 1. (CC) gcc options: -O3 -march=native

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

Java Test: Eclipse

13600K A: The test quit with a non-zero exit status.

i5-13600K: The test quit with a non-zero exit status.

13900K: The test quit with a non-zero exit status.

13900K R: The test quit with a non-zero exit status.

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

Hash: wyhash

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: FarmHash128

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: MeowHash x86_64 AES-NI

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: t1ha0_aes_avx2 x86_64

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: FarmHash32 x86_64 AVX

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: t1ha2_atonce

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: fasthash32

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: Spooky32

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: SHA3-256

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

336 Results Shown

TensorFlow
AI Benchmark Alpha:
  Device AI Score
  Device Training Score
  Device Inference Score
LAMMPS Molecular Dynamics Simulator
Blender
Timed Linux Kernel Compilation
TensorFlow
Appleseed
OpenRadioss
Timed LLVM Compilation
OSPRay
Timed LLVM Compilation
LeelaChessZero:
  BLAS
  Eigen
Timed Node.js Compilation
OSPRay
TensorFlow:
  CPU - 256 - GoogLeNet
  CPU - 512 - AlexNet
Appleseed
TensorFlow
Blender
Appleseed
OpenRadioss
JPEG XL libjxl
OSPRay Studio
JPEG XL libjxl
OSPRay Studio:
  2 - 4K - 32 - Path Tracer
  1 - 4K - 32 - Path Tracer
Blender
OSPRay
Timed CPython Compilation
Primesieve
Renaissance
OpenRadioss
SVT-HEVC
OpenRadioss
OSPRay Studio:
  1 - 4K - 1 - Path Tracer
  2 - 4K - 1 - Path Tracer
Java Gradle Build
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  SqueezeNetV1.0
  resnet-v2-50
  squeezenetv1.1
  mobilenetV3
  nasnet
OSPRay Studio
Renaissance
TensorFlow
OSPRay Studio
asmFish
Timed MrBayes Analysis
TensorFlow
OSPRay Studio:
  3 - 4K - 16 - Path Tracer
  2 - 1080p - 32 - Path Tracer
ONNX Runtime
OSPRay Studio
ONNX Runtime:
  fcn-resnet101-11 - CPU - Standard
  ArcFace ResNet-100 - CPU - Parallel
  GPT-2 - CPU - Parallel
  bertsquad-12 - CPU - Parallel
  ArcFace ResNet-100 - CPU - Standard
  yolov4 - CPU - Parallel
  GPT-2 - CPU - Standard
  bertsquad-12 - CPU - Standard
  yolov4 - CPU - Standard
  super-resolution-10 - CPU - Parallel
  super-resolution-10 - CPU - Standard
OSPRay Studio
OSPRay:
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/ao/real_time
OSPRay Studio
TNN
OSPRay Studio:
  1 - 1080p - 1 - Path Tracer
  2 - 1080p - 1 - Path Tracer
  2 - 1080p - 16 - Path Tracer
  2 - 4K - 16 - Path Tracer
  1 - 1080p - 16 - Path Tracer
  1 - 4K - 16 - Path Tracer
OSPRay
NCNN:
  CPU - FastestDet
  CPU - vision_transformer
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
Blender
JPEG XL libjxl:
  JPEG - 80
  PNG - 80
libavif avifenc
PyPerformance
Sysbench
TensorFlow
Tachyon
OpenRadioss
JPEG XL libjxl
AOM AV1
JPEG XL libjxl
Stockfish
Blender
Timed Godot Game Engine Compilation
ClickHouse:
  100M Rows Web Analytics Dataset, Third Run
  100M Rows Web Analytics Dataset, Second Run
  100M Rows Web Analytics Dataset, First Run / Cold Cache
Renaissance:
  Apache Spark ALS
  Apache Spark PageRank
AOM AV1
OpenVINO:
  Person Detection FP32 - CPU:
    ms
    FPS
  Person Detection FP16 - CPU:
    ms
    FPS
SVT-AV1
TensorFlow Lite
OpenVINO:
  Face Detection FP16 - CPU:
    ms
    FPS
Perl Benchmarks
TensorFlow
Timed Erlang/OTP Compilation
TensorFlow Lite
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
NAMD
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
TensorFlow Lite
IndigoBench:
  CPU - Bedroom
  CPU - Supercar
TensorFlow Lite:
  SqueezeNet
  Mobilenet Float
  Mobilenet Quant
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
Facebook RocksDB:
  Read Rand Write Rand
  Update Rand
OpenVINO:
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
Facebook RocksDB:
  Read While Writing
  Rand Read
Timed Linux Kernel Compilation
Renaissance:
  Savina Reactors.IO
  Genetic Algorithm Using Jenetics + Futures
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Perl Benchmarks
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
OpenSCAD
Node.js V8 Web Tooling Benchmark
spaCy:
  en_core_web_trf
  en_core_web_lg
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
libavif avifenc
TensorFlow
AOM AV1
Timed PHP Compilation
ASTC Encoder
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
JPEG XL Decoding libjxl
Neural Magic DeepSparse
DeepSpeech
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
TensorFlow
SVT-HEVC
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Timed Wasmer Compilation
Embree
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
PyPerformance
RawTherapee
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
Renaissance
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
PyPerformance
Embree
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
Neural Magic DeepSparse
Zstd Compression:
  3 - Decompression Speed
  3 - Compression Speed
  3, Long Mode - Decompression Speed
  3, Long Mode - Compression Speed
  8, Long Mode - Decompression Speed
  8, Long Mode - Compression Speed
SQLite Speedtest
Renaissance
Zstd Compression:
  8 - Decompression Speed
  8 - Compression Speed
AOM AV1
Y-Cruncher
PyPerformance
Embree
Timed Mesa Compilation
Aircrack-ng
PyPerformance
Embree
Renaissance:
  Finagle HTTP Requests
  Apache Spark Bayes
PyPerformance
Coremark
PyPerformance:
  crypto_pyaes
  go
Embree
PyPerformance:
  chaos
  float
ASTC Encoder
Embree
TensorFlow
OpenSCAD
PyPerformance:
  pickle_pure_python
  json_loads
TensorFlow
SVT-AV1
toyBrot Fractal Generator
AOM AV1
Renaissance
PyPerformance
x265
toyBrot Fractal Generator:
  TBB
  C++ Tasks
  C++ Threads
Natron
QuadRay:
  3 - 4K
  5 - 4K
  2 - 4K
  1 - 4K
DaCapo Benchmark
QuadRay:
  5 - 1080p
  2 - 1080p
  3 - 1080p
  1 - 1080p
JPEG XL Decoding libjxl
ASTC Encoder
TensorFlow
AOM AV1
Y-Cruncher
AOM AV1
Primesieve
Timed CPython Compilation
GIMP
PHPBench
x264
GIMP
FLAC Audio Encoding
TNN
PyBench
WavPack Audio Encoding
SVT-AV1
GIMP
AOM AV1
GIMP
TNN
OpenSCAD
SVT-HEVC
ASTC Encoder
AOM AV1:
  Speed 9 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 1080p
SVT-VP9
x265
SVT-VP9
libavif avifenc
SVT-VP9
Node.js Express HTTP Load Test
SVT-AV1
DaCapo Benchmark
LAME MP3 Encoding
GNU Octave Benchmark
Sysbench
SVT-HEVC
Opus Codec Encoding
libavif avifenc
AOM AV1
OpenSCAD
SVT-AV1:
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 4K
DaCapo Benchmark
libavif avifenc
AOM AV1:
  Speed 9 Realtime - Bosphorus 1080p
  Speed 10 Realtime - Bosphorus 1080p
Darktable:
  Boat - CPU-only
  Masskrug - CPU-only
  Server Room - CPU-only
x264
SVT-HEVC
DaCapo Benchmark
TNN
OpenSCAD
LAMMPS Molecular Dynamics Simulator
SVT-VP9
SVT-AV1
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
SVT-HEVC
SVT-AV1
Darktable
TSCP