raptor lake extra

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2210201-PTS-RAPTORLA85
Test categories represented in this result file: Audio Encoding (4 tests), AV1 (3), Chess Test Suite (4), Timed Code Compilation (9), C/C++ Compiler Tests (19), CPU Massive (29), Creator Workloads (30), Cryptography (2), Database Test Suite (3), Encoding (11), Game Development (3), HPC - High Performance Computing (17), Imaging (6), Java (3), Common Kernel Benchmarks (2), Machine Learning (12), Molecular Dynamics (2), MPI Benchmarks (2), Multi-Core (35), Node.js + NPM Tests (2), NVIDIA GPU Compute (4), Intel oneAPI (4), OpenMPI Tests (3), Productivity (2), Programmer / Developer System Benchmarks (14), Python (2), Raytracing (4), Renderers (8), Scientific Computing (4), Server (7), Server CPU Tests (20), Single-Threaded (7), Video Encoding (7), Common Workstation Benchmarks (3).

Result runs (identifier, date run, test duration):
  13600K A  - October 16 2022 - 8 Hours, 56 Minutes
  i5-13600K - October 17 2022 - 2 Hours, 43 Minutes
  13900K    - October 17 2022 - 8 Hours, 19 Minutes
  13900K R  - October 18 2022 - 6 Hours, 1 Minute



raptor lake extra - System Details

13600K A / i5-13600K:
  Processor: Intel Core i5-13600K @ 5.10GHz (14 Cores / 20 Threads)
  Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1720 BIOS)
  Chipset: Intel Device 7aa7
  Memory: 32GB
  Disk: 2000GB Samsung SSD 980 PRO 2TB
  Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
  Audio: Intel Device 7ad0
  Monitor: ASUS VP28U
  Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS: Ubuntu 22.04
  Kernel: 6.0.0-060000rc1daily20220820-generic (x86_64)
  Desktop: GNOME Shell 42.2
  Display Server: X Server 1.21.1.3 + Wayland
  OpenGL: 4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
  Vulkan: 1.3.224
  Compiler: GCC 12.0.1 20220319
  File-System: ext4
  Screen Resolution: 3840x2160

13900K / 13900K R (differences from above):
  Processor: Intel Core i9-13900K @ 4.00GHz (24 Cores / 32 Threads)
  Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (2004 BIOS)
  Disk: 2000GB Samsung SSD 980 PRO 2TB + 2000GB

Kernel Details: Transparent Huge Pages: madvise

Compiler Details (GCC configure flags): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details:
  13600K A: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x107
  i5-13600K: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x107
  13900K: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x108
  13900K R: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x108

Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)

Python Details: Python 3.10.4

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; 13600K A / i5-13600K / 13900K / 13900K R, normalized results ranging 100% to 192%). Tests covered: SVT-HEVC, toyBrot Fractal Generator, OpenRadioss, QuadRay, Embree, Timed MrBayes Analysis, OSPRay, LeelaChessZero, JPEG XL Decoding libjxl, x264, SVT-VP9, Timed LLVM Compilation, Stockfish, x265, LAMMPS Molecular Dynamics Simulator, Java Gradle Build, SVT-AV1, DaCapo Benchmark, TSCP, asmFish, libavif avifenc, Coremark, Zstd Compression, NAMD, Timed Godot Game Engine Compilation, JPEG XL libjxl, Node.js Express HTTP Load Test, Timed Mesa Compilation, Timed Linux Kernel Compilation, AOM AV1, Renaissance.

[Condensed results table: per-test values for the 13600K A, i5-13600K, 13900K, and 13900K R runs across all tests. The individual results are presented test by test below; the complete data set is available from the OpenBenchmarking.org result file 2210201-PTS-RAPTORLA85.]
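The Result Overview percentages above are built from normalized per-test ratios combined with a geometric mean. As a rough sketch (not the Phoronix Test Suite's actual implementation), the overall speedup of the 13900K run over the 13600K A run can be estimated from a few values in this file; lower-is-better results are inverted before averaging.

```python
from math import prod

# (test, 13600K A, 13900K, higher_is_better) - values taken from this result file
results = [
    ("TensorFlow ResNet-50 (images/sec)", 25.15, 29.23, True),
    ("LAMMPS 20k Atoms (ns/day)",          9.825, 11.193, True),
    ("Timed LLVM Compilation, Ninja (s)", 392.84, 344.19, False),
]

# Normalize each test to a ratio where >1.0 means the 13900K is faster,
# then take the geometric mean across tests.
ratios = [(b / a) if hib else (a / b) for _, a, b, hib in results]
geomean = prod(ratios) ** (1 / len(ratios))
print(f"13900K vs 13600K A geometric-mean speedup over {len(ratios)} tests: {geomean:.3f}x")
```

With these three tests the geometric mean lands around a 15% advantage for the 13900K; the full result file averages many more tests.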

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, more is better)
13600K A: 25.15 | 13900K: 29.23 | 13900K R: 29.18
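The images/sec metric is simple batch throughput: images processed divided by wall time. A minimal sketch with hypothetical step counts and timings (not how tf_cnn_benchmarks is implemented internally):

```python
def images_per_sec(batch_size: int, steps: int, elapsed_s: float) -> float:
    """Throughput as reported by TF-style CNN benchmarks: images / wall time."""
    return batch_size * steps / elapsed_s

# Hypothetical example: 100 steps at batch size 256 in ~875.8 s works out to
# ~29.23 images/sec, matching the 13900K's ResNet-50 result above.
print(round(images_per_sec(256, 100, 875.8), 2))
```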

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (score, more is better)
13600K A: 3672 | 13900K: 4550

AI Benchmark Alpha 0.1.2 - Device Training Score (score, more is better)
13600K A: 2310 | 13900K: 2843

AI Benchmark Alpha 0.1.2 - Device Inference Score (score, more is better)
13600K A: 1362 | 13900K: 1707

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, more is better)
13600K A: 9.825 | i5-13600K: 9.802 | 13900K: 11.193 | 13900K R: 11.312
1. (CXX) g++ options: -O3 -lm -ldl
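The ns/day unit expresses how much simulated time the machine advances per day of wall-clock time. A sketch of the conversion, assuming a 1 fs integration timestep (the actual timestep depends on the input deck):

```python
def ns_per_day(timesteps_per_sec: float, timestep_fs: float = 1.0) -> float:
    """Simulated nanoseconds advanced per wall-clock day."""
    fs_per_day = timesteps_per_sec * timestep_fs * 86_400  # femtoseconds simulated per day
    return fs_per_day / 1_000_000  # 1 ns = 1e6 fs

# Hypothetical: ~130.9 timesteps/sec at a 1 fs timestep gives ~11.31 ns/day,
# roughly the 13900K R result above.
print(round(ns_per_day(130.9), 2))
```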

Blender

Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only (seconds, fewer is better)
13600K A: 886.55 | 13900K: 728.20

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel, either in a default configuration (defconfig) for the architecture being tested or with allmodconfig, which builds all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Build: allmodconfig (seconds, fewer is better)
13600K A: 705.56 | i5-13600K: 708.18 | 13900K: 698.37 | 13900K R: 699.22

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, more is better)
13600K A: 80.83 | 13900K: 88.89 | 13900K R: 88.69

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (seconds, fewer is better)
13600K A: 244.53 | 13900K: 221.17

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container (seconds, fewer is better)
13600K A: 494.88 | i5-13600K: 495.03 | 13900K: 418.47 | 13900K R: 415.74

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Unix Makefiles (seconds, fewer is better)
13600K A: 410.84 | i5-13600K: 426.47 | 13900K: 361.30 | 13900K R: 362.95

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time (items per second, more is better)
13600K A: 167.98 | i5-13600K: 167.00 | 13900K: 205.25 | 13900K R: 204.81

Timed LLVM Compilation

Timed LLVM Compilation 13.0 - Build System: Ninja (seconds, fewer is better)
13600K A: 392.84 | i5-13600K: 396.82 | 13900K: 344.19 | 13900K R: 346.59

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (nodes per second, more is better)
13600K A: 1073 | i5-13600K: 1087 | 13900K: 815 | 13900K R: 861
1. (CXX) g++ options: -flto -pthread

LeelaChessZero 0.28 - Backend: Eigen (nodes per second, more is better)
13600K A: 1877 | i5-13600K: 1809 | 13900K: 1711 | 13900K R: 1689
1. (CXX) g++ options: -flto -pthread

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8 - Time To Compile (seconds, fewer is better)
13600K A: 386.51 | 13900K: 314.44 | 13900K R: 315.73

OSPRay

OSPRay 2.10 - Benchmark: particle_volume/scivis/real_time (items per second, more is better)
13600K A: 6.79439 | i5-13600K: 6.77171 | 13900K: 8.35096 | 13900K R: 8.25976

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, more is better)
13600K A: 78.38 | 13900K: 88.34 | 13900K R: 88.10

TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, more is better)
13600K A: 186.11 | 13900K: 229.33 | 13900K R: 227.80

Appleseed

Appleseed 2.0 Beta - Scene: Disney Material (seconds, fewer is better)
13600K A: 125.70 | 13900K: 133.91

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, more is better)
13600K A: 24.46 | 13900K: 29.46 | 13900K R: 29.42

Blender

Blender 3.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (seconds, fewer is better)
13600K A: 271.14 | 13900K: 240.62

Appleseed

Appleseed 2.0 Beta - Scene: Material Tester (seconds, fewer is better)
13600K A: 132.88 | 13900K: 121.88

OpenRadioss

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (seconds, fewer is better): 13600K A: 262.82 | i5-13600K: 254.41 | 13900K: 241.69 | 13900K R: 236.27

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation image capabilities, with JPEG XL offering better quality and compression than legacy JPEG. This test profile currently focuses on multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 100 (MP/s, more is better): 13600K A: 0.96 | i5-13600K: 0.95 | 13900K: 1.00 | 13900K R: 1.00 [g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic]
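The MP/s figures above are megapixels of input image processed per second of encode time. As a quick illustration of the metric (a hypothetical helper, not part of the test profile):

```python
def encode_rate_mps(width: int, height: int, seconds: float) -> float:
    # Megapixels of input processed per second of wall-clock encode time.
    return (width * height) / 1_000_000 / seconds

# At roughly 0.96 MP/s (the quality-100 result above), a single
# 3840x2160 frame (~8.29 MP) takes about 8.6 seconds to encode.
uhd_seconds = (3840 * 2160) / 1_000_000 / 0.96
```

This makes clear why the lossless quality-100 runs sit near 1 MP/s while the quality-80 runs below reach 11-12 MP/s.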

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 281383 | 13900K: 222988 | 13900K R: 221224 [g++ options: -O3 -ldl]

JPEG XL libjxl

JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, more is better): 13600K A: 0.96 | i5-13600K: 0.96 | 13900K: 1.02 | 13900K R: 1.02

OSPRay Studio

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 241113 | 13900K: 188642 | 13900K R: 190671

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 236110 | 13900K: 189602 | 13900K R: 187779

Blender

Blender 3.3 - Blend File: Classroom - Compute: CPU-Only (seconds, fewer is better): 13600K A: 223.66 | 13900K: 187.97 | 13900K R: 187.65

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/ao/real_time (items per second, more is better): 13600K A: 6.82369 | i5-13600K: 6.82012 | 13900K: 8.36571 | 13900K R: 8.32049

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (seconds, fewer is better): 13600K A: 195.23 | 13900K: 181.98 | 13900K R: 182.80

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 8.0 - Length: 1e13 (seconds, fewer is better): 13600K A: 190.76 | 13900K: 168.05 | 13900K R: 168.74 [g++ options: -O3]
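For reference, the sieve of Eratosthenes that Primesieve implements can be sketched in a few lines of Python. This is a minimal, unsegmented version for illustration only; Primesieve itself uses a heavily optimized segmented sieve in C++, which is why the benchmark stresses L1/L2 cache performance:

```python
def primes_up_to(n: int) -> list[int]:
    # Classic sieve of Eratosthenes: cross off multiples of each prime.
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Start at p*p; smaller multiples were crossed off already.
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# primes_up_to(30) -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

A segmented sieve processes the range in cache-sized chunks instead of one large array, which is the main difference between this sketch and what the benchmark actually measures.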

Renaissance

Renaissance is a suite of benchmarks designed to exercise the JVM with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, fewer is better): 13600K A: 7979.4 (min 7979.39, max 8676.9) | i5-13600K: 7807.4 (max 8535.5) | 13900K: 7757.3 (max 8498.32) | 13900K R: 7738.7 (max 8461.7)

OpenRadioss

OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (seconds, fewer is better): 13600K A: 173.55 | i5-13600K: 173.63 | 13900K: 143.63 | 13900K R: 145.19

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 4K (frames per second, more is better): 13600K A: 3.76 | i5-13600K: 3.73 | 13900K: 4.01 [gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt]

Tuning: 1 - Input: Bosphorus 4K - 13900K R: the test quit with a non-zero exit status (E: height not found in y4m header).

OpenRadioss

OpenRadioss 2022.10.13 - Model: Bumper Beam (seconds, fewer is better): 13600K A: 161.93 | i5-13600K: 160.06 | 13900K: 147.02 | 13900K R: 148.98

OSPRay Studio

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 7232 | 13900K: 5766 | 13900K R: 5705

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 7382 | 13900K: 5777 | 13900K R: 5740

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build - Gradle Build: Reactor (seconds, fewer is better): 13600K A: 155.85 | i5-13600K: 163.26 | 13900K: 153.60 | 13900K R: 144.46

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not a GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: inception-v3 (ms, fewer is better): 13600K A: 20.55 (min 19.85, max 33.62) | 13900K: 26.28 (min 25.74, max 34.85) | 13900K R: 27.20 (min 25.48, max 84.52) [g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl]

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, fewer is better): 13600K A: 3.814 (min 3.74, max 10.16) | 13900K: 2.218 (min 2.18, max 5.03) | 13900K R: 2.325 (min 2.13, max 26.22)

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, fewer is better): 13600K A: 2.023 (min 1.98, max 7.95) | 13900K: 2.924 (min 2.82, max 5.77) | 13900K R: 2.861 (min 2.75, max 26.67)

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, fewer is better): 13600K A: 4.133 (min 4.06, max 9.73) | 13900K: 5.165 (min 5, max 8.1) | 13900K R: 5.158 (min 4.89, max 30.5)

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better): 13600K A: 17.66 (min 17.37, max 23.46) | 13900K: 21.22 (min 20.23, max 32.68) | 13900K R: 21.26 (min 20.75, max 82.89)

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better): 13600K A: 2.932 (min 2.86, max 3.73) | 13900K: 3.373 (min 3.27, max 5.96) | 13900K R: 3.354 (min 3.18, max 27.66)

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better): 13600K A: 0.963 (min 0.94, max 1.39) | 13900K: 1.181 (min 1.12, max 3.08) | 13900K R: 1.183 (min 1.11, max 17.98)

Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better): 13600K A: 7.052 (min 6.87, max 13.4) | 13900K: 9.579 (min 9.06, max 12.8) | 13900K R: 10.007 (min 9.22, max 33.96)

OSPRay Studio

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 73582 | 13900K: 54855 | 13900K R: 54973

Renaissance

Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better): 13600K A: 6675.8 (min 5051.51) | i5-13600K: 6688.6 (min 5113.28, max 6688.65) | 13900K: 7757.2 (min 6001.17, max 7757.23) | 13900K R: 7625.1 (min 5855.12)

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, more is better): 13600K A: 170.89 | 13900K: 221.30 | 13900K R: 220.57

OSPRay Studio

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 8608 | 13900K: 6836 | 13900K R: 6733

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (nodes/second, more is better): 13600K A: 47844093 | i5-13600K: 49053594 | 13900K: 52613577 | 13900K R: 53144036

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (seconds, fewer is better): 13600K A: 111.98 | i5-13600K: 112.37 | 13900K: 146.87 | 13900K R: 148.88 [gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm -lreadline]

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, more is better): 13600K A: 24.15 | 13900K: 30.01 | 13900K R: 29.82

OSPRay Studio

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 142647 | 13900K: 112986 | 13900K R: 112658

OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 63586 | 13900K: 47147 | 13900K R: 46746

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (inferences per minute, more is better): 13600K A: 97 | 13900K: 102 [g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt]

OSPRay Studio

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 62193 | 13900K: 46333 | 13900K R: 46218

ONNX Runtime

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (inferences per minute, more is better): 13600K A: 89 | 13900K: 86

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (inferences per minute, more is better): 13600K A: 345 | 13900K: 497

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel (inferences per minute, more is better): 13600K A: 7364 | 13900K: 6548

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (inferences per minute, more is better): 13600K A: 739 | 13900K: 870

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (inferences per minute, more is better): 13600K A: 1779 | 13900K: 534

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel (inferences per minute, more is better): 13600K A: 507 | 13900K: 528

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (inferences per minute, more is better): 13600K A: 10574 | 13900K: 8887

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (inferences per minute, more is better): 13600K A: 972 | 13900K: 738

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (inferences per minute, more is better): 13600K A: 600 | 13900K: 490

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (inferences per minute, more is better): 13600K A: 5021 | 13900K: 4875

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (inferences per minute, more is better): 13600K A: 4931 | 13900K: 4653

OSPRay Studio

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 35032 | 13900K: 27345 | 13900K R: 27523

OSPRay

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (items per second, more is better): 13600K A: 3.58917 | i5-13600K: 3.54717 | 13900K: 4.31212 | 13900K R: 4.31689

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (items per second, more is better): 13600K A: 3.67901 | i5-13600K: 3.63550 | 13900K: 4.48781 | 13900K R: 4.48167

OSPRay Studio

OSPRay Studio 0.11 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 2204 | 13900K: 1707 | 13900K R: 1723

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better): 13600K A: 1764.51 (min 1705.47, max 1870.65) | 13900K: 1543.70 (min 1496.06, max 1636.76) | 13900K R: 1551.22 (min 1496.08, max 1635.13) [g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl]

OSPRay Studio

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 1836 | 13900K: 1458 | 13900K R: 1447

OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 1869 | 13900K: 1476 | 13900K R: 1459

OSPRay Studio 0.11 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 29951 | 13900K: 23338 | 13900K R: 23676

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 122275 | 13900K: 96621 | 13900K R: 96762

OSPRay Studio 0.11 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 29216 | 13900K: 23080 | 13900K R: 23193

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better): 13600K A: 119685 | 13900K: 95889 | 13900K R: 95632

OSPRay

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (items per second, more is better): 13600K A: 4.80927 | i5-13600K: 4.79907 | 13900K: 5.87049 | 13900K R: 5.90448

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, fewer is better): 13600K A: 2.96 (min 2.88, max 3.35) | 13900K: 3.84 (min 3.79, max 5.75) | 13900K R: 2.93 (min 2.88, max 3.62) [g++ options: -O3 -rdynamic -lgomp -lpthread]

NCNN 20220729 - Target: CPU - Model: vision_transformer (ms, fewer is better): 13600K A: 168.17 (min 162.92, max 453.72) | 13900K: 122.10 (min 120.07, max 169.61) | 13900K R: 127.57 (min 120.18, max 251.2)

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, fewer is better): 13600K A: 7.14 (min 6.93, max 7.88) | 13900K: 8.01 (min 7.87, max 9.45) | 13900K R: 11.55 (min 7.99, max 760.89)

NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better): 13600K A: 10.11 (min 9.79, max 10.85) | 13900K: 9.60 (min 9.46, max 10.87) | 13900K R: 10.50 (min 8.72, max 420.85)

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, fewer is better): 13600K A: 13.31 (min 12.98, max 13.95) | 13900K: 12.90 (min 12.74, max 14.22) | 13900K R: 12.74 (min 12.47, max 30.93)

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, fewer is better): 13600K A: 11.91 (min 11.62, max 12.51) | 13900K: 10.25 (min 10.14, max 11.58) | 13900K R: 10.40 (min 10.2, max 20.06)

NCNN 20220729 - Target: CPU - Model: alexnet (ms, fewer is better): 13600K A: 4.45 (min 4.32, max 5.23) | 13900K: 4.63 (min 4.56, max 7.22) | 13900K R: 4.80 (min 4.72, max 6.19)

NCNN 20220729 - Target: CPU - Model: resnet18 (ms, fewer is better): 13600K A: 5.89 (min 5.68, max 6.8) | 13900K: 6.03 (min 5.96, max 7.7) | 13900K R: 11.43 (min 6.05, max 1070.41)

NCNN 20220729 - Target: CPU - Model: vgg16 (ms, fewer is better): 13600K A: 28.52 (min 26.42, max 321.9) | 13900K: 22.86 (min 22.49, max 25.47) | 13900K R: 22.89 (min 22.33, max 40.66)

NCNN 20220729 - Target: CPU - Model: googlenet (ms, fewer is better): 13600K A: 7.65 (min 7.36, max 15.72) | 13900K: 7.51 (min 7.42, max 9.14) | 13900K R: 7.06 (min 6.92, max 8.31)

NCNN 20220729 - Target: CPU - Model: blazeface (ms, fewer is better): 13600K A: 1.01 (min 0.98, max 1.34) | 13900K: 1.12 (min 1.09, max 1.41) | 13900K R: 1.10 (min 1.07, max 1.41)

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better): 13600K A: 3.97 (min 3.83, max 4.8) | 13900K: 4.09 (min 4.04, max 4.64) | 13900K R: 8.56 (min 3.85, max 1157.46)

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better): 13600K A: 2.87 (min 2.78, max 3.5) | 13900K: 2.67 (min 2.63, max 3.05) | 13900K R: 2.23 (min 2.19, max 2.92)

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better): 13600K A: 2.32 (min 2.26, max 2.82) | 13900K: 2.41 (min 2.36, max 3.64) | 13900K R: 2.82 (min 2.79, max 3.45)

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): 13600K A: 2.59 (min 2.49, max 3.35) | 13900K: 2.26 (min 2.21, max 3.53) | 13900K R: 2.21 (min 2.16, max 2.99)

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): 13600K A: 2.83 (min 2.74, max 3.62) | 13900K: 2.88 (min 2.84, max 3.3) | 13900K R: 2.77 (min 2.73, max 4.07)

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, fewer is better): 13600K A: 8.16 (min 7.91, max 8.94) | 13900K: 8.12 (min 8.02, max 11.48) | 13900K R: 7.51 (min 7.2, max 46.15)

Blender

Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (seconds, fewer is better): 13600K A: 113.68 | 13900K: 100.24 | 13900K R: 100.11

JPEG XL libjxl

JPEG XL libjxl 0.7 - Input: JPEG - Quality: 80 (MP/s, more is better): 13600K A: 11.35 | i5-13600K: 11.45 | 13900K: 12.39 | 13900K R: 12.39

JPEG XL libjxl 0.7 - Input: PNG - Quality: 80 (MP/s, more is better): 13600K A: 11.66 | i5-13600K: 11.75 | 13900K: 12.67 | 13900K R: 12.71

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 0 (seconds, fewer is better): 13600K A: 101.86 | i5-13600K: 100.49 | 13900K: 87.07 | 13900K R: 87.79 [g++ options: -O3 -fPIC -lm]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup (milliseconds, fewer is better): 13600K A: 6.83 | 13900K: 4.94

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: CPU (events per second, more is better): 13600K A: 58743.01 | 13900K: 105916.06 [gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm]
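Sysbench's CPU sub-test repeatedly verifies primes up to a limit and reports how many such events complete per second. A rough Python sketch of the idea (a simplification for illustration; the function names here are not sysbench's):

```python
import time

def cpu_event(max_prime: int = 10000) -> int:
    # One "event": trial-division primality check of every integer
    # up to max_prime (simplified version of sysbench's CPU loop).
    found = 0
    for c in range(3, max_prime + 1):
        t = 2
        while t * t <= c:
            if c % t == 0:
                break
            t += 1
        else:
            found += 1
    return found

def events_per_second(duration: float = 0.25, max_prime: int = 2000) -> float:
    # Run as many events as fit in the time window, as sysbench does.
    start = time.perf_counter()
    events = 0
    while time.perf_counter() - start < duration:
        cpu_event(max_prime)
        events += 1
    return events / duration
```

Because each event is independent, the workload scales almost linearly with thread count, which is why the 13900K's extra E-cores nearly double the 13600K's score here.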

TensorFlow

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, more is better): 13600K A: 74.19 | 13900K: 88.36 | 13900K R: 88.22

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. The sample scene used is the Teapot scene ray-traced to 8K x 8K with 32 samples. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99.2, Total Time (Seconds, fewer is better): 13600K A: 81.36; 13900K: 87.91; 13900K R: 88.24. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13, Model: Cell Phone Drop Test (Seconds, fewer is better): 13600K A: 14.08; i5-13600K: 107.65; 13900K: 104.57; 13900K R: 99.24

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7, Input: JPEG - Quality: 90 (MP/s, more is better): 13600K A: 11.18; i5-13600K: 11.26; 13900K: 12.14; 13900K R: 12.17. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): 13600K A: 9.06; i5-13600K: 9.02; 13900K: 9.81; 13900K R: 9.77. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7, Input: PNG - Quality: 90 (MP/s, more is better): 13600K A: 11.51; i5-13600K: 11.59; 13900K: 12.54; 13900K R: 12.53. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Stockfish

This is a test of Stockfish, an advanced open-source chess engine written in C++ that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 15, Total Time (Nodes Per Second, more is better): 13600K A: 40974724; i5-13600K: 39889963; 13900K: 46604368; 13900K R: 40134926. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

Blender

Blender is an open-source 3D creation and modeling software project. This test measures the time to render sample scenes with Blender's CPU-based Cycles engine.

Blender 3.3, Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better): 13600K A: 78.65; 13900K: 71.12; 13900K R: 71.17

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3, Time To Compile (Seconds, fewer is better): 13600K A: 76.29; i5-13600K: 76.60; 13900K: 70.76; 13900K R: 72.42

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.

ClickHouse 22.5.4.19, 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, more is better): 13600K A: 233.95 (min 19.19 / max 20000); 13900K: 258.67 (min 20.42 / max 20000); 13900K R: 256.04 (min 19.45 / max 12000). ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19, 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, more is better): 13600K A: 235.86 (min 18.93 / max 30000); 13900K: 258.53 (min 19.7 / max 15000); 13900K R: 260.69 (min 19.6 / max 30000). ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19, 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better): 13600K A: 215.54 (min 17.58 / max 15000); 13900K: 247.91 (min 19.5 / max 30000); 13900K R: 244.03 (min 19.19 / max 12000). ClickHouse server version 22.5.4.19 (official build).
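Since the ClickHouse figure is a geometric mean across all benchmark queries, a quick sketch of that aggregation may help interpret it. The per-query rates below are hypothetical values for illustration only, not from this run:

```python
# Sketch of the reported metric: a geometric mean over per-query
# throughputs. Averaging in log space avoids overflow/underflow from
# multiplying many rates together.
import math


def geometric_mean(values):
    """n-th root of the product of n positive values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))


# Hypothetical per-query rates in queries per minute (illustration only).
rates = [120.0, 480.0, 240.0]
print(round(geometric_mean(rates), 2))  # prints 240.0
```

Unlike an arithmetic mean, the geometric mean keeps one extremely fast (or slow) query from dominating the composite score, which is why it is the conventional choice for aggregating benchmark ratios.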

Renaissance

Renaissance is a suite of benchmarks designed to exercise the Java JVM, spanning workloads from Apache Spark to a Twitter-like service, Scala, and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Apache Spark ALS (ms, fewer is better): 13600K A: 2118.3 (min 2027.89 / max 2219.93); i5-13600K: 2104.0 (min 2042.77 / max 2160.8); 13900K: 2240.8 (min 2179.9 / max 2313); 13900K R: 2271.9 (min 2209.75 / max 2350.07)

Renaissance 0.14, Test: Apache Spark PageRank (ms, fewer is better): 13600K A: 2152.0 (min 1948.57 / max 2198.38); i5-13600K: 2158.3 (min 1928.09 / max 2193.86); 13900K: 2088.6 (min 1926.76 / max 2153.25); 13900K R: 2082.1 (min 1917.66 / max 2144)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): 13600K A: 0.30; i5-13600K: 0.29; 13900K: 0.31; 13900K R: 0.31. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Person Detection FP32 - Device: CPU (ms, fewer is better): 13600K A: 2045.68 (min 1719.02 / max 2706.1); 13900K: 3020.10 (min 2516.37 / max 3733.4). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Person Detection FP32 - Device: CPU (FPS, more is better): 13600K A: 2.39; 13900K: 2.60. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Person Detection FP16 - Device: CPU (ms, fewer is better): 13600K A: 2047.62 (min 1754.76 / max 2684.24); 13900K: 2927.80 (min 2449.49 / max 3630.7). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Person Detection FP16 - Device: CPU (FPS, more is better): 13600K A: 2.39; 13900K: 2.68. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

SVT-AV1

This is a test of the Scalable Video Technology AV1 (SVT-AV1) CPU-based multi-threaded video encoder, developed as open-source by Intel and Netflix.

SVT-AV1 1.2, Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better): 13600K A: 2.426; i5-13600K: 2.436; 13900K: 2.535; 13900K R: 2.514. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: NASNet Mobile (Microseconds, fewer is better): 13600K A: 196543.0; 13900K: 61183.5; 13900K R: 246599.0

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Face Detection FP16 - Device: CPU (ms, fewer is better): 13600K A: 1544.52 (min 1481.77 / max 1650.78); 13900K: 1926.16 (min 1766.06 / max 2069.19). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Face Detection FP16 - Device: CPU (FPS, more is better): 13600K A: 3.18; 13900K: 4.13. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks, Test: Pod2html (Seconds, fewer is better): 13600K A: 0.06179302; 13900K: 0.05538045; 13900K R: 0.05562468

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, more is better): 13600K A: 24.58; 13900K: 31.36; 13900K R: 31.28

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 25.0, Time To Compile (Seconds, fewer is better): 13900K: 63.53; 13900K R: 63.18

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: Inception ResNet V2 (Microseconds, fewer is better): 13600K A: 259280; 13900K: 327922; 13900K R: 591010

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better): 13600K A: 387.90 (min 290.34 / max 782.61); 13900K: 592.96 (min 334.92 / max 1081.53). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better): 13600K A: 12.87; 13900K: 13.41. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better): 13600K A: 0.90061; i5-13600K: 0.89989; 13900K: 0.82764; 13900K R: 0.83002
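A unit note, sketched for illustration: NAMD's days/ns metric is the number of wall-clock days needed to simulate one nanosecond of molecular time, so lower is better, and its reciprocal gives the more familiar ns/day throughput figure.

```python
# Illustrative conversion between NAMD's days/ns metric and ns/day
# throughput: the two are simply reciprocals of each other.
def ns_per_day(days_per_ns: float) -> float:
    """Convert a days/ns result (lower is better) to ns/day (higher is better)."""
    return 1.0 / days_per_ns


# Example using the 13900K result from this run (0.82764 days/ns).
print(round(ns_per_day(0.82764), 3))  # prints 1.208
```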

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better): 13600K A: 112.40 (min 91.63 / max 160.94); 13900K: 177.35 (min 136.29 / max 238.95). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better): 13600K A: 44.39; 13900K: 45.07. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: Inception V4 (Microseconds, fewer is better): 13600K A: 26166.3; 13900K: 27731.2; 13900K R: 31966.2

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4, Acceleration: CPU - Scene: Bedroom (M samples/s, more is better): 13600K A: 2.645; 13900K: 3.075

IndigoBench 4.4, Acceleration: CPU - Scene: Supercar (M samples/s, more is better): 13600K A: 7.541; 13900K: 7.932

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: SqueezeNet (Microseconds, fewer is better): 13600K A: 1901.72; 13900K: 2100.04; 13900K R: 2282.79

TensorFlow Lite 2022-05-18, Model: Mobilenet Float (Microseconds, fewer is better): 13600K A: 1450.81; 13900K: 1396.39; 13900K R: 1553.51

TensorFlow Lite 2022-05-18, Model: Mobilenet Quant (Microseconds, fewer is better): 13600K A: 2420.75; 13900K: 2080.66; 13900K R: 2007.81

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better): 13600K A: 9.36 (min 7.85 / max 16.55); 13900K: 13.51 (min 9.52 / max 25.42). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better): 13600K A: 533.42; 13900K: 591.22. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better): 13600K A: 47.46 (min 24.39 / max 65.06); 13900K: 77.00 (min 43.7 / max 99.09). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better): 13600K A: 294.41; 13900K: 311.28. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better): 13600K A: 7.98 (min 6.36 / max 17.98); 13900K: 11.06 (min 6.09 / max 56.79). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better): 13600K A: 625.88; 13900K: 722.55. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better): 13600K A: 15.57 (min 11.78 / max 28.06); 13900K: 27.19 (min 14.86 / max 50.77). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better): 13600K A: 320.78; 13900K: 293.60. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better): 13600K A: 12.47 (min 9.36 / max 29.42); 13900K: 21.92 (min 12.55 / max 43.63). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better): 13600K A: 1116.36; 13900K: 1092.36. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better): 13600K A: 0.67 (min 0.51 / max 3.31); 13900K: 0.94 (min 0.52 / max 3.79). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better): 13600K A: 20877.33; 13900K: 25289.52. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage derived from Google's LevelDB.

Facebook RocksDB 7.5.3, Test: Read Random Write Random (Op/s, more is better): 13600K A: 2621070; 13900K: 2739061. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB 7.5.3, Test: Update Random (Op/s, more is better): 13600K A: 536712; 13900K: 690788. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better): 13600K A: 1.50 (min 1.1 / max 3.03); 13900K: 2.36 (min 1.3 / max 4). (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better): 13600K A: 9303.66; 13900K: 10149.22. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared

Facebook RocksDB

Facebook RocksDB 7.5.3, Test: Read While Writing (Op/s, more is better): 13600K A: 2989367; 13900K: 3700703. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB 7.5.3, Test: Random Read (Op/s, more is better): 13600K A: 114117881; 13900K: 105171919. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build of all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18, Build: defconfig (Seconds, fewer is better): 13600K A: 61.89; i5-13600K: 61.53; 13900K: 58.31; 13900K R: 58.25

Renaissance

Renaissance is a suite of benchmarks designed to exercise the Java JVM, spanning workloads from Apache Spark to a Twitter-like service, Scala, and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Savina Reactors.IO (ms, fewer is better): 13600K A: 4396.7 (max 5845.73); i5-13600K: 4473.7 (max 5993.22); 13900K: 4406.8 (min 4406.77 / max 6186.68); 13900K R: 4358.2 (min 4358.19 / max 6116.36)

Renaissance 0.14, Test: Genetic Algorithm Using Jenetics + Futures (ms, fewer is better): 13600K A: 1173.1 (min 1080.27 / max 1197.68); i5-13600K: 1226.9 (min 1201.44 / max 1243.4); 13900K: 1066.8 (min 1025.04 / max 1095.71); 13900K R: 1078.5 (min 1048.78 / max 1097.69)

Neural Magic DeepSparse

Neural Magic's DeepSparse is a CPU inference runtime that takes advantage of sparsified neural network models for accelerated performance.

Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 13600K A: 811.90; 13900K: 1420.52; 13900K R: 1410.88

Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 13600K A: 8.4939; 13900K: 8.4134; 13900K R: 8.4755

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 13600K A: 813.25; 13900K: 1420.81; 13900K R: 1414.79

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 13600K A: 8.4456; 13900K: 8.4076; 13900K R: 8.4399

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

Perl Benchmarks, Test: Interpreter (Seconds, fewer is better): 13600K A: 0.00280923; 13900K: 0.00051403; 13900K R: 0.00052042

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 13600K A: 192.12; 13900K: 317.81; 13900K R: 320.19

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 13600K A: 36.38; 13900K: 37.63; 13900K R: 37.42

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD, Render: Pistol (Seconds, fewer is better): 13600K A: 55.59; 13900K: 50.41; 13900K R: 50.34. OpenSCAD version 2021.01

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better): 13600K A: 21.31; 13900K: 23.07; 13900K R: 22.58

spaCy

The spaCy library is an open-source, Python-based solution for advanced natural language processing (NLP) and is among the leading libraries in that space. This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1, Model: en_core_web_trf (tokens/sec, more is better): 13600K A: 1557; 13900K: 2239; 13900K R: 2236

spaCy 3.4.1, Model: en_core_web_lg (tokens/sec, more is better): 13600K A: 18828; 13900K: 20762; 13900K R: 20568

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 13600K A: 191.37; 13900K: 341.34; 13900K R: 339.51

Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 13600K A: 36.53; 13900K: 35.00; 13900K R: 35.11

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 2 (Seconds, fewer is better): 13600K A: 47.88; i5-13600K: 48.07; 13900K: 42.53; 13900K R: 42.80. (CXX) g++ options: -O3 -fPIC -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, more is better): 13600K A: 70.82; 13900K: 90.18; 13900K R: 89.89

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5, Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): 13600K A: 16.88; i5-13600K: 16.65; 13900K: 18.13; 13900K R: 18.14. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9, Time To Compile (Seconds, fewer is better): 13600K A: 45.58; 13900K: 43.35; 13900K R: 43.62

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0, Preset: Exhaustive (MT/s, more is better): 13600K A: 1.0302; 13900K: 1.2845; 13900K R: 1.2915. (CXX) g++ options: -O3 -flto -pthread

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 13600K A: 96.68; 13900K: 137.78; 13900K R: 137.70

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 13600K A: 72.26; 13900K: 87.03; 13900K R: 86.86

Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 13600K A: 121.98; 13900K: 123.64; 13900K R: 122.75

Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better): 13600K A: 8.1978; 13900K: 8.0876; 13900K R: 8.1464

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 13600K A: 122.99; 13900K: 124.01; 13900K R: 123.46

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better): 13600K A: 8.1309; 13900K: 8.0636; 13900K R: 8.0998

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 13600K A: 33.53; 13900K: 31.11; 13900K R: 31.15

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better): 13600K A: 29.82; 13900K: 32.14; 13900K R: 32.10

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test covers encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7, CPU Threads: 1 (MP/s, more is better): 13600K A: 62.52; i5-13600K: 63.16; 13900K: 68.39; 13900K R: 68.10

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 13900K: 38.43; 13900K R: 38.87

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6, Acceleration: CPU (Seconds, fewer is better): 13600K A: 43.93; 13900K: 49.00; 13900K R: 49.29

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0, Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better): 13600K A: 4849.1; i5-13600K: 4847.6; 13900K: 5032.8; 13900K R: 5010.5. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0, Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better): 13600K A: 51.0; i5-13600K: 49.8; 13900K: 47.0; 13900K R: 47.5. (CC) gcc options: -O3 -pthread -lz -llzma

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, more is better): 13600K A: 150.15; 13900K: 204.28; 13900K R: 204.24

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

13600K A: The test quit with a non-zero exit status. E: # A fatal error has been detected by the Java Runtime Environment:

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, more is better): 13600K A: 14.76; i5-13600K: 14.67; 13900K: 15.89; 13900K R: 16.02. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 13600K A: 129.15; 13900K: 228.34; 13900K R: 228.57

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 13600K A: 54.18; 13900K: 52.40; 13900K R: 52.31

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 2.3 - Time To Compile (Seconds, fewer is better): 13600K A: 39.97; 13900K: 38.42; 13900K R: 38.80. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
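At the core of any such kernel is a per-ray primitive intersection test, evaluated millions of times per frame. A minimal, hypothetical sketch of the ray/sphere case (this is not Embree's actual API, which traces packed ray streams with SIMD):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Nearest positive hit distance t along the ray, or None on a miss.

    `direction` is assumed normalized, so the quadratic's a coefficient is 1.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# A ray from the origin along +z hits a unit sphere centered at z=5 at t=4:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

The instruction-set variants Embree ships (SSE/AVX/AVX-512) evaluate many such tests per instruction, which is why the frames-per-second figures below scale with core count and vector width.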

Embree 3.13 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better): 13600K A: 18.40 (min 17.47 / max 18.88); i5-13600K: 18.75 (min 17.66 / max 19.42); 13900K: 26.03 (min 23.96 / max 28.42); 13900K R: 26.08 (min 24.12 / max 28.46)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): 13600K A: 64.71; 13900K: 111.47; 13900K R: 111.28

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better): 13600K A: 107.82; 13900K: 107.38; 13900K R: 107.73

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 13600K A: 16.78; 13900K: 14.13; 13900K R: 14.20

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better): 13600K A: 59.58; 13900K: 70.77; 13900K R: 70.40

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, fewer is better): 13600K A: 232; 13900K: 207

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, fewer is better): 13600K A: 39.39; 13900K: 35.19; 13900K R: 35.00. RawTherapee, version 5.8, command line.

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 13600K A: 22.13; 13900K: 25.08; 13900K R: 24.94

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better): 13600K A: 45.18; 13900K: 39.86; 13900K R: 40.09

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: In-Memory Database Shootout (ms, fewer is better): 13600K A: 2076.3 (min 1870.25 / max 2410.84); i5-13600K: 2134.6 (min 1899.08 / max 2195.05); 13900K: 2001.4 (min 1805.63 / max 2241.18); 13900K R: 2011.3 (min 1818.96 / max 2312.63)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): 13600K A: 12.29; 13900K: 12.82; 13900K R: 12.93

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better): 13600K A: 81.33; 13900K: 77.99; 13900K R: 77.29

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, fewer is better): 13600K A: 24.0; 13900K: 21.3

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better): 13600K A: 20.09 (min 18.73 / max 20.73); i5-13600K: 20.59 (min 19.51 / max 21.33); 13900K: 27.34 (min 25.47 / max 29.51); 13900K R: 27.50 (min 25.51 / max 29.58)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s, more is better): 13600K A: 4721.1; i5-13600K: 4708.5; 13900K: 4930.7; 13900K R: 4907.1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, more is better): 13600K A: 55.3; i5-13600K: 54.6; 13900K: 68.9; 13900K R: 67.9. (CC) gcc options: -O3 -pthread -lz -llzma

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, more is better): 13900K: 26.02; 13900K R: 25.72

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

13600K A: The test quit with a non-zero exit status.

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 3 - Decompression Speed (MB/s, more is better): 13600K A: 5157.4; i5-13600K: 5159.3; 13900K: 5349.2; 13900K R: 5321.9. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 3 - Compression Speed (MB/s, more is better): 13600K A: 5727.6; i5-13600K: 5893.1; 13900K: 6287.2; 13900K R: 6245.4. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Decompression Speed (MB/s, more is better): 13600K A: 5519.2; i5-13600K: 5520.5; 13900K: 5673.2; 13900K R: 5664.4. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Compression Speed (MB/s, more is better): 13600K A: 938.9; i5-13600K: 1044.7; 13900K: 1531.2; 13900K R: 1548.5. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better): 13600K A: 5756.9; i5-13600K: 5746.5; 13900K: 5910.0; 13900K R: 5887.5. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Compression Speed (MB/s, more is better): 13600K A: 1231.0; i5-13600K: 1236.7; 13900K: 1296.8; 13900K R: 1292.5. (CC) gcc options: -O3 -pthread -lz -llzma

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
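speedtest1 exercises a mix of bulk inserts, indexed lookups, and updates. A scaled-down, hypothetical sketch of that style of workload using Python's stdlib sqlite3 (not the actual speedtest1 program):

```python
import sqlite3
import time

# Hypothetical speedtest1-style micro-workload against an in-memory database:
# batched INSERTs inside one transaction, then an indexed scan.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, val TEXT)")

start = time.perf_counter()
with con:  # a single transaction, as speedtest1 batches its statements
    con.executemany("INSERT INTO t(val) VALUES (?)",
                    ((f"row-{i}",) for i in range(10_000)))
rows = con.execute("SELECT COUNT(*) FROM t WHERE id % 2 = 0").fetchone()[0]
print(f"{rows} even-id rows in {time.perf_counter() - start:.3f}s")
```

Wrapping the inserts in one transaction matters: committing per statement forces a sync per row and can slow the same workload down by orders of magnitude.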

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better): 13600K A: 37.49; 13900K: 33.16; 13900K R: 33.54. (CC) gcc options: -O2 -lz

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty (ms, fewer is better): 13600K A: 449.5 (min 382.14 / max 928.71); i5-13600K: 457.4 (min 386.83 / max 876.54); 13900K: 424.7 (min 357.3 / max 748.16); 13900K R: 427.2 (min 357.58 / max 745.84)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8 - Decompression Speed (MB/s, more is better): 13600K A: 5408.0; i5-13600K: 5406.4; 13900K: 5589.6; 13900K R: 5583.4. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 8 - Compression Speed (MB/s, more is better): 13600K A: 1182.9; i5-13600K: 1148.2; 13900K: 1401.2; 13900K R: 1219.4. (CC) gcc options: -O3 -pthread -lz -llzma

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): 13600K A: 21.30; i5-13600K: 21.34; 13900K: 22.74; 13900K R: 22.67. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.
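A toy version of the same task — computing Pi digits with arbitrary-precision integer arithmetic — can be sketched with Machin's formula. This is only an illustration: y-cruncher itself uses far faster series (such as Chudnovsky) plus heavy vectorization and multi-threading.

```python
def arctan_inv(x, one):
    """arctan(1/x) scaled by `one`, via the alternating Taylor series."""
    total = term = one // x
    n = 1
    while term:
        term //= x * x
        n += 2
        total += -(term // n) if n % 4 == 3 else term // n
    return total

def pi_digits(digits):
    """Pi as an integer scaled by 10**digits, via Machin's formula:
    pi/4 = 4*arctan(1/5) - arctan(1/239)."""
    one = 10 ** (digits + 10)  # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return pi // 10 ** 10      # drop the guard digits

print(pi_digits(30))  # 3141592653589793238462643383279
```

Python's built-in big integers make this trivially correct but slow; a benchmark like y-cruncher is essentially measuring how fast the hardware can do the equivalent multi-precision multiplications.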

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 1B (Seconds, fewer is better): 13600K A: 30.60; 13900K: 31.25; 13900K R: 31.32

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, fewer is better): 13600K A: 90.4; 13900K: 81.1

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better): 13600K A: 18.39 (min 17.16 / max 19.46); i5-13600K: 18.65 (min 17.42 / max 19.66); 13900K: 24.15 (min 23.21 / max 25.4); 13900K R: 23.88 (min 22.95 / max 25.18)

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, fewer is better): 13600K A: 31.32; i5-13600K: 31.77; 13900K: 30.01; 13900K R: 30.39

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.7 (k/s, more is better): 13600K A: 40912.36; 13900K: 50724.20; 13900K R: 50505.52. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, fewer is better): 13600K A: 172; 13900K: 156

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better): 13600K A: 20.43 (min 19.38 / max 20.88); i5-13600K: 20.18 (min 19.07 / max 20.82); 13900K: 28.37 (min 25.94 / max 30.73); 13900K R: 28.34 (min 26.01 / max 30.55)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, fewer is better): 13600K A: 1895.8 (min 1759.76 / max 2097.29); i5-13600K: 1918.5 (min 1787.45 / max 2020.03); 13900K: 2518.1 (min 2345.07 / max 2554.71); 13900K R: 2561.8 (min 2392.59 / max 2600.24)

Renaissance 0.14 - Test: Apache Spark Bayes (ms, fewer is better): 13600K A: 896.8 (min 656.87 / max 896.82); i5-13600K: 901.6 (min 652.2 / max 901.64); 13900K: 854.3 (min 627.97 / max 854.34); 13900K R: 847.0 (min 629.37 / max 847.01)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, fewer is better): 13600K A: 9.72; 13900K: 8.71

Coremark

This is a test of EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better): 13600K A: 645300.06; i5-13600K: 642432.68; 13900K: 709677.42; 13900K R: 712804.56. (CC) gcc options: -O2 -lrt" -lrt

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, fewer is better): 13600K A: 56.1; 13900K: 50.0

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, fewer is better): 13600K A: 128; 13900K: 114

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better): 13600K A: 23.09 (min 21.7 / max 23.76); i5-13600K: 22.63 (min 21.12 / max 23.45); 13900K: 30.51 (min 28.47 / max 32.55); 13900K R: 30.64 (min 28.62 / max 32.96)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, fewer is better): 13600K A: 51.4; 13900K: 46.3

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, fewer is better): 13600K A: 51.6; 13900K: 47.1

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Thorough (MT/s, more is better): 13600K A: 10.13; 13900K: 12.37; 13900K R: 12.23. (CXX) g++ options: -O3 -flto -pthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better): i5-13600K: 17.19 (min 16.19 / max 18.01); 13900K: 23.22 (min 22.43 / max 24.28); 13900K R: 22.90 (min 22.12 / max 24.12)

Binary: Pathtracer - Model: Crown

13600K A: The test quit with a non-zero exit status.

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, more is better): 13600K A: 133.92; 13900K: 179.09; 13900K R: 179.44

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Mini-ITX Case (Seconds, fewer is better): 13600K A: 24.55; 13900K: 21.99; 13900K R: 22.05. OpenSCAD version 2021.01

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, fewer is better): 13600K A: 216; 13900K: 192

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, fewer is better): 13600K A: 13.4; 13900K: 12.1

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, more is better): 13600K A: 70.89; 13900K: 92.66; 13900K R: 92.71

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better): 13600K A: 7.151; i5-13600K: 7.145; 13900K: 7.607; 13900K R: 7.597. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.
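Every Mandelbrot generator, toyBrot included, runs the same escape-time iteration per pixel; the OpenMP/TBB/std::thread backends being compared only differ in how rows of these independent calls are distributed across cores. A hypothetical sketch of the per-pixel kernel:

```python
def escape_iterations(c, limit=100):
    """Iterate z = z^2 + c; return how many steps until |z| exceeds 2."""
    z = 0j
    for n in range(limit):
        z = z * z + c
        if abs(z) > 2.0:
            return n  # escaped: the point is outside the set
    return limit      # budget exhausted: the point is (likely) inside

# 0+0j never escapes; 2+2j escapes on the first iteration.
print(escape_iterations(0j), escape_iterations(2 + 2j))  # 100 0
```

Because each pixel is independent, the workload is embarrassingly parallel — which is why the 13900K's extra cores roughly halve the times in the results below.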

toyBrot Fractal Generator 2020-11-18 - Implementation: OpenMP (ms, fewer is better): 13600K A: 27832; i5-13600K: 27822; 13900K: 16504; 13900K R: 16531. (CXX) g++ options: -O3 -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): 13600K A: 0.90; i5-13600K: 0.89; 13900K: 0.93; 13900K R: 0.93. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, fewer is better): 13600K A: 419.9 (min 381.62 / max 498.68); i5-13600K: 421.8 (min 383.07 / max 499.39); 13900K: 390.4 (min 361.83 / max 476.17); 13900K R: 387.1 (min 360.05 / max 452.67)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, fewer is better): 13600K A: 66.5; 13900K: 60.5

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better): 13600K A: 24.85; i5-13600K: 25.68; 13900K: 30.83; 13900K R: 30.83. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18 - Implementation: TBB (ms, fewer is better): 13600K A: 26907; i5-13600K: 25598; 13900K: 15253; 13900K R: 16915. (CXX) g++ options: -O3 -lpthread

toyBrot Fractal Generator 2020-11-18 - Implementation: C++ Tasks (ms, fewer is better): 13600K A: 25721; i5-13600K: 25596; 13900K: 15539; 13900K R: 15541. (CXX) g++ options: -O3 -lpthread

toyBrot Fractal Generator 2020-11-18 - Implementation: C++ Threads (ms, fewer is better): 13600K A: 25488; i5-13600K: 25427; 13900K: 15293; 13900K R: 15256. (CXX) g++ options: -O3 -lpthread

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship (FPS, more is better): 13600K A: 4.8; 13900K: 5.9

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 3 - Resolution: 4K (FPS, more is better): 13600K A: 1.64; i5-13600K: 1.64; 13900K: 2.20; 13900K R: 2.19. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 5 - Resolution: 4K (FPS, more is better): 13600K A: 0.47; i5-13600K: 0.47; 13900K: 0.68; 13900K R: 0.68. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 2 - Resolution: 4K (FPS, more is better): 13600K A: 1.98; i5-13600K: 1.97; 13900K: 2.68; 13900K R: 2.67. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 1 - Resolution: 4K (FPS, more is better): 13600K A: 6.73; i5-13600K: 6.79; 13900K: 9.16; 13900K R: 9.16. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, fewer is better): i5-13600K: 2059; 13900K: 1960; 13900K R: 1828

Java Test: Tradesoap

13600K A: The test quit with a non-zero exit status. E: # A fatal error has been detected by the Java Runtime Environment:

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 5 - Resolution: 1080p (FPS, more is better): 13600K A: 1.93; i5-13600K: 1.92; 13900K: 2.68; 13900K R: 2.68. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 2 - Resolution: 1080p (FPS, more is better): 13600K A: 7.65; i5-13600K: 7.64; 13900K: 10.09; 13900K R: 10.06. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 3 - Resolution: 1080p (FPS, more is better): 13600K A: 6.41; i5-13600K: 6.38; 13900K: 8.51; 13900K R: 8.48. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 1 - Resolution: 1080p (FPS, more is better): 13600K A: 26.15; i5-13600K: 25.96; 13900K: 34.54; 13900K R: 34.37. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: All (MP/s, more is better): 13600K A: 326.03; i5-13600K: 330.95; 13900K: 426.96; 13900K R: 424.09

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Fast (MT/s, more is better): 13600K A: 205.19; 13900K: 261.48; 13900K R: 259.70. (CXX) g++ options: -O3 -flto -pthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, more is better): 13600K A: 111.70; 13900K: 137.79; 13900K R: 137.61

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): 13600K A: 39.29; i5-13600K: 39.41; 13900K: 44.68; 13900K R: 39.70. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 500M (Seconds, fewer is better): 13600K A: 14.36; 13900K: 14.06; 13900K R: 14.23

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): 13600K A: 54.11; i5-13600K: 53.87; 13900K: 57.16; 13900K R: 57.14. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
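The underlying algorithm can be sketched in a few lines. Primesieve's C++ implementation adds segmentation and wheel factorization so the working set stays in L1/L2 cache — exactly the property this benchmark stresses — which this plain version omits:

```python
def count_primes(limit):
    """Count primes <= limit with a plain sieve of Eratosthenes."""
    if limit < 2:
        return 0
    sieve = bytearray([1]) * (limit + 1)  # assume everything prime at first
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            # Cross off multiples of p starting at p*p (smaller multiples
            # were already crossed off by smaller primes).
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return sum(sieve)

print(count_primes(10**6))  # 78498 primes below one million
```

For the benchmark's 1e12 length, a flat bytearray like this would need a terabyte of memory; segmenting the sieve into cache-sized chunks is what turns it into a cache benchmark rather than a memory one.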

Primesieve 8.0 - Length: 1e12 (Seconds, fewer is better): 13600K A: 16.74; 13900K: 13.61; 13900K R: 13.69. (CXX) g++ options: -O3

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6 - Build Configuration: Default (Seconds, fewer is better): 13600K A: 14.89; 13900K: 14.36; 13900K R: 14.71

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: resize (Seconds, fewer is better): 13600K A: 13.78; 13900K: 13.58; 13900K R: 13.49

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better): 13600K A: 1422451; 13900K: 1600324

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 4K (Frames Per Second, more is better): 13600K A: 44.28; i5-13600K: 44.76; 13900K: 53.38; 13900K R: 51.52. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: unsharp-mask (Seconds, fewer is better): 13600K A: 12.51; 13900K: 12.35; 13900K R: 12.33

FLAC Audio Encoding

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, fewer is better): 13600K A: 12.62; 13900K: 11.46; 13900K R: 11.48. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better): 13600K A: 180.97 (min 168.23 / max 207.33); 13900K: 159.65 (min 152.69 / max 177.88); 13900K R: 159.98 (min 153.41 / max 175.92). (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
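The rounds-and-average methodology can be approximated with the stdlib's timeit module — a hypothetical illustration of the approach, not PyBench's own harness:

```python
import timeit

# Time a small operation over several rounds and report the average,
# mirroring how PyBench averages 20 rounds per micro-test.
rounds = 5
samples = timeit.repeat("sum(range(100))", repeat=rounds, number=10_000)
average_ms = 1000 * sum(samples) / rounds
print(f"average over {rounds} rounds: {average_ms:.2f} ms")
```

Repeating and averaging smooths over scheduler noise and frequency scaling, which matters on hybrid parts like these where a test may land on a P-core or an E-core.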

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, fewer is better): 13600K A: 519; 13900K: 468

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, fewer is better): 13600K A: 11.92; 13900K: 10.69. (CXX) g++ options: -rdynamic

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better): 13600K A: 54.69; i5-13600K: 54.46; 13900K: 63.61; 13900K R: 62.88. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: auto-levels (Seconds, fewer is better): 13600K A: 11.00; 13900K: 10.74; 13900K R: 10.67

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (frames per second, more is better)
  13600K A: 66.61
  i5-13600K: 66.16
  13900K: 58.98
  13900K R: 58.43
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: rotate (seconds, fewer is better)
  13600K A: 10.462
  13900K: 9.943
  13900K R: 10.080

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
  13600K A: 143.16  (MIN: 140.38 / MAX: 147.39)
  13900K: 127.00  (MIN: 125.33 / MAX: 130.32)
  13900K R: 129.18  (MIN: 125 / MAX: 136.73)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also offers pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Batch Size: 512 - Model: ResNet-50

13600K A: The test quit with a non-zero exit status.

13900K: The test quit with a non-zero exit status.

13900K R: The test quit with a non-zero exit status.

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Leonardo Phone Case Slim (seconds, fewer is better)
  13600K A: 9.813
  13900K: 8.758
  13900K R: 8.826
  1. OpenSCAD version 2021.01

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (frames per second, more is better)
  13600K A: 67.54
  i5-13600K: 67.45
  13900K: 75.22
  13900K R: 77.52
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Medium (MT/s, more is better)
  13600K A: 76.96
  13900K: 94.59
  13900K R: 93.94
  1. (CXX) g++ options: -O3 -flto -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (frames per second, more is better)
  13600K A: 84.58
  i5-13600K: 82.86
  13900K: 78.47
  13900K R: 77.17
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (frames per second, more is better)
  13600K A: 86.96
  i5-13600K: 84.44
  13900K: 79.48
  13900K R: 78.23
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 70.49
  i5-13600K: 71.99
  13900K: 93.76
  13900K R: 87.81
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (frames per second, more is better)
  13600K A: 83.90
  i5-13600K: 84.51
  13900K: 96.68
  13900K R: 96.52
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 81.84
  i5-13600K: 81.56
  13900K: 88.26
  13900K R: 86.34
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K (frames per second, more is better)
  13600K A: 94.98
  i5-13600K: 96.44
  13900K: 96.73
  13900K R: 98.59
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6, Lossless (seconds, fewer is better)
  13600K A: 7.342
  i5-13600K: 7.280
  13900K: 6.921
  13900K R: 6.669
  1. (CXX) g++ options: -O3 -fPIC -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (frames per second, more is better)
  13600K A: 100.80
  i5-13600K: 102.29
  13900K: 118.33
  13900K R: 98.54
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test (requests per second, more is better)
  13600K A: 15871
  i5-13600K: 15771
  13900K: 16721
  13900K R: 16328
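A requests-per-second figure like the one above is simply completed requests divided by elapsed wall time. The following is a hypothetical, self-contained sketch of that measurement using Python's standard library rather than the Node.js Express server and load-test client this test actually runs; the handler, request count, and loopback setup are all illustrative choices.

```python
# Sketch of measuring HTTP requests/sec against a local server (stdlib only).
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging so timing is not skewed by I/O

# Port 0 asks the OS for any free port; serve in a background thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

n = 200
url = f"http://127.0.0.1:{server.server_address[1]}/"
start = time.perf_counter()
for _ in range(n):
    with urllib.request.urlopen(url) as resp:
        resp.read()
elapsed = time.perf_counter() - start
rps = n / elapsed
print(f"{rps:.0f} requests/sec")
server.shutdown()
```

Real load testers issue requests concurrently from many connections, which is why the benchmark numbers above are far higher than a serial loop like this would produce.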

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (frames per second, more is better)
  13600K A: 106.42
  i5-13600K: 108.30
  13900K: 123.34
  13900K R: 123.16
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, fewer is better)
  13600K A: 1785
  i5-13600K: 1778
  13900K: 1710
  13900K R: 1774

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (seconds, fewer is better)
  13600K A: 5.407
  13900K: 4.898
  13900K R: 4.837
  1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 6.4.0 (seconds, fewer is better)
  13600K A: 4.995
  13900K: 4.967
  13900K R: 5.020

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: RAM / Memory (MiB/sec, more is better)
  13600K A: 21115.29
  13900K: 20515.63
  1. (CC) gcc options: -O2 -funroll-loops -rdynamic -ldl -laio -lm
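The MiB/sec metric here is bytes moved divided by elapsed time. A rough sketch of that accounting, in the spirit of the Sysbench RAM test but not its actual C implementation (the buffer size and round count below are arbitrary illustrative choices, and pure-Python throughput is far lower than Sysbench's):

```python
# Toy memory-throughput measurement: bulk-copy a 1 MiB buffer and report MiB/sec.
import time

src = bytes(1024 * 1024)            # 1 MiB source buffer
block = bytearray(1024 * 1024)      # 1 MiB destination buffer
rounds = 64                         # 64 MiB moved in total

start = time.perf_counter()
for _ in range(rounds):
    block[:] = src                  # bulk slice assignment: mostly memory traffic
elapsed = time.perf_counter() - start

mib_per_sec = rounds / elapsed      # each round copies exactly 1 MiB
print(f"{mib_per_sec:.2f} MiB/sec")
```

Because the working set here fits in cache, a real bandwidth test like Sysbench's uses larger totals and configurable block sizes to exercise main memory rather than cache.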

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (frames per second, more is better)
  13600K A: 135.50
  i5-13600K: 135.26
  13900K: 158.14
  13900K R: 131.20
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (seconds, fewer is better)
  13600K A: 4.916
  13900K: 4.500
  13900K R: 4.413
  1. (CXX) g++ options: -fvisibility=hidden -logg -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6 (seconds, fewer is better)
  13600K A: 4.854
  i5-13600K: 4.963
  13900K: 4.183
  13900K R: 4.285
  1. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 150.46
  i5-13600K: 150.29
  13900K: 130.31
  13900K R: 127.65
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Projector Mount Swivel (seconds, fewer is better)
  13600K A: 4.547
  13900K: 4.078
  13900K R: 4.080
  1. OpenSCAD version 2021.01

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 145.76
  i5-13600K: 142.67
  13900K: 162.19
  13900K R: 160.80
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (frames per second, more is better)
  13600K A: 163.68
  i5-13600K: 160.51
  13900K: 185.83
  13900K R: 183.49
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, fewer is better)
  13600K A: 2019
  i5-13600K: 2124
  13900K: 1769
  13900K R: 1848

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 10, Lossless (seconds, fewer is better)
  13600K A: 4.003
  i5-13600K: 4.048
  13900K: 3.913
  13900K R: 3.984
  1. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 178.37
  i5-13600K: 179.96
  13900K: 150.28
  13900K R: 148.08
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 187.20
  i5-13600K: 184.31
  13900K: 153.07
  13900K R: 150.65
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program; on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.8.1 - Test: Boat - Acceleration: CPU-only (seconds, fewer is better)
  13600K A: 2.875
  13900K: 2.605
  13900K R: 2.621

Darktable 3.8.1 - Test: Masskrug - Acceleration: CPU-only (seconds, fewer is better)
  13600K A: 2.896
  13900K: 2.289
  13900K R: 2.298

Darktable 3.8.1 - Test: Server Room - Acceleration: CPU-only (seconds, fewer is better)
  13600K A: 2.134
  13900K: 1.807
  13900K R: 1.803

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 190.53
  i5-13600K: 188.71
  13900K: 225.84
  13900K R: 225.40
  1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 213.14
  i5-13600K: 212.62
  13900K: 241.94
  13900K R: 241.35
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, fewer is better)
  13600K A: 2040
  i5-13600K: 2035
  13900K: 1738
  13900K R: 1792

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, fewer is better)
  13600K A: 41.47  (MIN: 41.1 / MAX: 42.38)
  13900K: 37.35  (MIN: 36.75 / MAX: 38.65)
  13900K R: 37.19  (MIN: 36.9 / MAX: 38.34)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Retro Car (seconds, fewer is better)
  13600K A: 2.524
  13900K: 2.278
  13900K R: 2.276
  1. OpenSCAD version 2021.01

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, more is better)
  13600K A: 9.673
  i5-13600K: 9.729
  13900K: 11.126
  13900K R: 11.139
  1. (CXX) g++ options: -O3 -lm -ldl

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 279.48
  i5-13600K: 278.80
  13900K: 351.31
  13900K R: 349.37
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 313.96
  i5-13600K: 319.60
  13900K: 367.25
  13900K R: 361.31
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 320.21
  i5-13600K: 313.58
  13900K: 382.22
  13900K R: 378.33
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 332.97
  i5-13600K: 341.60
  13900K: 413.76
  13900K R: 384.75
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 445.10
  i5-13600K: 438.92
  13900K: 520.38
  13900K R: 517.69
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (frames per second, more is better)
  13600K A: 555.60
  i5-13600K: 548.57
  13900K: 631.43
  13900K R: 618.47
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program; on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.8.1 - Test: Server Rack - Acceleration: CPU-only (seconds, fewer is better)
  13600K A: 0.139
  13900K: 0.235
  13900K R: 0.121

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance (nodes per second, more is better)
  13600K A: 1974114
  i5-13600K: 1974114
  13900K: 2203112
  13900K R: 2194334
  1. (CC) gcc options: -O3 -march=native
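"Nodes per second" in a chess benchmark is the number of positions a search visits divided by wall time. The following toy sketch mirrors only that accounting, not TSCP itself: the "game tree" is a uniform tree with a fixed branching factor, which stands in for the move tree a real engine would generate.

```python
# Toy nodes-per-second measurement over a uniform game tree (not chess).
import time

def search(depth, branching=8):
    """Visit every node of a uniform tree, returning the node count."""
    if depth == 0:
        return 1  # leaf position
    # 1 for this node plus all child subtrees.
    return 1 + sum(search(depth - 1, branching) for _ in range(branching))

start = time.perf_counter()
nodes = search(depth=6)              # 8^0 + 8^1 + ... + 8^6 = 299593 nodes
elapsed = time.perf_counter() - start
nps = nodes / elapsed
print(f"{nodes} nodes, {nps:.0f} nodes/sec")
```

A real engine's node rate depends on how expensive each position is to generate and evaluate, so nodes/sec comparisons are only meaningful between runs of the same engine, as in the result above.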

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

Java Test: Eclipse

13600K A: The test quit with a non-zero exit status.

i5-13600K: The test quit with a non-zero exit status.

13900K: The test quit with a non-zero exit status.

13900K R: The test quit with a non-zero exit status.

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

Hash: wyhash

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: FarmHash128

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: MeowHash x86_64 AES-NI

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: t1ha0_aes_avx2 x86_64

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: FarmHash32 x86_64 AVX

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: t1ha2_atonce

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: fasthash32

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: Spooky32

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

Hash: SHA3-256

13600K A: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

i5-13600K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

13900K R: The test quit with a non-zero exit status. E: ./smhasher: 3: ./SMHasher: not found

336 Results Shown

TensorFlow
AI Benchmark Alpha:
  Device AI Score
  Device Training Score
  Device Inference Score
LAMMPS Molecular Dynamics Simulator
Blender
Timed Linux Kernel Compilation
TensorFlow
Appleseed
OpenRadioss
Timed LLVM Compilation
OSPRay
Timed LLVM Compilation
LeelaChessZero:
  BLAS
  Eigen
Timed Node.js Compilation
OSPRay
TensorFlow:
  CPU - 256 - GoogLeNet
  CPU - 512 - AlexNet
Appleseed
TensorFlow
Blender
Appleseed
OpenRadioss
JPEG XL libjxl
OSPRay Studio
JPEG XL libjxl
OSPRay Studio:
  2 - 4K - 32 - Path Tracer
  1 - 4K - 32 - Path Tracer
Blender
OSPRay
Timed CPython Compilation
Primesieve
Renaissance
OpenRadioss
SVT-HEVC
OpenRadioss
OSPRay Studio:
  1 - 4K - 1 - Path Tracer
  2 - 4K - 1 - Path Tracer
Java Gradle Build
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  SqueezeNetV1.0
  resnet-v2-50
  squeezenetv1.1
  mobilenetV3
  nasnet
OSPRay Studio
Renaissance
TensorFlow
OSPRay Studio
asmFish
Timed MrBayes Analysis
TensorFlow
OSPRay Studio:
  3 - 4K - 16 - Path Tracer
  2 - 1080p - 32 - Path Tracer
ONNX Runtime
OSPRay Studio
ONNX Runtime:
  fcn-resnet101-11 - CPU - Standard
  ArcFace ResNet-100 - CPU - Parallel
  GPT-2 - CPU - Parallel
  bertsquad-12 - CPU - Parallel
  ArcFace ResNet-100 - CPU - Standard
  yolov4 - CPU - Parallel
  GPT-2 - CPU - Standard
  bertsquad-12 - CPU - Standard
  yolov4 - CPU - Standard
  super-resolution-10 - CPU - Parallel
  super-resolution-10 - CPU - Standard
OSPRay Studio
OSPRay:
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/ao/real_time
OSPRay Studio
TNN
OSPRay Studio:
  1 - 1080p - 1 - Path Tracer
  2 - 1080p - 1 - Path Tracer
  2 - 1080p - 16 - Path Tracer
  2 - 4K - 16 - Path Tracer
  1 - 1080p - 16 - Path Tracer
  1 - 4K - 16 - Path Tracer
OSPRay
NCNN:
  CPU - FastestDet
  CPU - vision_transformer
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
Blender
JPEG XL libjxl:
  JPEG - 80
  PNG - 80
libavif avifenc
PyPerformance
Sysbench
TensorFlow
Tachyon
OpenRadioss
JPEG XL libjxl
AOM AV1
JPEG XL libjxl
Stockfish
Blender
Timed Godot Game Engine Compilation
ClickHouse:
  100M Rows Web Analytics Dataset, Third Run
  100M Rows Web Analytics Dataset, Second Run
  100M Rows Web Analytics Dataset, First Run / Cold Cache
Renaissance:
  Apache Spark ALS
  Apache Spark PageRank
AOM AV1
OpenVINO:
  Person Detection FP32 - CPU:
    ms
    FPS
  Person Detection FP16 - CPU:
    ms
    FPS
SVT-AV1
TensorFlow Lite
OpenVINO:
  Face Detection FP16 - CPU:
    ms
    FPS
Perl Benchmarks
TensorFlow
Timed Erlang/OTP Compilation
TensorFlow Lite
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
NAMD
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
TensorFlow Lite
IndigoBench:
  CPU - Bedroom
  CPU - Supercar
TensorFlow Lite:
  SqueezeNet
  Mobilenet Float
  Mobilenet Quant
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
Facebook RocksDB:
  Read Rand Write Rand
  Update Rand
OpenVINO:
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
Facebook RocksDB:
  Read While Writing
  Rand Read
Timed Linux Kernel Compilation
Renaissance:
  Savina Reactors.IO
  Genetic Algorithm Using Jenetics + Futures
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Perl Benchmarks
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
OpenSCAD
Node.js V8 Web Tooling Benchmark
spaCy:
  en_core_web_trf
  en_core_web_lg
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
libavif avifenc
TensorFlow
AOM AV1
Timed PHP Compilation
ASTC Encoder
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
JPEG XL Decoding libjxl
Neural Magic DeepSparse
DeepSpeech
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
TensorFlow
SVT-HEVC
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Timed Wasmer Compilation
Embree
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
PyPerformance
RawTherapee
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
Renaissance
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
PyPerformance
Embree
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
Neural Magic DeepSparse
Zstd Compression:
  3 - Decompression Speed
  3 - Compression Speed
  3, Long Mode - Decompression Speed
  3, Long Mode - Compression Speed
  8, Long Mode - Decompression Speed
  8, Long Mode - Compression Speed
SQLite Speedtest
Renaissance
Zstd Compression:
  8 - Decompression Speed
  8 - Compression Speed
AOM AV1
Y-Cruncher
PyPerformance
Embree
Timed Mesa Compilation
Aircrack-ng
PyPerformance
Embree
Renaissance:
  Finagle HTTP Requests
  Apache Spark Bayes
PyPerformance
Coremark
PyPerformance:
  crypto_pyaes
  go
Embree
PyPerformance:
  chaos
  float
ASTC Encoder
Embree
TensorFlow
OpenSCAD
PyPerformance:
  pickle_pure_python
  json_loads
TensorFlow
SVT-AV1
toyBrot Fractal Generator
AOM AV1
Renaissance
PyPerformance
x265
toyBrot Fractal Generator:
  TBB
  C++ Tasks
  C++ Threads
Natron
QuadRay:
  3 - 4K
  5 - 4K
  2 - 4K
  1 - 4K
DaCapo Benchmark
QuadRay:
  5 - 1080p
  2 - 1080p
  3 - 1080p
  1 - 1080p
JPEG XL Decoding libjxl
ASTC Encoder
TensorFlow
AOM AV1
Y-Cruncher
AOM AV1
Primesieve
Timed CPython Compilation
GIMP
PHPBench
x264
GIMP
FLAC Audio Encoding
TNN
PyBench
WavPack Audio Encoding
SVT-AV1
GIMP
AOM AV1
GIMP
TNN
OpenSCAD
SVT-HEVC
ASTC Encoder
AOM AV1:
  Speed 9 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 1080p
SVT-VP9
x265
SVT-VP9
libavif avifenc
SVT-VP9
Node.js Express HTTP Load Test
SVT-AV1
DaCapo Benchmark
LAME MP3 Encoding
GNU Octave Benchmark
Sysbench
SVT-HEVC
Opus Codec Encoding
libavif avifenc
AOM AV1
OpenSCAD
SVT-AV1:
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 4K
DaCapo Benchmark
libavif avifenc
AOM AV1:
  Speed 9 Realtime - Bosphorus 1080p
  Speed 10 Realtime - Bosphorus 1080p
Darktable:
  Boat - CPU-only
  Masskrug - CPU-only
  Server Room - CPU-only
x264
SVT-HEVC
DaCapo Benchmark
TNN
OpenSCAD
LAMMPS Molecular Dynamics Simulator
SVT-VP9
SVT-AV1
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
SVT-HEVC
SVT-AV1
Darktable
TSCP