Intel Core i9 13900K Linux CPU Performance Benchmarks

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2210266-PTS-RAPTORRE40
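
A minimal local workflow for reproducing this comparison might look like the following sketch (the Debian/Ubuntu package name is an assumption; the Phoronix Test Suite can also be run straight from its Git checkout):

    # Install the Phoronix Test Suite (package name assumed for Debian/Ubuntu).
    sudo apt-get install phoronix-test-suite

    # Run the same tests as this result file and append your system's numbers
    # for a side-by-side comparison against the three runs shown here.
    phoronix-test-suite benchmark 2210266-PTS-RAPTORRE40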
The tests in this comparison fall within the following categories:

Audio Encoding 5 Tests
AV1 4 Tests
Bioinformatics 2 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests 2 Tests
C++ Boost Tests 4 Tests
Web Browsers 1 Test
Chess Test Suite 6 Tests
Timed Code Compilation 7 Tests
C/C++ Compiler Tests 24 Tests
Compression Tests 5 Tests
CPU Massive 43 Tests
Creator Workloads 49 Tests
Cryptocurrency Benchmarks, CPU Mining Tests 2 Tests
Cryptography 8 Tests
Database Test Suite 3 Tests
Encoding 15 Tests
Finance 2 Tests
Fortran Tests 6 Tests
Game Development 7 Tests
HPC - High Performance Computing 25 Tests
Imaging 11 Tests
Java 2 Tests
Common Kernel Benchmarks 5 Tests
Machine Learning 8 Tests
Molecular Dynamics 6 Tests
MPI Benchmarks 9 Tests
Multi-Core 49 Tests
Node.js + NPM Tests 2 Tests
NVIDIA GPU Compute 9 Tests
Intel oneAPI 6 Tests
OpenMPI Tests 13 Tests
Productivity 4 Tests
Programmer / Developer System Benchmarks 16 Tests
Python 3 Tests
Raytracing 3 Tests
Renderers 10 Tests
Scientific Computing 12 Tests
Software Defined Radio 3 Tests
Server 8 Tests
Server CPU Tests 27 Tests
Single-Threaded 12 Tests
Speech 3 Tests
Telephony 3 Tests
Texture Compression 3 Tests
Video Encoding 10 Tests
Common Workstation Benchmarks 4 Tests

Run Management

Result Identifier    Date Run           Test Duration
Core i9 13900K       October 24 2022    1 Day, 2 Hours, 26 Minutes
13900K               October 25 2022    6 Hours, 44 Minutes
i9-13900K            October 26 2022    6 Hours, 43 Minutes

Average Test Duration: 13 Hours, 18 Minutes


OpenBenchmarking.org - Phoronix Test Suite

Processor: Intel Core i9-13900K @ 4.00GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (0602 BIOS)
Chipset: Intel Device 7a27
Memory: 32GB
Disk: 2000GB Samsung SSD 980 PRO 2TB + 2000GB
Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
Network: Realtek RTL8125 2.5GbE + Intel Device 7a70
OS: Ubuntu 22.04
Kernel: 6.0.0-060000rc1daily20220820-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
Vulkan: 1.3.224
Compiler: GCC 12.0.1 20220319
File-System: ext4
Screen Resolution: 3840x2160

System Logs
- Transparent Huge Pages: madvise
- Core i9 13900K: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- GCC configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk mount options / block size: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Scaling Governor: intel_pstate performance (EPP: performance)
- CPU Microcode: 0x10e
- OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
- Python 3.10.4
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
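
The CXXFLAGS/CFLAGS entry above applies only to the "Core i9 13900K" run. A minimal sketch of supplying such overrides, assuming the flags were passed through the environment as the system log suggests, so that source-based test profiles get built with them:

    # Assumed approach: export the optimization flags before installing/running
    # the tests so the compiled test profiles pick them up.
    export CFLAGS="-O3 -march=native"
    export CXXFLAGS="-O3 -march=native"
    phoronix-test-suite benchmark 2210266-PTS-RAPTORRE40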

Result Overview (Phoronix Test Suite, relative performance of the Core i9 13900K / 13900K / i9-13900K runs, roughly 100% to 150%): OSPRay Studio, CloverLeaf, Timed Mesa Compilation, Redis, Natron, x264, QuantLib, SVT-VP9, Pennant, RawTherapee, GraphicsMagick, VOSK Speech Recognition Toolkit, LeelaChessZero, NCNN, Node.js V8 Web Tooling Benchmark, GNU Octave Benchmark, Unpacking The Linux Kernel, Google Draco, Gzip Compression, SVT-AV1, Bork File Encrypter, Unpacking Firefox, Timed MrBayes Analysis, x265, ClickHouse

Results Table: the complete side-by-side numbers for the Core i9 13900K, 13900K, and i9-13900K runs across all tests are available via the OpenBenchmarking.org result page for 2210266-PTS-RAPTORRE40; the individual per-test results follow below.

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Frames Per Second, More Is Better
SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K
Core i9 13900K: 155.47 (SE +/- 0.09, N = 8; Min: 154.99 / Avg: 155.47 / Max: 155.88)
13900K: 120.82
i9-13900K: 123.36
1. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
CloverLeaf - Lagrangian-Eulerian Hydrodynamics
Core i9 13900K: 42.11 (SE +/- 0.21, N = 3; Min: 41.7 / Avg: 42.11 / Max: 42.43)
13900K: 53.59
i9-13900K: 45.18
1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

SVT-AV1

OpenBenchmarking.org - Frames Per Second, More Is Better
SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K
Core i9 13900K: 213.82 (SE +/- 3.21, N = 15; Min: 169.81 / Avg: 213.82 / Max: 220.28)
13900K: 170.86
i9-13900K: 169.59
1. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20220729 - Target: CPU - Model: mnasnet
Core i9 13900K: 2.49 (MIN: 2.06 / MAX: 7.6) (SE +/- 0.04, N = 14; Min: 2.09 / Avg: 2.49 / Max: 2.63)
13900K: 2.26 (MIN: 2.22 / MAX: 2.6)
i9-13900K: 2.75 (MIN: 2.7 / MAX: 3.39)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test covers just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
Timed Mesa Compilation 21.0 - Time To Compile
Core i9 13900K: 24.90 (SE +/- 0.08, N = 3; Min: 24.78 / Avg: 24.9 / Max: 25.06)
13900K: 20.87
i9-13900K: 21.13

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Frames Per Second, More Is Better
x264 2022-02-22 - Video Input: Bosphorus 4K
Core i9 13900K: 71.58 (SE +/- 0.70, N = 15; Min: 63.79 / Avg: 71.58 / Max: 73.23)
13900K: 63.04
i9-13900K: 60.74
1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Hydro Cycle Time - Seconds, Fewer Is Better
Pennant 1.0.1 - Test: sedovbig
Core i9 13900K: 133.24 (SE +/- 1.71, N = 12; Min: 120.48 / Avg: 133.24 / Max: 139.38)
13900K: 116.49
i9-13900K: 136.18
1. (CXX) g++ options: -fopenmp -lmpi_cxx -lmpi

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - FPS, More Is Better
Natron 2.4 - Input: Spaceship
Core i9 13900K: 6.0 (SE +/- 0.06, N = 3; Min: 5.9 / Avg: 6 / Max: 6.1)
13900K: 5.3
i9-13900K: 6.1

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Frames Per Second, More Is Better
SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p
Core i9 13900K: 539.34 (SE +/- 0.54, N = 12; Min: 536.17 / Avg: 539.34 / Max: 541.65)
13900K: 468.68
i9-13900K: 472.82
1. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OpenBenchmarking.org - Frames Per Second, More Is Better
SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K
Core i9 13900K: 130.30 (SE +/- 1.73, N = 15; Min: 113.32 / Avg: 130.3 / Max: 133.38)
13900K: 115.68
i9-13900K: 114.96
1. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Frames Per Second, More Is Better
x264 2022-02-22 - Video Input: Bosphorus 1080p
Core i9 13900K: 262.03 (SE +/- 1.72, N = 10; Min: 248.5 / Avg: 262.03 / Max: 268.57)
13900K: 233.16
i9-13900K: 243.10
1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - MFLOPS, More Is Better
QuantLib 1.21
Core i9 13900K: 5172.6 (SE +/- 39.85, N = 15; Min: 4619.8 / Avg: 5172.62 / Max: 5229.5)
13900K: 4647.6
i9-13900K: 4638.9
1. (CXX) g++ options: -O3 -march=native -rdynamic

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20220729 - Target: CPU - Model: googlenet
Core i9 13900K: 6.34 (MIN: 5.61 / MAX: 7.69) (SE +/- 0.09, N = 15; Min: 5.66 / Avg: 6.34 / Max: 6.95)
13900K: 6.49 (MIN: 6.39 / MAX: 7.7)
i9-13900K: 7.01 (MIN: 6.79 / MAX: 8.4)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Iterations Per Minute, More Is Better
GraphicsMagick 1.3.38 - Operation: HWB Color Space
Core i9 13900K: 1639 (SE +/- 1.20, N = 3; Min: 1637 / Avg: 1639.33 / Max: 1641)
13900K: 1488
i9-13900K: 1504
1. (CC) gcc options: -fopenmp -O3 -march=native -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Total Mop/s, More Is Better
NAS Parallel Benchmarks 3.4 - Test / Class: CG.C
Core i9 13900K: 8550.19 (SE +/- 92.47, N = 4; Min: 8418.91 / Avg: 8550.19 / Max: 8824.16)
13900K: 8463.25
i9-13900K: 9300.09
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
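
As a worked illustration of the geometric-mean aggregation mentioned above, a small shell/awk sketch (the per-query times are hypothetical placeholders, not values from this result file):

    # Geometric mean of per-query times: exp(mean(log(t_i))).
    # The three example timings below are made up purely to show the arithmetic.
    printf '0.12\n0.20\n0.45\n' | \
        awk '{ s += log($1); n++ } END { printf "geometric mean: %.3f s\n", exp(s / n) }'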

OpenBenchmarking.org - Queries Per Minute, Geo Mean, More Is Better
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache
Core i9 13900K: 308.40 (MIN: 21.58 / MAX: 30000) (SE +/- 2.72, N = 8; Min: 290.72 / Avg: 308.4 / Max: 315.34)
13900K: 285.40 (MIN: 21.53 / MAX: 15000)
i9-13900K: 299.14 (MIN: 22.71 / MAX: 30000)
1. ClickHouse server version 22.5.4.19 (official build).

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - MB/s, More Is Better
LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed
Core i9 13900K: 13642.8 (SE +/- 60.20, N = 3; Min: 13533 / Avg: 13642.77 / Max: 13740.5)
13900K: 13941.4
i9-13900K: 12943.2
1. (CC) gcc options: -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20220729 - Target: CPU - Model: vision_transformer
Core i9 13900K: 109.26 (MIN: 107.09 / MAX: 572.43) (SE +/- 0.80, N = 15; Min: 107.21 / Avg: 109.26 / Max: 115.69)
13900K: 107.31 (MIN: 107.16 / MAX: 108.29)
i9-13900K: 115.16 (MIN: 107.08 / MAX: 511.17)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20220729 - Target: CPU - Model: squeezenet_ssd
Core i9 13900K: 8.46 (MIN: 7.64 / MAX: 14.98) (SE +/- 0.12, N = 15; Min: 7.75 / Avg: 8.46 / Max: 9.31)
13900K: 8.78 (MIN: 8.61 / MAX: 9.61)
i9-13900K: 8.27 (MIN: 8.01 / MAX: 9.87)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - msec, Fewer Is Better
DaCapo Benchmark 9.12-MR1 - Java Test: H2
Core i9 13900K: 1736 (SE +/- 22.66, N = 20; Min: 1540 / Avg: 1736.45 / Max: 1962)
13900K: 1640
i9-13900K: 1676

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ms, Fewer Is Better
TNN 0.3 - Target: CPU - Model: DenseNet
Core i9 13900K: 2267.39 (MIN: 2200.7 / MAX: 2461.14) (SE +/- 0.45, N = 3; Min: 2266.53 / Avg: 2267.39 / Max: 2268.02)
13900K: 2394.93 (MIN: 2289.29 / MAX: 2490.39)
i9-13900K: 2398.34 (MIN: 2295.08 / MAX: 2478.37)
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Microseconds, Fewer Is Better
TensorFlow Lite 2022-05-18 - Model: SqueezeNet
Core i9 13900K: 1471.21 (SE +/- 10.84, N = 11; Min: 1378.09 / Avg: 1471.21 / Max: 1506.2)
13900K: 1391.04
i9-13900K: 1408.28

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ms, Fewer Is Better
Google Draco 1.5.0 - Model: Lion
Core i9 13900K: 3160 (SE +/- 12.51, N = 8; Min: 3137 / Avg: 3159.88 / Max: 3221)
13900K: 3013
i9-13900K: 3185
1. (CXX) g++ options: -O3 -march=native

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
RawTherapee - Total Benchmark Time
Core i9 13900K: 30.16 (SE +/- 0.07, N = 3; Min: 30.05 / Avg: 30.16 / Max: 30.3)
13900K: 30.98
i9-13900K: 31.84
1. RawTherapee, version 5.8, command line.

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Nodes Per Second, More Is Better
LeelaChessZero 0.28 - Backend: BLAS
Core i9 13900K: 1155 (SE +/- 15.59, N = 3; Min: 1128 / Avg: 1155 / Max: 1182)
13900K: 1131
i9-13900K: 1194
1. (CXX) g++ options: -flto -O3 -march=native -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Iterations Per Minute, More Is Better
GraphicsMagick 1.3.38 - Operation: Resizing
Core i9 13900K: 2256 (SE +/- 6.57, N = 3; Min: 2247 / Avg: 2256.33 / Max: 2269)
13900K: 2137
i9-13900K: 2153
1. (CC) gcc options: -fopenmp -O3 -march=native -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
G'MIC - Test: 3D Elevated Function In Random Colors, 100 Times
Core i9 13900K: 33.18 (SE +/- 0.03, N = 3; Min: 33.13 / Avg: 33.18 / Max: 33.22)
13900K: 35.01
i9-13900K: 33.43

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Frames Per Second, More Is Better
x265 3.4 - Video Input: Bosphorus 4K
Core i9 13900K: 35.19 (SE +/- 0.27, N = 3; Min: 34.66 / Avg: 35.19 / Max: 35.46)
13900K: 37.10
i9-13900K: 35.43
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ms, Fewer Is Better
TNN 0.3 - Target: CPU - Model: MobileNet v2
Core i9 13900K: 200.01 (MIN: 196.66 / MAX: 215.62) (SE +/- 0.20, N = 4; Min: 199.55 / Avg: 200.01 / Max: 200.53)
13900K: 207.94 (MIN: 197.84 / MAX: 247.39)
i9-13900K: 210.86 (MIN: 195.62 / MAX: 250.35)
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

SVT-AV1

OpenBenchmarking.org - Frames Per Second, More Is Better
SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p
Core i9 13900K: 671.23 (SE +/- 3.22, N = 13; Min: 646.36 / Avg: 671.23 / Max: 686.01)
13900K: 650.09
i9-13900K: 637.56
1. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - MB/s, More Is Better
LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed
Core i9 13900K: 13885.1 (SE +/- 106.77, N = 3; Min: 13749.3 / Avg: 13885.07 / Max: 14095.7)
13900K: 14521.0
i9-13900K: 13800.9
1. (CC) gcc options: -O3

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Nodes Per Second, More Is Better
LeelaChessZero 0.28 - Backend: Eigen
Core i9 13900K: 2066 (SE +/- 9.28, N = 3; Min: 2054 / Avg: 2065.67 / Max: 2084)
13900K: 2109
i9-13900K: 2171
1. (CXX) g++ options: -flto -O3 -march=native -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Iterations Per Minute, More Is Better
GraphicsMagick 1.3.38 - Operation: Noise-Gaussian
Core i9 13900K: 626 (SE +/- 0.88, N = 3; Min: 624 / Avg: 625.67 / Max: 627)
13900K: 596
i9-13900K: 600
1. (CC) gcc options: -fopenmp -O3 -march=native -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Total Mop/s, More Is Better
NAS Parallel Benchmarks 3.4 - Test / Class: SP.B
Core i9 13900K: 21531.78 (SE +/- 262.60, N = 3; Min: 21182.31 / Avg: 21531.78 / Max: 22046.05)
13900K: 22448.18
i9-13900K: 21459.66
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

VOSK Speech Recognition Toolkit

VOSK is an open-source offline speech recognition API/toolkit. VOSK supports speech recognition in 17 languages and has a variety of models available and interfaces for different programming languages. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
VOSK Speech Recognition Toolkit 0.3.21
Core i9 13900K: 13.65 (SE +/- 0.05, N = 4; Min: 13.56 / Avg: 13.65 / Max: 13.77)
13900K: 14.25
i9-13900K: 13.84

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile measures the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - runs/s, More Is Better
Node.js V8 Web Tooling Benchmark
Core i9 13900K: 23.79 (SE +/- 0.21, N = 8; Min: 23.11 / Avg: 23.79 / Max: 24.75)
13900K: 24.67
i9-13900K: 23.69

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
GNU Octave Benchmark 6.4.0
Core i9 13900K: 4.485 (SE +/- 0.031, N = 8; Min: 4.38 / Avg: 4.48 / Max: 4.64)
13900K: 4.516
i9-13900K: 4.668

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Total Mop/s, More Is Better
NAS Parallel Benchmarks 3.4 - Test / Class: IS.D
Core i9 13900K: 1257.39 (SE +/- 2.01, N = 3; Min: 1254.08 / Avg: 1257.39 / Max: 1261.02)
13900K: 1208.23
i9-13900K: 1228.12
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ms, Fewer Is Better
NCNN 20220729 - Target: CPU - Model: vgg16
Core i9 13900K: 19.13 (MIN: 18.07 / MAX: 204.51) (SE +/- 0.12, N = 15; Min: 18.25 / Avg: 19.13 / Max: 20.17)
13900K: 19.35 (MIN: 19.12 / MAX: 20.53)
i9-13900K: 19.86 (MIN: 19.12 / MAX: 101.46)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
G'MIC - Test: 2D Function Plotting, 1000 Times
Core i9 13900K: 52.49 (SE +/- 0.08, N = 3; Min: 52.38 / Avg: 52.49 / Max: 52.64)
13900K: 53.97
i9-13900K: 54.47

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel source tree package. Learn more via the OpenBenchmarking.org test page.
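
The timed operation is essentially a plain tar extraction of the kernel tarball named in the test, so a rough by-hand equivalent is simply:

    # Time the extraction of the Linux 5.19 source tarball (GNU tar
    # auto-detects the .xz compression on extraction).
    time tar -xf linux-5.19.tar.xz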

OpenBenchmarking.org - Seconds, Fewer Is Better
Unpacking The Linux Kernel 5.19 - linux-5.19.tar.xz
Core i9 13900K: 4.827 (SE +/- 0.006, N = 7; Min: 4.81 / Avg: 4.83 / Max: 4.85)
13900K: 4.985
i9-13900K: 5.008

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Microseconds, Fewer Is Better
TensorFlow Lite 2022-05-18 - Model: Inception V4
Core i9 13900K: 18793.7 (SE +/- 205.77, N = 5; Min: 18089.8 / Avg: 18793.68 / Max: 19157.1)
13900K: 18119.5
i9-13900K: 18321.3

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - MB/s, More Is Better
LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed
Core i9 13900K: 13439.2 (SE +/- 77.03, N = 12; Min: 12840.9 / Avg: 13439.2 / Max: 13740)
13900K: 13936.5
i9-13900K: 13731.2
1. (CC) gcc options: -O3

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Frames Per Second, More Is Better
SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p
Core i9 13900K: 447.73 (SE +/- 1.47, N = 12; Min: 434.03 / Avg: 447.73 / Max: 452.19)
13900K: 432.56
i9-13900K: 431.83
1. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
Rodinia 3.1 - Test: OpenMP CFD Solver
Core i9 13900K: 6.085 (SE +/- 0.007, N = 7; Min: 6.06 / Avg: 6.08 / Max: 6.12)
13900K: 6.251
i9-13900K: 6.308
1. (CXX) g++ options: -O2 -lOpenCL

SVT-AV1

OpenBenchmarking.org - Frames Per Second, More Is Better
SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K
Core i9 13900K: 148.08 (SE +/- 0.77, N = 8; Min: 145.12 / Avg: 148.08 / Max: 151.48)
13900K: 151.84
i9-13900K: 146.53
1. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Gzip Compression

This test measures the time needed to archive/compress two copies of the Linux 4.13 kernel source tree using Gzip compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
Gzip Compression - Linux Source Tree Archiving To .tar.gz
Core i9 13900K: 24.92 (SE +/- 0.02, N = 3; Min: 24.89 / Avg: 24.92 / Max: 24.96)
13900K: 25.50
i9-13900K: 25.83

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Mbyte/s, More Is Better
Nettle 3.8 - Test: poly1305-aes
Core i9 13900K: 5549.96 (SE +/- 24.19, N = 15; Min: 5435.29 / Avg: 5549.96 / Max: 5633.65)
13900K: 5624.80
i9-13900K: 5433.21
1. (CC) gcc options: -O3 -march=native -ggdb3 -lnettle -lgmp -lm -lcrypto

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
libavif avifenc 0.10 - Encoder Speed: 10, Lossless
Core i9 13900K: 3.240 (SE +/- 0.021, N = 9; Min: 3.14 / Avg: 3.24 / Max: 3.35)
13900K: 3.354
i9-13900K: 3.337
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz
Core i9 13900K: 11.94 (SE +/- 0.03, N = 4; Min: 11.88 / Avg: 11.94 / Max: 11.99)
13900K: 12.32
i9-13900K: 12.35

Bork File Encrypter

Bork is a small, cross-platform file encryption utility. It is written in Java and designed to be included along with the files it encrypts for long-term storage. This test measures the amount of time it takes to encrypt a sample file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
Bork File Encrypter 1.4 - File Encryption Time
Core i9 13900K: 4.851 (SE +/- 0.020, N = 7; Min: 4.82 / Avg: 4.85 / Max: 4.97)
13900K: 4.980
i9-13900K: 5.017

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - msec, Fewer Is Better
DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans
Core i9 13900K: 1676 (SE +/- 6.46, N = 7; Min: 1650 / Avg: 1676.43 / Max: 1702)
13900K: 1733
i9-13900K: 1680

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, More Is Better)
Core i9 13900K: 25240.05 | 13900K: 24418.20 | i9-13900K: 24439.24 [Core i9 13900K: SE +/- 247.23, N = 6; Min: 24457.63 / Avg: 25240.05 / Max: 25701.98]
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Rainbow Colors and Prism - Acceleration: CPU (M samples/sec, More Is Better)
Core i9 13900K: 19.45 (MIN: 17.27 / MAX: 20.15) | 13900K: 20.10 (MIN: 20.06 / MAX: 20.21) | i9-13900K: 19.82 (MIN: 17.99 / MAX: 20.2) [Core i9 13900K: SE +/- 0.17, N = 5; Min: 18.96 / Avg: 19.45 / Max: 19.96]

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
Core i9 13900K: 4.21 (MIN: 4.12 / MAX: 5.1) | 13900K: 4.35 (MIN: 4.29 / MAX: 5.2) | i9-13900K: 4.30 (MIN: 4.19 / MAX: 4.91) [Core i9 13900K: SE +/- 0.02, N = 15; Min: 4.15 / Avg: 4.21 / Max: 4.4]
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, Fewer Is Better)
Core i9 13900K: 115.49 | 13900K: 113.19 | i9-13900K: 116.95 [Core i9 13900K: SE +/- 0.86, N = 3; Min: 114.47 / Avg: 115.49 / Max: 117.2]
1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -march=native -lm -lreadline

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Core i9 13900K: 347.41 | 13900K: 336.32 | i9-13900K: 338.22 [Core i9 13900K: SE +/- 0.90, N = 11; Min: 339.56 / Avg: 347.41 / Max: 350.47]
1. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.5.0 - Model: Church Facade (ms, Fewer Is Better)
Core i9 13900K: 4135 | 13900K: 4193 | i9-13900K: 4269 [Core i9 13900K: SE +/- 11.45, N = 7; Min: 4104 / Avg: 4134.86 / Max: 4173]
1. (CXX) g++ options: -O3 -march=native

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
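The "Equation of State" case used here times a vectorized NumPy kernel over a large array (4,194,304 elements in this configuration). The snippet below is a minimal sketch of that style of workload, not the actual PyHPC kernel; the arithmetic expression is purely illustrative.

    import numpy as np
    import time

    n = 4_194_304  # matches the project size used in this result
    rng = np.random.default_rng(0)
    temp = rng.random(n)
    salt = rng.random(n)

    start = time.perf_counter()
    # toy element-wise "equation of state" style expression, for illustration only
    density = 1000.0 + 0.8 * salt - 0.2 * temp + 0.01 * temp * salt
    elapsed = time.perf_counter() - start
    print(f"{n} elements in {elapsed:.4f} s")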

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better)
Core i9 13900K: 1.032 | 13900K: 1.065 | i9-13900K: 1.063 [Core i9 13900K: SE +/- 0.001, N = 3; Min: 1.03 / Avg: 1.03 / Max: 1.04]

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better)
Core i9 13900K: 5.511 | 13900K: 5.378 | i9-13900K: 5.545 [Core i9 13900K: SE +/- 0.030, N = 7; Min: 5.42 / Avg: 5.51 / Max: 5.61]
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: resize (Seconds, Fewer Is Better)
Core i9 13900K: 12.69 | 13900K: 12.87 | i9-13900K: 13.08 [Core i9 13900K: SE +/- 0.14, N = 4; Min: 12.4 / Avg: 12.69 / Max: 13.05]

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
Core i9 13900K: 40.56 (MIN: 39.92 / MAX: 42.16) | 13900K: 41.74 (MIN: 40.26 / MAX: 42.27) | i9-13900K: 40.55 (MIN: 40.15 / MAX: 40.95) [Core i9 13900K: SE +/- 0.09, N = 10; Min: 40.32 / Avg: 40.56 / Max: 41.32]
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.4 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Core i9 13900K: 266.22 | 13900K: 258.67 | i9-13900K: 266.15 [Core i9 13900K: SE +/- 1.06, N = 10; Min: 259.46 / Avg: 266.22 / Max: 270.98]
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
Core i9 13900K: 9.26 (MIN: 8.74 / MAX: 10.36) | 13900K: 9.40 (MIN: 9.26 / MAX: 10.68) | i9-13900K: 9.52 (MIN: 9.15 / MAX: 21.68) [Core i9 13900K: SE +/- 0.04, N = 15; Min: 8.8 / Avg: 9.26 / Max: 9.52]
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.4 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Core i9 13900K: 283.01 | 13900K: 287.42 | i9-13900K: 290.81 [Core i9 13900K: SE +/- 1.96, N = 13; Min: 263 / Avg: 283.01 / Max: 288.6]
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
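For readers unfamiliar with finite-difference codes, the core of such a solver is the repeated application of stencil operations over a grid. The toy 1D diffusion step below (NumPy, not Incompact3d's actual 3D compact schemes or MPI decomposition) is a minimal sketch of the pattern being exercised.

    import numpy as np

    nx, nu, dx, dt = 193, 0.1, 1.0 / 192, 1e-5  # 193 points per direction, as in this run
    u = np.sin(np.linspace(0.0, np.pi, nx))     # toy initial condition

    for _ in range(1000):
        # explicit central-difference update for du/dt = nu * d2u/dx2
        u[1:-1] += nu * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])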

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds, Fewer Is Better)
Core i9 13900K: 50.83 | 13900K: 52.23 | i9-13900K: 51.60 [Core i9 13900K: SE +/- 0.03, N = 3; Min: 50.78 / Avg: 50.83 / Max: 50.89]
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
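Since the reported value is a geometric mean over all queries, a short sketch of how such a figure is derived (with made-up per-query rates, not the actual ClickHouse query set) may help interpret the charts.

    from math import prod

    queries_per_minute = [310.0, 295.5, 320.2, 305.8]  # hypothetical per-query rates
    geo_mean = prod(queries_per_minute) ** (1.0 / len(queries_per_minute))
    print(f"Geometric mean: {geo_mean:.2f} queries per minute")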

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better)
Core i9 13900K: 309.80 (MIN: 18.73 / MAX: 30000) | 13900K: 306.14 (MIN: 24.13 / MAX: 15000) | i9-13900K: 301.52 (MIN: 22.94 / MAX: 15000) [Core i9 13900K: SE +/- 2.51, N = 8; Min: 301.16 / Avg: 309.8 / Max: 320.38]
1. ClickHouse server version 22.5.4.19 (official build).

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, More Is Better)
Core i9 13900K: 7166.30 | 13900K: 7196.11 | i9-13900K: 7006.74 [Core i9 13900K: SE +/- 59.54, N = 6; Min: 7006.74 / Avg: 7166.3 / Max: 7396]
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Unix Makefiles (Seconds, Fewer Is Better)
Core i9 13900K: 278.26 | 13900K: 274.38 | i9-13900K: 270.96 [Core i9 13900K: SE +/- 1.00, N = 3; Min: 276.26 / Avg: 278.26 / Max: 279.39]

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: rotate (Seconds, Fewer Is Better)
Core i9 13900K: 9.303 | 13900K: 9.448 | i9-13900K: 9.544 [Core i9 13900K: SE +/- 0.018, N = 5; Min: 9.26 / Avg: 9.3 / Max: 9.36]

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
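Level 19 is one of Zstd's slowest, highest-ratio settings, and "long mode" enables long-distance matching with a larger window. A minimal sketch of driving the same settings from Python, assuming the third-party zstandard bindings are installed and using the disk image path only as a placeholder:

    import zstandard as zstd  # third-party bindings; pip install zstandard

    data = open("FreeBSD-12.2-RELEASE-amd64-memstick.img", "rb").read()  # placeholder input

    compressed = zstd.ZstdCompressor(level=19).compress(data)   # level 19, as benchmarked here
    restored = zstd.ZstdDecompressor().decompress(compressed)
    assert restored == data
    print(f"ratio: {len(data) / len(compressed):.2f}")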

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better)
Core i9 13900K: 54.3 | 13900K: 55.7 | i9-13900K: 54.5 [Core i9 13900K: SE +/- 0.17, N = 3; Min: 54 / Avg: 54.3 / Max: 54.6]
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
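The GET test hammers the server with read requests over 50 parallel connections. A minimal client-side sketch with the redis-py package (assuming a local server on the default port, and using only a single connection, so the rate will be far below the chart) looks like:

    import time
    import redis  # third-party client; pip install redis

    r = redis.Redis(host="localhost", port=6379)
    r.set("benchmark:key", "value")

    n = 100_000
    start = time.perf_counter()
    for _ in range(n):
        r.get("benchmark:key")
    elapsed = time.perf_counter() - start
    print(f"{n / elapsed:,.0f} requests per second (single connection)")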

Redis 7.0.4 - Test: GET - Parallel Connections: 50 (Requests Per Second, More Is Better)
Core i9 13900K: 6254103.5 | 13900K: 6119214.0 | i9-13900K: 6275127.0 [Core i9 13900K: SE +/- 65493.86, N = 4; Min: 6065819.5 / Avg: 6254103.5 / Max: 6359010]
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Core i9 13900K: 90.55 | 13900K: 91.54 | i9-13900K: 92.85 [Core i9 13900K: SE +/- 0.41, N = 6; Min: 89.28 / Avg: 90.55 / Max: 91.53]
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34 - Circuit: C2670 (Seconds, Fewer Is Better)
Core i9 13900K: 52.71 | 13900K: 51.42 | i9-13900K: 52.05 [Core i9 13900K: SE +/- 0.05, N = 3; Min: 52.66 / Avg: 52.71 / Max: 52.81]
1. (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.
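The "Signal Source (Cosine)" figure measures how fast a trivial flow graph can push samples from a source block to a sink. A rough sketch of such a flow graph using GNU Radio's Python API is shown below; the block names come from the standard gnuradio modules, but the exact parameters are illustrative assumptions rather than the test profile's configuration.

    from gnuradio import gr, blocks, analog

    class CosineFlowgraph(gr.top_block):
        def __init__(self, n_samples=100_000_000):
            gr.top_block.__init__(self, "cosine throughput")
            samp_rate = 32000
            src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 1000, 1.0)
            head = blocks.head(gr.sizeof_gr_complex, n_samples)   # stop after n samples
            sink = blocks.null_sink(gr.sizeof_gr_complex)         # discard the output
            self.connect(src, head, sink)

    if __name__ == "__main__":
        CosineFlowgraph().run()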

GNU Radio - Test: Signal Source (Cosine) (MiB/s, More Is Better)
Core i9 13900K: 5720.1 | 13900K: 5588.1 | i9-13900K: 5580.2 [Core i9 13900K: SE +/- 5.65, N = 3; Min: 5712.6 / Avg: 5720.13 / Max: 5731.2]
1. 3.10.1.1

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test (Requests Per Second, More Is Better)
Core i9 13900K: 16294 | 13900K: 15899 | i9-13900K: 16269 [Core i9 13900K: SE +/- 66.76, N = 7; Min: 15970 / Avg: 16293.71 / Max: 16476]

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.
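Nettle's own tools report raw digest throughput. A rough stdlib-only equivalent using Python's hashlib (which typically wraps OpenSSL rather than Nettle, so treat it only as an illustration of what an Mbyte/s figure for sha512 means):

    import hashlib
    import time

    payload = b"\x00" * (64 * 1024 * 1024)  # 64 MiB of input data

    start = time.perf_counter()
    hashlib.sha512(payload).hexdigest()
    elapsed = time.perf_counter() - start
    print(f"sha512: {len(payload) / 1e6 / elapsed:.1f} Mbyte/s")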

Nettle 3.8 - Test: sha512 (Mbyte/s, More Is Better)
Core i9 13900K: 900.85 | 13900K: 923.03 | i9-13900K: 905.95 [Core i9 13900K: SE +/- 6.58, N = 15; Min: 852.92 / Avg: 900.85 / Max: 937.34]
1. (CC) gcc options: -O3 -march=native -ggdb3 -lnettle -lgmp -lm -lcrypto

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Hilbert Transform (MiB/s, More Is Better)
Core i9 13900K: 251.4 | 13900K: 246.5 | i9-13900K: 245.6 [Core i9 13900K: SE +/- 0.84, N = 3; Min: 249.8 / Avg: 251.43 / Max: 252.6]

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: auto-levels (Seconds, Fewer Is Better)
Core i9 13900K: 9.946 | 13900K: 10.175 | i9-13900K: 10.061 [Core i9 13900K: SE +/- 0.007, N = 5; Min: 9.93 / Avg: 9.95 / Max: 9.97]

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
Core i9 13900K: 28.60 (MIN: 26.01 / MAX: 32.75) | 13900K: 28.96 (MIN: 26.84 / MAX: 32.46) | i9-13900K: 29.25 (MIN: 27.12 / MAX: 32.35) [Core i9 13900K: SE +/- 0.14, N = 3; Min: 28.36 / Avg: 28.6 / Max: 28.83]

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, Fewer Is Better)
Core i9 13900K: 49.98 | 13900K: 50.56 | i9-13900K: 51.10 [Core i9 13900K: SE +/- 0.08, N = 3; Min: 49.83 / Avg: 49.98 / Max: 50.08]
1. (CXX) g++ options: -O2 -lOpenCL

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Core i9 13900K: 116.21 | 13900K: 114.03 | i9-13900K: 113.72 [Core i9 13900K: SE +/- 0.35, N = 7; Min: 114.13 / Avg: 116.21 / Max: 116.73]
1. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Antialias (Seconds, Fewer Is Better)
Core i9 13900K: 21.93 | 13900K: 21.85 | i9-13900K: 22.32 [Core i9 13900K: SE +/- 0.05, N = 3; Min: 21.85 / Avg: 21.93 / Max: 22.01]

TTSIOD 3D Renderer

A portable GPL 3D software renderer that supports OpenMP and Intel Threading Building Blocks with many different rendering modes. This version does not use OpenGL but is entirely CPU/software based. Learn more via the OpenBenchmarking.org test page.

TTSIOD 3D Renderer 2.3b - Phong Rendering With Soft-Shadow Mapping (FPS, More Is Better)
Core i9 13900K: 1644.68 | 13900K: 1679.83 | i9-13900K: 1670.56 [Core i9 13900K: SE +/- 15.83, N = 3; Min: 1615.77 / Avg: 1644.68 / Max: 1670.29]
1. (CXX) g++ options: -O3 -march=native -fomit-frame-pointer -ffast-math -mtune=native -flto -msse -mrecip -mfpmath=sse -msse2 -mssse3 -lSDL -fopenmp -fwhole-program -lstdc++

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance using the Chia VDF benchmark. The Chia VDF is the Chia Verifiable Delay Function (Proof of Time). Learn more via the OpenBenchmarking.org test page.
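A VDF forces a prescribed amount of inherently sequential work. Chia's VDF performs repeated squaring in class groups of imaginary quadratic fields; the toy sketch below substitutes ordinary modular arithmetic purely to show the repeated-squaring structure that the IPS (iterations per second) metric is counting. All parameters are made up for illustration.

    import time

    # toy parameters; a real VDF uses class groups, not a fixed composite modulus
    modulus = (2**127 - 1) * (2**89 - 1)
    x = 0xDEADBEEF
    iterations = 1_000_000

    start = time.perf_counter()
    y = x
    for _ in range(iterations):
        y = (y * y) % modulus          # each squaring depends on the previous result
    elapsed = time.perf_counter() - start
    print(f"{iterations / elapsed:,.0f} iterations per second")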

Chia Blockchain VDF 1.0.7 - Test: Square Assembly Optimized (IPS, More Is Better)
Core i9 13900K: 270400 | 13900K: 270400 | i9-13900K: 264800 [Core i9 13900K: SE +/- 1473.09, N = 3; Min: 267900 / Avg: 270400 / Max: 273000]
1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: NUMA (Bogo Ops/s, More Is Better)
Core i9 13900K: 774.86 | 13900K: 787.40 | i9-13900K: 791.09 [Core i9 13900K: SE +/- 2.57, N = 3; Min: 771.32 / Avg: 774.86 / Max: 779.85]
1. (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
Core i9 13900K: 31.91 (MIN: 28.9 / MAX: 34.73) | 13900K: 32.04 (MIN: 29.59 / MAX: 34.68) | i9-13900K: 32.58 (MIN: 29.87 / MAX: 35.18) [Core i9 13900K: SE +/- 0.03, N = 3; Min: 31.86 / Avg: 31.91 / Max: 31.95]

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, Fewer Is Better)
Core i9 13900K: 11.07 | 13900K: 10.97 | i9-13900K: 11.19 [Core i9 13900K: SE +/- 0.04, N = 5; Min: 10.94 / Avg: 11.07 / Max: 11.16]
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
Core i9 13900K: 26.93 (MIN: 24.6 / MAX: 30.54) | 13900K: 27.49 (MIN: 25.53 / MAX: 30.5) | i9-13900K: 27.45 (MIN: 25.47 / MAX: 30.32) [Core i9 13900K: SE +/- 0.01, N = 3; Min: 26.92 / Avg: 26.93 / Max: 26.95]

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
Core i9 13900K: 5.38 (MIN: 4.92 / MAX: 7.41) | 13900K: 5.48 (MIN: 5.4 / MAX: 5.94) | i9-13900K: 5.37 (MIN: 5.23 / MAX: 6.32) [Core i9 13900K: SE +/- 0.04, N = 15; Min: 4.96 / Avg: 5.38 / Max: 5.71]
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
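speedtest1 is a C program bundled with SQLite, but the flavor of work it times (bulk inserts and queries inside transactions) can be sketched with Python's built-in sqlite3 module; this is only an illustrative stand-in, not the actual speedtest1 workload.

    import sqlite3
    import time

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")

    start = time.perf_counter()
    with con:  # single transaction around the bulk insert
        con.executemany("INSERT INTO t (v) VALUES (?)",
                        (("row %d" % i,) for i in range(100_000)))
    count, = con.execute("SELECT COUNT(*) FROM t").fetchone()
    print(f"{count} rows in {time.perf_counter() - start:.3f} s")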

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
Core i9 13900K: 32.58 | 13900K: 33.24 | i9-13900K: 33.17 [Core i9 13900K: SE +/- 0.04, N = 3; Min: 32.52 / Avg: 32.58 / Max: 32.66]
1. (CC) gcc options: -O3 -march=native -lz

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Reflect (Seconds, Fewer Is Better)
Core i9 13900K: 17.12 | 13900K: 17.23 | i9-13900K: 17.47 [Core i9 13900K: SE +/- 0.00, N = 3; Min: 17.11 / Avg: 17.12 / Max: 17.13]

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: unsharp-mask (Seconds, Fewer Is Better)
Core i9 13900K: 11.72 | 13900K: 11.94 | i9-13900K: 11.96 [Core i9 13900K: SE +/- 0.04, N = 4; Min: 11.68 / Avg: 11.72 / Max: 11.84]

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better)
Core i9 13900K: 82022472 | 13900K: 81131873 | i9-13900K: 82759307 [Core i9 13900K: SE +/- 794735.19, N = 3; Min: 81145774 / Avg: 82022472.33 / Max: 83609021]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
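PyPerformance's benchmarks (raytrace, regex_compile, nbody, 2to3, and pathlib appear in this result file) each time a small pure-Python workload. The same idea can be reproduced on a single function with the standard timeit module; the workload below is a toy stand-in, not one of the actual PyPerformance benchmarks.

    import timeit

    def workload():
        # tiny stand-in for a PyPerformance-style pure-Python kernel
        return sum(i * i for i in range(10_000))

    per_call = min(timeit.repeat(workload, number=200, repeat=5)) / 200
    print(f"{per_call * 1000:.3f} ms per call")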

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, Fewer Is Better)
Core i9 13900K: 209 | 13900K: 206 | i9-13900K: 205 [Core i9 13900K: SE +/- 1.45, N = 3; Min: 206 / Avg: 208.67 / Max: 211]

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 13.4.0+dfsg - Processing 60 Page PDF Document (Seconds, Fewer Is Better)
Core i9 13900K: 8.741 | 13900K: 8.904 | i9-13900K: 8.840 [Core i9 13900K: SE +/- 0.021, N = 5; Min: 8.69 / Avg: 8.74 / Max: 8.8]

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Five Back to Back FIR Filters (MiB/s, More Is Better)
Core i9 13900K: 2306.6 | 13900K: 2311.9 | i9-13900K: 2269.8 [Core i9 13900K: SE +/- 21.74, N = 3; Min: 2268.8 / Avg: 2306.6 / Max: 2344.1]
1. 3.10.1.1

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Core i9 13900K: 418.90 | 13900K: 426.66 | i9-13900K: 426.50 [Core i9 13900K: SE +/- 1.32, N = 11; Min: 412.74 / Avg: 418.9 / Max: 423.36]
1. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and a Linux networking stack stress test. The test runs on the local host but requires root permissions. It creates three namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices. Those two WireGuard devices send traffic through the loopback device of ns0, so the test ends up exercising encryption and decryption at the same time -- a pretty CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better)
Core i9 13900K: 92.56 | 13900K: 93.27 | i9-13900K: 91.57 [Core i9 13900K: SE +/- 0.75, N = 3; Min: 91.76 / Avg: 92.56 / Max: 94.05]

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
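simdjson itself is a C++ library; the GB/s figure is simply bytes of JSON parsed per second. The stdlib-only sketch below shows how such a throughput number is computed, using Python's much slower json module on a synthetic payload, so expect numbers far below the chart.

    import json
    import time

    records = [{"id": i, "name": "user%d" % i, "score": i * 0.5} for i in range(200_000)]
    payload = json.dumps(records).encode()

    start = time.perf_counter()
    json.loads(payload)
    elapsed = time.perf_counter() - start
    print(f"{len(payload) / 1e9 / elapsed:.3f} GB/s")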

simdjson 2.0 - Throughput Test: Kostya (GB/s, More Is Better)
Core i9 13900K: 5.22 | 13900K: 5.18 | i9-13900K: 5.13 [Core i9 13900K: SE +/- 0.00, N = 3; Min: 5.22 / Avg: 5.22 / Max: 5.22]
1. (CXX) g++ options: -O3 -march=native

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, Fewer Is Better)
Core i9 13900K: 14.58 | 13900K: 14.83 | i9-13900K: 14.69 [Core i9 13900K: SE +/- 0.08, N = 4; Min: 14.45 / Avg: 14.58 / Max: 14.82]

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
Core i9 13900K: 80.8 | 13900K: 82.2 | i9-13900K: 81.8 [Core i9 13900K: SE +/- 0.80, N = 3; Min: 80 / Avg: 80.8 / Max: 82.4]
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Leonardo Phone Case Slim (Seconds, Fewer Is Better)
Core i9 13900K: 8.782 | 13900K: 8.934 | i9-13900K: 8.909 [Core i9 13900K: SE +/- 0.025, N = 5; Min: 8.74 / Avg: 8.78 / Max: 8.88]
1. OpenSCAD version 2021.01

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
Core i9 13900K: 179419 | 13900K: 176792 | i9-13900K: 179840 [Core i9 13900K: SE +/- 241.76, N = 3; Min: 178952 / Avg: 179419 / Max: 179761]
1. (CXX) g++ options: -O3 -march=native -ldl

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 100, Lossless Compression (Seconds, Fewer Is Better)
Core i9 13900K: 374.48 | 13900K: 377.25 | i9-13900K: 370.95 [Core i9 13900K: SE +/- 1.77, N = 3; Min: 370.99 / Avg: 374.48 / Max: 376.72]
1. (CXX) g++ options: -O3 -march=native -fno-rtti -ldl

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, More Is Better)
Core i9 13900K: 13095.74 | 13900K: 13312.03 | i9-13900K: 13226.04 [Core i9 13900K: SE +/- 52.66, N = 3; Min: 12996.43 / Avg: 13095.74 / Max: 13175.76]
1. (CC) gcc options: -O3

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.4 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Core i9 13900K: 84.14 | 13900K: 83.09 | i9-13900K: 82.79 [Core i9 13900K: SE +/- 0.18, N = 6; Min: 83.38 / Avg: 84.14 / Max: 84.62]
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
Core i9 13900K: 5205.1 | 13900K: 5124.6 | i9-13900K: 5121.9 [Core i9 13900K: SE +/- 3.23, N = 3; Min: 5201.3 / Avg: 5205.07 / Max: 5211.5]
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
Core i9 13900K: 75373 | 13900K: 75371 | i9-13900K: 76562 [Core i9 13900K: SE +/- 38.41, N = 3; Min: 75298 / Avg: 75373.33 / Max: 75424]
1. (CXX) g++ options: -O3 -march=native -ldl

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Core i9 13900K: 3.068 | 13900K: 3.115 | i9-13900K: 3.098 [Core i9 13900K: SE +/- 0.004, N = 3; Min: 3.06 / Avg: 3.07 / Max: 3.07]
1. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.4 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Core i9 13900K: 111.22 | 13900K: 109.76 | i9-13900K: 111.44 [Core i9 13900K: SE +/- 0.36, N = 7; Min: 109.45 / Avg: 111.22 / Max: 112.12]
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Rotate 90 Degrees (Seconds, Fewer Is Better)
Core i9 13900K: 32.05 | 13900K: 31.79 | i9-13900K: 32.27 [Core i9 13900K: SE +/- 0.12, N = 3; Min: 31.81 / Avg: 32.05 / Max: 32.19]

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, More Is Better)
Core i9 13900K: 17.87 | 13900K: 18.09 | i9-13900K: 18.14 [Core i9 13900K: SE +/- 0.08, N = 3; Min: 17.75 / Avg: 17.87 / Max: 18.01]
1. (CXX) g++ options: -O3 -march=native -lm -ldl

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)
Core i9 13900K: 4.438 | 13900K: 4.503 | i9-13900K: 4.503 [Core i9 13900K: SE +/- 0.007, N = 8; Min: 4.42 / Avg: 4.44 / Max: 4.49]
1. (CXX) g++ options: -O3 -march=native -fvisibility=hidden -logg -lm

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Core i9 13900K: 187.85 | 13900K: 185.16 | i9-13900K: 185.46 [Core i9 13900K: SE +/- 0.66, N = 9; Min: 184.49 / Avg: 187.85 / Max: 190.57]
1. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 95, Compression Effort 7 (Seconds, Fewer Is Better)
Core i9 13900K: 163.31 | 13900K: 161.85 | i9-13900K: 161.03 [Core i9 13900K: SE +/- 1.11, N = 3; Min: 161.18 / Avg: 163.31 / Max: 164.93]
1. (CXX) g++ options: -O3 -march=native -fno-rtti -ldl

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Core i9 13900K: 80.06 | 13900K: 78.94 | i9-13900K: 79.42 [Core i9 13900K: SE +/- 0.17, N = 6; Min: 79.52 / Avg: 80.06 / Max: 80.53]
1. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
Core i9 13900K: 35.20 (MIN: 31.25 / MAX: 39.16) | 13900K: 34.81 (MIN: 31.84 / MAX: 38.27) | i9-13900K: 35.30 (MIN: 32.3 / MAX: 38.77) [Core i9 13900K: SE +/- 0.29, N = 3; Min: 34.62 / Avg: 35.2 / Max: 35.53]

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better)
Core i9 13900K: 54068.11 | 13900K: 53631.52 | i9-13900K: 53329.03 [Core i9 13900K: SE +/- 101.11, N = 3; Min: 53908.55 / Avg: 54068.11 / Max: 54255.48]
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better)
Core i9 13900K: 81.9 | 13900K: 80.8 | i9-13900K: 80.8 [Core i9 13900K: SE +/- 0.50, N = 3; Min: 81 / Avg: 81.93 / Max: 82.7]

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3 - Test: blosclz bitshuffle (MB/s, More Is Better)
Core i9 13900K: 14657.7 | 13900K: 14722.4 | i9-13900K: 14856.3 [Core i9 13900K: SE +/- 92.18, N = 4; Min: 14407.4 / Avg: 14657.73 / Max: 14818.9]
1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better)
Core i9 13900K: 59.7 | 13900K: 59.1 | i9-13900K: 59.9 [Core i9 13900K: SE +/- 0.18, N = 3; Min: 59.4 / Avg: 59.67 / Max: 60]

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6 (Seconds, Fewer Is Better)
Core i9 13900K: 4.079 | 13900K: 4.081 | i9-13900K: 4.134 [Core i9 13900K: SE +/- 0.016, N = 8; Min: 4.03 / Avg: 4.08 / Max: 4.16]
1. (CXX) g++ options: -O3 -fPIC -march=native -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.4 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Core i9 13900K: 208.77 | 13900K: 206.00 | i9-13900K: 206.15 [Core i9 13900K: SE +/- 0.69, N = 9; Min: 204.72 / Avg: 208.77 / Max: 211.46]
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 75, Compression Effort 7 (Seconds, Fewer Is Better)
Core i9 13900K: 76.99 | 13900K: 75.97 | i9-13900K: 76.69 [Core i9 13900K: SE +/- 0.86, N = 3; Min: 75.27 / Avg: 76.99 / Max: 77.86]
1. (CXX) g++ options: -O3 -march=native -fno-rtti -ldl

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better)
Core i9 13900K: 154 | 13900K: 156 | i9-13900K: 155 [Core i9 13900K: SE +/- 0.33, N = 3; Min: 154 / Avg: 154.33 / Max: 155]

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i9 13900K: 1044.22 (MIN: 1015.86) | 13900K: 1052.48 (MIN: 1029.28) | i9-13900K: 1039.05 (MIN: 1031.03) [Core i9 13900K: SE +/- 12.25, N = 3; Min: 1019.72 / Avg: 1044.22 / Max: 1056.67]
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
Core i9 13900K: 204.40 (MIN: 201.51 / MAX: 209.68) | 13900K: 201.79 (MIN: 201.43 / MAX: 202.21) | i9-13900K: 201.98 (MIN: 201.56 / MAX: 202.96) [Core i9 13900K: SE +/- 1.62, N = 4; Min: 201.84 / Avg: 204.4 / Max: 209.14]
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
Core i9 13900K: 5061.9 | 13900K: 5031.4 | i9-13900K: 4997.5 [Core i9 13900K: SE +/- 1.61, N = 3; Min: 5059.8 / Avg: 5061.93 / Max: 5065.1]
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better)
Core i9 13900K: 8.72 | 13900K: 8.61 | i9-13900K: 8.72 [Core i9 13900K: SE +/- 0.01, N = 3; Min: 8.7 / Avg: 8.72 / Max: 8.75]

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks are also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: Hogbom Clean OpenMP (Iterations Per Second, More Is Better)
Core i9 13900K: 636.94 | 13900K: 632.91 | i9-13900K: 628.93 [Core i9 13900K: SE +/- 0.00, N = 5; Min: 636.94 / Avg: 636.94 / Max: 636.94]
1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
Core i9 13900K: 4601 | 13900K: 4567 | i9-13900K: 4625 [Core i9 13900K: SE +/- 35.30, N = 3; Min: 4538 / Avg: 4601.33 / Max: 4660]
1. (CXX) g++ options: -O3 -march=native -ldl

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.0 - Benchmark: vklBenchmark Scalar (items/sec, more is better): Core i9 13900K: 83 | 13900K: 83 | i9-13900K: 82

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): Core i9 13900K: 114356667 | 13900K: 115740000 | i9-13900K: 115590000

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: BMW27 - Compute: CPU-Only (seconds, fewer is better): Core i9 13900K: 52.94 | 13900K: 52.31 | i9-13900K: 52.33

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Summer Nature 4K (FPS, more is better): Core i9 13900K: 125.11 | 13900K: 124.20 | i9-13900K: 125.69

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.0 - Benchmark: vklBenchmark ISPC (items/sec, more is better): Core i9 13900K: 171 | 13900K: 170 | i9-13900K: 169

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Scale (seconds, fewer is better): Core i9 13900K: 5.571 | 13900K: 5.636 | i9-13900K: 5.579

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, fewer is better): Core i9 13900K: 1738 | 13900K: 1731 | i9-13900K: 1751

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (frames per second, more is better): Core i9 13900K: 680.35 | 13900K: 672.65 | i9-13900K: 677.97

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FIR Filter (MiB/s, more is better): Core i9 13900K: 1224.7 | 13900K: 1210.9 | i9-13900K: 1220.3

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Build: defconfig (seconds, fewer is better): Core i9 13900K: 37.99 | 13900K: 38.42 | i9-13900K: 38.36

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
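
A minimal NumPy sketch of the kind of vectorized, sequential CPU kernel these benchmarks time is shown below; the arithmetic is a made-up stand-in rather than the actual Isoneutral Mixing kernel, and the array length mirrors the 4194304-element project size used here.

import time
import numpy as np

n = 4_194_304                              # matches the "Project Size" label in this result
rng = np.random.default_rng(0)
a, b = rng.random(n), rng.random(n)

start = time.perf_counter()
c = np.sqrt(a * a + b * b) / (1.0 + np.abs(a - b))
print(f"{time.perf_counter() - start:.4f} s for one pass over {n} elements")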

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing (seconds, fewer is better): Core i9 13900K: 1.692 | 13900K: 1.711 | i9-13900K: 1.706

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, more is better): Core i9 13900K: 754084011.95 | 13900K: 746438349.86 | i9-13900K: 754760800.25

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (seconds, fewer is better): Core i9 13900K: 80.87 | 13900K: 80.05 | i9-13900K: 80.01

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.1 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (frames per second, more is better): Core i9 13900K: 281.65 | 13900K: 279.91 | i9-13900K: 282.91

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 44100 - Buffer Size: 512 (render ratio, more is better): Core i9 13900K: 6.040114 | 13900K: 6.045496 | i9-13900K: 6.104235

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (milliseconds, fewer is better): Core i9 13900K: 193 | 13900K: 191 | i9-13900K: 193

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 480000 - Buffer Size: 512 (render ratio, more is better): Core i9 13900K: 6.009902 | 13900K: 6.039526 | i9-13900K: 6.072553

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 0 (seconds, fewer is better): Core i9 13900K: 68.71 | 13900K: 68.00 | i9-13900K: 68.22

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 1080p (frames per second, more is better): Core i9 13900K: 22.37 | 13900K: 22.56 | i9-13900K: 22.33

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better): Core i9 13900K: 90.22 | 13900K: 90.78 | i9-13900K: 91.14

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup (milliseconds, fewer is better): Core i9 13900K: 4.91 | 13900K: 4.96 | i9-13900K: 4.91

Timed HMMer Search

This test searches through the Pfam database of profile hidden markov models. The search finds the domain structure of Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.2 - Pfam Database Search (seconds, fewer is better): Core i9 13900K: 86.30 | 13900K: 85.81 | i9-13900K: 85.44

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, more is better): Core i9 13900K: 18.10 | 13900K: 18.22 | i9-13900K: 18.28

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (items per second, more is better): Core i9 13900K: 5.51774 | 13900K: 5.54630 | i9-13900K: 5.57174

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Decryption (MiB/s, more is better): Core i9 13900K: 7044.3 | 13900K: 7026.3 | i9-13900K: 6976.2

Inkscape

Inkscape is an open-source vector graphics editor. This test profile times how long it takes to complete various operations by Inkscape. Learn more via the OpenBenchmarking.org test page.

Inkscape - Operation: SVG Files To PNG (seconds, fewer is better): Core i9 13900K: 14.88 | 13900K: 15.02 | i9-13900K: 14.92

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 480000 - Buffer Size: 1024 (render ratio, more is better): Core i9 13900K: 6.197002 | 13900K: 6.230725 | i9-13900K: 6.257307

Helsing

Helsing is an open-source POSIX vampire number generator. This test profile measures the time it takes to generate vampire numbers between varying numbers of digits. Learn more via the OpenBenchmarking.org test page.
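
A vampire number has 2n digits and equals the product of two n-digit "fangs" whose combined digits are a permutation of its own digits, with at most one fang ending in zero. A short, unoptimized Python check of that definition is sketched below for orientation; Helsing itself is a multi-threaded C program and works very differently.

def is_vampire(v: int) -> bool:
    s = str(v)
    if len(s) % 2:                         # vampire numbers have an even digit count
        return False
    n = len(s) // 2
    lo, hi = 10 ** (n - 1), 10 ** n
    for x in range(lo, int(v ** 0.5) + 1):
        if v % x:
            continue
        y = v // x
        if y >= hi:                        # both fangs must have exactly n digits
            continue
        if x % 10 == 0 and y % 10 == 0:    # the fangs may not both end in zero
            continue
        if sorted(s) == sorted(str(x) + str(y)):
            return True
    return False

print([v for v in range(1000, 10000) if is_vampire(v)])   # 1260, 1395, 1435, 1530, 1827, 2187, 6880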

Helsing 1.0-beta - Digit Range: 14 digit (seconds, fewer is better): Core i9 13900K: 215.69 | 13900K: 213.66 | i9-13900K: 213.73

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
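
The aggregation described above, a geometric mean across the individual queries, can be sketched in a few lines of Python; the per-query rates below are made-up values for illustration only.

import math

queries_per_minute = [410.0, 355.2, 298.7, 512.9]   # hypothetical per-query rates
geo_mean = math.exp(sum(math.log(q) for q in queries_per_minute) / len(queries_per_minute))
print(f"{geo_mean:.2f} queries per minute (geometric mean)")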

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (queries per minute, geo mean, more is better): Core i9 13900K: 317.39 | 13900K: 317.34 | i9-13900K: 314.41

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better): Core i9 13900K: 0.57690 | 13900K: 0.57338 | i9-13900K: 0.57152

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
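
For context, a minimal redis-py sketch of a SET loop is shown below; it assumes a Redis server on localhost:6379 and uses a single connection, so it does not reproduce the 50-parallel-connection load generated by the actual test.

import time
import redis

r = redis.Redis(host="localhost", port=6379)
n = 100_000
start = time.perf_counter()
for i in range(n):
    r.set(f"key:{i}", "value")
print(f"{n / (time.perf_counter() - start):,.0f} requests per second (single connection)")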

Redis 7.0.4 - Test: SET - Parallel Connections: 50 (requests per second, more is better): Core i9 13900K: 4568378.3 | 13900K: 4577579.0 | i9-13900K: 4611352.5

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Default (seconds, fewer is better): Core i9 13900K: 1.598 | 13900K: 1.601 | i9-13900K: 1.613

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/scivis/real_time (items per second, more is better): Core i9 13900K: 10.13 | 13900K: 10.06 | i9-13900K: 10.15

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 44100 - Buffer Size: 1024 (render ratio, more is better): Core i9 13900K: 6.320245 | 13900K: 6.376140 | i9-13900K: 6.376749

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
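
The scores reported here come from the ai_benchmark package; a hedged sketch of its documented usage is below (TensorFlow must be installed, and the exact output formatting may differ between package versions).

from ai_benchmark import AIBenchmark

benchmark = AIBenchmark()
results = benchmark.run()   # runs the inference and training workloads on this device
print(results)              # the device inference, training, and AI scores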

AI Benchmark Alpha 0.1.2 - Device Inference Score (score, more is better): Core i9 13900K: 2024 | 13900K: 2041 | i9-13900K: 2042

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: DLSC - Acceleration: CPU (M samples/sec, more is better): Core i9 13900K: 4.51 | 13900K: 4.55 | i9-13900K: 4.52

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
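
A hedged sketch of driving ONNX Runtime on the CPU from Python is shown below; the model file name and input shape are placeholders rather than the exact yolov4 model this test profile fetches.

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov4.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 416, 416, 3), dtype=np.float32)   # assumed input layout for illustration
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])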

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (inferences per minute, more is better): Core i9 13900K: 685 | 13900K: 686 | i9-13900K: 680

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time (items per second, more is better): Core i9 13900K: 226.66 | 13900K: 224.68 | i9-13900K: 224.81

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): Core i9 13900K: 2015.94 | 13900K: 1999.35 | i9-13900K: 2016.93

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos (milliseconds, fewer is better): Core i9 13900K: 45.8 | 13900K: 45.5 | i9-13900K: 45.9

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): Core i9 13900K: 2018.59 | 13900K: 2001.14 | i9-13900K: 2013.18

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (seconds, fewer is better): Core i9 13900K: 50.13 | 13900K: 49.71 | i9-13900K: 49.86

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.4 - Time To Compile (seconds, fewer is better): Core i9 13900K: 20.82 | 13900K: 20.92 | i9-13900K: 20.74

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
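
For orientation, a single-threaded Python sketch of the usual bitmask backtracking approach to counting N-queens solutions follows; the actual m-queens solver is C++ parallelized with OpenMP, which this does not attempt to reproduce.

def count_nqueens(n: int) -> int:
    full = (1 << n) - 1

    def solve(cols: int, diag1: int, diag2: int) -> int:
        if cols == full:
            return 1
        total = 0
        free = full & ~(cols | diag1 | diag2)
        while free:
            bit = free & -free          # lowest available column
            free ^= bit
            total += solve(cols | bit, ((diag1 | bit) << 1) & full, (diag2 | bit) >> 1)
        return total

    return solve(0, 0, 0)

print(count_nqueens(8))   # 92 solutions on an 8x8 board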

m-queens 1.2 - Time To Solve (seconds, fewer is better): Core i9 13900K: 30.09 | 13900K: 30.34 | i9-13900K: 30.22

Parallel BZIP2 Compression

This test measures the time needed to compress a file (FreeBSD-13.0-RELEASE-amd64-memstick.img) using Parallel BZIP2 compression. Learn more via the OpenBenchmarking.org test page.
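
For reference, the equivalent single-threaded compression through Python's bz2 module is sketched below; pbzip2 splits the input into blocks and compresses them in parallel across cores, which this sketch deliberately omits, and it assumes the named image file is present locally.

import bz2
import time

data = open("FreeBSD-13.0-RELEASE-amd64-memstick.img", "rb").read()
start = time.perf_counter()
compressed = bz2.compress(data, compresslevel=9)
print(f"{time.perf_counter() - start:.3f} s, {len(data) / len(compressed):.2f}x ratio")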

Parallel BZIP2 Compression 1.1.13 - FreeBSD-13.0-RELEASE-amd64-memstick.img Compression (seconds, fewer is better): Core i9 13900K: 3.415 | 13900K: 3.440 | i9-13900K: 3.444

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (milliseconds, fewer is better): Core i9 13900K: 12.2 | 13900K: 12.1 | i9-13900K: 12.1

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34 - Circuit: C7552 (seconds, fewer is better): Core i9 13900K: 53.31 | 13900K: 53.47 | i9-13900K: 53.74

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better): Core i9 13900K: 10.20 | 13900K: 10.11 | i9-13900K: 10.18

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format ten times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.3 - WAV To FLAC (seconds, fewer is better): Core i9 13900K: 10.017 | 13900K: 9.980 | i9-13900K: 9.937

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (seconds, fewer is better): Core i9 13900K: 250.05 | 13900K: 250.59 | i9-13900K: 248.60

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): Core i9 13900K: 1395733333 | 13900K: 1385100000 | i9-13900K: 1384700000

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1 - Variant: Monero - Hash Count: 1M (H/s, more is better): Core i9 13900K: 9531.7 | 13900K: 9498.8 | i9-13900K: 9456.9

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, more is better): Core i9 13900K: 1.7257 | 13900K: 1.7334 | i9-13900K: 1.7393

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (seconds, fewer is better): Core i9 13900K: 79.66 | 13900K: 79.35 | i9-13900K: 79.04

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.

ctx_clock - Context Switch Time (clocks, fewer is better): Core i9 13900K: 131 | 13900K: 130 | i9-13900K: 130

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (score, more is better): Core i9 13900K: 5532 | 13900K: 5562 | i9-13900K: 5574

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 2 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better): Core i9 13900K: 229236667 | 13900K: 230950000 | i9-13900K: 230890000

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (frames per second, more is better): Core i9 13900K: 8.358 | 13900K: 8.343 | i9-13900K: 8.405

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): Core i9 13900K: 150101 | 13900K: 149908 | i9-13900K: 149023

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Mini-ITX Case (seconds, fewer is better): Core i9 13900K: 21.89 | 13900K: 22.05 | i9-13900K: 21.94

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
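
A hedged sketch of measuring RSA-4096 signing throughput from Python via the cryptography package (which itself wraps OpenSSL) is shown below; the test profile actually runs the openssl speed command rather than this code.

import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
message = b"benchmark payload"
n = 200
start = time.perf_counter()
for _ in range(n):
    key.sign(message, padding.PKCS1v15(), hashes.SHA256())
print(f"{n / (time.perf_counter() - start):.1f} signatures per second")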

OpenSSL 3.0 - Algorithm: RSA4096 (sign/s, more is better): Core i9 13900K: 5815.6 | 13900K: 5847.2 | i9-13900K: 5856.5

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - PBKDF2-whirlpool (iterations per second, more is better): Core i9 13900K: 1298821 | 13900K: 1291349 | i9-13900K: 1289761

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
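
A hedged sketch of timing a single inference with the TensorFlow Lite Python interpreter follows; mobilenet.tflite is a placeholder file name rather than the exact model this test profile uses, and the input is simply zero-filled.

import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
start = time.perf_counter()
interpreter.invoke()
print(f"{(time.perf_counter() - start) * 1e6:.0f} microseconds for one inference")
_ = interpreter.get_tensor(out["index"])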

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (microseconds, fewer is better): Core i9 13900K: 951.57 | 13900K: 954.78 | i9-13900K: 958.21

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Training Score (score, more is better): Core i9 13900K: 3508 | 13900K: 3521 | i9-13900K: 3532

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better): Core i9 13900K: 153227 | 13900K: 152446 | i9-13900K: 153477

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Medium (MT/s, more is better): Core i9 13900K: 138.95 | 13900K: 139.02 | i9-13900K: 138.11

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 - Encode Settings: Quality 100, Compression Effort 5 (seconds, fewer is better): Core i9 13900K: 2.284 | 13900K: 2.299 | i9-13900K: 2.294

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark 1.2 (figure of merit, more is better): Core i9 13900K: 455217667 | 13900K: 452283800 | i9-13900K: 453791600

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K (frames per second, more is better): Core i9 13900K: 145.98 | 13900K: 145.21 | i9-13900K: 145.05

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: TopTweet (GB/s, more is better): Core i9 13900K: 7.87 | 13900K: 7.83 | i9-13900K: 7.88

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time (seconds, fewer is better): Core i9 13900K: 15.04 | 13900K: 15.06 | i9-13900K: 14.96

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
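
For orientation, a compact and unoptimized Python rendering of the sieve of Eratosthenes is shown below; primesieve's C++ implementation is segmented and cache-tuned, which this sketch makes no attempt to match.

def primes_below(limit: int) -> list[int]:
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"                      # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # cross out every multiple of p starting at p*p
            is_prime[p * p :: p] = b"\x00" * len(range(p * p, limit + 1, p))
    return [i for i, flag in enumerate(is_prime) if flag]

print(len(primes_below(10 ** 6)))   # 78498 primes below one million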

Primesieve 8.0 - Length: 1e12 (seconds, fewer is better): Core i9 13900K: 10.61 | 13900K: 10.58 | i9-13900K: 10.54

SecureMark

SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks, more is better): Core i9 13900K: 430753 | 13900K: 430942 | i9-13900K: 428282

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Build: allmodconfig (seconds, fewer is better): Core i9 13900K: 437.58 | 13900K: 434.89 | i9-13900K: 435.30

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Wavelet Blur (seconds, fewer is better): Core i9 13900K: 36.54 | 13900K: 36.50 | i9-13900K: 36.32

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile uses the system-provided OpenSCAD program and times how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD - Render: Pistol (seconds, fewer is better): Core i9 13900K: 49.73 | 13900K: 50.03 | i9-13900K: 50.03

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Enhanced (iterations per minute, more is better): Core i9 13900K: 673 | 13900K: 677 | i9-13900K: 674

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Tile Glass (seconds, fewer is better): Core i9 13900K: 16.89 | 13900K: 16.89 | i9-13900K: 16.80

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, more is better): Core i9 13900K: 4.305 | 13900K: 4.330 | i9-13900K: 4.322

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0 - Test: Decompression Throughput (megapixels/sec, more is better): Core i9 13900K: 320.59 | 13900K: 319.83 | i9-13900K: 321.66

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research. Gem5 is widely used for computer architecture research within the industry, academia, and more. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2 - Time To Compile (seconds, fewer is better): Core i9 13900K: 182.11 | 13900K: 182.76 | i9-13900K: 181.73

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0 - Benchmark: Multi-Threaded - Configuration: ETC2 (Mpx/s, more is better): Core i9 13900K: 4602.57 | 13900K: 4581.12 | i9-13900K: 4576.44

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported and HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.
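As a rough sketch (not the exact invocation the test profile uses), a CPU-only Cycles render of a benchmark scene can be launched headlessly like this; the .blend file name is a placeholder.

# Render frame 1 of a scene in the background with Cycles on the CPU
blender -b classroom.blend -E CYCLES -f 1 -- --cycles-device CPU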

Blender 3.2 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds; fewer is better): Core i9 13900K: 190.35 (SE +/- 0.51, N = 3; min/avg/max: 189.35 / 190.35 / 190.96), 13900K: 189.46, i9-13900K: 189.27.

Blender 3.2 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds; fewer is better): Core i9 13900K: 76.70 (SE +/- 0.16, N = 3; min/avg/max: 76.46 / 76.7 / 76.99), 13900K: 76.27, i9-13900K: 76.29.

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: Hilbert Transform (MiB/s; more is better): Core i9 13900K: 1210.2 (SE +/- 2.80, N = 3; min/avg/max: 1205 / 1210.23 / 1214.6), 13900K: 1209.9, i9-13900K: 1203.6. GNU Radio 3.10.1.1.

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
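A comparable run outside the test harness would synthesize a local text file straight to a WAV; the file names here are placeholders.

# Read a text file and write the synthesized speech to a WAV file
espeak-ng -f the_outline_of_science.txt -w speech.wav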

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds; fewer is better): Core i9 13900K: 16.28 (SE +/- 0.11, N = 4; min/avg/max: 16.05 / 16.28 / 16.49), 13900K: 16.36, i9-13900K: 16.37.

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
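The "Quality 100, Highest Compression" setting maps roughly to cwebp's quality and method switches; a minimal sketch, with a placeholder input file:

# Encode a JPEG to WebP at quality 100 with the slowest/strongest method
cwebp -q 100 -m 6 sample.jpg -o sample.webp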

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time in Seconds; fewer is better): Core i9 13900K: 4.664 (SE +/- 0.014, N = 8; min/avg/max: 4.64 / 4.66 / 4.75), 13900K: 4.640, i9-13900K: 4.639.

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Orange Juice - Acceleration: CPU (M samples/sec; more is better): Core i9 13900K: 7.48 (min 6.7 / max 7.95; SE +/- 0.02, N = 3; min/avg/max: 7.45 / 7.48 / 7.52), 13900K: 7.49 (min 6.73 / max 7.88), i9-13900K: 7.52 (min 6.75 / max 7.95).

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported and HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Classroom - Compute: CPU-Only (Seconds; fewer is better): Core i9 13900K: 152.38 (SE +/- 0.27, N = 3; min/avg/max: 151.86 / 152.38 / 152.77), 13900K: 151.57, i9-13900K: 151.64.

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
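avifenc exposes the encoder speed as a numeric option, so a hand-run equivalent of the "Encoder Speed: 2" configuration might look like the following (file names are placeholders):

# Convert a JPEG to AVIF at encoder speed 2 (lower is slower but smaller)
avifenc --speed 2 sample.jpg sample.avif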

libavif avifenc 0.10 - Encoder Speed: 2 (Seconds; fewer is better): Core i9 13900K: 33.77 (SE +/- 0.10, N = 3; min/avg/max: 33.63 / 33.77 / 33.96), 13900K: 33.70, i9-13900K: 33.59.

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Danish Mood - Acceleration: CPU (M samples/sec; more is better): Core i9 13900K: 3.80 (min 1.64 / max 4.31; SE +/- 0.02, N = 3; min/avg/max: 3.77 / 3.8 / 3.83), 13900K: 3.80 (min 1.66 / max 4.31), i9-13900K: 3.78 (min 1.67 / max 4.29).

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (Frames Per Second; more is better): Core i9 13900K: 204.39 (SE +/- 0.38, N = 9; min/avg/max: 202.02 / 204.39 / 205.48), 13900K: 204.01, i9-13900K: 203.32.

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
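Outside the harness, a similar Kvazaar run takes raw YUV input plus a preset; the resolution and file names below are placeholders.

# Encode raw 1080p YUV to HEVC using the veryfast preset
kvazaar -i Bosphorus_1920x1080.yuv --input-res 1920x1080 --preset veryfast -o bosphorus.hevc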

Kvazaar 2.1 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second; more is better): Core i9 13900K: 152.24 (SE +/- 0.24, N = 8; min/avg/max: 151.73 / 152.24 / 153.85), 13900K: 153.02, i9-13900K: 152.22.

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.
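The speed levels in this test correspond to vpxenc's --cpu-used setting; a hedged example of a Speed 5 encode from a Y4M source (file names are placeholders):

# VP9 encode in the 'good' quality mode at speed level 5
vpxenc --codec=vp9 --good --cpu-used=5 -o bosphorus_vp9.webm Bosphorus_3840x2160.y4m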

VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second; more is better): Core i9 13900K: 24.91 (SE +/- 0.10, N = 3; min/avg/max: 24.71 / 24.91 / 25.01), 13900K: 24.78, i9-13900K: 24.86.

Nebular Empirical Analysis Tool

NEAT is the Nebular Empirical Analysis Tool for empirical analysis of ionised nebulae, with uncertainty propagation. Learn more via the OpenBenchmarking.org test page.

Nebular Empirical Analysis Tool 2.3 (Seconds; fewer is better): Core i9 13900K: 18.32 (SE +/- 0.04, N = 3; min/avg/max: 18.25 / 18.32 / 18.39), 13900K: 18.42, i9-13900K: 18.42.

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Color Enhance (Seconds; fewer is better): Core i9 13900K: 32.61 (SE +/- 0.02, N = 3; min/avg/max: 32.56 / 32.61 / 32.63), 13900K: 32.60, i9-13900K: 32.77.

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
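zstd's built-in benchmark mode can reproduce these level and long-mode combinations directly; a minimal sketch against the disk image named above:

# Benchmark compression levels 3 through 8 with long-distance matching, all cores
zstd -b3 -e8 --long -T0 FreeBSD-12.2-RELEASE-amd64-memstick.img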

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Decompression Speed (MB/s; more is better): Core i9 13900K: 6150.4 (SE +/- 1.07, N = 3; min/avg/max: 6148.5 / 6150.43 / 6152.2), 13900K: 6122.5, i9-13900K: 6118.7.

Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Decompression Speed (MB/s; more is better): Core i9 13900K: 5875.5 (SE +/- 17.91, N = 3; min/avg/max: 5847.6 / 5875.47 / 5908.9), 13900K: 5905.9, i9-13900K: 5905.5.

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the XMRig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1 - Variant: Wownero - Hash Count: 1M (H/s; more is better): Core i9 13900K: 16997.8 (SE +/- 117.41, N = 3; min/avg/max: 16765.3 / 16997.83 / 17142.4), 13900K: 17085.3, i9-13900K: 17064.0.

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3 - Test: blosclz shuffle (MB/s; more is better): Core i9 13900K: 25764.4 (SE +/- 37.22, N = 6; min/avg/max: 25629 / 25764.43 / 25865.2), 13900K: 25811.0, i9-13900K: 25679.1.

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms; fewer is better): Core i9 13900K: 89349 (SE +/- 105.23, N = 3; min/avg/max: 89206 / 89348.67 / 89554), 13900K: 89656, i9-13900K: 89802.

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s; more is better): Core i9 13900K: 12.77 (SE +/- 0.01, N = 3; min/avg/max: 12.76 / 12.77 / 12.78), 13900K: 12.83, i9-13900K: 12.78.

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
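The figures below come from cryptsetup's built-in benchmark, which can be run directly; the explicit cipher selection in the second command is optional.

# Run the default PBKDF and cipher benchmarks
cryptsetup benchmark
# Benchmark one cipher/key-size combination, e.g. AES-XTS with a 512-bit key
cryptsetup benchmark --cipher aes-xts-plain64 --key-size 512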

Cryptsetup - PBKDF2-sha512 (Iterations Per Second; more is better): Core i9 13900K: 2817491 (SE +/- 1261.00, N = 3; min/avg/max: 2814969 / 2817491 / 2818752), 13900K: 2803679, i9-13900K: 2803679.

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second; more is better): Core i9 13900K: 524.69 (SE +/- 0.68, N = 12; min/avg/max: 520.58 / 524.69 / 527.62), 13900K: 526.70, i9-13900K: 524.15.

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
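A standalone run of the same workload would invoke gmx mdrun on the prepared water_GMX50_bare input; the .tpr file name, step count, and thread counts below are assumptions for illustration.

# MPI/OpenMP CPU run of the water_GMX50_bare benchmark system
gmx mdrun -s water_GMX50_bare.tpr -nsteps 1000 -ntmpi 1 -ntomp 32 -noconfout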

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day; more is better): Core i9 13900K: 1.479 (SE +/- 0.003, N = 3; min/avg/max: 1.47 / 1.48 / 1.48), 13900K: 1.472, i9-13900K: 1.476.

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second; more is better): Core i9 13900K: 7.24092 (SE +/- 0.01617, N = 3; min/avg/max: 7.22 / 7.24 / 7.27), 13900K: 7.25625, i9-13900K: 7.27519.

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance (Nodes Per Second; more is better): Core i9 13900K: 2184224 (SE +/- 8661.76, N = 14; min/avg/max: 2126555 / 2184223.86 / 2211959), 13900K: 2194334, i9-13900K: 2194334.

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Compression Speed (MB/s; more is better): Core i9 13900K: 1543.3 (SE +/- 2.66, N = 3; min/avg/max: 1538.7 / 1543.33 / 1547.9), 13900K: 1545.0, i9-13900K: 1550.3.

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms; fewer is better): Core i9 13900K: 5522 (SE +/- 37.22, N = 3; min/avg/max: 5448 / 5522 / 5566), 13900K: 5547, i9-13900K: 5547.

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds; fewer is better): Core i9 13900K: 155.46, 13900K: 154.76, i9-13900K: 155.14.

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second; more is better): Core i9 13900K: 5.71446 (SE +/- 0.01800, N = 3; min/avg/max: 5.68 / 5.71 / 5.75), 13900K: 5.73989, i9-13900K: 5.71597.

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Decryption (MiB/s; more is better): Core i9 13900K: 6440.7 (SE +/- 9.66, N = 3; min/avg/max: 6425 / 6440.7 / 6458.3), 13900K: 6464.9, i9-13900K: 6436.6.

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 96000 - Buffer Size: 1024 (Render Ratio; more is better): Core i9 13900K: 4.806556 (SE +/- 0.003345, N = 3; min/avg/max: 4.8 / 4.81 / 4.81), 13900K: 4.811469, i9-13900K: 4.827441.

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds; fewer is better): Core i9 13900K: 466 (SE +/- 0.58, N = 5; min/avg/max: 465 / 465.8 / 468), 13900K: 467, i9-13900K: 465.

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better): Core i9 13900K: 1882100000 (SE +/- 9042123.64, N = 3; min/avg/max: 1869700000 / 1882100000 / 1899700000), 13900K: 1890100000, i9-13900K: 1887700000.

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
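NPB's MPI binaries are built per test and problem class and then launched through mpirun; the build target and binary name below follow common NPB conventions and are assumptions, not the exact commands the test profile runs.

# From the NPB MPI source tree: build BT at class C, then run it on 32 ranks
make bt CLASS=C
mpirun -np 32 bin/bt.C.x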

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s; more is better): Core i9 13900K: 52156.41 (SE +/- 342.39, N = 3; min/avg/max: 51554.93 / 52156.41 / 52740.64), 13900K: 51940.36, i9-13900K: 52033.83. Run with Open MPI 4.1.2.

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 96000 - Buffer Size: 512 (Render Ratio; more is better): Core i9 13900K: 4.764746 (SE +/- 0.003089, N = 3; min/avg/max: 4.76 / 4.76 / 4.77), 13900K: 4.766922, i9-13900K: 4.784401.

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Encryption (MiB/s; more is better): Core i9 13900K: 6456.8 (SE +/- 10.15, N = 3; min/avg/max: 6440.6 / 6456.83 / 6475.5), 13900K: 6479.2, i9-13900K: 6452.7.

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance for the Chia VDF, the Chia Verifiable Delay Function (Proof of Time), using the Chia VDF benchmark. Learn more via the OpenBenchmarking.org test page.

Chia Blockchain VDF 1.0.7 - Test: Square Plain C++ (IPS; more is better): Core i9 13900K: 248500 (SE +/- 264.58, N = 3; min/avg/max: 248000 / 248500 / 248900), 13900K: 248200, i9-13900K: 249200.

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: IIR Filter (MiB/s; more is better): Core i9 13900K: 645.8 (SE +/- 0.80, N = 3; min/avg/max: 644.9 / 645.8 / 647.4), 13900K: 647.6, i9-13900K: 648.4. GNU Radio 3.10.1.1.

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Compression Speed (MB/s; more is better): Core i9 13900K: 1642.0 (SE +/- 2.03, N = 3; min/avg/max: 1639 / 1642.03 / 1645.9), 13900K: 1643.7, i9-13900K: 1648.5.

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8 - Test: chacha (Mbyte/s; more is better): Core i9 13900K: 1799.81 (min 812.93 / max 5637.98; SE +/- 6.92, N = 13; min/avg/max: 1740.23 / 1799.81 / 1814.07), 13900K: 1806.91 (min 838.65 / max 5624.63), i9-13900K: 1801.46 (min 840.18 / max 5585.04).

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
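Like zstd, the lz4 command-line tool has an in-memory benchmark mode; a sketch with a placeholder ISO name:

# Benchmark LZ4 compression level 9 against a sample file
lz4 -b9 ubuntu-22.04-desktop-amd64.iso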

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s; more is better): Core i9 13900K: 83.13 (SE +/- 0.01, N = 3; min/avg/max: 83.12 / 83.13 / 83.15), 13900K: 83.45, i9-13900K: 83.29.

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters (MiB/s; more is better): Core i9 13900K: 1990.8 (SE +/- 7.82, N = 3; min/avg/max: 1982.8 / 1990.77 / 2006.4), 13900K: 1993.4, i9-13900K: 1998.1.

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms; fewer is better): Core i9 13900K: 4716 (SE +/- 35.04, N = 3; min/avg/max: 4646 / 4716 / 4754), 13900K: 4733, i9-13900K: 4730.

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8 - Decompression Speed (MB/s; more is better): Core i9 13900K: 5785.6 (SE +/- 1.69, N = 12; min/avg/max: 5774.4 / 5785.63 / 5792.5), 13900K: 5766.4, i9-13900K: 5765.2.

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second; more is better): Core i9 13900K: 75.70 (SE +/- 0.08, N = 6; min/avg/max: 75.34 / 75.7 / 75.87), 13900K: 75.44, i9-13900K: 75.56.

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec; more is better): Core i9 13900K: 1104577.98 (SE +/- 528.46, N = 3; min/avg/max: 1103725.07 / 1104577.98 / 1105545), 13900K: 1105354.06, i9-13900K: 1101824.90.

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and on the CPU with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), fixed-rate bonds with a flat forward curve, and repo securities repurchase agreements. FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25 - Benchmark: Repo OpenMP (ms; fewer is better): Core i9 13900K: 19350.27 (SE +/- 9.47, N = 3; min/avg/max: 19339.12 / 19350.27 / 19369.11), 13900K: 19409.97, i9-13900K: 19369.63.

Nettle

GNU Nettle is a low-level cryptographic library used by GnuTLS and other software. Learn more via the OpenBenchmarking.org test page.

Nettle 3.8 - Test: aes256 (Mbyte/s; more is better): Core i9 13900K: 13631.29 (min 8759.21 / max 23355.78; SE +/- 3.35, N = 8; min/avg/max: 13618.46 / 13631.29 / 13646.17), 13900K: 13618.74 (min 8789.18 / max 23296.07), i9-13900K: 13590.02 (min 8728.07 / max 23281.77).

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MT - Degridding (Million Grid Points Per Second; more is better): Core i9 13900K: 4149.45 (SE +/- 3.75, N = 3; min/avg/max: 4142.05 / 4149.45 / 4154.16), 13900K: 4158.22, i9-13900K: 4146.08.

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.
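A comparable manual encode is a single lame invocation; the exact bitrate/quality switches used by the test profile are not shown here and the file names are placeholders.

# Encode a WAV file to MP3 using LAME's high-quality mode
lame -h sample.wav sample.mp3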

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds; fewer is better): Core i9 13900K: 4.565 (SE +/- 0.006, N = 8; min/avg/max: 4.55 / 4.56 / 4.6), 13900K: 4.574, i9-13900K: 4.578.

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 512b Encryption (MiB/s; more is better): Core i9 13900K: 996.3 (SE +/- 0.26, N = 3; min/avg/max: 995.8 / 996.27 / 996.7), 13900K: 997.9, i9-13900K: 995.2.

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second; more is better): Core i9 13900K: 14146745 (SE +/- 54571.12, N = 4; min/avg/max: 14019207 / 14146744.75 / 14257785), 13900K: 14108696, i9-13900K: 14122854.

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better): Core i9 13900K: 1042.79 (min 1015.71; SE +/- 11.62, N = 3; min/avg/max: 1019.58 / 1042.79 / 1055.4), 13900K: 1045.58 (min 1022.83), i9-13900K: 1044.41 (min 1035.73).

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better): Core i9 13900K: 10550 (SE +/- 41.20, N = 3; min/avg/max: 10470 / 10549.5 / 10608), 13900K: 10525, i9-13900K: 10553.

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5.02 - Mode: CPU (vsamples; more is better): Core i9 13900K: 30518 (SE +/- 76.32, N = 3; min/avg/max: 30394 / 30517.67 / 30657), 13900K: 30599, i9-13900K: 30557.

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and on the CPU with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), fixed-rate bonds with a flat forward curve, and repo securities repurchase agreements. FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25 - Benchmark: Bonds OpenMP (ms; fewer is better): Core i9 13900K: 30963.16 (SE +/- 2.29, N = 3; min/avg/max: 30958.68 / 30963.16 / 30966.16), 13900K: 31040.02, i9-13900K: 31006.09.

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.
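The very high quality setting corresponds to wavpack's -hh switch; a minimal sketch with a placeholder input (the extra -x processing flag is illustrative):

# Encode WAV to WavPack in very-high-quality mode with extra encode processing
wavpack -hh -x sample.wav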

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds; fewer is better): Core i9 13900K: 10.18 (SE +/- 0.02, N = 5; min/avg/max: 10.16 / 10.18 / 10.26), 13900K: 10.20, i9-13900K: 10.21.

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: LuxCore Benchmark - Acceleration: CPU (M samples/sec; more is better): Core i9 13900K: 4.28 (min 1.88 / max 4.85; SE +/- 0.03, N = 3; min/avg/max: 4.24 / 4.28 / 4.33), 13900K: 4.29 (min 1.93 / max 4.82), i9-13900K: 4.29 (min 1.93 / max 4.83).

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", with a focus on providing open-source, very fast ETC and S3 texture compression support. The test profile uses an 8K x 8K game texture as a sample input. Learn more via the OpenBenchmarking.org test page.

Etcpak 1.0 - Benchmark: Single-Threaded - Configuration: ETC2 (Mpx/s; more is better): Core i9 13900K: 347.43 (SE +/- 0.18, N = 3; min/avg/max: 347.09 / 347.43 / 347.7), 13900K: 346.89, i9-13900K: 347.68.

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second; more is better): Core i9 13900K: 40.41 (SE +/- 0.04, N = 4; min/avg/max: 40.31 / 40.41 / 40.49), 13900K: 40.50, i9-13900K: 40.48.

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
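The SHA256 figure can be approximated outside the harness with openssl speed; the worker count below assumes all 32 hardware threads are used.

# Multi-process SHA-256 throughput measurement across 32 workers
openssl speed -evp sha256 -multi 32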

OpenSSL 3.0 - Algorithm: SHA256 (byte/s; more is better): Core i9 13900K: 38206691133 (SE +/- 64587128.28, N = 3; min/avg/max: 38086240840 / 38206691133.33 / 38307330220), 13900K: 38139139950, i9-13900K: 38223289670.

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 512b Decryption (MiB/s; more is better): Core i9 13900K: 1028.1 (SE +/- 0.30, N = 2; min/avg/max: 1027.8 / 1028.1 / 1028.4), 13900K: 1029.5, i9-13900K: 1027.3.

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported and HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Barbershop - Compute: CPU-Only (Seconds; fewer is better): Core i9 13900K: 596.14 (SE +/- 0.74, N = 3; min/avg/max: 594.66 / 596.14 / 596.9), 13900K: 596.90, i9-13900K: 595.63.

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms; fewer is better): Core i9 13900K: 76905 (SE +/- 117.20, N = 3; min/avg/max: 76671 / 76905 / 77034), 13900K: 76789, i9-13900K: 76946.

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.
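Aircrack-ng ships a built-in cracking speed benchmark that is a reasonable stand-in for what this profile measures (treat the mapping as an assumption):

# Run aircrack-ng's built-in WPA key-cracking speed test
aircrack-ng -S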

Aircrack-ng 1.7 (k/s; more is better): Core i9 13900K: 69910.82 (SE +/- 96.12, N = 3; min/avg/max: 69755.13 / 69910.82 / 70086.32), 13900K: 70004.10, i9-13900K: 70053.24.

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
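Individual benchmarks from the suite can be run on their own with the pyperformance CLI; a sketch for the crypto_pyaes case (the output file name is a placeholder):

# Run only the crypto_pyaes micro-benchmark and store the results as JSON
pyperformance run --benchmarks=crypto_pyaes -o crypto_pyaes.json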

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds; fewer is better): Core i9 13900K: 49.8 (SE +/- 0.12, N = 3; min/avg/max: 49.6 / 49.77 / 50), 13900K: 49.7, i9-13900K: 49.7.

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score; more is better): Core i9 13900K: 1589793 (SE +/- 5731.41, N = 4; min/avg/max: 1573845 / 1589792.75 / 1600962), 13900K: 1592875, i9-13900K: 1592774.

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 256b Encryption (MiB/s; more is better): Core i9 13900K: 688.9 (SE +/- 1.17, N = 3; min/avg/max: 686.6 / 688.93 / 690.2), 13900K: 690.2, i9-13900K: 689.6.

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 1080p (Frames Per Second; more is better): Core i9 13900K: 51.38 (SE +/- 0.07, N = 4; min/avg/max: 51.25 / 51.38 / 51.51), 13900K: 51.42, i9-13900K: 51.47.

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: FM Deemphasis Filter (MiB/s; more is better): Core i9 13900K: 748.9 (SE +/- 0.12, N = 3; min/avg/max: 748.7 / 748.87 / 749.1), 13900K: 748.1, i9-13900K: 747.6.

N-Queens

This is an OpenMP-based test that solves the N-queens problem with a board size of 18. Learn more via the OpenBenchmarking.org test page.

N-Queens 1.0 - Elapsed Time (Seconds; fewer is better): Core i9 13900K: 5.932 (SE +/- 0.014, N = 7; min/avg/max: 5.92 / 5.93 / 6.02), 13900K: 5.922, i9-13900K: 5.926.

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FM Deemphasis Filter (MiB/s; more is better): Core i9 13900K: 1195.7 (SE +/- 0.12, N = 3; min/avg/max: 1195.5 / 1195.73 / 1195.9), 13900K: 1193.7, i9-13900K: 1195.7. GNU Radio 3.10.1.1.

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better): Core i9 13900K: 598 (SE +/- 0.17, N = 3; min/avg/max: 597.5 / 597.83 / 598), 13900K: 598, i9-13900K: 597.

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/ao/real_time (Items Per Second; more is better): Core i9 13900K: 10.15 (SE +/- 0.01, N = 3; min/avg/max: 10.13 / 10.15 / 10.18), 13900K: 10.16, i9-13900K: 10.16.

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MT - Gridding (Million Grid Points Per Second; more is better): Core i9 13900K: 2816.29 (SE +/- 2.71, N = 3; min/avg/max: 2811.94 / 2816.29 / 2821.26), 13900K: 2816.59, i9-13900K: 2811.94.

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1 - Input: Carbon Nanotube (Seconds; fewer is better): Core i9 13900K: 176.05 (SE +/- 0.64, N = 3; min/avg/max: 174.96 / 176.05 / 177.16), 13900K: 175.80, i9-13900K: 176.09.

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Decryption (MiB/s; more is better): Core i9 13900K: 695.3 (SE +/- 0.21, N = 3; min/avg/max: 694.9 / 695.3 / 695.6), 13900K: 695.5, i9-13900K: 694.4.

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
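astcenc takes the quality preset directly on the command line, so a hand-run equivalent of the Thorough preset might look like this; the image names and 6x6 block size are placeholders.

# Compress an LDR image to 6x6 ASTC blocks with the -thorough preset
astcenc-avx2 -cl input.png output.astc 6x6 -thorough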

ASTC Encoder 4.0 - Preset: Thorough (MT/s; more is better): Core i9 13900K: 17.89 (SE +/- 0.01, N = 3; min/avg/max: 17.86 / 17.89 / 17.91), 13900K: 17.86, i9-13900K: 17.89.

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: leblancbig (Hydro Cycle Time in Seconds; fewer is better): Core i9 13900K: 26.23 (SE +/- 0.03, N = 3; min/avg/max: 26.19 / 26.23 / 26.28), 13900K: 26.26, i9-13900K: 26.22.

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Complex Phase (MiB/s, More Is Better)
  Core i9 13900K: 972.1 (SE +/- 0.15, N = 3; Min: 971.8 / Avg: 972.07 / Max: 972.3)
  13900K: 972.0
  i9-13900K: 970.8

simdjson

This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
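
To make the GB/s metric concrete, the sketch below measures parse throughput the same way (bytes of JSON divided by wall time), but with Python's standard json module rather than simdjson itself, so the absolute numbers will be far lower. The file name is a placeholder.

# Illustrates the throughput metric (bytes parsed per second), not simdjson's own code.
import json, time

with open("sample.json", "rb") as f:   # placeholder JSON document
    raw = f.read()

iterations = 100
start = time.perf_counter()
for _ in range(iterations):
    json.loads(raw)                    # stdlib parser standing in for simdjson
elapsed = time.perf_counter() - start

print(f"{len(raw) * iterations / elapsed / 1e9:.3f} GB/s")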

simdjson 2.0 - Throughput Test: DistinctUserID (GB/s, More Is Better)
  Core i9 13900K: 7.78 (SE +/- 0.01, N = 3; Min: 7.77 / Avg: 7.78 / Max: 7.79)
  13900K: 7.78
  i9-13900K: 7.79
  1. (CXX) g++ options: -O3 -march=native

simdjson 2.0 - Throughput Test: PartialTweets (GB/s, More Is Better)
  Core i9 13900K: 7.87 (SE +/- 0.00, N = 3; Min: 7.87 / Avg: 7.87 / Max: 7.88)
  13900K: 7.88
  i9-13900K: 7.87
  1. (CXX) g++ options: -O3 -march=native

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
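
The samples/s figure is the rate at which input samples pass through a fixed FIR filter. The sketch below reproduces the idea with NumPy rather than liquid-dsp's C API: a 57-tap filter (matching "Filter Length: 57") applied repeatedly to 256-sample buffers, with throughput reported in samples per second. It is single-threaded and only illustrates the metric.

# Conceptual stand-in for the liquid-dsp filter benchmark, using NumPy instead of the C library.
import time
import numpy as np

taps = np.random.randn(57).astype(np.float32)                         # 57-tap FIR filter
block = (np.random.randn(256) + 1j * np.random.randn(256)).astype(np.complex64)

blocks = 20_000
start = time.perf_counter()
for _ in range(blocks):
    np.convolve(block, taps, mode="same")                             # filter one 256-sample buffer
elapsed = time.perf_counter() - start

print(f"{blocks * block.size / elapsed:,.0f} samples/s (single thread, NumPy)")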

Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Core i9 13900K: 893600000 (SE +/- 365558.93, N = 3; Min: 892880000 / Avg: 893600000 / Max: 894070000)
  13900K: 892810000
  i9-13900K: 893760000
  1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute-long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
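
Since the clip is 26 minutes long, a result of roughly 14.4 seconds corresponds to denoising at well over 100x real time on a single thread; the arithmetic is spelled out below using the Core i9 13900K figure.

# Real-time factor for the RNNoise result: audio duration divided by processing time.
audio_seconds = 26 * 60          # the 26-minute 16-bit RAW sample
processing_seconds = 14.43       # reported single-threaded run time
print(f"~{audio_seconds / processing_seconds:.0f}x real time")   # ~108x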

RNNoise 2020-06-28 (Seconds, Fewer Is Better)
  Core i9 13900K: 14.43 (SE +/- 0.02, N = 4; Min: 14.38 / Avg: 14.43 / Max: 14.45)
  13900K: 14.45
  i9-13900K: 14.44
  1. (CC) gcc options: -O3 -march=native -pedantic -fvisibility=hidden

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Encryption (MiB/s, More Is Better)
  Core i9 13900K: 689.7 (SE +/- 0.12, N = 3; Min: 689.5 / Avg: 689.7 / Max: 689.9)
  13900K: 689.8
  i9-13900K: 689.1

Cryptsetup - Serpent-XTS 256b Encryption (MiB/s, More Is Better)
  Core i9 13900K: 994.0 (SE +/- 2.34, N = 3; Min: 989.3 / Avg: 993.97 / Max: 996.6)
  13900K: 994.5
  i9-13900K: 995.0

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better)
  Core i9 13900K: 85.51
  13900K: 85.43
  i9-13900K: 85.43

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 256b Decryption (MiB/s, More Is Better)
  Core i9 13900K: 1027.7 (SE +/- 0.83, N = 3; Min: 1026.1 / Avg: 1027.7 / Max: 1028.9)
  13900K: 1026.8
  i9-13900K: 1027.5

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 4 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Core i9 13900K: 447116667 (SE +/- 119768.29, N = 3; Min: 446930000 / Avg: 447116666.67 / Max: 447340000)
  13900K: 446730000
  i9-13900K: 446880000
  1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
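
The RSA4096 result comes from that built-in "openssl speed" mode; a minimal wrapper is sketched below. The -multi flag fans the work out across processes so every core is exercised; the wrapper itself (not the openssl flags) is the illustrative part.

# Sketch: run OpenSSL's built-in speed benchmark for 4096-bit RSA and print its report.
import os, subprocess

cores = os.cpu_count() or 1
# Equivalent to: openssl speed -multi <cores> rsa4096
out = subprocess.run(
    ["openssl", "speed", "-multi", str(cores), "rsa4096"],
    capture_output=True, text=True, check=True)
print(out.stdout)   # the report includes both sign/s and verify/s figures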

OpenSSL 3.0 - Algorithm: RSA4096 (verify/s, More Is Better)
  Core i9 13900K: 378571.9 (SE +/- 622.37, N = 3; Min: 377819.4 / Avg: 378571.87 / Max: 379806.8)
  13900K: 378610.0
  i9-13900K: 378867.2
  1. (CC) gcc options: -pthread -m64 -O3 -march=native -lssl -lcrypto -ldl

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 256b Decryption (MiB/s, More Is Better)
  Core i9 13900K: 695.6 (SE +/- 0.26, N = 3; Min: 695.1 / Avg: 695.57 / Max: 696)
  13900K: 695.7
  i9-13900K: 695.2

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.
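
The "Speed 10 Realtime" label corresponds to libaom's realtime usage profile at --cpu-used speed level 10. A hypothetical standalone invocation is sketched below; the input and output file names are placeholders and the test profile's exact flag set may differ.

# Hypothetical aomenc invocation approximating "Speed 10 Realtime" on a 4K clip.
import os, subprocess

cmd = [
    "aomenc",
    "--rt",                          # realtime usage profile
    "--cpu-used=10",                 # speed level 10
    f"--threads={os.cpu_count()}",
    "-o", "bosphorus_4k.ivf",        # placeholder output bitstream
    "Bosphorus_3840x2160.y4m",       # placeholder 4K source clip
]
subprocess.run(cmd, check=True)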

AOM AV1 3.4 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Core i9 13900K: 112.01 (SE +/- 0.29, N = 6; Min: 111.01 / Avg: 112.01 / Max: 112.73)
  13900K: 112.07
  i9-13900K: 112.01
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: Plotting Isosurface Of A 3D Volume, 1000 Times (Seconds, Fewer Is Better)
  Core i9 13900K: 8.183 (SE +/- 0.017, N = 6; Min: 8.16 / Avg: 8.18 / Max: 8.26)
  13900K: 8.185
  i9-13900K: 8.181

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
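
The CPU sub-test reports how many prime-computation events complete per second across all worker threads. A minimal way to reproduce it outside PTS is sketched below; the thread count and duration are arbitrary choices rather than the test profile's exact settings.

# Sketch: run sysbench's CPU sub-test directly and show its events-per-second report.
import os, subprocess

out = subprocess.run(
    ["sysbench", "cpu",
     f"--threads={os.cpu_count()}",   # use every hardware thread
     "--time=10",                     # 10-second run (arbitrary)
     "run"],
    capture_output=True, text=True, check=True)
print(out.stdout)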

Sysbench 1.0.20 - Test: CPU (Events Per Second, More Is Better)
  Core i9 13900K: 109404.89 (SE +/- 32.86, N = 3; Min: 109370.1 / Avg: 109404.89 / Max: 109470.58)
  13900K: 109400.26
  i9-13900K: 109354.47
  1. (CC) gcc options: -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s, More Is Better)
  Core i9 13900K: 3275.98 (SE +/- 0.34, N = 9; Min: 3273.92 / Avg: 3275.98 / Max: 3277.2)
  13900K: 3275.91
  i9-13900K: 3275.33
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.2

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Encryption (MiB/s, More Is Better)
  Core i9 13900K: 7041.6 (SE +/- 8.85, N = 3; Min: 7031.1 / Avg: 7041.6 / Max: 7059.2)
  13900K: 7041.6
  i9-13900K: 7042.9

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve); some earlier ASKAP benchmarks for OpenCL and CUDA execution of tConvolve are also included. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, More Is Better)
  Core i9 13900K: 9509.14 (SE +/- 0.00, N = 6; Min: 9509.14 / Avg: 9509.14 / Max: 9509.14)
  13900K: 9509.14
  i9-13900K: 9509.14
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RT.ldr_alb_nrm.3840x2160 (Images / Sec, More Is Better)
  Core i9 13900K: 0.57 (SE +/- 0.00, N = 3; Min: 0.57 / Avg: 0.57 / Max: 0.57)
  13900K: 0.57
  i9-13900K: 0.57

Intel Open Image Denoise 1.4.0 - Run: RT.hdr_alb_nrm.3840x2160 (Images / Sec, More Is Better)
  Core i9 13900K: 0.57 (SE +/- 0.00, N = 3; Min: 0.57 / Avg: 0.57 / Max: 0.57)
  13900K: 0.57
  i9-13900K: 0.57

simdjson

This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: LargeRandom (GB/s, More Is Better)
  Core i9 13900K: 1.87 (SE +/- 0.00, N = 3; Min: 1.86 / Avg: 1.87 / Max: 1.87)
  13900K: 1.87
  i9-13900K: 1.87
  1. (CXX) g++ options: -O3 -march=native

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.
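
In essence the test profile drives a real browser to each benchmark page and scrapes the reported score. A stripped-down version of that flow with the Selenium Python bindings might look like the sketch below; the URL, fixed sleep, and missing score-extraction step are simplifications.

# Simplified Selenium flow: start a browser, load a benchmark page, read something back.
import time
from selenium import webdriver

driver = webdriver.Firefox()                                 # or webdriver.Chrome()
try:
    driver.get("https://browserbench.org/Speedometer2.0/")   # example benchmark page
    time.sleep(5)                                            # a real harness waits for completion
    print(driver.title)                                      # ...and reads the score element instead
finally:
    driver.quit()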

Selenium - Benchmark: Kraken - Browser: Google Chrome (ms, Fewer Is Better)
  Core i9 13900K: 355.7 (SE +/- 0.51, N = 4)
  1. chrome 104.0.5112.101

Selenium - Benchmark: Kraken - Browser: Firefox (ms, Fewer Is Better)
  Core i9 13900K: 545.5 (SE +/- 1.07, N = 3)
  1. firefox 105.0.1

Selenium - Benchmark: WASM imageConvolute - Browser: Google Chrome (ms, Fewer Is Better)
  Core i9 13900K: 15.63 (SE +/- 0.03, N = 7)
  1. chrome 104.0.5112.101

Selenium - Benchmark: WASM imageConvolute - Browser: Firefox (ms, Fewer Is Better)
  Core i9 13900K: 15.3 (SE +/- 0.03, N = 3)
  1. firefox 105.0.1

Selenium - Benchmark: WASM collisionDetection - Browser: Firefox (ms, Fewer Is Better)
  Core i9 13900K: 225.8 (SE +/- 0.10, N = 3)
  1. firefox 105.0.1

Selenium - Benchmark: PSPDFKit WASM - Browser: Google Chrome (Score, Fewer Is Better)
  Core i9 13900K: 2184 (SE +/- 4.36, N = 3)
  1. chrome 104.0.5112.101

Selenium - Benchmark: PSPDFKit WASM - Browser: Firefox (Score, Fewer Is Better)
  Core i9 13900K: 2097 (SE +/- 2.00, N = 3)
  1. firefox 105.0.1

Selenium - Benchmark: Jetstream 2 - Browser: Google Chrome (Score, More Is Better)
  Core i9 13900K: 340.82 (SE +/- 3.50, N = 3)
  1. chrome 104.0.5112.101

Selenium - Benchmark: Jetstream 2 - Browser: Firefox (Score, More Is Better)
  Core i9 13900K: 206.72 (SE +/- 2.44, N = 3)
  1. firefox 105.0.1

Selenium - Benchmark: Speedometer - Browser: Google Chrome (Runs Per Minute, More Is Better)
  Core i9 13900K: 396 (SE +/- 0.88, N = 3)
  1. chrome 104.0.5112.101

Selenium - Benchmark: Speedometer - Browser: Firefox (Runs Per Minute, More Is Better)
  Core i9 13900K: 306 (SE +/- 1.33, N = 3)
  1. firefox 105.0.1

Selenium - Benchmark: ARES-6 - Browser: Google Chrome (ms, Fewer Is Better)
  Core i9 13900K: 7.08 (SE +/- 0.05, N = 3)
  1. chrome 104.0.5112.101

Selenium - Benchmark: ARES-6 - Browser: Firefox (ms, Fewer Is Better)
  Core i9 13900K: 16.90 (SE +/- 0.11, N = 3)
  1. firefox 105.0.1

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, Fewer Is Better)
  Core i9 13900K: 3.24 (SE +/- 0.12, N = 15; Min: 2.8 / Avg: 3.24 / Max: 3.79; MIN: 2.77 / MAX: 4.35)
  13900K: 2.93 (MIN: 2.88 / MAX: 4.25)
  i9-13900K: 2.95 (MIN: 2.87 / MAX: 4.58)
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
  Core i9 13900K: 8.32 (SE +/- 0.16, N = 15; Min: 7.63 / Avg: 8.32 / Max: 9.05; MIN: 7.51 / MAX: 9.63)
  13900K: 7.95 (MIN: 7.77 / MAX: 9.11)
  i9-13900K: 7.91 (MIN: 7.68 / MAX: 9.56)
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  Core i9 13900K: 1.07 (SE +/- 0.02, N = 15; Min: 0.93 / Avg: 1.07 / Max: 1.15; MIN: 0.91 / MAX: 1.58)
  13900K: 0.95 (MIN: 0.92 / MAX: 2.18)
  i9-13900K: 1.11 (MIN: 1.07 / MAX: 2.73)
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  Core i9 13900K: 4.25 (SE +/- 0.08, N = 15; Min: 3.56 / Avg: 4.25 / Max: 4.95; MIN: 3.51 / MAX: 139.57)
  13900K: 4.02 (MIN: 3.94 / MAX: 4.85)
  i9-13900K: 4.45 (MIN: 4.36 / MAX: 6.03)
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  Core i9 13900K: 2.60 (SE +/- 0.07, N = 15; Min: 2.24 / Avg: 2.6 / Max: 2.83; MIN: 2.21 / MAX: 3.12)
  13900K: 2.35 (MIN: 2.3 / MAX: 2.7)
  i9-13900K: 2.30 (MIN: 2.24 / MAX: 3.11)
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  Core i9 13900K: 2.21 (SE +/- 0.04, N = 15; Min: 1.99 / Avg: 2.21 / Max: 2.44; MIN: 1.96 / MAX: 7.4)
  13900K: 2.44 (MIN: 2.39 / MAX: 2.94)
  i9-13900K: 2.48 (MIN: 2.42 / MAX: 3.21)
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  Core i9 13900K: 2.60 (SE +/- 0.05, N = 15; Min: 2.25 / Avg: 2.6 / Max: 2.85; MIN: 2.21 / MAX: 3.6)
  13900K: 2.81 (MIN: 2.76 / MAX: 3.27)
  i9-13900K: 2.81 (MIN: 2.74 / MAX: 3.68)
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, More Is Better)
  Core i9 13900K: 23533.02 (SE +/- 415.84, N = 15; Min: 22063.22 / Avg: 23533.02 / Max: 26117)
  13900K: 22173.89
  i9-13900K: 22650.57
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.2

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
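
The "Parallel Connections: 500" parameter is the number of simultaneous clients issuing requests. Something comparable can be reproduced with the stock redis-benchmark tool against a local server, as sketched below; the request count is arbitrary and this is not necessarily the exact command the test profile runs.

# Sketch: drive a local Redis server with 500 parallel clients using redis-benchmark.
import subprocess

out = subprocess.run(
    ["redis-benchmark",
     "-t", "get,set",      # the two operations reported here
     "-c", "500",          # 500 parallel connections, matching the test parameter
     "-n", "1000000"],     # total requests (arbitrary for this sketch)
    capture_output=True, text=True, check=True)
print(out.stdout)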

Redis 7.0.4 - Test: GET - Parallel Connections: 500 (Requests Per Second, More Is Better)
  Core i9 13900K: 5513026.30 (SE +/- 128738.97, N = 15; Min: 4837080 / Avg: 5513026.27 / Max: 6133574)
  13900K: 6219017.00
  i9-13900K: 4146011.25
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

Redis 7.0.4 - Test: SET - Parallel Connections: 500 (Requests Per Second, More Is Better)
  Core i9 13900K: 4225271.42 (SE +/- 102848.37, N = 15; Min: 3275954.75 / Avg: 4225271.42 / Max: 4626713.5)
  13900K: 4039823.50
  i9-13900K: 3143722.00
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
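
Average inference time is simply total wall time divided by the number of interpreter invocations. A minimal sketch with the TensorFlow Lite Python interpreter is shown below; the model path is a placeholder and the thread count is an arbitrary choice.

# Sketch: measure average TensorFlow Lite inference time in microseconds.
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet_quant.tflite",  # placeholder model file
                                  num_threads=8)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])   # dummy input of the right shape/type

runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
elapsed = time.perf_counter() - start

print(f"{elapsed / runs * 1e6:.1f} microseconds per inference")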

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
  Core i9 13900K: 1916.99 (SE +/- 43.05, N = 15; Min: 1761.58 / Avg: 1916.99 / Max: 2290.46)
  13900K: 1786.17
  i9-13900K: 1753.20

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  Core i9 13900K: 12.40 (SE +/- 0.54, N = 15; Min: 10.5 / Avg: 12.4 / Max: 18.09; MIN: 10.39 / MAX: 1430.63)
  13900K: 11.94 (MIN: 11.8 / MAX: 13.15)
  i9-13900K: 11.87 (MIN: 11.66 / MAX: 13.32)
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  Core i9 13900K: 7.68 (SE +/- 0.51, N = 15; Min: 6.33 / Avg: 7.68 / Max: 14.15; MIN: 6.27 / MAX: 1364.5)
  13900K: 7.45 (MIN: 7.34 / MAX: 8.34)
  i9-13900K: 8.71 (MIN: 8.55 / MAX: 10.21)
  1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
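
Compression speed here is just megabytes of input consumed per second at the given level. The sketch below shows the same measurement with the Python lz4 bindings (assuming the lz4 package is installed); it is single-threaded and uses a placeholder input file, so it only illustrates the metric, not the PTS harness.

# Illustrative single-threaded LZ4 compression-speed measurement.
import time
import lz4.frame

with open("sample.iso", "rb") as f:        # placeholder: any large sample file
    data = f.read()

start = time.perf_counter()
compressed = lz4.frame.compress(data, compression_level=3)   # level 3, as in this result
elapsed = time.perf_counter() - start

print(f"{len(data) / elapsed / 1e6:.1f} MB/s, ratio {len(data) / len(compressed):.2f}")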

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
  Core i9 13900K: 83.75 (SE +/- 1.67, N = 12; Min: 65.41 / Avg: 83.75 / Max: 85.46)
  13900K: 85.36
  i9-13900K: 85.48
  1. (CC) gcc options: -O3

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
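
The same idea applies to Zstd, which this test additionally runs at several levels and in long mode. A rough equivalent with the python-zstandard bindings is sketched below (the package and the local copy of the sample image are assumptions); threads=-1 asks the library to use all cores, which is closer to how the multi-threaded results are produced.

# Illustrative Zstd level-8 compression-speed measurement using python-zstandard.
import time
import zstandard as zstd

with open("FreeBSD-12.2-RELEASE-amd64-memstick.img", "rb") as f:   # the sample file named above
    data = f.read()

cctx = zstd.ZstdCompressor(level=8, threads=-1)   # level 8, all available worker threads
start = time.perf_counter()
compressed = cctx.compress(data)
elapsed = time.perf_counter() - start

print(f"{len(data) / elapsed / 1e6:.1f} MB/s, ratio {len(data) / len(compressed):.2f}")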

Zstd Compression 1.5.0 - Compression Level: 8 - Compression Speed (MB/s, More Is Better)
  Core i9 13900K: 1613.7 (SE +/- 45.10, N = 12; Min: 1465.9 / Avg: 1613.67 / Max: 2011.5)
  13900K: 1488.8
  i9-13900K: 1921.4
  1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Mozilla Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM collisionDetection - Browser: Google Chrome (ms, Fewer Is Better)
  Core i9 13900K: 190.30 (SE +/- 3.10, N = 15)
  1. chrome 104.0.5112.101

343 Results Shown

SVT-VP9
CloverLeaf
SVT-AV1
NCNN
Timed Mesa Compilation
x264
Pennant
Natron
SVT-VP9:
  PSNR/SSIM Optimized - Bosphorus 1080p
  Visual Quality Optimized - Bosphorus 4K
x264
QuantLib
NCNN
GraphicsMagick
NAS Parallel Benchmarks
ClickHouse
LZ4 Compression
NCNN:
  CPU - vision_transformer
  CPU - squeezenet_ssd
DaCapo Benchmark
TNN
TensorFlow Lite
Google Draco
RawTherapee
LeelaChessZero
GraphicsMagick
G'MIC
x265
TNN
SVT-AV1
LZ4 Compression
LeelaChessZero
GraphicsMagick
NAS Parallel Benchmarks
VOSK Speech Recognition Toolkit
Node.js V8 Web Tooling Benchmark
GNU Octave Benchmark
NAS Parallel Benchmarks
NCNN
G'MIC
Unpacking The Linux Kernel
TensorFlow Lite
LZ4 Compression
SVT-VP9
Rodinia
SVT-AV1
Gzip Compression
Nettle
libavif avifenc
Unpacking Firefox
Bork File Encrypter
DaCapo Benchmark
NAS Parallel Benchmarks
LuxCoreRender
NCNN
Timed MrBayes Analysis
SVT-HEVC
Google Draco
PyHPC Benchmarks
libavif avifenc
GIMP
TNN
AOM AV1
NCNN
AOM AV1
Xcompact3d Incompact3d
ClickHouse
ASKAP
Timed LLVM Compilation
GIMP
Zstd Compression
Redis
x265
Ngspice
GNU Radio
Node.js Express HTTP Load Test
Nettle
LuaRadio
GIMP
Embree
Rodinia
SVT-HEVC
GEGL
TTSIOD 3D Renderer
Chia Blockchain VDF
Stress-NG
Embree
Xcompact3d Incompact3d
Embree
NCNN
SQLite Speedtest
GEGL
GIMP
asmFish
PyPerformance
OCRMyPDF
GNU Radio
SVT-AV1
WireGuard + Linux Networking Stack Stress Test
simdjson
Timed MPlayer Compilation
Zstd Compression
OpenSCAD
OSPRay Studio
WebP2 Image Encode
LZ4 Compression
AOM AV1
Zstd Compression
OSPRay Studio
SVT-AV1
AOM AV1
GEGL
LAMMPS Molecular Dynamics Simulator
Opus Codec Encoding
SVT-AV1
WebP2 Image Encode
SVT-AV1
Embree
NAS Parallel Benchmarks
PyPerformance
C-Blosc
PyPerformance
libavif avifenc
AOM AV1
WebP2 Image Encode
PyPerformance
oneDNN
TNN
Zstd Compression
PyPerformance
ASKAP
OSPRay Studio
OpenVKL
Liquid-DSP
Blender
libgav1
OpenVKL
GEGL
DaCapo Benchmark
SVT-HEVC
GNU Radio
Timed Linux Kernel Compilation
PyHPC Benchmarks
Hierarchical INTegration
Appleseed
Kvazaar
Stargate Digital Audio Workstation
PyPerformance
Stargate Digital Audio Workstation
libavif avifenc
VP9 libvpx Encoding
LibRaw
PyPerformance
Timed HMMer Search
LAMMPS Molecular Dynamics Simulator
OSPRay
Cryptsetup
Inkscape
Stargate Digital Audio Workstation
Helsing
ClickHouse
NAMD
Redis
WebP2 Image Encode
OSPRay
Stargate Digital Audio Workstation
AI Benchmark Alpha
LuxCoreRender
ONNX Runtime
OSPRay
oneDNN
PyPerformance
oneDNN
Timed Godot Game Engine Compilation
Timed FFmpeg Compilation
m-queens
Parallel BZIP2 Compression
PyPerformance
Ngspice
High Performance Conjugate Gradient
FLAC Audio Encoding
Timed LLVM Compilation
Liquid-DSP
Xmrig
ASTC Encoder
Rodinia
ctx_clock
AI Benchmark Alpha
Liquid-DSP
SVT-AV1
OSPRay Studio
OpenSCAD
OpenSSL
Cryptsetup
TensorFlow Lite
AI Benchmark Alpha
OSPRay Studio
ASTC Encoder
WebP2 Image Encode
Algebraic Multi-Grid Benchmark
SVT-VP9
simdjson
POV-Ray
Primesieve
SecureMark
Timed Linux Kernel Compilation
GEGL
OpenSCAD
GraphicsMagick
GEGL
IndigoBench
libjpeg-turbo tjbench
Timed Gem5 Compilation
Etcpak
Blender:
  Pabellon Barcelona - CPU-Only
  Fishy Cat - CPU-Only
GNU Radio
eSpeak-NG Speech Engine
WebP Image Encode
LuxCoreRender
Blender
libavif avifenc
LuxCoreRender
SVT-HEVC
Kvazaar
VP9 libvpx Encoding
Nebular Empirical Analysis Tool
GEGL
Zstd Compression:
  8, Long Mode - Decompression Speed
  3, Long Mode - Decompression Speed
Xmrig
C-Blosc
OSPRay Studio
IndigoBench
Cryptsetup
SVT-VP9
GROMACS
OSPRay
TSCP
Zstd Compression
OSPRay Studio
Appleseed
OSPRay
Cryptsetup
Stargate Digital Audio Workstation
PyBench
Liquid-DSP
NAS Parallel Benchmarks
Stargate Digital Audio Workstation
Cryptsetup
Chia Blockchain VDF
GNU Radio
Zstd Compression
Nettle
LZ4 Compression
LuaRadio
OSPRay Studio
Zstd Compression
Kvazaar
Coremark
FinanceBench
Nettle
ASKAP
LAME MP3 Encoding
Cryptsetup
Crafty
oneDNN
ONNX Runtime
Chaos Group V-RAY
FinanceBench
WavPack Audio Encoding
LuxCoreRender
Etcpak
Kvazaar
OpenSSL
Cryptsetup
Blender
OSPRay Studio
Aircrack-ng
PyPerformance
PHPBench
Cryptsetup
VP9 libvpx Encoding
LuaRadio
N-Queens
GNU Radio
ONNX Runtime
OSPRay
ASKAP
GPAW
Cryptsetup
ASTC Encoder
Pennant
LuaRadio
simdjson:
  DistinctUserID
  PartialTweets
Liquid-DSP
RNNoise
Cryptsetup:
  Twofish-XTS 512b Encryption
  Serpent-XTS 256b Encryption
Appleseed
Cryptsetup
Liquid-DSP
OpenSSL
Cryptsetup
AOM AV1
G'MIC
Sysbench
NAS Parallel Benchmarks
Cryptsetup
ASKAP
Intel Open Image Denoise:
  RT.ldr_alb_nrm.3840x2160
  RT.hdr_alb_nrm.3840x2160
simdjson
Selenium:
  Kraken - Google Chrome
  Kraken - Firefox
  WASM imageConvolute - Google Chrome
  WASM imageConvolute - Firefox
  WASM collisionDetection - Firefox
  PSPDFKit WASM - Google Chrome
  PSPDFKit WASM - Firefox
  Jetstream 2 - Google Chrome
  Jetstream 2 - Firefox
  Speedometer - Google Chrome
  Speedometer - Firefox
  ARES-6 - Google Chrome
  ARES-6 - Firefox
NCNN:
  CPU - FastestDet
  CPU - regnety_400m
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
NAS Parallel Benchmarks
Redis:
  GET - 500
  SET - 500
TensorFlow Lite
NCNN:
  CPU - yolov4-tiny
  CPU - mobilenet
LZ4 Compression
Zstd Compression
Selenium