Core i5 12400 Linux

Intel Core i5-12400 testing with an ASUS PRIME Z690-P WIFI D4 (0605 BIOS) and llvmpipe graphics on Ubuntu 21.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2201079-PTS-COREI51239

Test Runs
Core i5 12400: run January 06 2022, test duration 9 Hours, 29 Minutes
i5 12400: run January 07 2022, test duration 10 Hours, 29 Minutes


Core i5 12400 Linux Benchmarks (OpenBenchmarking.org - Phoronix Test Suite)

Processor: Intel Core i5-12400 @ 5.60GHz (6 Cores / 12 Threads)
Motherboard: ASUS PRIME Z690-P WIFI D4 (0605 BIOS)
Chipset: Intel Device 7aa7
Memory: 16GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: llvmpipe
Audio: Realtek ALC897
Network: Realtek RTL8125 2.5GbE + Intel Device 7af0
OS: Ubuntu 21.10
Kernel: 5.15.7-051507-generic (x86_64)
Desktop: GNOME Shell 40.5
Display Server: X Server 1.20.13
OpenGL: 4.5 Mesa 22.0.0-devel (git-d80c7f3 2021-11-14 impish-oibaf-ppa) (LLVM 13.0.0 256 bits)
Vulkan: 1.2.197
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk mount options: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Scaling Governor: intel_pstate powersave (EPP: balance_performance)
- CPU Microcode: 0x12
- Thermald 2.4.6
- OpenJDK Runtime Environment (build 11.0.13+8-Ubuntu-0ubuntu1.21.10)
- Python 3.9.7
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Core i5 12400 vs. i5 12400 Comparison (Phoronix Test Suite)
[Chart: per-test deltas between the two runs, baseline scale +0% to +6.9%. Notable differences: OpenCV DNN - Deep Neural Network (9%), Aircrack-ng (4.1%), LeelaChessZero BLAS (2.9%), Chia Blockchain VDF (2.2%).]
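These chart percentages are simple ratios of the two runs' reported averages. As a sketch, the LeelaChessZero BLAS delta can be reproduced from the per-test numbers further down (742 vs. 721 nodes per second):

```python
# Reported LeelaChessZero BLAS averages (Nodes Per Second, more is better).
i5_12400 = 742        # "i5 12400" run
core_i5_12400 = 721   # "Core i5 12400" run

# Percentage advantage of the faster run over the slower one.
delta_pct = (i5_12400 / core_i5_12400 - 1) * 100
print(f"+{delta_pct:.1f}%")   # +2.9%, matching the comparison chart
```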

[Condensed results table: side-by-side values for every test in both runs (Core i5 12400, i5 12400), from the timed LLVM build through SVT-HEVC; the per-test graphs below break these results out individually. Source: OpenBenchmarking.org]

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, Fewer Is Better):
Core i5 12400: 673.00 (SE +/- 0.31, N = 3; Min: 672.61 / Avg: 673.00 / Max: 673.61)
i5 12400: 673.41 (SE +/- 0.24, N = 3; Min: 673.06 / Avg: 673.41 / Max: 673.88)
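The SE figures in these graphs are standard errors of the mean across runs. For N = 3 the middle sample is fully determined by the reported Min/Avg/Max, so the Ninja build numbers above can be checked by hand (a sketch, assuming SE is computed as sample standard deviation over sqrt(N)):

```python
import math
import statistics

# Reported for the Core i5 12400 run: Min 672.61 / Avg 673.00 / Max 673.61, N = 3.
lo, avg, hi, n = 672.61, 673.00, 673.61, 3
mid = n * avg - lo - hi          # the remaining sample, 672.78
runs = [lo, mid, hi]

se = statistics.stdev(runs) / math.sqrt(n)
print(round(se, 2))              # 0.31, matching the reported SE +/- 0.31
```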

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Jetstream 2 - Browser: Google Chrome (Score, More Is Better):
i5 12400: 213.43 (SE +/- 0.36, N = 3; Min: 212.78 / Avg: 213.43 / Max: 214.04)
Core i5 12400: 213.23 (SE +/- 2.13, N = 3; Min: 211.07 / Avg: 213.23 / Max: 217.5)
1. chrome 95.0.4638.69

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 17.3 - Time To Compile (Seconds, Fewer Is Better):
Core i5 12400: 576.16 (SE +/- 0.18, N = 3; Min: 575.8 / Avg: 576.16 / Max: 576.39)
i5 12400: 576.54 (SE +/- 0.04, N = 3; Min: 576.46 / Avg: 576.53 / Max: 576.61)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.10 - Model: shufflenet-v2-10 - Device: CPU (Inferences Per Minute, More Is Better):
i5 12400: 29072 (SE +/- 302.96, N = 12; Min: 26785 / Avg: 29072.17 / Max: 29629.5)
Core i5 12400: 28841 (SE +/- 346.81, N = 12; Min: 26762.5 / Avg: 28841.25 / Max: 29627.5)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, More Is Better):
i5 12400: 742 (SE +/- 3.93, N = 3; Min: 734 / Avg: 741.67 / Max: 747)
Core i5 12400: 721 (SE +/- 2.40, N = 3; Min: 718 / Avg: 721.33 / Max: 726)
1. (CXX) g++ options: -flto -pthread

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, More Is Better):
Core i5 12400: 1369 (SE +/- 8.74, N = 3; Min: 1357 / Avg: 1369 / Max: 1386)
i5 12400: 1368 (SE +/- 5.67, N = 3; Min: 1357 / Avg: 1368.33 / Max: 1374)
1. (CXX) g++ options: -flto -pthread

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.6.1 - Input: PNG - Encode Speed: 8 (MP/s, More Is Better):
i5 12400: 0.98 (SE +/- 0.00, N = 3; Min: 0.98 / Avg: 0.98 / Max: 0.99)
Core i5 12400: 0.98 (SE +/- 0.00, N = 3; Min: 0.98 / Avg: 0.98 / Max: 0.98)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie

SecureMark

SecureMark is an objective, standardized benchmarking framework for measuring the efficiency of cryptographic processing solutions developed by EEMBC. SecureMark-TLS is benchmarking Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks, More Is Better):
i5 12400: 329186 (SE +/- 131.20, N = 3; Min: 328929.81 / Avg: 329186.1 / Max: 329363)
Core i5 12400: 329151 (SE +/- 695.29, N = 3; Min: 328058 / Avg: 329151.49 / Max: 330442.22)
1. (CC) gcc options: -pedantic -O3

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1 - Variant: Monero - Hash Count: 1M (H/s, More Is Better):
Core i5 12400: 3625.6 (SE +/- 3.64, N = 3; Min: 3618.7 / Avg: 3625.57 / Max: 3631.1)
i5 12400: 3567.9 (SE +/- 50.99, N = 3; Min: 3470 / Avg: 3567.87 / Max: 3641.6)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU (FPS, More Is Better):
i5 12400: 8.06 (SE +/- 0.01, N = 3; Min: 8.05 / Avg: 8.06 / Max: 8.07)
Core i5 12400: 8.00 (SE +/- 0.03, N = 3; Min: 7.96 / Avg: 8 / Max: 8.06)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported. Learn more via the OpenBenchmarking.org test page.

Blender 3.0 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better):
i5 12400: 239.02 (SE +/- 0.08, N = 3; Min: 238.88 / Avg: 239.02 / Max: 239.16)
Core i5 12400: 239.27 (SE +/- 0.13, N = 3; Min: 239.03 / Avg: 239.27 / Max: 239.49)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: SHA256 (byte/s, More Is Better):
Core i5 12400: 8766088660 (SE +/- 8921620.23, N = 3; Min: 8750518130 / Avg: 8766088660 / Max: 8781420770)
i5 12400: 8758914220 (SE +/- 9549406.73, N = 3; Min: 8740530350 / Avg: 8758914220 / Max: 8772589290)
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
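For context, SHA-256 throughput of this kind can be sanity-checked outside of openssl speed with Python's hashlib, which also calls into an optimized native implementation. This is a rough sketch, not the benchmark itself: the buffer size, iteration count, and single-threaded timing here are arbitrary choices, and hashlib will generally be slower than OpenSSL's tuned assembly.

```python
import hashlib
import time

buf = b"\0" * (1 << 20)            # 1 MiB of zeros, an arbitrary test buffer
h = hashlib.sha256()

start = time.perf_counter()
for _ in range(64):                # hash 64 MiB in total
    h.update(buf)
elapsed = time.perf_counter() - start

# Throughput in GB/s; varies by machine, so no fixed expected value.
print(f"{64 * (1 << 20) / elapsed / 1e9:.2f} GB/s SHA-256")
```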

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG19 - Device: CPU (FPS, More Is Better):
i5 12400: 12.87 (SE +/- 0.03, N = 3; Min: 12.83 / Avg: 12.87 / Max: 12.92)
Core i5 12400: 12.86 (SE +/- 0.04, N = 3; Min: 12.81 / Avg: 12.86 / Max: 12.93)

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better):
i5 12400: 6109.9 (SE +/- 32.09, N = 3; Min: 6046.3 / Avg: 6109.87 / Max: 6149.3)
Core i5 12400: 6040.0 (SE +/- 8.58, N = 3; Min: 6024.9 / Avg: 6039.97 / Max: 6054.6)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported. Learn more via the OpenBenchmarking.org test page.

Blender 3.0 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better):
i5 12400: 165.74 (SE +/- 0.22, N = 3; Min: 165.33 / Avg: 165.74 / Max: 166.08)
Core i5 12400: 165.81 (SE +/- 0.12, N = 3; Min: 165.57 / Avg: 165.81 / Max: 165.94)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, Fewer Is Better):
Core i5 12400: 2368.37 (SE +/- 1.69, N = 3; Min: 2365.8 / Avg: 2368.37 / Max: 2371.55; MIN: 2321.4 / MAX: 2426.03)
i5 12400: 2371.12 (SE +/- 2.74, N = 3; Min: 2366.02 / Avg: 2371.11 / Max: 2375.38; MIN: 2322.16 / MAX: 2432.53)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.5.4 - Test: Object Detection (ms, Fewer Is Better):
Core i5 12400: 32761 (SE +/- 307.61, N = 15; Min: 31242 / Avg: 32761.2 / Max: 35071)
i5 12400: 32807 (SE +/- 292.67, N = 15; Min: 30362 / Avg: 32807.07 / Max: 34785)
1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better):
Core i5 12400: 3230930 (SE +/- 235.02, N = 3; Min: 3230690 / Avg: 3230930 / Max: 3231400)
i5 12400: 3231460 (SE +/- 506.39, N = 3; Min: 3230470 / Avg: 3231460 / Max: 3232140)

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better):
Core i5 12400: 230.93
i5 12400: 230.93

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better):
i5 12400: 2922760 (SE +/- 349.48, N = 3; Min: 2922080 / Avg: 2922760 / Max: 2923240)
Core i5 12400: 2924140 (SE +/- 355.29, N = 3; Min: 2923600 / Avg: 2924140 / Max: 2924810)

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, Fewer Is Better):
i5 12400: 222.92
Core i5 12400: 223.20

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (FPS, More Is Better):
i5 12400: 15.76 (SE +/- 0.03, N = 3; Min: 15.73 / Avg: 15.76 / Max: 15.82)
Core i5 12400: 15.73 (SE +/- 0.03, N = 3; Min: 15.69 / Avg: 15.73 / Max: 15.79)

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and a Linux networking stack stress test. The test runs on the local host but requires root permissions. It works by creating three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices. Those two WireGuard devices send traffic through the loopback device of ns0, so the test winds up exercising encryption and decryption at the same time -- a pretty CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better):
Core i5 12400: 125.38 (SE +/- 0.44, N = 3; Min: 124.53 / Avg: 125.38 / Max: 126.03)
i5 12400: 125.94 (SE +/- 0.44, N = 3; Min: 125.12 / Avg: 125.94 / Max: 126.63)
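The three-namespace topology described above can be recreated manually with iproute2 and the wg tool. The following is a hedged sketch of that setup only (requires root; the addresses, ports, and key file paths are illustrative, and the actual test profile additionally pins peer public keys and endpoints so that wg1/wg2 traffic transits ns0's loopback device):

```shell
# Create the three namespaces: ns0 carries the loopback path,
# ns1 and ns2 each get a WireGuard device.
ip netns add ns0
ip netns add ns1
ip netns add ns2

# WireGuard interfaces, moved into their respective namespaces.
ip link add wg1 type wireguard
ip link set wg1 netns ns1
ip link add wg2 type wireguard
ip link set wg2 netns ns2

# Illustrative keys and addressing (paths/ports are assumptions).
wg genkey > /tmp/ns1.key
wg genkey > /tmp/ns2.key
ip netns exec ns1 wg set wg1 private-key /tmp/ns1.key listen-port 51821
ip netns exec ns2 wg set wg2 private-key /tmp/ns2.key listen-port 51822
ip -n ns1 addr add 192.168.241.1/24 dev wg1
ip -n ns2 addr add 192.168.241.2/24 dev wg2
ip -n ns1 link set wg1 up
ip -n ns2 link set wg2 up
```

Once the peers are configured to reach each other through ns0, any traffic between wg1 and wg2 is encrypted on one side and decrypted on the other, which is what makes the workload so CPU- and scheduler-heavy.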

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: OFDM_Test (Samples / Second, More Is Better):
Core i5 12400: 203626667 (SE +/- 2637034.17, N = 15; Min: 191800000 / Avg: 203626666.67 / Max: 214300000)
i5 12400: 201713333 (SE +/- 2630279.12, N = 15; Min: 189400000 / Avg: 201713333.33 / Max: 215100000)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.10 - Model: fcn-resnet101-11 - Device: CPU (Inferences Per Minute, More Is Better):
i5 12400: 42 (SE +/- 0.17, N = 3; Min: 42 / Avg: 42.33 / Max: 42.5)
Core i5 12400: 42 (SE +/- 0.29, N = 3; Min: 41.5 / Avg: 42 / Max: 42.5)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.10 - Model: yolov4 - Device: CPU (Inferences Per Minute, More Is Better):
i5 12400: 298 (SE +/- 2.25, N = 3; Min: 293.5 / Avg: 298 / Max: 300.5)
Core i5 12400: 296 (SE +/- 2.42, N = 3; Min: 293 / Avg: 295.67 / Max: 300.5)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.10 - Model: super-resolution-10 - Device: CPU (Inferences Per Minute, More Is Better):
Core i5 12400: 3111 (SE +/- 9.37, N = 3; Min: 3092 / Avg: 3110.67 / Max: 3121.5)
i5 12400: 3105 (SE +/- 6.45, N = 3; Min: 3092.5 / Avg: 3105 / Max: 3114)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.14 - Time To Compile (Seconds, Fewer Is Better):
i5 12400: 92.70 (SE +/- 0.59, N = 3; Min: 92.03 / Avg: 92.7 / Max: 93.89)
Core i5 12400: 92.76 (SE +/- 0.58, N = 3; Min: 92.16 / Avg: 92.75 / Max: 93.92)
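The defconfig build described above corresponds roughly to the following manual steps inside a kernel source tree (a sketch; the test profile's exact invocation and job count may differ):

```shell
# Inside an extracted Linux 5.14 source tree:
make defconfig            # generate the default configuration for this architecture
time make -j"$(nproc)"    # timed parallel build, which is what the benchmark measures
```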

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.5.2 (k/s, More Is Better):
Core i5 12400: 22904.32 (SE +/- 194.07, N = 3; Min: 22517.49 / Avg: 22904.32 / Max: 23125.41)
i5 12400: 22004.43 (SE +/- 158.29, N = 15; Min: 20833.03 / Avg: 22004.43 / Max: 23057.23)
1. (CXX) g++ options: -O3 -fvisibility=hidden -masm=intel -fcommon -rdynamic -lpthread -lz -lcrypto -lhwloc -ldl -lm -pthread

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high-performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU, Backend: Numpy, Project Size: 4194304, Benchmark: Isoneutral Mixing (Seconds, fewer is better):
  Core i5 12400: 1.914 (SE +/- 0.003, N = 3; Min 1.91 / Avg 1.91 / Max 1.92)
  i5 12400: 1.919 (SE +/- 0.003, N = 3; Min 1.91 / Avg 1.92 / Max 1.92)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 6 Two-Pass, Input: Bosphorus 4K (Frames Per Second, more is better):
  i5 12400: 8.70 (SE +/- 0.01, N = 3; Min 8.68 / Avg 8.70 / Max 8.71)
  Core i5 12400: 8.68 (SE +/- 0.01, N = 3; Min 8.67 / Avg 8.68 / Max 8.70)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: StyleBench, Browser: Google Chrome (Runs / Minute, more is better; tested with Chrome 95.0.4638.69):
  i5 12400: 50.3 (SE +/- 0.03, N = 3; Min 50.3 / Avg 50.33 / Max 50.4)
  Core i5 12400: 50.1 (SE +/- 0.03, N = 3; Min 50.0 / Avg 50.07 / Max 50.1)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU, Backend: Aesara, Project Size: 4194304, Benchmark: Isoneutral Mixing (Seconds, fewer is better):
  i5 12400: 1.343 (SE +/- 0.003, N = 3; Min 1.34 / Avg 1.34 / Max 1.35)
  Core i5 12400: 1.344 (SE +/- 0.001, N = 3; Min 1.34 / Avg 1.34 / Max 1.35)

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5 - Mode: CPU (vsamples, more is better):
  Core i5 12400: 9626 (SE +/- 7.75, N = 3; Min 9611 / Avg 9626.33 / Max 9636)
  i5 12400: 9583 (SE +/- 15.51, N = 3; Min 9552 / Avg 9582.67 / Max 9602)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Exhaustive (Seconds, fewer is better):
  Core i5 12400: 67.20 (SE +/- 0.02, N = 3; Min 67.16 / Avg 67.20 / Max 67.22)
  i5 12400: 67.21 (SE +/- 0.01, N = 3; Min 67.19 / Avg 67.21 / Max 67.23)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: inception-v3 (ms, fewer is better):
  i5 12400: 23.10 (SE +/- 0.05, N = 3; Min 23.01 / Avg 23.10 / Max 23.17)
  Core i5 12400: 23.38 (SE +/- 0.31, N = 3; Min 23.05 / Avg 23.38 / Max 23.99)

Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms, fewer is better):
  Core i5 12400: 2.834 (SE +/- 0.008, N = 3; Min 2.82 / Avg 2.83 / Max 2.84)
  i5 12400: 2.837 (SE +/- 0.006, N = 3; Min 2.83 / Avg 2.84 / Max 2.85)

Mobile Neural Network 1.2 - Model: MobileNetV2_224 (ms, fewer is better):
  Core i5 12400: 2.008 (SE +/- 0.013, N = 3; Min 1.99 / Avg 2.01 / Max 2.03)
  i5 12400: 2.011 (SE +/- 0.011, N = 3; Min 1.99 / Avg 2.01 / Max 2.03)

Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, fewer is better):
  i5 12400: 3.517 (SE +/- 0.020, N = 3; Min 3.48 / Avg 3.52 / Max 3.55)
  Core i5 12400: 3.518 (SE +/- 0.015, N = 3; Min 3.49 / Avg 3.52 / Max 3.54)

Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, fewer is better):
  i5 12400: 20.00 (SE +/- 0.03, N = 3; Min 19.96 / Avg 20.00 / Max 20.07)
  Core i5 12400: 20.08 (SE +/- 0.05, N = 3; Min 20.02 / Avg 20.08 / Max 20.19)

Mobile Neural Network 1.2 - Model: squeezenetv1.1 (ms, fewer is better):
  i5 12400: 2.338 (SE +/- 0.012, N = 3; Min 2.32 / Avg 2.34 / Max 2.35)
  Core i5 12400: 2.347 (SE +/- 0.012, N = 3; Min 2.33 / Avg 2.35 / Max 2.37)

Mobile Neural Network 1.2 - Model: mobilenetV3 (ms, fewer is better):
  i5 12400: 1.091 (SE +/- 0.005, N = 3; Min 1.08 / Avg 1.09 / Max 1.10)
  Core i5 12400: 1.093 (SE +/- 0.004, N = 3; Min 1.09 / Avg 1.09 / Max 1.10)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU, Backend: PyTorch, Project Size: 4194304, Benchmark: Isoneutral Mixing (Seconds, fewer is better):
  Core i5 12400: 1.339 (SE +/- 0.006, N = 3; Min 1.33 / Avg 1.34 / Max 1.35)
  i5 12400: 1.352 (SE +/- 0.011, N = 3; Min 1.34 / Avg 1.35 / Max 1.37)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 6, Lossless (Seconds, fewer is better):
  i5 12400: 65.64 (SE +/- 0.13, N = 3; Min 65.38 / Avg 65.64 / Max 65.83)
  Core i5 12400: 65.69 (SE +/- 0.09, N = 3; Min 65.60 / Avg 65.69 / Max 65.87)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer, Model: Crown (Frames Per Second, more is better):
  i5 12400: 9.5977 (SE +/- 0.0261, N = 3; Min 9.55 / Avg 9.60 / Max 9.64)
  Core i5 12400: 9.5546 (SE +/- 0.0258, N = 3; Min 9.53 / Avg 9.55 / Max 9.61)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
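The GB/s figures below are simply bytes of JSON parsed per second of wall time. A rough sketch of that metric using Python's stdlib json parser, which is orders of magnitude slower than simdjson; the synthetic payload here is illustrative, not one of the benchmark's actual corpora (twitter.json, kostya.json, etc.):

```python
import json
import time

# Synthetic JSON document, roughly 1-2 MB when serialized.
payload = json.dumps([{"id": i, "id_str": str(i)} for i in range(50_000)]).encode()

start = time.perf_counter()
json.loads(payload)  # the parse being timed
elapsed = time.perf_counter() - start

throughput_gbps = len(payload) / elapsed / 1e9
print(f"{throughput_gbps:.4f} GB/s")
```

simdjson reaches multiple GB/s on the same metric by parsing with SIMD instructions rather than a byte-at-a-time state machine.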

simdjson 1.0 - Throughput Test: DistinctUserID (GB/s, more is better):
  i5 12400: 6.47 (SE +/- 0.00, N = 3; Min 6.46 / Avg 6.47 / Max 6.47)
  Core i5 12400: 6.47 (SE +/- 0.00, N = 3; Min 6.46 / Avg 6.47 / Max 6.47)

simdjson 1.0 - Throughput Test: PartialTweets (GB/s, more is better):
  i5 12400: 5.65 (SE +/- 0.01, N = 3; Min 5.64 / Avg 5.65 / Max 5.66)
  Core i5 12400: 5.64 (SE +/- 0.01, N = 3; Min 5.62 / Avg 5.64 / Max 5.65)

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 10.2 - Time To Compile (Seconds, fewer is better):
  Core i5 12400: 62.33 (SE +/- 0.07, N = 3; Min 62.20 / Avg 62.33 / Max 62.43)
  i5 12400: 62.37 (SE +/- 0.04, N = 3; Min 62.30 / Avg 62.37 / Max 62.45)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU, Scene: Bedroom (M samples/s, more is better):
  i5 12400: 1.484 (SE +/- 0.001, N = 3; Min 1.48 / Avg 1.48 / Max 1.49)
  Core i5 12400: 1.483 (SE +/- 0.001, N = 3; Min 1.48 / Avg 1.48 / Max 1.48)

IndigoBench 4.4 - Acceleration: CPU, Scene: Supercar (M samples/s, more is better):
  Core i5 12400: 3.643 (SE +/- 0.001, N = 3; Min 3.64 / Avg 3.64 / Max 3.65)
  i5 12400: 3.641 (SE +/- 0.004, N = 3; Min 3.63 / Avg 3.64 / Max 3.65)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
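The metric here is average inference latency in microseconds over repeated invocations. A sketch of that measurement loop, with a trivial function standing in for the TensorFlow Lite interpreter call (the real test loads a .tflite model and invokes it):

```python
import time

def run_inference() -> None:
    # Hypothetical stand-in for invoking the loaded TFLite model.
    sum(range(10_000))

N = 50  # number of timed invocations
start = time.perf_counter()
for _ in range(N):
    run_inference()
avg_us = (time.perf_counter() - start) / N * 1e6  # mean latency in microseconds

print(f"Average inference time: {avg_us:.1f} microseconds")
```

Averaging over many invocations is what makes the ~150,000-220,000 microsecond figures below stable to within a fraction of a percent.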

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, fewer is better):
  Core i5 12400: 221027 (SE +/- 30.99, N = 3; Min 220986 / Avg 221027.33 / Max 221088)
  i5 12400: 221043 (SE +/- 52.54, N = 3; Min 220939 / Avg 221043 / Max 221108)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, fewer is better):
  Core i5 12400: 150255 (SE +/- 9.33, N = 3; Min 150240 / Avg 150254.67 / Max 150272)
  i5 12400: 150311 (SE +/- 17.29, N = 3; Min 150278 / Avg 150311.33 / Max 150336)

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, fewer is better):
  Core i5 12400: 182792 (SE +/- 262.66, N = 3; Min 182312 / Avg 182791.67 / Max 183217)
  i5 12400: 183132 (SE +/- 177.94, N = 3; Min 182776 / Avg 183131.67 / Max 183320)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, fewer is better):
  Core i5 12400: 158832 (SE +/- 7.00, N = 3; Min 158825 / Avg 158832 / Max 158846)
  i5 12400: 158847 (SE +/- 41.04, N = 3; Min 158795 / Avg 158847 / Max 158928)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
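`openssl speed` works by counting how many operations complete within a fixed time window and reporting the rate per second. A hedged sketch of that methodology in Python; SHA-256 from hashlib stands in for the RSA4096 sign/verify primitives, which have no stdlib equivalent:

```python
import hashlib
import time

WINDOW = 0.1  # measurement window in seconds (openssl speed uses longer windows)
payload = b"\x00" * 512  # 4096-bit-sized stand-in input

ops = 0
deadline = time.perf_counter() + WINDOW
while time.perf_counter() < deadline:
    hashlib.sha256(payload).digest()  # stand-in for one sign/verify operation
    ops += 1

print(f"{ops / WINDOW:.1f} ops/s")
```

The sign/verify asymmetry in the results below (~2,092 sign/s vs ~135,580 verify/s) reflects RSA itself: verification uses a small public exponent while signing requires a full private-key exponentiation.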

OpenSSL 3.0 - Algorithm: RSA4096 (verify/s, more is better):
  Core i5 12400: 135580.0 (SE +/- 67.71, N = 3; Min 135447.8 / Avg 135580.0 / Max 135671.5)
  i5 12400: 135550.9 (SE +/- 39.83, N = 3; Min 135483.2 / Avg 135550.93 / Max 135621.1)

OpenSSL 3.0 - Algorithm: RSA4096 (sign/s, more is better):
  i5 12400: 2092.3 (SE +/- 0.10, N = 3; Min 2092.1 / Avg 2092.3 / Max 2092.4)
  Core i5 12400: 2091.9 (SE +/- 0.72, N = 3; Min 2090.7 / Avg 2091.9 / Max 2093.2)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
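The MB/s figures are input bytes processed per second of compression or decompression wall time. A sketch of that measurement using stdlib zlib as a stand-in for Zstd (CPython ships no zstd binding in its standard library), on synthetic data rather than the FreeBSD disk image:

```python
import time
import zlib

# ~4 MB of compressible stand-in data in place of the disk-image sample.
data = b"FreeBSD-12.2-RELEASE " * 200_000

start = time.perf_counter()
compressed = zlib.compress(data, level=9)  # level 9 stands in for zstd -19
compress_mbps = len(data) / (time.perf_counter() - start) / 1e6

start = time.perf_counter()
restored = zlib.decompress(compressed)
decompress_mbps = len(data) / (time.perf_counter() - start) / 1e6

print(f"Compression: {compress_mbps:.1f} MB/s, Decompression: {decompress_mbps:.1f} MB/s")
```

As in the results below, decompression throughput is typically orders of magnitude higher than compression throughput at high compression levels.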

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better):
  Core i5 12400: 4163.0 (SE +/- 0.55, N = 3; Min 4162.3 / Avg 4163.03 / Max 4164.1)
  i5 12400: 4157.5 (SE +/- 5.33, N = 3; Min 4147.0 / Avg 4157.53 / Max 4164.2)

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better):
  i5 12400: 26.4 (SE +/- 0.06, N = 3; Min 26.3 / Avg 26.40 / Max 26.5)
  Core i5 12400: 26.4 (SE +/- 0.09, N = 3; Min 26.3 / Avg 26.43 / Max 26.6)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 6 Realtime, Input: Bosphorus 4K (Frames Per Second, more is better):
  i5 12400: 11.09 (SE +/- 0.05, N = 3; Min 11.00 / Avg 11.09 / Max 11.17)
  Core i5 12400: 11.08 (SE +/- 0.07, N = 3; Min 11.00 / Avg 11.08 / Max 11.22)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU, Backend: Numba, Project Size: 4194304, Benchmark: Isoneutral Mixing (Seconds, fewer is better):
  Core i5 12400: 0.933 (SE +/- 0.000, N = 3; Min 0.93 / Avg 0.93 / Max 0.93)
  i5 12400: 0.933 (SE +/- 0.000, N = 3; Min 0.93 / Avg 0.93 / Max 0.93)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC, Model: Crown (Frames Per Second, more is better):
  Core i5 12400: 11.35 (SE +/- 0.02, N = 3; Min 11.31 / Avg 11.35 / Max 11.38)
  i5 12400: 11.32 (SE +/- 0.03, N = 3; Min 11.27 / Avg 11.32 / Max 11.38)

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 1.0.2 - Time To Compile (Seconds, fewer is better):
  i5 12400: 55.12 (SE +/- 0.31, N = 3; Min 54.52 / Avg 55.12 / Max 55.57)
  Core i5 12400: 55.23 (SE +/- 0.13, N = 3; Min 54.99 / Avg 55.23 / Max 55.42)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0 - Throughput Test: Kostya (GB/s, more is better):
  i5 12400: 4.05 (SE +/- 0.00, N = 3; Min 4.05 / Avg 4.05 / Max 4.05)
  Core i5 12400: 4.04 (SE +/- 0.00, N = 3; Min 4.04 / Avg 4.04 / Max 4.04)

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.
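The "Render Ratio" metric is offline-render speed relative to real time: seconds of audio produced divided by seconds of wall clock spent producing it, so a ratio of 3.0 means ten seconds of audio renders in roughly 3.3 seconds. A sketch of the ratio computation, with a trivial per-buffer loop standing in for the actual DSP graph (sample rate and buffer size here are illustrative):

```python
import time

SAMPLE_RATE = 48_000            # Hz; the test profile also runs other rates
BUFFER_SIZE = 512               # frames processed per block
TOTAL_FRAMES = SAMPLE_RATE * 2  # render two seconds of audio

start = time.perf_counter()
rendered = 0
buf = [0.0] * BUFFER_SIZE
while rendered < TOTAL_FRAMES:
    buf = [s * 0.5 for s in buf]  # stand-in for the real synth/effect processing
    rendered += BUFFER_SIZE

wall = time.perf_counter() - start
render_ratio = (TOTAL_FRAMES / SAMPLE_RATE) / wall
print(f"Render Ratio: {render_ratio:.2f}")
```

Larger buffer sizes amortize per-block overhead, which is consistent with the slightly higher ratios at Buffer Size 1024 below.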

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 480000, Buffer Size: 512 (Render Ratio, more is better):
  i5 12400: 2.960248 (SE +/- 0.003692, N = 3; Min 2.95 / Avg 2.96 / Max 2.97)
  Core i5 12400: 2.955207 (SE +/- 0.000934, N = 3; Min 2.95 / Avg 2.96 / Max 2.96)

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 480000, Buffer Size: 1024 (Render Ratio, more is better):
  Core i5 12400: 3.024036 (SE +/- 0.000714, N = 3; Min 3.02 / Avg 3.02 / Max 3.03)
  i5 12400: 3.019844 (SE +/- 0.000417, N = 3; Min 3.02 / Avg 3.02 / Max 3.02)

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, fewer is better; RawTherapee 5.8, command line):
  i5 12400: 51.33 (SE +/- 0.00, N = 3; Min 51.33 / Avg 51.33 / Max 51.34)
  Core i5 12400: 51.40 (SE +/- 0.02, N = 3; Min 51.36 / Avg 51.40 / Max 51.42)

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.5.4 - Test: DNN - Deep Neural Network (ms, fewer is better):
  Core i5 12400: 10955 (SE +/- 316.94, N = 15; Min 9530 / Avg 10955.27 / Max 13666)
  i5 12400: 11945 (SE +/- 853.64, N = 12; Min 9685 / Avg 11944.75 / Max 20132)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s, more is better):
  Core i5 12400: 4050.3 (SE +/- 1.25, N = 3; Min 4047.8 / Avg 4050.27 / Max 4051.9)
  i5 12400: 4036.9 (SE +/- 7.63, N = 3; Min 4025.1 / Avg 4036.93 / Max 4051.2)

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, more is better):
  i5 12400: 34.6 (SE +/- 0.09, N = 3; Min 34.5 / Avg 34.63 / Max 34.8)
  Core i5 12400: 34.4 (SE +/- 0.09, N = 3; Min 34.3 / Avg 34.43 / Max 34.6)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better):
  Core i5 12400: 12560.2 (SE +/- 4.68, N = 3; Min 12551.2 / Avg 12560.23 / Max 12566.9)
  i5 12400: 12507.5 (SE +/- 74.66, N = 3; Min 12358.7 / Avg 12507.53 / Max 12592.3)

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, more is better):
  Core i5 12400: 64.62 (SE +/- 0.02, N = 3; Min 64.58 / Avg 64.62 / Max 64.66)
  i5 12400: 64.56 (SE +/- 0.08, N = 3; Min 64.43 / Avg 64.56 / Max 64.71)

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, more is better):
  Core i5 12400: 12543.3 (SE +/- 10.66, N = 3; Min 12522.2 / Avg 12543.33 / Max 12556.3)
  i5 12400: 12488.7 (SE +/- 40.17, N = 3; Min 12432.4 / Avg 12488.73 / Max 12566.5)

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, more is better):
  Core i5 12400: 66.26 (SE +/- 0.02, N = 3; Min 66.24 / Avg 66.26 / Max 66.29)
  i5 12400: 66.12 (SE +/- 0.10, N = 3; Min 65.93 / Avg 66.12 / Max 66.27)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: regnety_400mCore i5 12400i5 12400246810SE +/- 0.03, N = 3SE +/- 0.01, N = 36.016.02MIN: 5.92 / MAX: 9.07MIN: 5.94 / MAX: 9.181. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: regnety_400mCore i5 12400i5 12400246810Min: 5.96 / Avg: 6.01 / Max: 6.05Min: 6 / Avg: 6.02 / Max: 6.041. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: squeezenet_ssdCore i5 12400i5 1240048121620SE +/- 0.02, N = 3SE +/- 0.00, N = 315.0215.02MIN: 14.86 / MAX: 15.35MIN: 14.88 / MAX: 16.21. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: squeezenet_ssdCore i5 12400i5 1240048121620Min: 14.99 / Avg: 15.02 / Max: 15.06Min: 15.01 / Avg: 15.02 / Max: 15.021. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: yolov4-tinyCore i5 12400i5 1240048121620SE +/- 0.00, N = 3SE +/- 0.02, N = 316.8316.90MIN: 16.68 / MAX: 17.3MIN: 16.73 / MAX: 25.421. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: yolov4-tinyCore i5 12400i5 1240048121620Min: 16.83 / Avg: 16.83 / Max: 16.84Min: 16.87 / Avg: 16.9 / Max: 16.921. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: resnet50Core i5 12400i5 1240048121620SE +/- 0.01, N = 3SE +/- 0.06, N = 316.9917.13MIN: 16.84 / MAX: 18.86MIN: 16.9 / MAX: 17.431. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: resnet50Core i5 12400i5 1240048121620Min: 16.97 / Avg: 16.99 / Max: 17Min: 17.01 / Avg: 17.13 / Max: 17.21. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: alexnetCore i5 12400i5 12400246810SE +/- 0.00, N = 3SE +/- 0.01, N = 38.388.40MIN: 8.3 / MAX: 8.63MIN: 8.32 / MAX: 8.591. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: alexnetCore i5 12400i5 124003691215Min: 8.38 / Avg: 8.38 / Max: 8.39Min: 8.39 / Avg: 8.4 / Max: 8.411. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: resnet18Core i5 12400i5 124003691215SE +/- 0.01, N = 3SE +/- 0.04, N = 39.839.97MIN: 9.74 / MAX: 10.74MIN: 9.77 / MAX: 15.581. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: resnet18Core i5 12400i5 124003691215Min: 9.82 / Avg: 9.83 / Max: 9.84Min: 9.89 / Avg: 9.97 / Max: 10.021. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: vgg16 (ms, fewer is better)
  Core i5 12400: 36.31 (SE +/- 0.01, N = 3; run min/avg/max: 36.3 / 36.31 / 36.32; MIN: 36.16 / MAX: 44.29)
  i5 12400: 36.34 (SE +/- 0.01, N = 3; run min/avg/max: 36.33 / 36.34 / 36.35; MIN: 36.17 / MAX: 42.32)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: googlenet (ms, fewer is better)
  Core i5 12400: 9.17 (SE +/- 0.01, N = 3; run min/avg/max: 9.16 / 9.17 / 9.18; MIN: 9.08 / MAX: 9.42)
  i5 12400: 9.27 (SE +/- 0.06, N = 3; run min/avg/max: 9.16 / 9.27 / 9.34; MIN: 9.1 / MAX: 9.64)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: blazeface (ms, fewer is better)
  Core i5 12400: 1.08 (SE +/- 0.00, N = 3; run min/avg/max: 1.08 / 1.08 / 1.08; MIN: 1.06 / MAX: 1.26)
  i5 12400: 1.08 (SE +/- 0.00, N = 3; run min/avg/max: 1.08 / 1.08 / 1.08; MIN: 1.06 / MAX: 1.3)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  i5 12400: 3.75 (SE +/- 0.01, N = 3; run min/avg/max: 3.73 / 3.75 / 3.76; MIN: 3.68 / MAX: 6.87)
  Core i5 12400: 3.76 (SE +/- 0.02, N = 3; run min/avg/max: 3.73 / 3.76 / 3.8; MIN: 3.68 / MAX: 6.95)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: mnasnet (ms, fewer is better)
  Core i5 12400: 2.43 (SE +/- 0.02, N = 3; run min/avg/max: 2.41 / 2.43 / 2.46; MIN: 2.37 / MAX: 5.59)
  i5 12400: 2.43 (SE +/- 0.01, N = 3; run min/avg/max: 2.42 / 2.43 / 2.44; MIN: 2.38 / MAX: 5.59)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  Core i5 12400: 2.86 (SE +/- 0.00, N = 3; run min/avg/max: 2.86 / 2.86 / 2.86; MIN: 2.79 / MAX: 5.81)
  i5 12400: 2.88 (SE +/- 0.00, N = 3; run min/avg/max: 2.88 / 2.88 / 2.89; MIN: 2.8 / MAX: 6)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  Core i5 12400: 2.43 (SE +/- 0.01, N = 3; run min/avg/max: 2.42 / 2.43 / 2.44; MIN: 2.38 / MAX: 5.55)
  i5 12400: 2.44 (SE +/- 0.01, N = 3; run min/avg/max: 2.43 / 2.44 / 2.45; MIN: 2.39 / MAX: 5.61)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  Core i5 12400: 2.67 (SE +/- 0.01, N = 3; run min/avg/max: 2.66 / 2.67 / 2.68; MIN: 2.61 / MAX: 5.72)
  i5 12400: 2.68 (SE +/- 0.00, N = 3; run min/avg/max: 2.67 / 2.68 / 2.68; MIN: 2.62 / MAX: 5.84)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: mobilenet (ms, fewer is better)
  i5 12400: 10.07 (SE +/- 0.01, N = 3; run min/avg/max: 10.06 / 10.07 / 10.08; MIN: 9.94 / MAX: 10.3)
  Core i5 12400: 10.09 (SE +/- 0.02, N = 3; run min/avg/max: 10.05 / 10.09 / 10.11; MIN: 9.95 / MAX: 10.93)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
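Each result above reports the mean of N benchmark runs alongside a standard error (SE), the sample standard deviation divided by the square root of N. A minimal sketch of how such a summary could be computed from three run times (the exact Phoronix Test Suite implementation may differ):

```python
import math
import statistics

def summarize_runs(runs):
    """Summarize benchmark runs: min/avg/max plus standard error of the mean."""
    avg = statistics.mean(runs)
    # SE of the mean = sample standard deviation / sqrt(N)
    se = statistics.stdev(runs) / math.sqrt(len(runs))
    return {"min": min(runs), "avg": avg, "max": max(runs), "se": se}

# Three hypothetical NCNN run times in milliseconds
print(summarize_runs([16.83, 16.83, 16.84]))
```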

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0 - Throughput Test: LargeRandom (GB/s, more is better)
  i5 12400: 1.45 (SE +/- 0.00, N = 3; run min/avg/max: 1.45 / 1.45 / 1.45)
  Core i5 12400: 1.45 (SE +/- 0.00, N = 3; run min/avg/max: 1.45 / 1.45 / 1.45)
  1. (CXX) g++ options: -O3

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, fewer is better)
  i5 12400: 46.91 (SE +/- 0.06, N = 3; run min/avg/max: 46.8 / 46.91 / 46.98)
  Core i5 12400: 46.95 (SE +/- 0.05, N = 3; run min/avg/max: 46.85 / 46.95 / 47.02)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (Seconds, fewer is better)
  Core i5 12400: 1.344 (SE +/- 0.000, N = 3; run min/avg/max: 1.34 / 1.34 / 1.34)
  i5 12400: 1.345 (SE +/- 0.001, N = 3; run min/avg/max: 1.34 / 1.35 / 1.35)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Speedometer - Browser: Google Chrome (Runs Per Minute, more is better)
  i5 12400: 236 (SE +/- 0.58, N = 3; run min/avg/max: 235 / 236 / 237)
  Core i5 12400: 234 (SE +/- 1.00, N = 3; run min/avg/max: 232 / 234 / 235)
  1. chrome 95.0.4638.69

Selenium - Benchmark: Octane - Browser: Google Chrome (Geometric Mean, more is better)
  Core i5 12400: 82448 (SE +/- 253.63, N = 3; run min/avg/max: 82138 / 82448.33 / 82951)
  i5 12400: 81981 (SE +/- 307.20, N = 3; run min/avg/max: 81454 / 81980.67 / 82518)
  1. chrome 95.0.4638.69
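Octane reports a geometric mean across its sub-test scores rather than an arithmetic mean, which keeps any single fast sub-test from dominating the composite. A small sketch of the computation (the sub-test scores below are made-up values, not Octane's):

```python
import math

def geometric_mean(scores):
    """Geometric mean: the n-th root of the product, via logs for numerical stability."""
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# Two hypothetical sub-test scores; their geometric mean is sqrt(2 * 8)
print(round(geometric_mean([2.0, 8.0]), 6))  # 4.0
```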

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, fewer is better)
  i5 12400: 39.52 (SE +/- 0.01, N = 3; run min/avg/max: 39.49 / 39.52 / 39.53)
  Core i5 12400: 39.58 (SE +/- 0.20, N = 3; run min/avg/max: 39.24 / 39.58 / 39.92)

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 13 - Total Time (Nodes Per Second, more is better)
  Core i5 12400: 20798772 (SE +/- 61643.21, N = 3; run min/avg/max: 20675514 / 20798771.67 / 20862706)
  i5 12400: 20583647 (SE +/- 61327.70, N = 3; run min/avg/max: 20486797 / 20583646.67 / 20697251)
  1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.4 - Time To Compile (Seconds, fewer is better)
  Core i5 12400: 37.33 (SE +/- 0.00, N = 3; run min/avg/max: 37.33 / 37.33 / 37.34)
  i5 12400: 37.35 (SE +/- 0.01, N = 3; run min/avg/max: 37.34 / 37.35 / 37.37)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8.7 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better)
  i5 12400: 16.57 (SE +/- 0.08, N = 3; run min/avg/max: 16.44 / 16.57 / 16.7)
  Core i5 12400: 16.50 (SE +/- 0.10, N = 3; run min/avg/max: 16.31 / 16.5 / 16.65)
  1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
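XZ files use LZMA2 compression, and the same preset-9 compression can be reproduced with Python's standard lzma module. A minimal, illustrative sketch (not the benchmark's own harness, and compressing a small synthetic payload rather than an Ubuntu image):

```python
import lzma
import time

def time_xz_compress(data, preset=9):
    """Compress data in the .xz container format at the given preset; return (bytes, seconds)."""
    start = time.perf_counter()
    compressed = lzma.compress(data, preset=preset)
    return compressed, time.perf_counter() - start

payload = b"phoronix test suite " * 10000
compressed, seconds = time_xz_compress(payload)
assert lzma.decompress(compressed) == payload  # lossless round-trip
print(len(payload), "->", len(compressed), "bytes in", round(seconds, 3), "s")
```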

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, fewer is better)
  Core i5 12400: 33.60 (SE +/- 0.01, N = 3; run min/avg/max: 33.58 / 33.59 / 33.62)
  i5 12400: 33.72 (SE +/- 0.02, N = 3; run min/avg/max: 33.69 / 33.72 / 33.76)
  1. (CC) gcc options: -fvisibility=hidden -O2

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, more is better)
  Core i5 12400: 186.6 (SE +/- 0.13, N = 3; run min/avg/max: 186.3 / 186.57 / 186.7)
  i5 12400: 183.8 (SE +/- 1.65, N = 3; run min/avg/max: 180.9 / 183.83 / 186.6)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, more is better)
  Core i5 12400: 528.5 (SE +/- 0.23, N = 3; run min/avg/max: 528.3 / 528.53 / 529)
  i5 12400: 521.5 (SE +/- 4.10, N = 3; run min/avg/max: 513.8 / 521.5 / 527.8)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Hair (Seconds, fewer is better)
  Core i5 12400: 31.34 (SE +/- 0.03, N = 3; run min/avg/max: 31.32 / 31.34 / 31.4)
  i5 12400: 31.36 (SE +/- 0.01, N = 3; run min/avg/max: 31.34 / 31.36 / 31.37)
  1. (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lGL -lGLU -ldl

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Equation of State (Seconds, fewer is better)
  i5 12400: 0.194 (SE +/- 0.000, N = 3; run min/avg/max: 0.19 / 0.19 / 0.2)
  Core i5 12400: 0.195 (SE +/- 0.000, N = 3; run min/avg/max: 0.2 / 0.2 / 0.2)

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: San Miguel - Renderer: SciVis (FPS, more is better)
  i5 12400: 15.63 (SE +/- 0.00, N = 3; run min/avg/max: 15.63 / 15.63 / 15.63; MIN: 15.15 / MAX: 15.87)
  Core i5 12400: 15.63 (SE +/- 0.00, N = 3; run min/avg/max: 15.63 / 15.63 / 15.63; MAX: 15.87)

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
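The sieve of Eratosthenes that Primesieve optimizes can be expressed in a few lines. A simple, unoptimized Python version for illustration (Primesieve itself uses a segmented, cache-tuned C++ implementation):

```python
def sieve(limit):
    """Return all primes below limit via the sieve of Eratosthenes."""
    is_prime = [True] * limit
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Mark every multiple of p, starting at p*p, as composite
            is_prime[p * p::p] = [False] * len(is_prime[p * p::p])
    return [n for n, flag in enumerate(is_prime) if flag]

print(len(sieve(100)))  # 25 primes below 100
```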

Primesieve 7.7 - 1e12 Prime Number Generation (Seconds, fewer is better)
  Core i5 12400: 29.78 (SE +/- 0.03, N = 3; run min/avg/max: 29.73 / 29.78 / 29.82)
  i5 12400: 29.79 (SE +/- 0.03, N = 3; run min/avg/max: 29.74 / 29.79 / 29.84)
  1. (CXX) g++ options: -O3

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: PSPDFKit WASM - Browser: Google Chrome (Score, fewer is better)
  Core i5 12400: 2657 (SE +/- 13.68, N = 3; run min/avg/max: 2630 / 2657.33 / 2672)
  i5 12400: 2664 (SE +/- 9.21, N = 3; run min/avg/max: 2653 / 2663.67 / 2682)
  1. chrome 95.0.4638.69

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 21.06 - Test: Decompression Rating (MIPS, more is better)
  i5 12400: 41297 (SE +/- 24.91, N = 3; run min/avg/max: 41248 / 41297.33 / 41328)
  Core i5 12400: 41158 (SE +/- 77.93, N = 3; run min/avg/max: 41030 / 41158 / 41299)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 21.06 - Test: Compression Rating (MIPS, more is better)
  i5 12400: 64281 (SE +/- 42.06, N = 3; run min/avg/max: 64198 / 64281.33 / 64333)
  Core i5 12400: 63880 (SE +/- 253.47, N = 3; run min/avg/max: 63467 / 63879.67 / 64341)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better)
  i5 12400: 45.01 (SE +/- 0.01, N = 3; run min/avg/max: 44.99 / 45.01 / 45.04)
  Core i5 12400: 44.96 (SE +/- 0.03, N = 3; run min/avg/max: 44.9 / 44.96 / 45.01)
  1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, more is better)
  Core i5 12400: 169.2 (SE +/- 0.12, N = 3; run min/avg/max: 169 / 169.2 / 169.4)
  i5 12400: 169.0 (SE +/- 0.36, N = 3; run min/avg/max: 168.5 / 169 / 169.7)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, more is better)
  i5 12400: 483.7 (SE +/- 1.48, N = 3; run min/avg/max: 480.8 / 483.73 / 485.6)
  Core i5 12400: 483.2 (SE +/- 0.61, N = 3; run min/avg/max: 482 / 483.2 / 484)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Water Caustic (Seconds, fewer is better)
  Core i5 12400: 26.06 (SE +/- 0.05, N = 3; run min/avg/max: 25.99 / 26.06 / 26.15)
  i5 12400: 26.14 (SE +/- 0.11, N = 3; run min/avg/max: 25.99 / 26.14 / 26.36)
  1. (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lGL -lGLU -ldl

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance for the Chia Verifiable Delay Function (Proof of Time) using the Chia VDF benchmark. Learn more via the OpenBenchmarking.org test page.
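The "Square" tests below measure the VDF's core operation: computing x^(2^T) by T sequential squarings, which is inherently serial and therefore a proof of elapsed time. Chia's actual VDF squares elements of a class group of binary quadratic forms; as a simplified sketch, here is the same repeated-squaring structure over ordinary modular arithmetic:

```python
def repeated_squaring(x, iterations, modulus):
    """Sequentially square x modulo modulus; each step depends on the previous result."""
    for _ in range(iterations):
        x = (x * x) % modulus
    return x

# T sequential squarings compute x^(2^T) mod N, which fast pow() can verify
x0, T, N = 7, 1000, (1 << 61) - 1
assert repeated_squaring(x0, T, N) == pow(x0, pow(2, T), N)
print(repeated_squaring(x0, T, N))
```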

Chia Blockchain VDF 1.0.1 - Test: Square Plain C++ (IPS, more is better)
  Core i5 12400: 190567 (SE +/- 202.76, N = 3; run min/avg/max: 190200 / 190566.67 / 190900)
  i5 12400: 189767 (SE +/- 1039.76, N = 3; run min/avg/max: 187700 / 189766.67 / 191000)
  1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Equation of State (Seconds, fewer is better)
  Core i5 12400: 0.159 (SE +/- 0.000, N = 3; run min/avg/max: 0.16 / 0.16 / 0.16)
  i5 12400: 0.159 (SE +/- 0.000, N = 3; run min/avg/max: 0.16 / 0.16 / 0.16)

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better)
  Core i5 12400: 312650.61 (SE +/- 458.62, N = 3; run min/avg/max: 311863.82 / 312650.61 / 313452.33)
  i5 12400: 311313.12 (SE +/- 579.93, N = 3; run min/avg/max: 310719.83 / 311313.12 / 312472.88)
  1. (CC) gcc options: -O2 -lrt

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.
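The Megapixels/sec figure is simply the decoded image area divided by the decode time. A sketch of the arithmetic, using a hypothetical 6000x4000 image decoded in 0.1 s (tjbench itself averages over many repeated decodes):

```python
def decode_throughput(width, height, seconds):
    """Throughput in megapixels per second for one decoded image."""
    return (width * height) / 1e6 / seconds

print(round(decode_throughput(6000, 4000, 0.1), 1))  # 240.0 Mpix/sec
```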

libjpeg-turbo tjbench 2.1.0 - Test: Decompression Throughput (Megapixels/sec, more is better)
  Core i5 12400: 233.84 (SE +/- 0.03, N = 3; run min/avg/max: 233.78 / 233.84 / 233.89)
  i5 12400: 233.74 (SE +/- 0.16, N = 3; run min/avg/max: 233.48 / 233.74 / 234.03)
  1. (CC) gcc options: -O3 -rdynamic

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC2 (Mpx/s, more is better)
  i5 12400: 207.86 (SE +/- 0.02, N = 3; run min/avg/max: 207.83 / 207.86 / 207.9)
  Core i5 12400: 207.52 (SE +/- 0.38, N = 3; run min/avg/max: 206.77 / 207.52 / 207.92)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance for the Chia Verifiable Delay Function (Proof of Time) using the Chia VDF benchmark. Learn more via the OpenBenchmarking.org test page.

Chia Blockchain VDF 1.0.1 - Test: Square Assembly Optimized (IPS, more is better)
  i5 12400: 221200 (SE +/- 2136.20, N = 3; run min/avg/max: 217200 / 221200 / 224500)
  Core i5 12400: 216400 (SE +/- 529.15, N = 3; run min/avg/max: 215600 / 216400 / 217400)
  1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of State (Seconds, fewer is better)
  i5 12400: 0.104 (SE +/- 0.001, N = 3; run min/avg/max: 0.1 / 0.1 / 0.11)
  Core i5 12400: 0.106 (SE +/- 0.001, N = 3; run min/avg/max: 0.1 / 0.11 / 0.11)

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
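Extracting a .tar.xz is essentially a single-threaded decompress-and-write workload, and can be timed with Python's standard tarfile module. A self-contained sketch that builds a tiny in-memory archive instead of the real Firefox source tarball (file names here are placeholders):

```python
import io
import tarfile
import tempfile
import time

def timed_extract(fileobj, dest):
    """Extract a .tar.xz archive from fileobj into dest; return (member names, seconds)."""
    start = time.perf_counter()
    with tarfile.open(fileobj=fileobj, mode="r:xz") as tar:
        names = tar.getnames()
        tar.extractall(dest)
    return names, time.perf_counter() - start

# Build a tiny in-memory .tar.xz so the sketch needs no external file
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:xz") as tar:
    payload = b"hello"
    info = tarfile.TarInfo(name="hello.txt")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

with tempfile.TemporaryDirectory() as dest:
    names, seconds = timed_extract(buf, dest)
    print(names)  # ['hello.txt']
```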

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds, fewer is better)
  i5 12400: 15.48 (SE +/- 0.01, N = 4; run min/avg/max: 15.45 / 15.48 / 15.51)
  Core i5 12400: 15.58 (SE +/- 0.08, N = 4; run min/avg/max: 15.37 / 15.58 / 15.71)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Kraken - Browser: Google Chrome (ms, fewer is better)
  i5 12400: 529.1 (SE +/- 1.77, N = 3; run min/avg/max: 527.3 / 529.07 / 532.6)
  Core i5 12400: 532.2 (SE +/- 1.52, N = 3; run min/avg/max: 529.2 / 532.17 / 534.2)
  1. chrome 95.0.4638.69

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, more is better)
  i5 12400: 10748236 (SE +/- 55831.41, N = 3; run min/avg/max: 10677258 / 10748236.33 / 10858378)
  Core i5 12400: 10737733 (SE +/- 31318.72, N = 3; run min/avg/max: 10682865 / 10737733.33 / 10791334)
  1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

Cython Benchmark

Cython provides a superset of Python that is geared to deliver C-like levels of performance. This test profile makes use of Cython's bundled benchmark tests and runs an N-Queens sample test as a simple benchmark to the system's Cython performance. Learn more via the OpenBenchmarking.org test page.
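The N-Queens benchmark counts the ways to place N mutually non-attacking queens on an NxN board. A plain-Python backtracking version of the same problem for illustration (not Cython's bundled implementation, which gains its speed from compilation to C):

```python
def n_queens(n):
    """Count N-Queens solutions by backtracking row by row."""
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1  # all rows filled: one complete solution
        total = 0
        for col in range(n):
            # A square is attacked if its column or either diagonal is occupied
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue
            total += place(row + 1, cols | {col},
                           diag1 | {row - col}, diag2 | {row + col})
        return total
    return place(0, frozenset(), frozenset(), frozenset())

print(n_queens(8))  # 92 solutions on the classic 8x8 board
```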

Cython Benchmark 0.29.21 - Test: N-Queens (Seconds, fewer is better)
  i5 12400: 16.62 (SE +/- 0.01, N = 3; run min/avg/max: 16.6 / 16.62 / 16.65)
  Core i5 12400: 16.85 (SE +/- 0.24, N = 3; run min/avg/max: 16.54 / 16.85 / 17.32)

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better)
  Core i5 12400: 1209237 (SE +/- 1204.34, N = 3; run min/avg/max: 1206836 / 1209236.67 / 1210607)
  i5 12400: 1207099 (SE +/- 592.04, N = 3; run min/avg/max: 1205935 / 1207099.33 / 1207868)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, more is better)
  i5 12400: 194.4 (SE +/- 0.32, N = 3; run min/avg/max: 193.8 / 194.43 / 194.8)
  Core i5 12400: 194.1 (SE +/- 0.12, N = 3; run min/avg/max: 193.9 / 194.1 / 194.3)
  1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

srsRAN 21.10, Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, More Is Better):
  Core i5 12400: 530.9 (SE +/- 0.50, N = 3; Min: 530 / Avg: 530.93 / Max: 531.7)
  i5 12400: 529.2 (SE +/- 0.66, N = 3; Min: 527.9 / Avg: 529.17 / Max: 530.1)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better):
  i5 12400: 14.69 (SE +/- 0.02, N = 3; Min: 14.66 / Avg: 14.69 / Max: 14.73)
  Core i5 12400: 14.75 (SE +/- 0.05, N = 3; Min: 14.68 / Avg: 14.75 / Max: 14.84)
  Compiler options: (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 -ltiff
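The "Quality 100, Lossless" setting maps onto cwebp's command-line flags. A minimal sketch of how a harness might assemble that invocation; `-q`, `-lossless`, and `-o` are real cwebp options, while the file names are placeholders:

```python
def cwebp_command(src, dst, quality=100, lossless=False):
    """Build an argv list for cwebp: -q sets the quality factor,
    -lossless enables lossless mode, -o names the output file."""
    cmd = ["cwebp", "-q", str(quality)]
    if lossless:
        cmd.append("-lossless")
    return cmd + [src, "-o", dst]
```

A list like this can be handed directly to `subprocess.run()` to reproduce the encode being timed.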

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better):
  i5 12400: 211.06 (SE +/- 0.50, N = 3; Min: 210.49 / Avg: 211.05 / Max: 212.06; overall MIN: 203.47 / MAX: 220.35)
  Core i5 12400: 211.68 (SE +/- 0.46, N = 3; Min: 211.03 / Avg: 211.68 / Max: 212.56; overall MIN: 203.11 / MAX: 220.53)
  Compiler options: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  i5 12400: 43.20 (SE +/- 0.01, N = 3; Min: 43.17 / Avg: 43.2 / Max: 43.22)
  Core i5 12400: 43.05 (SE +/- 0.04, N = 3; Min: 43 / Avg: 43.05 / Max: 43.12)
  Compiler options: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
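The "Speed N Realtime" modes correspond to libaom's realtime usage profile and speed level. A hedged sketch of the aomenc invocation a harness might build, assuming aomenc's `--rt` usage preset and `--cpu-used` speed flag (file names are placeholders):

```python
def aomenc_command(src, dst, cpu_used=8):
    """Build an argv list for aomenc: --rt selects the realtime usage
    profile and --cpu-used the speed level, mirroring "Speed N Realtime"."""
    return ["aomenc", "--rt", f"--cpu-used={cpu_used}", "-o", dst, src]
```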

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10, Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, More Is Better):
  i5 12400: 114.6 (SE +/- 0.46, N = 3; Min: 113.8 / Avg: 114.63 / Max: 115.4)
  Core i5 12400: 114.1 (SE +/- 0.41, N = 3; Min: 113.5 / Avg: 114.13 / Max: 114.9)

srsRAN 21.10, Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, More Is Better):
  i5 12400: 176.0 (SE +/- 0.30, N = 3; Min: 175.4 / Avg: 176 / Max: 176.3)
  Core i5 12400: 175.7 (SE +/- 0.24, N = 3; Min: 175.2 / Avg: 175.67 / Max: 176)

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench for 20 rounds each time. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16, Total For Average Test Times (Milliseconds, Fewer Is Better):
  i5 12400: 588 (SE +/- 0.88, N = 3; Min: 586 / Avg: 587.67 / Max: 589)
  Core i5 12400: 590 (SE +/- 0.67, N = 3; Min: 589 / Avg: 589.67 / Max: 591)
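Each result in this file is reported as an average with an "SE +/-" figure: the standard error of the mean across N runs. A short sketch of how that statistic falls out of the run times; the middle i5 12400 run of 588 ms is inferred from the reported average, not stated directly:

```python
import math

def mean_and_se(runs):
    """Mean and standard error of the mean (sample standard deviation
    divided by sqrt(n)), the statistic shown as "SE +/-" beside each average."""
    n = len(runs)
    mean = sum(runs) / n
    var = sum((x - mean) ** 2 for x in runs) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

# i5 12400 PyBench runs: Min 586, inferred middle run 588, Max 589
mean, se = mean_and_se([586.0, 588.0, 589.0])
```

With those three runs the function reproduces the table's Avg: 587.67 and SE +/- 0.88.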

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: NASA Streamlines - Renderer: SciVis (FPS, More Is Better):
  i5 12400: 18.18 (SE +/- 0.00, N = 3; Min: 18.18 / Avg: 18.18 / Max: 18.18; overall MIN: 17.86 / MAX: 18.52)
  Core i5 12400: 18.18 (SE +/- 0.00, N = 3; Min: 18.18 / Avg: 18.18 / Max: 18.18; overall MIN: 17.86 / MAX: 18.52)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10, Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, More Is Better):
  i5 12400: 177.9 (SE +/- 0.20, N = 3; Min: 177.7 / Avg: 177.9 / Max: 178.3)
  Core i5 12400: 177.3 (SE +/- 0.43, N = 3; Min: 176.5 / Avg: 177.33 / Max: 177.9)

srsRAN 21.10, Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, More Is Better):
  i5 12400: 481.6 (SE +/- 1.75, N = 3; Min: 478.2 / Avg: 481.63 / Max: 483.9)
  Core i5 12400: 481.5 (SE +/- 0.64, N = 3; Min: 480.5 / Avg: 481.5 / Max: 482.7)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better):
  i5 12400: 179.56 (SE +/- 0.44, N = 3; Min: 178.98 / Avg: 179.56 / Max: 180.42; overall MIN: 174.58 / MAX: 189.52)
  Core i5 12400: 180.46 (SE +/- 0.32, N = 3; Min: 179.98 / Avg: 180.46 / Max: 181.05; overall MIN: 174.11 / MAX: 190.84)

libavif avifenc

This is a test of the AOMedia libavif library, measuring the encoding of a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0, Encoder Speed: 6 (Seconds, Fewer Is Better):
  i5 12400: 12.38 (SE +/- 0.04, N = 3; Min: 12.32 / Avg: 12.38 / Max: 12.46)
  Core i5 12400: 12.47 (SE +/- 0.03, N = 3; Min: 12.41 / Avg: 12.47 / Max: 12.51)
  Compiler options: (CXX) g++ options: -O3 -fPIC -lm
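"Encoder Speed: 6" corresponds to avifenc's speed flag. A minimal sketch of the invocation, assuming avifenc's `-s`/`--speed` option (effort 0 = slowest through 10 = fastest); file names are placeholders:

```python
def avifenc_command(src, dst, speed=6):
    """Build an argv list for avifenc; -s selects encoder effort,
    so "Encoder Speed: 6" maps to -s 6."""
    return ["avifenc", "-s", str(speed), src, dst]
```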

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8.7, Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  i5 12400: 51.49 (SE +/- 0.15, N = 3; Min: 51.2 / Avg: 51.49 / Max: 51.73)
  Core i5 12400: 51.41 (SE +/- 0.22, N = 3; Min: 51.03 / Avg: 51.41 / Max: 51.79)
  Compiler options: (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2, Preset: Thorough (Seconds, Fewer Is Better):
  Core i5 12400: 7.0849 (SE +/- 0.0034, N = 3; Min: 7.08 / Avg: 7.08 / Max: 7.09)
  i5 12400: 7.0877 (SE +/- 0.0018, N = 3; Min: 7.08 / Avg: 7.09 / Max: 7.09)
  Compiler options: (CXX) g++ options: -O3 -flto -pthread
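The "Thorough" preset is one of astcenc's named quality levels. A hedged sketch of how the compression command might be assembled, assuming astcenc's `-cl` (compress LDR) mode and `-fastest` through `-exhaustive` presets; the block footprint and file names here are illustrative assumptions, not what the test profile necessarily uses:

```python
def astcenc_command(src, dst, block_size="6x6", preset="-thorough"):
    """Build an argv list for astcenc: -cl compresses an LDR image at the
    given block footprint using the named quality preset (assumed CLI shape)."""
    return ["astcenc", "-cl", src, dst, block_size, preset]
```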

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  i5 12400: 59.47 (SE +/- 0.02, N = 3; Min: 59.44 / Avg: 59.47 / Max: 59.51)
  Core i5 12400: 59.38 (SE +/- 0.05, N = 3; Min: 59.29 / Avg: 59.38 / Max: 59.46)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.5, Speed: 10 (Frames Per Second, More Is Better):
  i5 12400: 9.574 (SE +/- 0.043, N = 3; Min: 9.49 / Avg: 9.57 / Max: 9.62)
  Core i5 12400: 9.558 (SE +/- 0.087, N = 3; Min: 9.39 / Avg: 9.56 / Max: 9.68)

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.24, Test: unsharp-mask (Seconds, Fewer Is Better):
  i5 12400: 10.38 (SE +/- 0.02, N = 3; Min: 10.36 / Avg: 10.38 / Max: 10.41)
  Core i5 12400: 10.38 (SE +/- 0.01, N = 3; Min: 10.37 / Avg: 10.38 / Max: 10.4)

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2, Scene: Volumetric Caustic (Seconds, Fewer Is Better):
  Core i5 12400: 9.94007 (SE +/- 0.01114, N = 3; Min: 9.92 / Avg: 9.94 / Max: 9.96)
  i5 12400: 9.94163 (SE +/- 0.01440, N = 3; Min: 9.92 / Avg: 9.94 / Max: 9.97)
  Compiler options: (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lGL -lGLU -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  i5 12400: 65.60 (SE +/- 0.08, N = 3; Min: 65.45 / Avg: 65.6 / Max: 65.7)
  Core i5 12400: 65.47 (SE +/- 0.06, N = 3; Min: 65.36 / Avg: 65.47 / Max: 65.53)

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 6.2.0 (Seconds, Fewer Is Better):
  Core i5 12400: 5.437 (SE +/- 0.018, N = 5; Min: 5.4 / Avg: 5.44 / Max: 5.48)
  i5 12400: 5.452 (SE +/- 0.025, N = 5; Min: 5.39 / Avg: 5.45 / Max: 5.54)

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.24, Test: auto-levels (Seconds, Fewer Is Better):
  i5 12400: 8.835 (SE +/- 0.028, N = 3; Min: 8.79 / Avg: 8.84 / Max: 8.89)
  Core i5 12400: 8.860 (SE +/- 0.014, N = 3; Min: 8.84 / Avg: 8.86 / Max: 8.89)

GIMP 2.10.24, Test: rotate (Seconds, Fewer Is Better):
  Core i5 12400: 8.452 (SE +/- 0.011, N = 3; Min: 8.43 / Avg: 8.45 / Max: 8.47)
  i5 12400: 8.481 (SE +/- 0.017, N = 3; Min: 8.46 / Avg: 8.48 / Max: 8.51)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.6.1, Input: JPEG - Encode Speed: 8 (MP/s, More Is Better):
  i5 12400: 36.17 (SE +/- 0.13, N = 3; Min: 36.04 / Avg: 36.17 / Max: 36.42)
  Core i5 12400: 35.73 (SE +/- 0.17, N = 3; Min: 35.39 / Avg: 35.73 / Max: 35.9)
  Compiler options: (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2, Scene: Non-Exponential (Seconds, Fewer Is Better):
  i5 12400: 7.78284 (SE +/- 0.10643, N = 3; Min: 7.57 / Avg: 7.78 / Max: 7.91)
  Core i5 12400: 7.88660 (SE +/- 0.02374, N = 3; Min: 7.84 / Avg: 7.89 / Max: 7.92)

Helsing

Helsing is an open-source POSIX vampire number generator. This test profile measures the time it takes to generate vampire numbers within varying digit ranges. Learn more via the OpenBenchmarking.org test page.

Helsing 1.0-beta, Digit Range: 12 digit (Seconds, Fewer Is Better):
  Core i5 12400: 6.946 (SE +/- 0.028, N = 3; Min: 6.91 / Avg: 6.95 / Max: 7)
  i5 12400: 6.952 (SE +/- 0.010, N = 3; Min: 6.94 / Avg: 6.95 / Max: 6.97)
  Compiler options: (CC) gcc options: -O2 -pthread
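A vampire number is a 2n-digit number equal to the product of two n-digit "fangs" whose combined digits are a permutation of the number's own digits (and which do not both end in zero), e.g. 1260 = 21 x 60. A minimal checker illustrating the property; Helsing's own digit-range generation loop is far more elaborate and heavily optimized:

```python
def is_vampire(v):
    """True if v is a vampire number."""
    digits = sorted(str(v))
    if len(digits) % 2:          # vampire numbers have an even digit count
        return False
    half = len(digits) // 2
    lo, hi = 10 ** (half - 1), 10 ** half
    a = lo
    while a * a <= v:            # only need fangs up to sqrt(v)
        if v % a == 0:
            b = v // a
            if lo <= b < hi and not (a % 10 == 0 and b % 10 == 0):
                if sorted(str(a) + str(b)) == digits:
                    return True
        a += 1
    return False
```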

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium, Benchmark: WASM imageConvolute - Browser: Google Chrome (ms, Fewer Is Better):
  Core i5 12400: 21.02 (SE +/- 0.04, N = 3; Min: 20.97 / Avg: 21.02 / Max: 21.09)
  i5 12400: 21.39 (SE +/- 0.31, N = 3; Min: 21 / Avg: 21.39 / Max: 22)
  Notes: chrome 95.0.4638.69

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better):
  Core i5 12400: 6.207 (SE +/- 0.018, N = 3; Min: 6.19 / Avg: 6.21 / Max: 6.24)
  i5 12400: 6.241 (SE +/- 0.052, N = 3; Min: 6.19 / Avg: 6.24 / Max: 6.35)

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel package. Learn more via the OpenBenchmarking.org test page.

Unpacking The Linux Kernel, linux-4.15.tar.xz (Seconds, Fewer Is Better):
  Core i5 12400: 4.574 (SE +/- 0.050, N = 4; Min: 4.43 / Avg: 4.57 / Max: 4.64)
  i5 12400: 4.576 (SE +/- 0.052, N = 4; Min: 4.42 / Avg: 4.58 / Max: 4.64)
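Extracting a .tar.xz archive is a single tar invocation; a sketch of the command the unpack timing covers (`-x` extract, `-J` xz filter, `-f` archive name are standard GNU tar flags):

```python
def untar_xz_command(archive):
    """Build an argv list for extracting a .tar.xz archive with GNU tar."""
    return ["tar", "-xJf", archive]
```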

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.24, Test: resize (Seconds, Fewer Is Better):
  Core i5 12400: 5.955 (SE +/- 0.064, N = 3; Min: 5.89 / Avg: 5.96 / Max: 6.08)
  i5 12400: 5.964 (SE +/- 0.074, N = 3; Min: 5.89 / Avg: 5.96 / Max: 6.11)

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program; on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.6.0, Test: Boat - Acceleration: CPU-only (Seconds, Fewer Is Better):
  i5 12400: 5.311 (SE +/- 0.003, N = 3; Min: 5.31 / Avg: 5.31 / Max: 5.32)
  Core i5 12400: 5.326 (SE +/- 0.004, N = 3; Min: 5.32 / Avg: 5.33 / Max: 5.33)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  i5 12400: 112.65 (SE +/- 0.72, N = 3; Min: 111.3 / Avg: 112.65 / Max: 113.77)
  Core i5 12400: 112.29 (SE +/- 0.74, N = 3; Min: 110.82 / Avg: 112.29 / Max: 113.1)
  Compiler options: (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program; on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.6.0, Test: Masskrug - Acceleration: CPU-only (Seconds, Fewer Is Better):
  i5 12400: 4.538 (SE +/- 0.010, N = 3; Min: 4.53 / Avg: 4.54 / Max: 4.56)
  Core i5 12400: 4.550 (SE +/- 0.003, N = 3; Min: 4.55 / Avg: 4.55 / Max: 4.55)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: Jython (msec, Fewer Is Better):
  i5 12400: 2314 (SE +/- 17.91, N = 4; Min: 2273 / Avg: 2314 / Max: 2346)
  Core i5 12400: 2328 (SE +/- 8.05, N = 4; Min: 2306 / Avg: 2327.5 / Max: 2345)

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program; on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.6.0, Test: Server Room - Acceleration: CPU-only (Seconds, Fewer Is Better):
  i5 12400: 3.401 (SE +/- 0.004, N = 3; Min: 3.4 / Avg: 3.4 / Max: 3.41)
  Core i5 12400: 3.404 (SE +/- 0.002, N = 3; Min: 3.4 / Avg: 3.4 / Max: 3.41)

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7, Configuration: DXT1 (Mpx/s, More Is Better):
  i5 12400: 1452.61 (SE +/- 1.93, N = 3; Min: 1448.75 / Avg: 1452.61 / Max: 1454.72)
  Core i5 12400: 1448.08 (SE +/- 0.65, N = 3; Min: 1447.27 / Avg: 1448.08 / Max: 1449.36)
  Compiler options: (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  i5 12400: 189.10 (SE +/- 1.28, N = 3; Min: 186.56 / Avg: 189.1 / Max: 190.68)
  Core i5 12400: 188.33 (SE +/- 2.12, N = 3; Min: 184.12 / Avg: 188.33 / Max: 190.77)
  Compiler options: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better):
  Core i5 12400: 48.20 (SE +/- 0.04, N = 3; Min: 48.15 / Avg: 48.2 / Max: 48.28; overall MIN: 47.16 / MAX: 50.82)
  i5 12400: 48.51 (SE +/- 0.14, N = 3; Min: 48.23 / Avg: 48.51 / Max: 48.7; overall MIN: 47 / MAX: 51.14)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  i5 12400: 193.11 (SE +/- 0.33, N = 3; Min: 192.75 / Avg: 193.11 / Max: 193.76)
  Core i5 12400: 192.05 (SE +/- 0.33, N = 3; Min: 191.4 / Avg: 192.05 / Max: 192.42)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: Rhodopsin Protein (ns/day, More Is Better):
  i5 12400: 6.729 (SE +/- 0.001, N = 3; Min: 6.73 / Avg: 6.73 / Max: 6.73)
  Core i5 12400: 6.726 (SE +/- 0.005, N = 3; Min: 6.72 / Avg: 6.73 / Max: 6.74)
  Compiler options: (CXX) g++ options: -O3 -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  i5 12400: 239.21 (SE +/- 0.22, N = 3; Min: 238.76 / Avg: 239.21 / Max: 239.43)
  Core i5 12400: 238.06 (SE +/- 0.44, N = 3; Min: 237.34 / Avg: 238.06 / Max: 238.85)

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Core i5 12400: The test quit with a non-zero exit status. The test quit with a non-zero exit status. The test quit with a non-zero exit status. E: Error: Cannot find module 'web-tooling-benchmark-0.5.3/dist/cli.js'

i5 12400: The test quit with a non-zero exit status. The test quit with a non-zero exit status. The test quit with a non-zero exit status. E: Error: Cannot find module 'web-tooling-benchmark-0.5.3/dist/cli.js'

164 Results Shown

Timed LLVM Compilation
Selenium
Timed Node.js Compilation
ONNX Runtime
LeelaChessZero:
  BLAS
  Eigen
JPEG XL libjxl
SecureMark
Xmrig
PlaidML
Blender
OpenSSL
PlaidML
Xmrig
Blender
TNN
OpenCV
TensorFlow Lite
Appleseed
TensorFlow Lite
Appleseed
PlaidML
WireGuard + Linux Networking Stack Stress Test
srsRAN
ONNX Runtime:
  fcn-resnet101-11 - CPU
  yolov4 - CPU
  super-resolution-10 - CPU
Timed Linux Kernel Compilation
Aircrack-ng
PyHPC Benchmarks
AOM AV1
Selenium
PyHPC Benchmarks
Chaos Group V-RAY
ASTC Encoder
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  SqueezeNetV1.0
  resnet-v2-50
  squeezenetv1.1
  mobilenetV3
PyHPC Benchmarks
libavif avifenc
Embree
simdjson:
  DistinctUserID
  PartialTweets
Timed GDB GNU Debugger Compilation
IndigoBench:
  CPU - Bedroom
  CPU - Supercar
TensorFlow Lite:
  SqueezeNet
  Mobilenet Float
  NASNet Mobile
  Mobilenet Quant
OpenSSL:
  RSA4096:
    verify/s
    sign/s
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
AOM AV1
PyHPC Benchmarks
Embree
Timed Wasmer Compilation
simdjson
Stargate Digital Audio Workstation:
  480000 - 512
  480000 - 1024
RawTherapee
OpenCV
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
LZ4 Compression:
  9 - Decompression Speed
  9 - Compression Speed
  3 - Decompression Speed
  3 - Compression Speed
NCNN:
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
simdjson
Timed Mesa Compilation
PyHPC Benchmarks
Selenium:
  Speedometer - Google Chrome
  Octane - Google Chrome
Hugin
Stockfish
Timed MPlayer Compilation
SVT-AV1
XZ Compression
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 256-QAM:
    UE Mb/s
    eNb Mb/s
Tungsten Renderer
PyHPC Benchmarks
OSPray
Primesieve
Selenium
7-Zip Compression:
  Decompression Rating
  Compression Rating
LibRaw
srsRAN:
  4G PHY_DL_Test 100 PRB MIMO 64-QAM:
    UE Mb/s
    eNb Mb/s
Tungsten Renderer
Chia Blockchain VDF
PyHPC Benchmarks
Coremark
libjpeg-turbo tjbench
Etcpak
Chia Blockchain VDF
PyHPC Benchmarks
Unpacking Firefox
Selenium
Crafty
Cython Benchmark
PHPBench
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 256-QAM:
    UE Mb/s
    eNb Mb/s
WebP Image Encode
TNN
AOM AV1
srsRAN:
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
PyBench
OSPray
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 64-QAM:
    UE Mb/s
    eNb Mb/s
TNN
libavif avifenc
SVT-AV1
ASTC Encoder
AOM AV1
rav1e
GIMP
Tungsten Renderer
AOM AV1
GNU Octave Benchmark
GIMP:
  auto-levels
  rotate
JPEG XL libjxl
Tungsten Renderer
Helsing
Selenium
WebP Image Encode
Unpacking The Linux Kernel
GIMP
Darktable
SVT-HEVC
Darktable
DaCapo Benchmark
Darktable
Etcpak
SVT-VP9
TNN
SVT-VP9
LAMMPS Molecular Dynamics Simulator
SVT-HEVC