Core i5 12400 Linux

Intel Core i5-12400 testing with an ASUS PRIME Z690-P WIFI D4 (0605 BIOS) and llvmpipe on Ubuntu 21.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2201079-PTS-COREI51239
Test categories represented in this result file:

AV1: 4 tests
C++ Boost Tests: 2 tests
Web Browsers: 1 test
Chess Test Suite: 3 tests
Timed Code Compilation: 7 tests
C/C++ Compiler Tests: 14 tests
Compression Tests: 4 tests
CPU Massive: 23 tests
Creator Workloads: 24 tests
Cryptocurrency Benchmarks, CPU Mining Tests: 2 tests
Cryptography: 5 tests
Encoding: 7 tests
Game Development: 3 tests
HPC - High Performance Computing: 11 tests
Imaging: 9 tests
Common Kernel Benchmarks: 2 tests
Machine Learning: 8 tests
Multi-Core: 27 tests
NVIDIA GPU Compute: 6 tests
Intel oneAPI: 3 tests
Productivity: 2 tests
Programmer / Developer System Benchmarks: 11 tests
Python: 3 tests
Renderers: 6 tests
Rust Tests: 2 tests
Scientific Computing: 2 tests
Server: 4 tests
Server CPU Tests: 17 tests
Single-Threaded: 3 tests
Texture Compression: 2 tests
Video Encoding: 6 tests

Run Management

Core i5 12400: run date January 06 2022, test duration 9 Hours, 29 Minutes
i5 12400: run date January 07 2022, test duration 10 Hours, 29 Minutes


Core i5 12400 Linux - OpenBenchmarking.org - Phoronix Test Suite

Processor: Intel Core i5-12400 @ 5.60GHz (6 Cores / 12 Threads)
Motherboard: ASUS PRIME Z690-P WIFI D4 (0605 BIOS)
Chipset: Intel Device 7aa7
Memory: 16GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: llvmpipe
Audio: Realtek ALC897
Network: Realtek RTL8125 2.5GbE + Intel Device 7af0
OS: Ubuntu 21.10
Kernel: 5.15.7-051507-generic (x86_64)
Desktop: GNOME Shell 40.5
Display Server: X Server 1.20.13
OpenGL: 4.5 Mesa 22.0.0-devel (git-d80c7f3 2021-11-14 impish-oibaf-ppa) (LLVM 13.0.0 256 bits)
Vulkan: 1.2.197
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

Core i5 12400 Linux Benchmarks - System Logs:
- Transparent Huge Pages: madvise
- Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x12; Thermald 2.4.6
- Java Notes: OpenJDK Runtime Environment (build 11.0.13+8-Ubuntu-0ubuntu1.21.10)
- Python Notes: Python 3.9.7
- Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Core i5 12400 vs. i5 12400 Comparison (chart): the largest run-to-run deltas, all under 7% relative to baseline, were in OpenCV (DNN), LeelaChessZero (BLAS), Aircrack-ng, and Chia Blockchain VDF (Square Assembly Optimized).

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.5.2 (k/s, More Is Better)
Core i5 12400: 22904.32 (SE +/- 194.07, N = 3)
i5 12400: 22004.43 (SE +/- 158.29, N = 15)
1. (CXX) g++ options: -O3 -fvisibility=hidden -masm=intel -fcommon -rdynamic -lpthread -lz -lcrypto -lhwloc -ldl -lm -pthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, More Is Better)
i5 12400: 742 (SE +/- 3.93, N = 3)
Core i5 12400: 721 (SE +/- 2.40, N = 3)
1. (CXX) g++ options: -flto -pthread

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance using the Chia VDF benchmark; the Chia VDF is the Chia Verifiable Delay Function (Proof of Time). Learn more via the OpenBenchmarking.org test page.
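
The proof-of-time idea rests on an inherently sequential computation. A minimal sketch in Python, using repeated modular squaring as a stand-in (Chia's real VDF squares in class groups of imaginary quadratic fields, not modulo an integer; the modulus, base, and iteration count below are illustrative only):

```python
# Toy sketch of the sequential computation behind a Verifiable Delay
# Function (VDF): y = x^(2^T) mod N takes T squarings that cannot be
# parallelized across cores. Chia's production VDF uses class groups of
# imaginary quadratic fields; this modular-arithmetic version only
# illustrates the sequential-squaring structure.

def vdf_eval(x: int, T: int, N: int) -> int:
    """Chain T sequential modular squarings of x."""
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

# The benchmark's IPS figure is roughly how many such squarings the CPU
# can chain per second.
y = vdf_eval(5, 100_000, 2**127 - 1)
```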

Chia Blockchain VDF 1.0.1 - Test: Square Assembly Optimized (IPS, More Is Better)
i5 12400: 221200 (SE +/- 2136.20, N = 3)
Core i5 12400: 216400 (SE +/- 529.15, N = 3)
1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
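
The pattern these benchmarks time, a vectorized numerical kernel applied to a large array on the CPU, can be sketched with NumPy; the kernel below is a placeholder polynomial, not the suite's actual equation-of-state code, and the array size is arbitrary:

```python
# Illustrative of what PyHPC-Benchmarks measures: wall-clock time for a
# vectorized elementwise kernel over a large CPU array. The kernel is a
# stand-in, not the suite's real code.
import time
import numpy as np

def kernel(x):
    """Placeholder elementwise computation standing in for a real kernel."""
    return x * x + 2.0 * x + 1.0

x = np.random.default_rng(0).random(1_000_000)
start = time.perf_counter()
y = kernel(x)
elapsed = time.perf_counter() - start
```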

PyHPC Benchmarks 3.0 - Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better)
i5 12400: 0.104 (SE +/- 0.001, N = 3)
Core i5 12400: 0.106 (SE +/- 0.001, N = 3)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM imageConvolute - Browser: Google Chrome (ms, Fewer Is Better)
Core i5 12400: 21.02 (SE +/- 0.04, N = 3)
i5 12400: 21.39 (SE +/- 0.31, N = 3)
1. chrome 95.0.4638.69

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1 - Variant: Monero - Hash Count: 1M (H/s, More Is Better)
Core i5 12400: 3625.6 (SE +/- 3.64, N = 3)
i5 12400: 3567.9 (SE +/- 50.99, N = 3)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, More Is Better)
Core i5 12400: 186.6 (SE +/- 0.13, N = 3)
i5 12400: 183.8 (SE +/- 1.65, N = 3)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
Core i5 12400: 9.83 (SE +/- 0.01, N = 3; MIN: 9.74 / MAX: 10.74)
i5 12400: 9.97 (SE +/- 0.04, N = 3; MIN: 9.77 / MAX: 15.58)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cython Benchmark

Cython provides a superset of Python that is geared to deliver C-like levels of performance. This test profile makes use of Cython's bundled benchmark tests and runs an N-Queens sample test as a simple benchmark of the system's Cython performance. Learn more via the OpenBenchmarking.org test page.
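
For context, an N-Queens solver of the kind this benchmark compiles can be sketched in a few lines of plain Python; the bundled Cython benchmark's actual code differs, and this bitmask backtracking version is only illustrative of the workload:

```python
# Plain-Python N-Queens solution counter, illustrative of the
# backtracking workload the Cython benchmark compiles.

def n_queens(n: int) -> int:
    """Count placements of n mutually non-attacking queens."""
    full = (1 << n) - 1

    def solve(cols: int, diag1: int, diag2: int) -> int:
        if cols == full:
            return 1
        count = 0
        free = ~(cols | diag1 | diag2) & full
        while free:
            bit = free & -free  # lowest-numbered free column
            free -= bit
            count += solve(cols | bit, (diag1 | bit) << 1, (diag2 | bit) >> 1)
        return count

    return solve(0, 0, 0)
```

Compiling exactly this kind of tight, integer-heavy loop is where Cython's C-level code generation pays off.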

Cython Benchmark 0.29.21 - Test: N-Queens (Seconds, Fewer Is Better)
i5 12400: 16.62 (SE +/- 0.01, N = 3)
Core i5 12400: 16.85 (SE +/- 0.24, N = 3)

srsRAN

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, More Is Better)
Core i5 12400: 528.5 (SE +/- 0.23, N = 3)
i5 12400: 521.5 (SE +/- 4.10, N = 3)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Non-Exponential (Seconds, Fewer Is Better)
i5 12400: 7.78284 (SE +/- 0.10643, N = 3)
Core i5 12400: 7.88660 (SE +/- 0.02374, N = 3)
1. (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lGL -lGLU -ldl

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.6.1 - Input: JPEG - Encode Speed: 8 (MP/s, More Is Better)
i5 12400: 36.17 (SE +/- 0.13, N = 3)
Core i5 12400: 35.73 (SE +/- 0.17, N = 3)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: inception-v3 (ms, Fewer Is Better)
i5 12400: 23.10 (SE +/- 0.05, N = 3; MIN: 22.88 / MAX: 29.41)
Core i5 12400: 23.38 (SE +/- 0.31, N = 3; MIN: 22.92 / MAX: 30.66)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Xmrig

Xmrig 6.12.1 - Variant: Wownero - Hash Count: 1M (H/s, More Is Better)
i5 12400: 6109.9 (SE +/- 32.09, N = 3)
Core i5 12400: 6040.0 (SE +/- 8.58, N = 3)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

NCNN

NCNN 20210720 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
Core i5 12400: 9.17 (SE +/- 0.01, N = 3; MIN: 9.08 / MAX: 9.42)
i5 12400: 9.27 (SE +/- 0.06, N = 3; MIN: 9.1 / MAX: 9.64)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 13 - Total Time (Nodes Per Second, More Is Better)
Core i5 12400: 20798772 (SE +/- 61643.21, N = 3)
i5 12400: 20583647 (SE +/- 61327.70, N = 3)
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

PyHPC Benchmarks

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
Core i5 12400: 1.339 (SE +/- 0.006, N = 3)
i5 12400: 1.352 (SE +/- 0.011, N = 3)

srsRAN

srsRAN 21.10 - Test: OFDM_Test (Samples / Second, More Is Better)
Core i5 12400: 203626667 (SE +/- 2637034.17, N = 15)
i5 12400: 201713333 (SE +/- 2630279.12, N = 15)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

Selenium

Selenium - Benchmark: Speedometer - Browser: Google Chrome (Runs Per Minute, More Is Better)
i5 12400: 236 (SE +/- 0.58, N = 3)
Core i5 12400: 234 (SE +/- 1.00, N = 3)
1. chrome 95.0.4638.69

NCNN

NCNN 20210720 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
Core i5 12400: 16.99 (SE +/- 0.01, N = 3; MIN: 16.84 / MAX: 18.86)
i5 12400: 17.13 (SE +/- 0.06, N = 3; MIN: 16.9 / MAX: 17.43)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.10 - Model: shufflenet-v2-10 - Device: CPU (Inferences Per Minute, More Is Better)
i5 12400: 29072 (SE +/- 302.96, N = 12)
Core i5 12400: 28841 (SE +/- 346.81, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 6 (Seconds, Fewer Is Better)
i5 12400: 12.38 (SE +/- 0.04, N = 3)
Core i5 12400: 12.47 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU (FPS, More Is Better)
i5 12400: 8.06 (SE +/- 0.01, N = 3)
Core i5 12400: 8.00 (SE +/- 0.03, N = 3)

NCNN

NCNN 20210720 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
Core i5 12400: 2.86 (SE +/- 0.00, N = 3; MIN: 2.79 / MAX: 5.81)
i5 12400: 2.88 (SE +/- 0.00, N = 3; MIN: 2.8 / MAX: 6)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime 1.10 - Model: yolov4 - Device: CPU (Inferences Per Minute, More Is Better)
i5 12400: 298 (SE +/- 2.25, N = 3)
Core i5 12400: 296 (SE +/- 2.42, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
Core i5 12400: 48.20 (SE +/- 0.04, N = 3; MIN: 47.16 / MAX: 50.82)
i5 12400: 48.51 (SE +/- 0.14, N = 3; MIN: 47 / MAX: 51.14)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 21.06 - Test: Compression Rating (MIPS, More Is Better)
i5 12400: 64281 (SE +/- 42.06, N = 3)
Core i5 12400: 63880 (SE +/- 253.47, N = 3)
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
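
What the test measures can be approximated with Python's standard tarfile module; the archive and destination paths below are hypothetical placeholders:

```python
# Rough sketch of what this test profile times: extracting a .tar.xz
# archive and reporting the elapsed wall-clock time.
import tarfile
import time

def time_extract(archive: str, dest: str) -> float:
    """Extract a tar.xz archive and return the elapsed time in seconds."""
    start = time.perf_counter()
    with tarfile.open(archive, mode="r:xz") as tar:
        tar.extractall(path=dest)
    return time.perf_counter() - start

# Usage (hypothetical path):
# seconds = time_extract("firefox-84.0.source.tar.xz", "firefox-src")
```

Since xz decompression is single-threaded here, this test leans on single-core throughput and disk write speed rather than core count.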

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds, Fewer Is Better)
i5 12400: 15.48 (SE +/- 0.01, N = 4)
Core i5 12400: 15.58 (SE +/- 0.08, N = 4)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better)
i5 12400: 2314 (SE +/- 17.91, N = 4)
Core i5 12400: 2328 (SE +/- 8.05, N = 4)

Selenium

Selenium - Benchmark: Kraken - Browser: Google Chrome (ms, Fewer Is Better)
i5 12400: 529.1 (SE +/- 1.77, N = 3)
Core i5 12400: 532.2 (SE +/- 1.52, N = 3)
1. chrome 95.0.4638.69

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
i5 12400: 34.6 (SE +/- 0.09, N = 3)
Core i5 12400: 34.4 (SE +/- 0.09, N = 3)
1. (CC) gcc options: -O3 -pthread -lz -llzma

Selenium

Selenium - Benchmark: Octane - Browser: Google Chrome (Geometric Mean, More Is Better)
Core i5 12400: 82448 (SE +/- 253.63, N = 3)
i5 12400: 81981 (SE +/- 307.20, N = 3)
1. chrome 95.0.4638.69

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
i5 12400: 193.11 (SE +/- 0.33, N = 3)
Core i5 12400: 192.05 (SE +/- 0.33, N = 3)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
Core i5 12400: 6.207 (SE +/- 0.018, N = 3)
i5 12400: 6.241 (SE +/- 0.052, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 -ltiff

PyHPC Benchmarks

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better)
i5 12400: 0.194 (SE +/- 0.000, N = 3)
Core i5 12400: 0.195 (SE +/- 0.000, N = 3)

TNN

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
i5 12400: 179.56 (SE +/- 0.44, N = 3; MIN: 174.58 / MAX: 189.52)
Core i5 12400: 180.46 (SE +/- 0.32, N = 3; MIN: 174.11 / MAX: 190.84)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
i5 12400: 239.21 (SE +/- 0.22, N = 3)
Core i5 12400: 238.06 (SE +/- 0.44, N = 3)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
i5 12400: 9.5977 (SE +/- 0.0261, N = 3; MIN: 9.5 / MAX: 9.75)
Core i5 12400: 9.5546 (SE +/- 0.0258, N = 3; MIN: 9.48 / MAX: 9.75)

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5 - Mode: CPU (vsamples, More Is Better)
Core i5 12400: 9626 (SE +/- 7.75, N = 3)
i5 12400: 9583 (SE +/- 15.51, N = 3)

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and Linux networking stack stress test. The test runs on the local host but does require root permissions to run. The way it works is it creates three namespaces. ns0 has a loopback device. ns1 and ns2 each have wireguard devices. Those two wireguard devices send traffic through the loopback device of ns0. The end result of this is that tests wind up testing encryption and decryption at the same time -- a pretty CPU and scheduler-heavy workflow. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better)
Core i5 12400: 125.38 (SE +/- 0.44, N = 3)
i5 12400: 125.94 (SE +/- 0.44, N = 3)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8.7 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
i5 12400: 16.57 (SE +/- 0.08, N = 3)
Core i5 12400: 16.50 (SE +/- 0.10, N = 3)
1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

srsRAN

srsRAN 21.10 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, More Is Better)
i5 12400: 114.6 (SE +/- 0.46, N = 3)
Core i5 12400: 114.1 (SE +/- 0.41, N = 3)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
Core i5 12400: 12543.3 (SE +/- 10.66, N = 3)
i5 12400: 12488.7 (SE +/- 40.17, N = 3)
1. (CC) gcc options: -O3

Coremark

This is a test of EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
Core i5 12400: 312650.61 (SE +/- 458.62, N = 3)
i5 12400: 311313.12 (SE +/- 579.93, N = 3)
1. (CC) gcc options: -O2 -lrt

Chia Blockchain VDF

Chia Blockchain VDF 1.0.1 - Test: Square Plain C++ (IPS, More Is Better)
Core i5 12400: 190567 (SE +/- 202.76, N = 3)
i5 12400: 189767 (SE +/- 1039.76, N = 3)
1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

LZ4 Compression

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, More Is Better)
Core i5 12400: 12560.2 (SE +/- 4.68, N = 3)
i5 12400: 12507.5 (SE +/- 74.66, N = 3)
1. (CC) gcc options: -O3

NCNN

NCNN 20210720 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
Core i5 12400: 16.83 (SE +/- 0.00, N = 3; MIN: 16.68 / MAX: 17.3)
i5 12400: 16.90 (SE +/- 0.02, N = 3; MIN: 16.73 / MAX: 25.42)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better)
i5 12400: 14.69 (SE +/- 0.02, N = 3)
Core i5 12400: 14.75 (SE +/- 0.05, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 -ltiff

NCNN

NCNN 20210720 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
Core i5 12400: 2.43 (SE +/- 0.01, N = 3; MIN: 2.38 / MAX: 5.55)
i5 12400: 2.44 (SE +/- 0.01, N = 3; MIN: 2.39 / MAX: 5.61)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

SVT-VP9

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
i5 12400: 189.10 (SE +/- 1.28, N = 3)
Core i5 12400: 188.33 (SE +/- 2.12, N = 3)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Mobile Neural Network

Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms, Fewer Is Better)
i5 12400: 20.00 (SE +/- 0.03, N = 3; MIN: 19.84 / MAX: 29.89)
Core i5 12400: 20.08 (SE +/- 0.05, N = 3; MIN: 19.89 / MAX: 28.8)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Selenium

Selenium - Benchmark: StyleBench - Browser: Google Chrome (Runs / Minute, More Is Better)
i5 12400: 50.3 (SE +/- 0.03, N = 3)
Core i5 12400: 50.1 (SE +/- 0.03, N = 3)
1. chrome 95.0.4638.69

Mobile Neural Network

Mobile Neural Network 1.2 - Model: squeezenetv1.1 (ms, Fewer Is Better)
i5 12400: 2.338 (SE +/- 0.012, N = 3; MIN: 2.3 / MAX: 3.16)
Core i5 12400: 2.347 (SE +/- 0.012, N = 3; MIN: 2.31 / MAX: 2.9)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better): Core i5 12400: 2.67 (SE +/- 0.01, N = 3, MIN: 2.61 / MAX: 5.72); i5 12400: 2.68 (SE +/- 0.00, N = 3, MIN: 2.62 / MAX: 5.84). Built with (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
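The kind of timing this test performs can be sketched with Python's standard-library lzma module, which wraps the same XZ/LZMA codec. This is an illustrative stand-in using synthetic data, not the PTS test code or the actual Ubuntu image:

```python
import lzma
import time

# Synthetic stand-in for the Ubuntu file-system image used by the real test.
data = bytes(range(256)) * 4096  # ~1 MiB of highly compressible data

start = time.perf_counter()
compressed = lzma.compress(data, preset=9)  # level 9, as in the benchmark
elapsed = time.perf_counter() - start

print(f"compressed {len(data)} -> {len(compressed)} bytes in {elapsed:.3f}s")
assert lzma.decompress(compressed) == data  # round-trip sanity check
```

The benchmark reports only the wall-clock compression time, so higher presets trade longer runtimes for smaller output.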

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds; fewer is better): Core i5 12400: 33.60 (SE +/- 0.01, N = 3); i5 12400: 33.72 (SE +/- 0.02, N = 3). Built with (CC) gcc options: -fvisibility=hidden -O2

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second; more is better): i5 12400: 43.20 (SE +/- 0.01, N = 3); Core i5 12400: 43.05 (SE +/- 0.04, N = 3). Built with (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program if available; otherwise on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.24 - Test: rotate (Seconds; fewer is better): Core i5 12400: 8.452 (SE +/- 0.011, N = 3); i5 12400: 8.481 (SE +/- 0.017, N = 3).

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
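PyBench's style of measurement can be sketched with Python's own timeit module. The two snippets below are illustrative stand-ins for tests like BuiltinFunctionCalls and NestedForLoops, not PyBench's actual test code:

```python
import timeit

# Stand-in for a BuiltinFunctionCalls-style micro-test.
builtin_calls = timeit.timeit("len('benchmark'); abs(-7); min(1, 2)", number=100_000)

# Stand-in for a NestedForLoops-style micro-test.
nested_loops = timeit.timeit(
    "for i in range(10):\n for j in range(10):\n  pass", number=10_000
)

# PyBench reports a total across its averaged per-test times, in milliseconds.
total_ms = (builtin_calls + nested_loops) * 1000
print(f"builtins: {builtin_calls * 1000:.1f} ms, loops: {nested_loops * 1000:.1f} ms, total: {total_ms:.1f} ms")
```

Summing averaged per-function times into one total, as PyBench does, gives a single rough figure for interpreter overhead rather than a measure of any one workload.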

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds; fewer is better): i5 12400: 588 (SE +/- 0.88, N = 3); Core i5 12400: 590 (SE +/- 0.67, N = 3).

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s; more is better): i5 12400: 177.9 (SE +/- 0.20, N = 3); Core i5 12400: 177.3 (SE +/- 0.43, N = 3). Built with (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 21.06 - Test: Decompression Rating (MIPS; more is better): i5 12400: 41297 (SE +/- 24.91, N = 3); Core i5 12400: 41158 (SE +/- 77.93, N = 3). Built with (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
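The level-vs-speed trade-off this test sweeps across can be illustrated from Python. Zstd itself is not in the standard library, so zlib stands in here; the data and numbers are synthetic, not the FreeBSD image:

```python
import time
import zlib

# zlib as a stand-in for zstd: both expose numbered compression levels that
# trade throughput for ratio, which is what the benchmark's level sweep shows.
data = b"FreeBSD disk image stand-in " * 50_000  # ~1.4 MB compressible sample

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(compressed)} bytes in {elapsed:.4f}s")

assert zlib.decompress(compressed) == data  # round-trip sanity check
```

As with zstd's level 19 above, the highest levels shrink output further but cost disproportionately more compression time, while decompression speed stays comparatively flat.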

Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s; more is better): Core i5 12400: 4050.3 (SE +/- 1.25, N = 3); i5 12400: 4036.9 (SE +/- 7.63, N = 3). Built with (CC) gcc options: -O3 -pthread -lz -llzma

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Water Caustic (Seconds; fewer is better): Core i5 12400: 26.06 (SE +/- 0.05, N = 3); i5 12400: 26.14 (SE +/- 0.11, N = 3). Built with (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lGL -lGLU -ldl

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s; more is better): Core i5 12400: 530.9 (SE +/- 0.50, N = 3); i5 12400: 529.2 (SE +/- 0.66, N = 3). Built with (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second; more is better): i5 12400: 112.65 (SE +/- 0.72, N = 3); Core i5 12400: 112.29 (SE +/- 0.74, N = 3). Built with (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", with a focus on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: DXT1 (Mpx/s; more is better): i5 12400: 1452.61 (SE +/- 1.93, N = 3); Core i5 12400: 1448.08 (SE +/- 0.65, N = 3). Built with (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms; fewer is better): i5 12400: 211.06 (SE +/- 0.50, N = 3, MIN: 203.47 / MAX: 220.35); Core i5 12400: 211.68 (SE +/- 0.46, N = 3, MIN: 203.11 / MAX: 220.53). Built with (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program if available; otherwise on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.24 - Test: auto-levels (Seconds; fewer is better): i5 12400: 8.835 (SE +/- 0.028, N = 3); Core i5 12400: 8.860 (SE +/- 0.014, N = 3).

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.6.0 - Test: Boat - Acceleration: CPU-only (Seconds; fewer is better): i5 12400: 5.311 (SE +/- 0.003, N = 3); Core i5 12400: 5.326 (SE +/- 0.004, N = 3).

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 6.2.0 (Seconds; fewer is better): Core i5 12400: 5.437 (SE +/- 0.018, N = 5); i5 12400: 5.452 (SE +/- 0.025, N = 5).

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: efficientnet-b0 (ms; fewer is better): i5 12400: 3.75 (SE +/- 0.01, N = 3, MIN: 3.68 / MAX: 6.87); Core i5 12400: 3.76 (SE +/- 0.02, N = 3, MIN: 3.68 / MAX: 6.95). Built with (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.6.0 - Test: Masskrug - Acceleration: CPU-only (Seconds; fewer is better): i5 12400: 4.538 (SE +/- 0.010, N = 3); Core i5 12400: 4.550 (SE +/- 0.003, N = 3).

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: PSPDFKit WASM - Browser: Google Chrome (Score; fewer is better): Core i5 12400: 2657 (SE +/- 13.68, N = 3); i5 12400: 2664 (SE +/- 9.21, N = 3). Browser: chrome 95.0.4638.69.

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds; fewer is better): Core i5 12400: 1.914 (SE +/- 0.003, N = 3); i5 12400: 1.919 (SE +/- 0.003, N = 3).

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
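The GB/s figure this test reports is simply bytes parsed per unit time. The sketch below derives the same metric with Python's stdlib json parser and a synthetic document — it is not simdjson, whose SIMD-accelerated C++ parser is orders of magnitude faster:

```python
import json
import time

# Synthetic JSON document standing in for the Kostya / PartialTweets inputs.
doc = json.dumps({"users": [{"id": i, "name": f"user{i}"} for i in range(10_000)]})
payload_bytes = len(doc.encode())
repeats = 10

start = time.perf_counter()
for _ in range(repeats):
    json.loads(doc)
elapsed = time.perf_counter() - start

# Throughput = total bytes parsed / elapsed wall-clock time.
gb_per_s = payload_bytes * repeats / elapsed / 1e9
print(f"parsed {payload_bytes} bytes x {repeats} at {gb_per_s:.3f} GB/s")
```

Different input documents (here, simdjson's Kostya and PartialTweets fixtures) stress different parts of a parser, which is why the suite reports per-input throughput rather than one number.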

simdjson 1.0 - Throughput Test: Kostya (GB/s; more is better): i5 12400: 4.05 (SE +/- 0.00, N = 3); Core i5 12400: 4.04 (SE +/- 0.00, N = 3). Built with (CXX) g++ options: -O3

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: alexnet (ms; fewer is better): Core i5 12400: 8.38 (SE +/- 0.00, N = 3, MIN: 8.3 / MAX: 8.63); i5 12400: 8.40 (SE +/- 0.01, N = 3, MIN: 8.32 / MAX: 8.59). Built with (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second; more is better): i5 12400: 8.70 (SE +/- 0.01, N = 3); Core i5 12400: 8.68 (SE +/- 0.01, N = 3). Built with (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s; more is better): Core i5 12400: 66.26 (SE +/- 0.02, N = 3); i5 12400: 66.12 (SE +/- 0.10, N = 3). Built with (CC) gcc options: -O3

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second; more is better): Core i5 12400: 11.35 (SE +/- 0.02, N = 3, MIN: 11.24 / MAX: 11.56); i5 12400: 11.32 (SE +/- 0.03, N = 3, MIN: 11.2 / MAX: 11.55).

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: mobilenet (ms; fewer is better): i5 12400: 10.07 (SE +/- 0.01, N = 3, MIN: 9.94 / MAX: 10.3); Core i5 12400: 10.09 (SE +/- 0.02, N = 3, MIN: 9.95 / MAX: 10.93). Built with (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second; more is better): i5 12400: 65.60 (SE +/- 0.08, N = 3); Core i5 12400: 65.47 (SE +/- 0.06, N = 3). Built with (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.10 - Model: super-resolution-10 - Device: CPU (Inferences Per Minute; more is better): Core i5 12400: 3111 (SE +/- 9.37, N = 3); i5 12400: 3105 (SE +/- 6.45, N = 3). Built with (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (FPS; more is better): i5 12400: 15.76 (SE +/- 0.03, N = 3); Core i5 12400: 15.73 (SE +/- 0.03, N = 3).

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 1.0.2 - Time To Compile (Seconds; fewer is better): i5 12400: 55.12 (SE +/- 0.31, N = 3); Core i5 12400: 55.23 (SE +/- 0.13, N = 3). Built with (CC) gcc options: -m64 -pie -nodefaultlibs -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds; fewer is better): Core i5 12400: 182792 (SE +/- 262.66, N = 3); i5 12400: 183132 (SE +/- 177.94, N = 3).

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: mobilenetV3 (ms; fewer is better): i5 12400: 1.091 (SE +/- 0.005, N = 3, MIN: 1.07 / MAX: 8.02); Core i5 12400: 1.093 (SE +/- 0.004, N = 3, MIN: 1.07 / MAX: 8.2). Built with (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0 - Throughput Test: PartialTweets (GB/s; more is better): i5 12400: 5.65 (SE +/- 0.01, N = 3); Core i5 12400: 5.64 (SE +/- 0.01, N = 3). Built with (CXX) g++ options: -O3

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score; more is better): Core i5 12400: 1209237 (SE +/- 1204.34, N = 3); i5 12400: 1207099 (SE +/- 592.04, N = 3).

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s; more is better): i5 12400: 176.0 (SE +/- 0.30, N = 3); Core i5 12400: 175.7 (SE +/- 0.24, N = 3). Built with (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 480000 - Buffer Size: 512 (Render Ratio; more is better): i5 12400: 2.960248 (SE +/- 0.003692, N = 3); Core i5 12400: 2.955207 (SE +/- 0.000934, N = 3). Built with (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.5 - Speed: 10 (Frames Per Second; more is better): i5 12400: 9.574 (SE +/- 0.043, N = 3); Core i5 12400: 9.558 (SE +/- 0.087, N = 3).

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: regnety_400m (ms; fewer is better): Core i5 12400: 6.01 (SE +/- 0.03, N = 3, MIN: 5.92 / MAX: 9.07); i5 12400: 6.02 (SE +/- 0.01, N = 3, MIN: 5.94 / MAX: 9.18). Built with (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", with a focus on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC2 (Mpx/s; more is better): i5 12400: 207.86 (SE +/- 0.02, N = 3); Core i5 12400: 207.52 (SE +/- 0.38, N = 3). Built with (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s; more is better): i5 12400: 194.4 (SE +/- 0.32, N = 3); Core i5 12400: 194.1 (SE +/- 0.12, N = 3). Built with (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8.7 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second; more is better): i5 12400: 51.49 (SE +/- 0.15, N = 3); Core i5 12400: 51.41 (SE +/- 0.22, N = 3). Built with (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second; more is better): i5 12400: 59.47 (SE +/- 0.02, N = 3); Core i5 12400: 59.38 (SE +/- 0.05, N = 3). Built with (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program if available; otherwise on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.24 - Test: resize (Seconds; fewer is better): Core i5 12400: 5.955 (SE +/- 0.064, N = 3); i5 12400: 5.964 (SE +/- 0.074, N = 3).

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: MobileNetV2_224 (ms; fewer is better): Core i5 12400: 2.008 (SE +/- 0.013, N = 3, MIN: 1.97 / MAX: 2.38); i5 12400: 2.011 (SE +/- 0.011, N = 3, MIN: 1.96 / MAX: 8.84). Built with (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds; fewer is better): i5 12400: 39.52 (SE +/- 0.01, N = 3); Core i5 12400: 39.58 (SE +/- 0.20, N = 3).

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.5.4 - Test: Object Detection (ms; fewer is better): Core i5 12400: 32761 (SE +/- 307.61, N = 15); i5 12400: 32807 (SE +/- 292.67, N = 15). Built with (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 480000 - Buffer Size: 1024 (Render Ratio; more is better): Core i5 12400: 3.024036 (SE +/- 0.000714, N = 3); i5 12400: 3.019844 (SE +/- 0.000417, N = 3). Built with (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds; fewer is better): i5 12400: 51.33 (SE +/- 0.00, N = 3); Core i5 12400: 51.40 (SE +/- 0.02, N = 3). RawTherapee, version 5.8, command line.

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s; more is better): Core i5 12400: 4163.0 (SE +/- 0.55, N = 3); i5 12400: 4157.5 (SE +/- 5.33, N = 3). Built with (CC) gcc options: -O3 -pthread -lz -llzma

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds; fewer is better): i5 12400: 222.92; Core i5 12400: 223.20.

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s; more is better): Core i5 12400: 169.2 (SE +/- 0.12, N = 3); i5 12400: 169.0 (SE +/- 0.36, N = 3). Built with (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms; fewer is better): Core i5 12400: 2368.37 (SE +/- 1.69, N = 3, MIN: 2321.4 / MAX: 2426.03); i5 12400: 2371.12 (SE +/- 2.74, N = 3, MIN: 2322.16 / MAX: 2432.53). Built with (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec; more is better): i5 12400: 45.01 (SE +/- 0.01, N = 3); Core i5 12400: 44.96 (SE +/- 0.03, N = 3). Built with (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms; fewer is better): Core i5 12400: 2.834 (SE +/- 0.008, N = 3, MIN: 2.8 / MAX: 5.69); i5 12400: 2.837 (SE +/- 0.006, N = 3, MIN: 2.8 / MAX: 5.55). Built with (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported. Learn more via the OpenBenchmarking.org test page.

Blender 3.0 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds; fewer is better): i5 12400: 239.02 (SE +/- 0.08, N = 3); Core i5 12400: 239.27 (SE +/- 0.13, N = 3).

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s; more is better): i5 12400: 483.7 (SE +/- 1.48, N = 3); Core i5 12400: 483.2 (SE +/- 0.61, N = 3). Built with (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second; more is better): i5 12400: 10748236 (SE +/- 55831.41, N = 3); Core i5 12400: 10737733 (SE +/- 31318.72, N = 3). Built with (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s; more is better): Core i5 12400: 64.62 (SE +/- 0.02, N = 3); i5 12400: 64.56 (SE +/- 0.08, N = 3). Built with (CC) gcc options: -O3

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second; more is better): i5 12400: 11.09 (SE +/- 0.05, N = 3); Core i5 12400: 11.08 (SE +/- 0.07, N = 3). Built with (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Jetstream 2 - Browser: Google Chrome (Score; more is better): i5 12400: 213.43 (SE +/- 0.36, N = 3); Core i5 12400: 213.23 (SE +/- 2.13, N = 3). Browser: chrome 95.0.4638.69.

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.6.0 - Test: Server Room - Acceleration: CPU-only (Seconds; fewer is better): i5 12400: 3.401 (SE +/- 0.004, N = 3); Core i5 12400: 3.404 (SE +/- 0.002, N = 3).

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 6, Lossless (Seconds; fewer is better): i5 12400: 65.64 (SE +/- 0.13, N = 3); Core i5 12400: 65.69 (SE +/- 0.09, N = 3). Built with (CXX) g++ options: -O3 -fPIC -lm

Helsing

Helsing is an open-source POSIX vampire number generator. This test profile measures the time it takes to generate vampire numbers between varying numbers of digits. Learn more via the OpenBenchmarking.org test page.
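The property Helsing searches for can be sketched in a few lines. A 2k-digit number is a vampire number if it factors into two k-digit "fangs" whose combined digits are a permutation of its own digits, with at most one fang ending in zero. This is an illustrative checker, not Helsing's optimized C implementation:

```python
def is_vampire(n: int) -> bool:
    """Check whether n splits into two same-length 'fangs' whose digits,
    taken together, are a permutation of n's digits (not both ending in 0)."""
    s = str(n)
    if len(s) % 2:
        return False  # vampire numbers have an even digit count
    k = len(s) // 2
    for a in range(10 ** (k - 1), int(n ** 0.5) + 1):
        if n % a:
            continue
        b = n // a
        if len(str(b)) != k:
            continue  # both fangs must have exactly k digits
        if a % 10 == 0 and b % 10 == 0:
            continue  # at most one fang may end in zero
        if sorted(str(a) + str(b)) == sorted(s):
            return True
    return False

# The first few 4-digit vampire numbers:
print([n for n in range(1000, 10000) if is_vampire(n)][:5])
# -> [1260, 1395, 1435, 1530, 1827]  (e.g. 1260 = 21 x 60)
```

Helsing's benchmark cost comes from sweeping this search across all candidates in a digit range, which grows rapidly with the digit count being tested (12 digits here).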

Helsing 1.0-beta - Digit Range: 12 digit (Seconds; fewer is better): Core i5 12400: 6.946 (SE +/- 0.028, N = 3); i5 12400: 6.952 (SE +/- 0.010, N = 3). Built with (CC) gcc options: -O2 -pthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: vgg16 (ms; fewer is better): Core i5 12400: 36.31 (SE +/- 0.01, N = 3, MIN: 36.16 / MAX: 44.29); i5 12400: 36.34 (SE +/- 0.01, N = 3, MIN: 36.17 / MAX: 42.32). Built with (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
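What "openssl speed" measures for SHA256 — sustained hash throughput in bytes per second — can be sketched with Python's hashlib rather than OpenSSL's own benchmark driver. This is an illustrative stand-in; block size and iteration count are arbitrary choices here:

```python
import hashlib
import time

# Hash the same 64 KiB block repeatedly and derive bytes/second, the
# same unit the OpenSSL speed result below is reported in.
block = b"\x00" * 65536
iterations = 2000

start = time.perf_counter()
h = hashlib.sha256()
for _ in range(iterations):
    h.update(block)
digest = h.hexdigest()
elapsed = time.perf_counter() - start

bytes_per_s = len(block) * iterations / elapsed
print(f"SHA256: {bytes_per_s / 1e9:.2f} GB/s (digest {digest[:16]}...)")
```

On many recent CPUs hashlib, like OpenSSL, dispatches to hardware SHA extensions, which is how multi-GB/s figures such as the ones below are reached.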

OpenSSL 3.0 - Algorithm: SHA256 (byte/s; more is better): Core i5 12400: 8766088660 (SE +/- 8921620.23, N = 3); i5 12400: 8758914220 (SE +/- 9549406.73, N = 3). Built with (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG19 - Device: CPU (FPS, More Is Better): i5 12400: 12.87 (SE +/- 0.03, N = 3); Core i5 12400: 12.86 (SE +/- 0.04, N = 3)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better): i5 12400: 1.343 (SE +/- 0.003, N = 3); Core i5 12400: 1.344 (SE +/- 0.001, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better): Core i5 12400: 1.344 (SE +/- 0.000, N = 3); i5 12400: 1.345 (SE +/- 0.001, N = 3)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, More Is Better): Core i5 12400: 1369 (SE +/- 8.74, N = 3); i5 12400: 1368 (SE +/- 5.67, N = 3). (CXX) g++ options: -flto -pthread

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, Fewer Is Better): i5 12400: 46.91 (SE +/- 0.06, N = 3); Core i5 12400: 46.95 (SE +/- 0.05, N = 3)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better): i5 12400: 1.484 (SE +/- 0.001, N = 3); Core i5 12400: 1.483 (SE +/- 0.001, N = 3)

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 17.3 - Time To Compile (Seconds, Fewer Is Better): Core i5 12400: 576.16 (SE +/- 0.18, N = 3); i5 12400: 576.54 (SE +/- 0.04, N = 3)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, Fewer Is Better): Core i5 12400: 673.00 (SE +/- 0.31, N = 3); i5 12400: 673.41 (SE +/- 0.24, N = 3)

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 10.2 - Time To Compile (Seconds, Fewer Is Better): Core i5 12400: 62.33 (SE +/- 0.07, N = 3); i5 12400: 62.37 (SE +/- 0.04, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.14 - Time To Compile (Seconds, Fewer Is Better): i5 12400: 92.70 (SE +/- 0.59, N = 3); Core i5 12400: 92.76 (SE +/- 0.58, N = 3)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better): Core i5 12400: 3.643 (SE +/- 0.001, N = 3); i5 12400: 3.641 (SE +/- 0.004, N = 3)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better): i5 12400: 2922760 (SE +/- 349.48, N = 3); Core i5 12400: 2924140 (SE +/- 355.29, N = 3)

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Hair (Seconds, Fewer Is Better): Core i5 12400: 31.34 (SE +/- 0.03, N = 3); i5 12400: 31.36 (SE +/- 0.01, N = 3). (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lGL -lGLU -ldl

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.4 - Time To Compile (Seconds, Fewer Is Better): Core i5 12400: 37.33 (SE +/- 0.00, N = 3); i5 12400: 37.35 (SE +/- 0.01, N = 3)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, More Is Better): i5 12400: 6.729 (SE +/- 0.001, N = 3); Core i5 12400: 6.726 (SE +/- 0.005, N = 3). (CXX) g++ options: -O3 -lm

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel package. Learn more via the OpenBenchmarking.org test page.

Unpacking The Linux Kernel - linux-4.15.tar.xz (Seconds, Fewer Is Better): Core i5 12400: 4.574 (SE +/- 0.050, N = 4); i5 12400: 4.576 (SE +/- 0.052, N = 4)
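The extraction path this test exercises (LZMA-compressed tar) can be sketched with Python's stdlib tarfile module; here a tiny synthetic `.tar.xz` is built in memory and then timed during extraction, standing in for the real linux-4.15.tar.xz (the archive name and payload below are made up for the illustration):

```python
import io
import tarfile
import time

# Build a small .tar.xz archive in memory...
payload = b"x" * 200_000
archive = io.BytesIO()
with tarfile.open(fileobj=archive, mode="w:xz") as tar:
    info = tarfile.TarInfo(name="linux/README")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# ...then time its extraction, the same tarfile/LZMA decode path the
# benchmark measures at vastly larger scale.
archive.seek(0)
start = time.perf_counter()
with tarfile.open(fileobj=archive, mode="r:xz") as tar:
    data = tar.extractfile(tar.getmember("linux/README")).read()
elapsed = time.perf_counter() - start
print(f"extracted {len(data)} bytes in {elapsed:.4f} s")
```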

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0 - Test: Decompression Throughput (Megapixels/sec, More Is Better): Core i5 12400: 233.84 (SE +/- 0.03, N = 3); i5 12400: 233.74 (SE +/- 0.16, N = 3). (CC) gcc options: -O3 -rdynamic

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported. Learn more via the OpenBenchmarking.org test page.

Blender 3.0 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better): i5 12400: 165.74 (SE +/- 0.22, N = 3); Core i5 12400: 165.81 (SE +/- 0.12, N = 3)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Thorough (Seconds, Fewer Is Better): Core i5 12400: 7.0849 (SE +/- 0.0034, N = 3); i5 12400: 7.0877 (SE +/- 0.0018, N = 3). (CXX) g++ options: -O3 -flto -pthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better): Core i5 12400: 150255 (SE +/- 9.33, N = 3); i5 12400: 150311 (SE +/- 17.29, N = 3)

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

Primesieve 7.7 - 1e12 Prime Number Generation (Seconds, Fewer Is Better): Core i5 12400: 29.78 (SE +/- 0.03, N = 3); i5 12400: 29.79 (SE +/- 0.03, N = 3). (CXX) g++ options: -O3
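The core algorithm can be sketched in a few lines of Python. Note that primesieve itself uses a segmented sieve sized to fit the L1/L2 caches, which is why it doubles as a cache benchmark; the unsegmented version below shows only the basic sieve of Eratosthenes:

```python
def sieve(limit: int) -> list[int]:
    """Plain (unsegmented) sieve of Eratosthenes up to `limit`."""
    composite = bytearray(limit + 1)
    primes = []
    for n in range(2, limit + 1):
        if not composite[n]:
            primes.append(n)
            # Cross off multiples starting at n*n; smaller multiples
            # were already marked by smaller primes.
            for multiple in range(n * n, limit + 1, n):
                composite[multiple] = 1
    return primes

print(len(sieve(1_000_000)))  # 78498 primes below 10^6
```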

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, Fewer Is Better): i5 12400: 3.517 (SE +/- 0.020, N = 3, MIN: 3.45 / MAX: 5.08); Core i5 12400: 3.518 (SE +/- 0.015, N = 3, MIN: 3.45 / MAX: 7.96). (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: RSA4096 (verify/s, More Is Better): Core i5 12400: 135580.0 (SE +/- 67.71, N = 3); i5 12400: 135550.9 (SE +/- 39.83, N = 3). (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, More Is Better): i5 12400: 481.6 (SE +/- 1.75, N = 3); Core i5 12400: 481.5 (SE +/- 0.64, N = 3). (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.24 - Test: unsharp-mask (Seconds, Fewer Is Better): i5 12400: 10.38 (SE +/- 0.02, N = 3); Core i5 12400: 10.38 (SE +/- 0.01, N = 3)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: RSA4096 (sign/s, More Is Better): i5 12400: 2092.3 (SE +/- 0.10, N = 3); Core i5 12400: 2091.9 (SE +/- 0.72, N = 3). (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better): Core i5 12400: 3230930 (SE +/- 235.02, N = 3); i5 12400: 3231460 (SE +/- 506.39, N = 3)

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Volumetric Caustic (Seconds, Fewer Is Better): Core i5 12400: 9.94007 (SE +/- 0.01114, N = 3); i5 12400: 9.94163 (SE +/- 0.01440, N = 3). (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lGL -lGLU -ldl

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Exhaustive (Seconds, Fewer Is Better): Core i5 12400: 67.20 (SE +/- 0.02, N = 3); i5 12400: 67.21 (SE +/- 0.01, N = 3). (CXX) g++ options: -O3 -flto -pthread

SecureMark

SecureMark is an objective, standardized benchmarking framework for measuring the efficiency of cryptographic processing solutions developed by EEMBC. SecureMark-TLS is benchmarking Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks, More Is Better): i5 12400: 329186 (SE +/- 131.20, N = 3); Core i5 12400: 329151 (SE +/- 695.29, N = 3). (CC) gcc options: -pedantic -O3

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better): Core i5 12400: 158832 (SE +/- 7.00, N = 3); i5 12400: 158847 (SE +/- 41.04, N = 3)

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better): Core i5 12400: 221027 (SE +/- 30.99, N = 3); i5 12400: 221043 (SE +/- 52.54, N = 3)

Appleseed

Appleseed is an open-source production renderer, a physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better): Core i5 12400: 230.93; i5 12400: 230.93

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better): Core i5 12400: 0.933 (SE +/- 0.000, N = 3); i5 12400: 0.933 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better): Core i5 12400: 0.159 (SE +/- 0.000, N = 3); i5 12400: 0.159 (SE +/- 0.000, N = 3)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.10 - Model: fcn-resnet101-11 - Device: CPU (Inferences Per Minute, More Is Better): i5 12400: 42 (SE +/- 0.17, N = 3); Core i5 12400: 42 (SE +/- 0.29, N = 3). (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better): Core i5 12400: 15.02 (SE +/- 0.02, N = 3, MIN: 14.86 / MAX: 15.35); i5 12400: 15.02 (SE +/- 0.00, N = 3, MIN: 14.88 / MAX: 16.2). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: blazeface (ms, Fewer Is Better): Core i5 12400: 1.08 (SE +/- 0.00, N = 3, MIN: 1.06 / MAX: 1.26); i5 12400: 1.08 (SE +/- 0.00, N = 3, MIN: 1.06 / MAX: 1.3). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20210720 - Target: CPU - Model: mnasnet (ms, Fewer Is Better): Core i5 12400: 2.43 (SE +/- 0.02, N = 3, MIN: 2.37 / MAX: 5.59); i5 12400: 2.43 (SE +/- 0.01, N = 3, MIN: 2.38 / MAX: 5.59). (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: NASA Streamlines - Renderer: SciVis (FPS, More Is Better): i5 12400: 18.18 (SE +/- 0.00, N = 3, MIN: 17.86 / MAX: 18.52); Core i5 12400: 18.18 (SE +/- 0.00, N = 3, MIN: 17.86 / MAX: 18.52)

OSPray 1.8.5 - Demo: San Miguel - Renderer: SciVis (FPS, More Is Better): i5 12400: 15.63 (SE +/- 0.00, N = 3, MIN: 15.15 / MAX: 15.87); Core i5 12400: 15.63 (SE +/- 0.00, N = 3, MAX: 15.87)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.6.1 - Input: PNG - Encode Speed: 8 (MP/s, More Is Better): i5 12400: 0.98 (SE +/- 0.00, N = 3); Core i5 12400: 0.98 (SE +/- 0.00, N = 3). (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better): i5 12400: 26.4 (SE +/- 0.06, N = 3); Core i5 12400: 26.4 (SE +/- 0.09, N = 3). (CC) gcc options: -O3 -pthread -lz -llzma
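Zstandard bindings are not in the Python standard library, so as a stand-in the sketch below shows the same speed-vs-ratio tradeoff with stdlib LZMA: higher presets compress smaller but slower, which is the tradeoff the level-19 long-mode test above is probing (the repeated-string payload is an arbitrary stand-in for the FreeBSD disk image):

```python
import lzma
import time

# A highly compressible stand-in payload (~560 KB).
data = b"FreeBSD disk image stand-in " * 20_000

# Higher presets trade compression speed for smaller output, the same
# curve the Zstd compression-level settings trace out.
for preset in (0, 6, 9):
    start = time.perf_counter()
    packed = lzma.compress(data, preset=preset)
    elapsed = time.perf_counter() - start
    print(f"preset {preset}: {len(packed)} bytes in {elapsed:.3f} s")
```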

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0 - Throughput Test: DistinctUserID (GB/s, More Is Better): i5 12400: 6.47 (SE +/- 0.00, N = 3); Core i5 12400: 6.47 (SE +/- 0.00, N = 3). (CXX) g++ options: -O3

simdjson 1.0 - Throughput Test: LargeRandom (GB/s, More Is Better): i5 12400: 1.45 (SE +/- 0.00, N = 3); Core i5 12400: 1.45 (SE +/- 0.00, N = 3). (CXX) g++ options: -O3
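The throughput metric behind these GB/s figures, bytes of JSON parsed per second, can be reproduced with the stdlib parser; simdjson's SIMD-based parsing is orders of magnitude faster, but the measurement itself is the same (the synthetic record shape below is made up for the illustration):

```python
import json
import time

# Build a synthetic JSON document of many small records.
records = [{"id": i, "name": f"user{i}", "active": bool(i % 2)}
           for i in range(10_000)]
doc = json.dumps(records)

# Time a full parse and report bytes-per-second throughput.
start = time.perf_counter()
parsed = json.loads(doc)
elapsed = time.perf_counter() - start
print(f"parsed {len(doc)} bytes in {elapsed:.4f} s "
      f"({len(doc) / elapsed / 1e6:.1f} MB/s)")
```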

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenCV 4.5.4 - Test: DNN - Deep Neural Network (ms, Fewer Is Better): Core i5 12400: 10955 (SE +/- 316.94, N = 15); i5 12400: 11945 (SE +/- 853.64, N = 12). (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Core i5 12400: The test quit with a non-zero exit status. The test quit with a non-zero exit status. The test quit with a non-zero exit status. E: Error: Cannot find module 'web-tooling-benchmark-0.5.3/dist/cli.js'

i5 12400: The test quit with a non-zero exit status. The test quit with a non-zero exit status. The test quit with a non-zero exit status. E: Error: Cannot find module 'web-tooling-benchmark-0.5.3/dist/cli.js'

164 Results Shown

Aircrack-ng
LeelaChessZero
Chia Blockchain VDF
PyHPC Benchmarks
Selenium
Xmrig
srsRAN
NCNN
Cython Benchmark
srsRAN
Tungsten Renderer
JPEG XL libjxl
Mobile Neural Network
Xmrig
NCNN
Stockfish
PyHPC Benchmarks
srsRAN
Selenium
NCNN
ONNX Runtime
libavif avifenc
PlaidML
NCNN
ONNX Runtime
TNN
7-Zip Compression
Unpacking Firefox
DaCapo Benchmark
Selenium
Zstd Compression
Selenium
SVT-VP9
WebP Image Encode
PyHPC Benchmarks
TNN
SVT-HEVC
Embree
Chaos Group V-RAY
WireGuard + Linux Networking Stack Stress Test
SVT-AV1
srsRAN
LZ4 Compression
Coremark
Chia Blockchain VDF
LZ4 Compression
NCNN
WebP Image Encode
NCNN
SVT-VP9
Mobile Neural Network
Selenium
Mobile Neural Network
NCNN
XZ Compression
AOM AV1
GIMP
PyBench
srsRAN
7-Zip Compression
Zstd Compression
Tungsten Renderer
srsRAN
SVT-HEVC
Etcpak
TNN
GIMP
Darktable
GNU Octave Benchmark
NCNN
Darktable
Selenium
PyHPC Benchmarks
simdjson
NCNN
AOM AV1
LZ4 Compression
Embree
NCNN
AOM AV1
ONNX Runtime
PlaidML
Timed Wasmer Compilation
TensorFlow Lite
Mobile Neural Network
simdjson
PHPBench
srsRAN
Stargate Digital Audio Workstation
rav1e
NCNN
Etcpak
srsRAN
SVT-AV1
AOM AV1
GIMP
Mobile Neural Network
Hugin
OpenCV
Stargate Digital Audio Workstation
RawTherapee
Zstd Compression
Appleseed
srsRAN
TNN
LibRaw
Mobile Neural Network
Blender
srsRAN
Crafty
LZ4 Compression
AOM AV1
Selenium
Darktable
libavif avifenc
Helsing
NCNN
OpenSSL
PlaidML
PyHPC Benchmarks:
  CPU - Aesara - 4194304 - Isoneutral Mixing
  CPU - Numpy - 4194304 - Equation of State
LeelaChessZero
Timed Mesa Compilation
IndigoBench
Timed Node.js Compilation
Timed LLVM Compilation
Timed GDB GNU Debugger Compilation
Timed Linux Kernel Compilation
IndigoBench
TensorFlow Lite
Tungsten Renderer
Timed MPlayer Compilation
LAMMPS Molecular Dynamics Simulator
Unpacking The Linux Kernel
libjpeg-turbo tjbench
Blender
ASTC Encoder
TensorFlow Lite
Primesieve
Mobile Neural Network
OpenSSL
srsRAN
GIMP
OpenSSL
TensorFlow Lite
Tungsten Renderer
ASTC Encoder
SecureMark
TensorFlow Lite:
  Mobilenet Quant
  SqueezeNet
Appleseed
PyHPC Benchmarks:
  CPU - Numba - 4194304 - Isoneutral Mixing
  CPU - Numba - 4194304 - Equation of State
ONNX Runtime
NCNN:
  CPU - squeezenet_ssd
  CPU - blazeface
  CPU - mnasnet
OSPray:
  NASA Streamlines - SciVis
  San Miguel - SciVis
JPEG XL libjxl
Zstd Compression
simdjson:
  DistinctUserID
  LargeRandom
OpenCV