Intel Core i5-12400 testing with an ASUS PRIME Z690-P WIFI D4 (0605 BIOS) and llvmpipe on Ubuntu 21.10 via the Phoronix Test Suite.
i5 12400 Processor: Intel Core i5-12400 @ 5.60GHz (6 Cores / 12 Threads), Motherboard: ASUS PRIME Z690-P WIFI D4 (0605 BIOS), Chipset: Intel Device 7aa7, Memory: 16GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0, Graphics: llvmpipe, Audio: Realtek ALC897, Network: Realtek RTL8125 2.5GbE + Intel Device 7af0
OS: Ubuntu 21.10, Kernel: 5.15.7-051507-generic (x86_64), Desktop: GNOME Shell 40.5, Display Server: X Server 1.20.13, OpenGL: 4.5 Mesa 22.0.0-devel (git-d80c7f3 2021-11-14 impish-oibaf-ppa) (LLVM 13.0.0 256 bits), Vulkan: 1.2.197, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
WireGuard + Linux Networking Stack Stress Test This is a stress test of the WireGuard secure VPN tunnel and the Linux networking stack. The test runs on the local host but does require root permissions to run. It works by creating three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices. Those two WireGuard devices send their traffic through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time -- a fairly CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.
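For orientation, below is a minimal, hypothetical sketch of that three-namespace topology driven from Python with iproute2 and wireguard-tools (root required). It is only an illustration of the layout described above, not the script the test profile actually runs, and the device names and addresses are placeholders.

```python
# Hypothetical sketch of the ns0/ns1/ns2 topology described above -- not the
# actual PTS test script. Requires root plus iproute2 and wireguard-tools.
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

for ns in ("ns0", "ns1", "ns2"):
    sh(f"ip netns add {ns}")
sh("ip -n ns0 link set lo up")  # ns0 only carries its loopback device

# Create each WireGuard device inside ns0, then move it into ns1/ns2 so that
# the encrypted UDP transport stays bound to ns0's loopback-only stack.
for ns, dev, addr in (("ns1", "wg1", "10.0.0.1/24"),
                      ("ns2", "wg2", "10.0.0.2/24")):
    sh(f"ip -n ns0 link add {dev} type wireguard")
    sh(f"ip -n ns0 link set {dev} netns {ns}")
    sh(f"ip -n {ns} addr add {addr} dev {dev}")
    sh(f"ip -n {ns} link set {dev} up")

# Keys, listen ports and peers would then be configured with `wg set` in each
# namespace before pushing traffic (e.g. iperf3) between 10.0.0.1 and 10.0.0.2.
```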
OpenBenchmarking.org Seconds, Fewer Is Better WireGuard + Linux Networking Stack Stress Test Core i5 12400 i5 12400 30 60 90 120 150 SE +/- 0.44, N = 3 SE +/- 0.44, N = 3 125.38 125.94
Etcpak Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Mpx/s, More Is Better Etcpak 0.7 Configuration: DXT1 i5 12400 Core i5 12400 300 600 900 1200 1500 SE +/- 1.93, N = 3 SE +/- 0.65, N = 3 1452.61 1448.08 1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.org Mpx/s, More Is Better Etcpak 0.7 Configuration: ETC2 i5 12400 Core i5 12400 50 100 150 200 250 SE +/- 0.02, N = 3 SE +/- 0.38, N = 3 207.86 207.52 1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.org Nodes Per Second, More Is Better LeelaChessZero 0.28 Backend: Eigen Core i5 12400 i5 12400 300 600 900 1200 1500 SE +/- 8.74, N = 3 SE +/- 5.67, N = 3 1369 1368 1. (CXX) g++ options: -flto -pthread
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
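A hedged sketch of driving cwebp with settings comparable to the two configurations below; mapping "Highest Compression" to "-q 100 -m 6" is an assumption, and the input filename is a placeholder.

```python
# Hedged sketch: cwebp invocations comparable to the two encode settings below.
# The flag mapping is assumed, not read from the test profile.
import subprocess

sample = "sample_6000x4000.jpg"  # placeholder for the test's sample JPEG

# Quality 100, Lossless
subprocess.run(["cwebp", "-q", "100", "-lossless", sample,
                "-o", "out_lossless.webp"], check=True)
# Quality 100, Highest Compression (method 6 is cwebp's slowest, strongest mode)
subprocess.run(["cwebp", "-q", "100", "-m", "6", sample,
                "-o", "out_q100_m6.webp"], check=True)
```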
OpenBenchmarking.org Encode Time - Seconds, Fewer Is Better WebP Image Encode 1.1 Encode Settings: Quality 100, Lossless i5 12400 Core i5 12400 4 8 12 16 20 SE +/- 0.02, N = 3 SE +/- 0.05, N = 3 14.69 14.75 1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 -ltiff
OpenBenchmarking.org Encode Time - Seconds, Fewer Is Better WebP Image Encode 1.1 Encode Settings: Quality 100, Highest Compression Core i5 12400 i5 12400 2 4 6 8 10 SE +/- 0.018, N = 3 SE +/- 0.052, N = 3 6.207 6.241 1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 -ltiff
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
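The benchmark itself is C++, but the workload it measures -- parsing large JSON documents at multi-GB/s rates -- can be sketched from Python via the pysimdjson bindings. The input file below is a placeholder, and this is not the project's own benchmark harness.

```python
# Hedged sketch using the pysimdjson bindings (pip install pysimdjson); the
# upstream benchmark parses reference documents such as twitter.json in C++.
import time
import simdjson

data = open("twitter.json", "rb").read()  # placeholder input document
parser = simdjson.Parser()

start = time.perf_counter()
doc = parser.parse(data)  # doc is a lazily-evaluated document proxy
elapsed = time.perf_counter() - start
print(f"parsed {len(data) / 1e9:.3f} GB in {elapsed:.6f} s "
      f"({len(data) / 1e9 / elapsed:.2f} GB/s)")
```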
OpenBenchmarking.org GB/s, More Is Better simdjson 1.0 Throughput Test: Kostya i5 12400 Core i5 12400 0.9113 1.8226 2.7339 3.6452 4.5565 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 4.05 4.04 1. (CXX) g++ options: -O3
OpenBenchmarking.org GB/s, More Is Better simdjson 1.0 Throughput Test: LargeRandom i5 12400 Core i5 12400 0.3263 0.6526 0.9789 1.3052 1.6315 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 1.45 1.45 1. (CXX) g++ options: -O3
OpenBenchmarking.org GB/s, More Is Better simdjson 1.0 Throughput Test: PartialTweets i5 12400 Core i5 12400 1.2713 2.5426 3.8139 5.0852 6.3565 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 5.65 5.64 1. (CXX) g++ options: -O3
OpenBenchmarking.org GB/s, More Is Better simdjson 1.0 Throughput Test: DistinctUserID i5 12400 Core i5 12400 2 4 6 8 10 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 6.47 6.47 1. (CXX) g++ options: -O3
Xmrig Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org H/s, More Is Better Xmrig 6.12.1 Variant: Monero - Hash Count: 1M Core i5 12400 i5 12400 800 1600 2400 3200 4000 SE +/- 3.64, N = 3 SE +/- 50.99, N = 3 3625.6 3567.9 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.12.1 Variant: Wownero - Hash Count: 1M i5 12400 Core i5 12400 1300 2600 3900 5200 6500 SE +/- 32.09, N = 3 SE +/- 8.58, N = 3 6109.9 6040.0 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
Chia Blockchain VDF Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance for the Chia Verifiable Delay Function (Proof of Time) using the Chia VDF benchmark. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org IPS, More Is Better Chia Blockchain VDF 1.0.1 Test: Square Plain C++ Core i5 12400 i5 12400 40K 80K 120K 160K 200K SE +/- 202.76, N = 3 SE +/- 1039.76, N = 3 190567 189767 1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread
OpenBenchmarking.org IPS, More Is Better Chia Blockchain VDF 1.0.1 Test: Square Assembly Optimized i5 12400 Core i5 12400 50K 100K 150K 200K 250K SE +/- 2136.20, N = 3 SE +/- 529.15, N = 3 221200 216400 1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread
OpenBenchmarking.org MB/s, More Is Better LZ4 Compression 1.9.3 Compression Level: 3 - Decompression Speed Core i5 12400 i5 12400 3K 6K 9K 12K 15K SE +/- 10.66, N = 3 SE +/- 40.17, N = 3 12543.3 12488.7 1. (CC) gcc options: -O3
OpenBenchmarking.org MB/s, More Is Better LZ4 Compression 1.9.3 Compression Level: 9 - Decompression Speed Core i5 12400 i5 12400 3K 6K 9K 12K 15K SE +/- 4.68, N = 3 SE +/- 74.66, N = 3 12560.2 12507.5 1. (CC) gcc options: -O3
Zstd Compression This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
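As a rough sketch of what is being timed, the following uses the python-zstandard bindings rather than the zstd command-line tool the profile actually drives; the image path is a placeholder and the long-mode / multi-threading variants are omitted.

```python
# Hedged sketch with python-zstandard (pip install zstandard); the test profile
# itself times the zstd CLI against a FreeBSD memstick image.
import time
import zstandard as zstd

data = open("FreeBSD-12.2-RELEASE-amd64-memstick.img", "rb").read()  # placeholder path

cctx = zstd.ZstdCompressor(level=19)
start = time.perf_counter()
blob = cctx.compress(data)
print("level 19 compress:", len(data) / 1e6 / (time.perf_counter() - start), "MB/s")

dctx = zstd.ZstdDecompressor()
start = time.perf_counter()
dctx.decompress(blob)
print("decompress:", len(data) / 1e6 / (time.perf_counter() - start), "MB/s")
```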
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 19 - Compression Speed i5 12400 Core i5 12400 8 16 24 32 40 SE +/- 0.09, N = 3 SE +/- 0.09, N = 3 34.6 34.4 1. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 19 - Decompression Speed Core i5 12400 i5 12400 900 1800 2700 3600 4500 SE +/- 1.25, N = 3 SE +/- 7.63, N = 3 4050.3 4036.9 1. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 19, Long Mode - Compression Speed i5 12400 Core i5 12400 6 12 18 24 30 SE +/- 0.06, N = 3 SE +/- 0.09, N = 3 26.4 26.4 1. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.0 Compression Level: 19, Long Mode - Decompression Speed Core i5 12400 i5 12400 900 1800 2700 3600 4500 SE +/- 0.55, N = 3 SE +/- 5.33, N = 3 4163.0 4157.5 1. (CC) gcc options: -O3 -pthread -lz -llzma
JPEG XL libjxl The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.6.1 Input: PNG - Encode Speed: 8 i5 12400 Core i5 12400 0.2205 0.441 0.6615 0.882 1.1025 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 0.98 0.98 1. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie
OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.6.1 Input: JPEG - Encode Speed: 8 i5 12400 Core i5 12400 8 16 24 32 40 SE +/- 0.13, N = 3 SE +/- 0.17, N = 3 36.17 35.73 1. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie
srsRAN srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Samples / Second, More Is Better srsRAN 21.10 Test: OFDM_Test Core i5 12400 i5 12400 40M 80M 120M 160M 200M SE +/- 2637034.17, N = 15 SE +/- 2630279.12, N = 15 203626667 201713333 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 21.10 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM i5 12400 Core i5 12400 100 200 300 400 500 SE +/- 1.48, N = 3 SE +/- 0.61, N = 3 483.7 483.2 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 21.10 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM Core i5 12400 i5 12400 40 80 120 160 200 SE +/- 0.12, N = 3 SE +/- 0.36, N = 3 169.2 169.0 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 21.10 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM i5 12400 Core i5 12400 100 200 300 400 500 SE +/- 1.75, N = 3 SE +/- 0.64, N = 3 481.6 481.5 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 21.10 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM i5 12400 Core i5 12400 40 80 120 160 200 SE +/- 0.20, N = 3 SE +/- 0.43, N = 3 177.9 177.3 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 21.10 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM Core i5 12400 i5 12400 110 220 330 440 550 SE +/- 0.23, N = 3 SE +/- 4.10, N = 3 528.5 521.5 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 21.10 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM Core i5 12400 i5 12400 40 80 120 160 200 SE +/- 0.13, N = 3 SE +/- 1.65, N = 3 186.6 183.8 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 21.10 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM Core i5 12400 i5 12400 110 220 330 440 550 SE +/- 0.50, N = 3 SE +/- 0.66, N = 3 530.9 529.2 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 21.10 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM i5 12400 Core i5 12400 40 80 120 160 200 SE +/- 0.32, N = 3 SE +/- 0.12, N = 3 194.4 194.1 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 21.10 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM i5 12400 Core i5 12400 40 80 120 160 200 SE +/- 0.30, N = 3 SE +/- 0.24, N = 3 176.0 175.7 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 21.10 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM i5 12400 Core i5 12400 30 60 90 120 150 SE +/- 0.46, N = 3 SE +/- 0.41, N = 3 114.6 114.1 1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f
OSPray Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better OSPray 1.8.5 Demo: San Miguel - Renderer: SciVis i5 12400 Core i5 12400 4 8 12 16 20 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 15.63 15.63 MIN: 15.15 / MAX: 15.87 MAX: 15.87
OpenBenchmarking.org FPS, More Is Better OSPray 1.8.5 Demo: NASA Streamlines - Renderer: SciVis i5 12400 Core i5 12400 4 8 12 16 20 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 18.18 18.18 MIN: 17.86 / MAX: 18.52 MIN: 17.86 / MAX: 18.52
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.2 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K i5 12400 Core i5 12400 2 4 6 8 10 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 8.70 8.68 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.2 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K i5 12400 Core i5 12400 10 20 30 40 50 SE +/- 0.01, N = 3 SE +/- 0.04, N = 3 43.20 43.05 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.2 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K i5 12400 Core i5 12400 13 26 39 52 65 SE +/- 0.02, N = 3 SE +/- 0.05, N = 3 59.47 59.38 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.2 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K i5 12400 Core i5 12400 15 30 45 60 75 SE +/- 0.08, N = 3 SE +/- 0.06, N = 3 65.60 65.47 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
Embree Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better Embree 3.13 Binary: Pathtracer - Model: Crown i5 12400 Core i5 12400 3 6 9 12 15 SE +/- 0.0261, N = 3 SE +/- 0.0258, N = 3 9.5977 9.5546 MIN: 9.5 / MAX: 9.75 MIN: 9.48 / MAX: 9.75
OpenBenchmarking.org Frames Per Second, More Is Better Embree 3.13 Binary: Pathtracer ISPC - Model: Crown Core i5 12400 i5 12400 3 6 9 12 15 SE +/- 0.02, N = 3 SE +/- 0.03, N = 3 11.35 11.32 MIN: 11.24 / MAX: 11.56 MIN: 11.2 / MAX: 11.55
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 0.8.7 Encoder Mode: Preset 8 - Input: Bosphorus 4K i5 12400 Core i5 12400 4 8 12 16 20 SE +/- 0.08, N = 3 SE +/- 0.10, N = 3 16.57 16.50 1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 0.8.7 Encoder Mode: Preset 8 - Input: Bosphorus 1080p i5 12400 Core i5 12400 12 24 36 48 60 SE +/- 0.15, N = 3 SE +/- 0.22, N = 3 51.49 51.41 1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 7 - Input: Bosphorus 1080p i5 12400 Core i5 12400 30 60 90 120 150 SE +/- 0.72, N = 3 SE +/- 0.74, N = 3 112.65 112.29 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 10 - Input: Bosphorus 1080p i5 12400 Core i5 12400 50 100 150 200 250 SE +/- 0.22, N = 3 SE +/- 0.44, N = 3 239.21 238.06 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: VMAF Optimized - Input: Bosphorus 1080p i5 12400 Core i5 12400 40 80 120 160 200 SE +/- 1.28, N = 3 SE +/- 2.12, N = 3 189.10 188.33 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p i5 12400 Core i5 12400 40 80 120 160 200 SE +/- 0.33, N = 3 SE +/- 0.33, N = 3 193.11 192.05 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org MIPS, More Is Better 7-Zip Compression 21.06 Test: Decompression Rating i5 12400 Core i5 12400 9K 18K 27K 36K 45K SE +/- 24.91, N = 3 SE +/- 77.93, N = 3 41297 41158 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Stockfish This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
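The result below is a nodes-per-second figure; a hedged Python sketch using Stockfish's built-in bench command (a stockfish binary on the PATH and its default bench parameters are assumed) looks like this.

```python
# Hedged sketch: run Stockfish's built-in benchmark and pull out the summary
# lines. Default bench parameters are assumed; the profile's may differ.
import subprocess

proc = subprocess.run(["stockfish", "bench"], capture_output=True, text=True)
for line in (proc.stdout + proc.stderr).splitlines():
    if "Nodes/second" in line or "Total time" in line:
        print(line)
```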
OpenBenchmarking.org Nodes Per Second, More Is Better Stockfish 13 Total Time Core i5 12400 i5 12400 4M 8M 12M 16M 20M SE +/- 61643.21, N = 3 SE +/- 61327.70, N = 3 20798772 20583647 1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver
Stargate Digital Audio Workstation Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience," with scalability from older systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 21.10.9 Sample Rate: 480000 - Buffer Size: 512 i5 12400 Core i5 12400 0.6661 1.3322 1.9983 2.6644 3.3305 SE +/- 0.003692, N = 3 SE +/- 0.000934, N = 3 2.960248 2.955207 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 21.10.9 Sample Rate: 480000 - Buffer Size: 1024 Core i5 12400 i5 12400 0.6804 1.3608 2.0412 2.7216 3.402 SE +/- 0.000714, N = 3 SE +/- 0.000417, N = 3 3.024036 3.019844 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
Tungsten Renderer Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Tungsten Renderer 0.2.2 Scene: Hair Core i5 12400 i5 12400 7 14 21 28 35 SE +/- 0.03, N = 3 SE +/- 0.01, N = 3 31.34 31.36 1. (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lGL -lGLU -ldl
OpenBenchmarking.org Seconds, Fewer Is Better Tungsten Renderer 0.2.2 Scene: Water Caustic Core i5 12400 i5 12400 6 12 18 24 30 SE +/- 0.05, N = 3 SE +/- 0.11, N = 3 26.06 26.14 1. (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lGL -lGLU -ldl
OpenBenchmarking.org Seconds, Fewer Is Better Tungsten Renderer 0.2.2 Scene: Non-Exponential i5 12400 Core i5 12400 2 4 6 8 10 SE +/- 0.10643, N = 3 SE +/- 0.02374, N = 3 7.78284 7.88660 1. (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lGL -lGLU -ldl
OpenBenchmarking.org Seconds, Fewer Is Better Tungsten Renderer 0.2.2 Scene: Volumetric Caustic Core i5 12400 i5 12400 3 6 9 12 15 SE +/- 0.01114, N = 3 SE +/- 0.01440, N = 3 9.94007 9.94163 1. (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lGL -lGLU -ldl
Timed Wasmer Compilation This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Timed Wasmer Compilation 1.0.2 Time To Compile i5 12400 Core i5 12400 12 24 36 48 60 SE +/- 0.31, N = 3 SE +/- 0.13, N = 3 55.12 55.23 1. (CC) gcc options: -m64 -pie -nodefaultlibs -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc
Cython Benchmark Cython provides a superset of Python that is geared to deliver C-like levels of performance. This test profile makes use of Cython's bundled benchmark tests and runs an N-Queens sample test as a simple benchmark of the system's Cython performance. Learn more via the OpenBenchmarking.org test page.
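For context, an N-Queens solution counter of the kind this benchmark exercises looks roughly like the pure-Python sketch below; the actual test builds comparable code with Cython to reach C-level speed, so this is only an illustration of the workload.

```python
# Pure-Python bitmask N-Queens counter -- an illustration of the workload, not
# the benchmark's own Cython source.
def n_queens(n, row=0, cols=0, diag1=0, diag2=0):
    if row == n:
        return 1
    count = 0
    free = ~(cols | diag1 | diag2) & ((1 << n) - 1)  # squares still legal in this row
    while free:
        bit = free & -free  # take the lowest available square
        free -= bit
        count += n_queens(n, row + 1, cols | bit,
                          (diag1 | bit) << 1, (diag2 | bit) >> 1)
    return count

print(n_queens(8))  # 92 solutions on a standard 8x8 board
```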
OpenBenchmarking.org Seconds, Fewer Is Better Cython Benchmark 0.29.21 Test: N-Queens i5 12400 Core i5 12400 4 8 12 16 20 SE +/- 0.01, N = 3 SE +/- 0.24, N = 3 16.62 16.85
SecureMark SecureMark is an objective, standardized benchmarking framework for measuring the efficiency of cryptographic processing solutions developed by EEMBC. SecureMark-TLS is benchmarking Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org marks, More Is Better SecureMark 1.0.4 Benchmark: SecureMark-TLS i5 12400 Core i5 12400 70K 140K 210K 280K 350K SE +/- 131.20, N = 3 SE +/- 695.29, N = 3 329186 329151 1. (CC) gcc options: -pedantic -O3
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
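A hedged sketch of the kind of openssl speed invocations behind these numbers; the exact arguments the profile passes (seconds per run, process count) are assumptions here.

```python
# Hedged sketch: multi-process `openssl speed` runs comparable to the results
# below. Argument choices are assumptions, not the profile's exact flags.
import os
import subprocess

procs = str(os.cpu_count() or 1)
subprocess.run(["openssl", "speed", "-multi", procs, "-seconds", "3", "sha256"], check=True)
subprocess.run(["openssl", "speed", "-multi", procs, "-seconds", "3", "rsa4096"], check=True)
```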
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.0 Algorithm: SHA256 Core i5 12400 i5 12400 2000M 4000M 6000M 8000M 10000M SE +/- 8921620.23, N = 3 SE +/- 9549406.73, N = 3 8766088660 8758914220 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org sign/s, More Is Better OpenSSL 3.0 Algorithm: RSA4096 i5 12400 Core i5 12400 400 800 1200 1600 2000 SE +/- 0.10, N = 3 SE +/- 0.72, N = 3 2092.3 2091.9 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org verify/s, More Is Better OpenSSL 3.0 Algorithm: RSA4096 Core i5 12400 i5 12400 30K 60K 90K 120K 150K SE +/- 67.71, N = 3 SE +/- 39.83, N = 3 135580.0 135550.9 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
Node.js V8 Web Tooling Benchmark Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.
Core i5 12400: The test quit with a non-zero exit status on all three attempts. E: Error: Cannot find module 'web-tooling-benchmark-0.5.3/dist/cli.js'
i5 12400: The test quit with a non-zero exit status on all three attempts. E: Error: Cannot find module 'web-tooling-benchmark-0.5.3/dist/cli.js'
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better ASTC Encoder 3.2 Preset: Thorough Core i5 12400 i5 12400 2 4 6 8 10 SE +/- 0.0034, N = 3 SE +/- 0.0018, N = 3 7.0849 7.0877 1. (CXX) g++ options: -O3 -flto -pthread
OpenBenchmarking.org Seconds, Fewer Is Better ASTC Encoder 3.2 Preset: Exhaustive Core i5 12400 i5 12400 15 30 45 60 75 SE +/- 0.02, N = 3 SE +/- 0.01, N = 3 67.20 67.21 1. (CXX) g++ options: -O3 -flto -pthread
Darktable Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Darktable 3.6.0 Test: Boat - Acceleration: CPU-only i5 12400 Core i5 12400 1.1984 2.3968 3.5952 4.7936 5.992 SE +/- 0.003, N = 3 SE +/- 0.004, N = 3 5.311 5.326
OpenBenchmarking.org Seconds, Fewer Is Better Darktable 3.6.0 Test: Masskrug - Acceleration: CPU-only i5 12400 Core i5 12400 1.0238 2.0476 3.0714 4.0952 5.119 SE +/- 0.010, N = 3 SE +/- 0.003, N = 3 4.538 4.550
OpenBenchmarking.org Seconds, Fewer Is Better Darktable 3.6.0 Test: Server Room - Acceleration: CPU-only i5 12400 Core i5 12400 0.7659 1.5318 2.2977 3.0636 3.8295 SE +/- 0.004, N = 3 SE +/- 0.002, N = 3 3.401 3.404
GIMP GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better GIMP 2.10.24 Test: resize Core i5 12400 i5 12400 1.3419 2.6838 4.0257 5.3676 6.7095 SE +/- 0.064, N = 3 SE +/- 0.074, N = 3 5.955 5.964
Hugin Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Hugin Panorama Photo Assistant + Stitching Time i5 12400 Core i5 12400 9 18 27 36 45 SE +/- 0.01, N = 3 SE +/- 0.20, N = 3 39.52 39.58
Mobile Neural Network MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 1.2 Model: mobilenetV3 i5 12400 Core i5 12400 0.2459 0.4918 0.7377 0.9836 1.2295 SE +/- 0.005, N = 3 SE +/- 0.004, N = 3 1.091 1.093 MIN: 1.07 / MAX: 8.02 MIN: 1.07 / MAX: 8.2 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 1.2 Model: squeezenetv1.1 i5 12400 Core i5 12400 0.5281 1.0562 1.5843 2.1124 2.6405 SE +/- 0.012, N = 3 SE +/- 0.012, N = 3 2.338 2.347 MIN: 2.3 / MAX: 3.16 MIN: 2.31 / MAX: 2.9 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 1.2 Model: resnet-v2-50 i5 12400 Core i5 12400 5 10 15 20 25 SE +/- 0.03, N = 3 SE +/- 0.05, N = 3 20.00 20.08 MIN: 19.84 / MAX: 29.89 MIN: 19.89 / MAX: 28.8 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 1.2 Model: SqueezeNetV1.0 i5 12400 Core i5 12400 0.7916 1.5832 2.3748 3.1664 3.958 SE +/- 0.020, N = 3 SE +/- 0.015, N = 3 3.517 3.518 MIN: 3.45 / MAX: 5.08 MIN: 3.45 / MAX: 7.96 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 1.2 Model: MobileNetV2_224 Core i5 12400 i5 12400 0.4525 0.905 1.3575 1.81 2.2625 SE +/- 0.013, N = 3 SE +/- 0.011, N = 3 2.008 2.011 MIN: 1.97 / MAX: 2.38 MIN: 1.96 / MAX: 8.84 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 1.2 Model: mobilenet-v1-1.0 Core i5 12400 i5 12400 0.6383 1.2766 1.9149 2.5532 3.1915 SE +/- 0.008, N = 3 SE +/- 0.006, N = 3 2.834 2.837 MIN: 2.8 / MAX: 5.69 MIN: 2.8 / MAX: 5.55 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 1.2 Model: inception-v3 i5 12400 Core i5 12400 6 12 18 24 30 SE +/- 0.05, N = 3 SE +/- 0.31, N = 3 23.10 23.38 MIN: 22.88 / MAX: 29.41 MIN: 22.92 / MAX: 30.66 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
NCNN NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: mobilenet i5 12400 Core i5 12400 3 6 9 12 15 SE +/- 0.01, N = 3 SE +/- 0.02, N = 3 10.07 10.09 MIN: 9.94 / MAX: 10.3 MIN: 9.95 / MAX: 10.93 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU-v2-v2 - Model: mobilenet-v2 Core i5 12400 i5 12400 0.603 1.206 1.809 2.412 3.015 SE +/- 0.01, N = 3 SE +/- 0.00, N = 3 2.67 2.68 MIN: 2.61 / MAX: 5.72 MIN: 2.62 / MAX: 5.84 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU-v3-v3 - Model: mobilenet-v3 Core i5 12400 i5 12400 0.549 1.098 1.647 2.196 2.745 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 2.43 2.44 MIN: 2.38 / MAX: 5.55 MIN: 2.39 / MAX: 5.61 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: shufflenet-v2 Core i5 12400 i5 12400 0.648 1.296 1.944 2.592 3.24 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 2.86 2.88 MIN: 2.79 / MAX: 5.81 MIN: 2.8 / MAX: 6 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: mnasnet Core i5 12400 i5 12400 0.5468 1.0936 1.6404 2.1872 2.734 SE +/- 0.02, N = 3 SE +/- 0.01, N = 3 2.43 2.43 MIN: 2.37 / MAX: 5.59 MIN: 2.38 / MAX: 5.59 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: efficientnet-b0 i5 12400 Core i5 12400 0.846 1.692 2.538 3.384 4.23 SE +/- 0.01, N = 3 SE +/- 0.02, N = 3 3.75 3.76 MIN: 3.68 / MAX: 6.87 MIN: 3.68 / MAX: 6.95 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: blazeface Core i5 12400 i5 12400 0.243 0.486 0.729 0.972 1.215 SE +/- 0.00, N = 3 SE +/- 0.00, N = 3 1.08 1.08 MIN: 1.06 / MAX: 1.26 MIN: 1.06 / MAX: 1.3 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: googlenet Core i5 12400 i5 12400 3 6 9 12 15 SE +/- 0.01, N = 3 SE +/- 0.06, N = 3 9.17 9.27 MIN: 9.08 / MAX: 9.42 MIN: 9.1 / MAX: 9.64 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: vgg16 Core i5 12400 i5 12400 8 16 24 32 40 SE +/- 0.01, N = 3 SE +/- 0.01, N = 3 36.31 36.34 MIN: 36.16 / MAX: 44.29 MIN: 36.17 / MAX: 42.32 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: resnet18 Core i5 12400 i5 12400 3 6 9 12 15 SE +/- 0.01, N = 3 SE +/- 0.04, N = 3 9.83 9.97 MIN: 9.74 / MAX: 10.74 MIN: 9.77 / MAX: 15.58 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: alexnet Core i5 12400 i5 12400 2 4 6 8 10 SE +/- 0.00, N = 3 SE +/- 0.01, N = 3 8.38 8.40 MIN: 8.3 / MAX: 8.63 MIN: 8.32 / MAX: 8.59 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: resnet50 Core i5 12400 i5 12400 4 8 12 16 20 SE +/- 0.01, N = 3 SE +/- 0.06, N = 3 16.99 17.13 MIN: 16.84 / MAX: 18.86 MIN: 16.9 / MAX: 17.43 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: yolov4-tiny Core i5 12400 i5 12400 4 8 12 16 20 SE +/- 0.00, N = 3 SE +/- 0.02, N = 3 16.83 16.90 MIN: 16.68 / MAX: 17.3 MIN: 16.73 / MAX: 25.42 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: squeezenet_ssd Core i5 12400 i5 12400 4 8 12 16 20 SE +/- 0.02, N = 3 SE +/- 0.00, N = 3 15.02 15.02 MIN: 14.86 / MAX: 15.35 MIN: 14.88 / MAX: 16.2 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20210720 Target: CPU - Model: regnety_400m Core i5 12400 i5 12400 2 4 6 8 10 SE +/- 0.03, N = 3 SE +/- 0.01, N = 3 6.01 6.02 MIN: 5.92 / MAX: 9.07 MIN: 5.94 / MAX: 9.18 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better TNN 0.3 Target: CPU - Model: DenseNet Core i5 12400 i5 12400 500 1000 1500 2000 2500 SE +/- 1.69, N = 3 SE +/- 2.74, N = 3 2368.37 2371.12 MIN: 2321.4 / MAX: 2426.03 MIN: 2322.16 / MAX: 2432.53 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenBenchmarking.org ms, Fewer Is Better TNN 0.3 Target: CPU - Model: MobileNet v2 i5 12400 Core i5 12400 50 100 150 200 250 SE +/- 0.50, N = 3 SE +/- 0.46, N = 3 211.06 211.68 MIN: 203.47 / MAX: 220.35 MIN: 203.11 / MAX: 220.53 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenBenchmarking.org ms, Fewer Is Better TNN 0.3 Target: CPU - Model: SqueezeNet v2 Core i5 12400 i5 12400 11 22 33 44 55 SE +/- 0.04, N = 3 SE +/- 0.14, N = 3 48.20 48.51 MIN: 47.16 / MAX: 50.82 MIN: 47 / MAX: 51.14 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenBenchmarking.org ms, Fewer Is Better TNN 0.3 Target: CPU - Model: SqueezeNet v1.1 i5 12400 Core i5 12400 40 80 120 160 200 SE +/- 0.44, N = 3 SE +/- 0.32, N = 3 179.56 180.46 MIN: 174.58 / MAX: 189.52 MIN: 174.11 / MAX: 190.84 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenBenchmarking.org FPS, More Is Better PlaidML FP16: No - Mode: Inference - Network: VGG19 - Device: CPU i5 12400 Core i5 12400 3 6 9 12 15 SE +/- 0.03, N = 3 SE +/- 0.04, N = 3 12.87 12.86
OpenBenchmarking.org FPS, More Is Better PlaidML FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU i5 12400 Core i5 12400 2 4 6 8 10 SE +/- 0.01, N = 3 SE +/- 0.03, N = 3 8.06 8.00
Blender Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported. Learn more via the OpenBenchmarking.org test page.
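For reference, a hedged sketch of a comparable headless Cycles render from the command line; the .blend file path is a placeholder and the profile's exact arguments may differ.

```python
# Hedged sketch: render one frame of a benchmark scene with Blender in
# background mode on the CPU. "bmw27_cpu.blend" is a placeholder path.
import subprocess

subprocess.run([
    "blender", "--background", "bmw27_cpu.blend",
    "--render-frame", "1",
    "--", "--cycles-device", "CPU",
], check=True)
```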
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.0 Blend File: BMW27 - Compute: CPU-Only i5 12400 Core i5 12400 40 80 120 160 200 SE +/- 0.22, N = 3 SE +/- 0.12, N = 3 165.74 165.81
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.0 Blend File: Fishy Cat - Compute: CPU-Only i5 12400 Core i5 12400 50 100 150 200 250 SE +/- 0.08, N = 3 SE +/- 0.13, N = 3 239.02 239.27
OpenBenchmarking.org M samples/s, More Is Better IndigoBench 4.4 Acceleration: CPU - Scene: Supercar Core i5 12400 i5 12400 0.8197 1.6394 2.4591 3.2788 4.0985 SE +/- 0.001, N = 3 SE +/- 0.004, N = 3 3.643 3.641
PyBench This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Milliseconds, Fewer Is Better PyBench 2018-02-16 Total For Average Test Times i5 12400 Core i5 12400 130 260 390 520 650 SE +/- 0.88, N = 3 SE +/- 0.67, N = 3 588 590
ONNX Runtime ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
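A hedged sketch of loading one of the ONNX Zoo models with the onnxruntime Python API on the CPU execution provider; the model filename and input shape below are placeholders rather than the profile's exact setup.

```python
# Hedged sketch: a single CPU inference via the onnxruntime Python API. The
# model file and the 1x1x224x224 input shape are placeholders.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("super-resolution-10.onnx",
                            providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
x = np.random.rand(1, 1, 224, 224).astype(np.float32)  # placeholder input
outputs = sess.run(None, {inp.name: x})
print([o.shape for o in outputs])
```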
OpenBenchmarking.org Inferences Per Minute, More Is Better ONNX Runtime 1.10 Model: yolov4 - Device: CPU i5 12400 Core i5 12400 60 120 180 240 300 SE +/- 2.25, N = 3 SE +/- 2.42, N = 3 298 296 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.org Inferences Per Minute, More Is Better ONNX Runtime 1.10 Model: fcn-resnet101-11 - Device: CPU i5 12400 Core i5 12400 10 20 30 40 50 SE +/- 0.17, N = 3 SE +/- 0.29, N = 3 42 42 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.org Inferences Per Minute, More Is Better ONNX Runtime 1.10 Model: shufflenet-v2-10 - Device: CPU i5 12400 Core i5 12400 6K 12K 18K 24K 30K SE +/- 302.96, N = 12 SE +/- 346.81, N = 12 29072 28841 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.org Inferences Per Minute, More Is Better ONNX Runtime 1.10 Model: super-resolution-10 - Device: CPU Core i5 12400 i5 12400 700 1400 2100 2800 3500 SE +/- 9.37, N = 3 SE +/- 6.45, N = 3 3111 3105 1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
PHPBench PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Score, More Is Better PHPBench 0.8.1 PHP Benchmark Suite Core i5 12400 i5 12400 300K 600K 900K 1200K 1500K SE +/- 1204.34, N = 3 SE +/- 592.04, N = 3 1209237 1207099
Selenium This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.
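A hedged sketch of driving a browser benchmark page with Selenium and Chrome from Python; the headless flag, URL, and the missing score-scraping step are illustrative simplifications, not the profile's exact mechanics.

```python
# Hedged sketch: open a browser benchmark page under Selenium + Chrome. The
# real profile also starts the benchmark and scrapes the reported score.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")  # illustrative; PTS may run a visible browser
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://browserbench.org/Speedometer2.0/")
    print(driver.title)
finally:
    driver.quit()
```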
OpenBenchmarking.org ms, Fewer Is Better Selenium Benchmark: Kraken - Browser: Google Chrome i5 12400 Core i5 12400 120 240 360 480 600 SE +/- 1.77, N = 3 SE +/- 1.52, N = 3 529.1 532.2 1. chrome 95.0.4638.69
OpenBenchmarking.org Geometric Mean, More Is Better Selenium Benchmark: Octane - Browser: Google Chrome Core i5 12400 i5 12400 20K 40K 60K 80K 100K SE +/- 253.63, N = 3 SE +/- 307.20, N = 3 82448 81981 1. chrome 95.0.4638.69
OpenBenchmarking.org Runs / Minute, More Is Better Selenium Benchmark: StyleBench - Browser: Google Chrome i5 12400 Core i5 12400 11 22 33 44 55 SE +/- 0.03, N = 3 SE +/- 0.03, N = 3 50.3 50.1 1. chrome 95.0.4638.69
OpenBenchmarking.org Score, More Is Better Selenium Benchmark: Jetstream 2 - Browser: Google Chrome i5 12400 Core i5 12400 50 100 150 200 250 SE +/- 0.36, N = 3 SE +/- 2.13, N = 3 213.43 213.23 1. chrome 95.0.4638.69
OpenBenchmarking.org Runs Per Minute, More Is Better Selenium Benchmark: Speedometer - Browser: Google Chrome i5 12400 Core i5 12400 50 100 150 200 250 SE +/- 0.58, N = 3 SE +/- 1.00, N = 3 236 234 1. chrome 95.0.4638.69
OpenBenchmarking.org Score, Fewer Is Better Selenium Benchmark: PSPDFKit WASM - Browser: Google Chrome Core i5 12400 i5 12400 600 1200 1800 2400 3000 SE +/- 13.68, N = 3 SE +/- 9.21, N = 3 2657 2664 1. chrome 95.0.4638.69
OpenBenchmarking.org ms, Fewer Is Better Selenium Benchmark: WASM imageConvolute - Browser: Google Chrome Core i5 12400 i5 12400 5 10 15 20 25 SE +/- 0.04, N = 3 SE +/- 0.31, N = 3 21.02 21.39 1. chrome 95.0.4638.69
PyHPC Benchmarks PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
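The suite times the same array kernels under each backend; the sketch below is a generic NumPy-versus-Numba illustration at the 4194304-element project size used in these results, not the suite's actual equation-of-state or isoneutral-mixing kernels.

```python
# Generic backend-comparison sketch (not the PyHPC kernels themselves): time a
# simple array expression under plain NumPy and under Numba's JIT.
import time
import numpy as np
import numba

N = 4_194_304  # matches the project size in the results below

def kernel_numpy(a, b):
    return np.sqrt(a * a + b * b)

kernel_numba = numba.njit(kernel_numpy)

a = np.random.rand(N)
b = np.random.rand(N)
kernel_numba(a, b)  # first call triggers JIT compilation

for name, fn in (("numpy", kernel_numpy), ("numba", kernel_numba)):
    start = time.perf_counter()
    fn(a, b)
    print(name, time.perf_counter() - start, "seconds")
```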
OpenBenchmarking.org Seconds, Fewer Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Equation of State Core i5 12400 i5 12400 0.0358 0.0716 0.1074 0.1432 0.179 SE +/- 0.000, N = 3 SE +/- 0.000, N = 3 0.159 0.159
OpenBenchmarking.org Seconds, Fewer Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Isoneutral Mixing Core i5 12400 i5 12400 0.2099 0.4198 0.6297 0.8396 1.0495 SE +/- 0.000, N = 3 SE +/- 0.000, N = 3 0.933 0.933
OpenBenchmarking.org Seconds, Fewer Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State Core i5 12400 i5 12400 0.3026 0.6052 0.9078 1.2104 1.513 SE +/- 0.000, N = 3 SE +/- 0.001, N = 3 1.344 1.345
OpenBenchmarking.org Seconds, Fewer Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing Core i5 12400 i5 12400 0.4318 0.8636 1.2954 1.7272 2.159 SE +/- 0.003, N = 3 SE +/- 0.003, N = 3 1.914 1.919
OpenBenchmarking.org Seconds, Fewer Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Equation of State i5 12400 Core i5 12400 0.0439 0.0878 0.1317 0.1756 0.2195 SE +/- 0.000, N = 3 SE +/- 0.000, N = 3 0.194 0.195
OpenBenchmarking.org Seconds, Fewer Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Isoneutral Mixing i5 12400 Core i5 12400 0.3024 0.6048 0.9072 1.2096 1.512 SE +/- 0.003, N = 3 SE +/- 0.001, N = 3 1.343 1.344
OpenBenchmarking.org Seconds, Fewer Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Isoneutral Mixing Core i5 12400 i5 12400 0.3042 0.6084 0.9126 1.2168 1.521 SE +/- 0.006, N = 3 SE +/- 0.011, N = 3 1.339 1.352
OpenBenchmarking.org Seconds, Fewer Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of State i5 12400 Core i5 12400 0.0239 0.0478 0.0717 0.0956 0.1195 SE +/- 0.001, N = 3 SE +/- 0.001, N = 3 0.104 0.106
Chaos Group V-RAY This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org vsamples, More Is Better Chaos Group V-RAY 5 Mode: CPU Core i5 12400 i5 12400 2K 4K 6K 8K 10K SE +/- 7.75, N = 3 SE +/- 15.51, N = 3 9626 9583
OpenCV This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
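OpenCV's built-in performance tests are compiled C++ binaries (the opencv_perf_* set); for orientation, a hedged Python sketch of a comparable DNN-module timing is below, with the model path and input size as placeholders.

```python
# Hedged sketch: time one forward pass through OpenCV's DNN module from Python.
# "model.onnx" and the 224x224 input are placeholders, not the perf test's own.
import time
import cv2
import numpy as np

net = cv2.dnn.readNetFromONNX("model.onnx")  # placeholder model file
blob = cv2.dnn.blobFromImage(np.zeros((224, 224, 3), np.uint8), 1 / 255.0, (224, 224))
net.setInput(blob)

start = time.perf_counter()
net.forward()
print("forward pass:", (time.perf_counter() - start) * 1000.0, "ms")
```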
OpenBenchmarking.org ms, Fewer Is Better OpenCV 4.5.4 Test: Object Detection Core i5 12400 i5 12400 7K 14K 21K 28K 35K SE +/- 307.61, N = 15 SE +/- 292.67, N = 15 32761 32807 1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared
OpenBenchmarking.org ms, Fewer Is Better OpenCV 4.5.4 Test: DNN - Deep Neural Network Core i5 12400 i5 12400 3K 6K 9K 12K 15K SE +/- 316.94, N = 15 SE +/- 853.64, N = 12 10955 11945 1. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x12 - Thermald 2.4.6
Java Notes: OpenJDK Runtime Environment (build 11.0.13+8-Ubuntu-0ubuntu1.21.10)
Python Notes: Python 3.9.7
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

The above system notes apply to both runs (Core i5 12400 and i5 12400), which used identical hardware and software.

Core i5 12400: Testing initiated at 6 January 2022 19:14 by user pts.
i5 12400: Testing initiated at 7 January 2022 04:16 by user pts.