Tests for a future article. Intel Xeon E-2336 testing with an ASRockRack E3C252D4U (1.22 BIOS) and ASPEED graphics on Ubuntu 22.04 via the Phoronix Test Suite.
a
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x57 - Thermald 2.4.9
Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
b c
Processor: Intel Xeon E-2388G @ 3.20GHz (8 Cores / 16 Threads), Motherboard: ASRockRack E3C252D4U (1.22 BIOS), Chipset: Intel Tiger Lake-H, Memory: 64GB, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS, Graphics: Intel RocketLake-S [UHD], Monitor: VA2431, Network: 2 x Intel I210
OS: Ubuntu 22.04, Kernel: 6.2.0-26-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1080
d e
Processor: Intel Xeon E-2336 @ 2.90GHz (6 Cores / 12 Threads), Motherboard: ASRockRack E3C252D4U (1.22 BIOS), Chipset: Intel Tiger Lake-H, Memory: 64GB, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS, Graphics: ASPEED, Network: 2 x Intel I210
OS: Ubuntu 22.04, Kernel: 6.2.0-26-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1024x768
HeFFTe - Highly Efficient FFT for Exascale HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options, currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GFLOP/s, More Is Better HeFFTe - Highly Efficient FFT for Exascale 2.3 Test: c2c - Backend: FFTW - Precision: double - X Y Z: 128 c a b e d 1.1906 2.3812 3.5718 4.7624 5.953 4.25222 4.25314 4.25839 5.28367 5.29165 1. (CXX) g++ options: -O3
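The c2c / FFTW / double / 128 configuration above is a three-dimensional complex-to-complex transform on a 128³ grid. HeFFTe itself layers MPI domain decomposition on top of a node-level FFT backend, which is not reproduced here; as a rough single-node sketch of the underlying workload using the FFTW backend directly (an illustration, not HeFFTe's own API):

```cpp
// Minimal single-node sketch of a 128^3 complex-to-complex double-precision FFT
// using FFTW directly. HeFFTe adds MPI domain decomposition on top of such a
// backend; this only illustrates the per-node transform.
// Build with: g++ -O3 fft128.cpp -lfftw3 -lm
#include <fftw3.h>
#include <cstdio>

int main() {
    const int n = 128;                       // grid points per dimension
    const long total = (long)n * n * n;      // 128^3 complex elements

    fftw_complex* in  = fftw_alloc_complex(total);
    fftw_complex* out = fftw_alloc_complex(total);

    // Plan a forward 3-D transform; FFTW_MEASURE times candidate plans.
    fftw_plan plan = fftw_plan_dft_3d(n, n, n, in, out,
                                      FFTW_FORWARD, FFTW_MEASURE);

    // Fill the input after planning (FFTW_MEASURE may overwrite the arrays).
    for (long i = 0; i < total; ++i) {
        in[i][0] = (double)(i % 17);   // real part
        in[i][1] = 0.0;                // imaginary part
    }

    fftw_execute(plan);                // the kernel a benchmark would time in a loop

    std::printf("out[0] = (%f, %f)\n", out[0][0], out[0][1]);

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    return 0;
}
```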
libxsmm Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GFLOPS/s, More Is Better libxsmm 2-1.17-3645 M N K: 64 b c d a e 20 40 60 80 100 109.4 109.5 110.0 110.2 110.4 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -pedantic -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2
OpenBenchmarking.org GFLOPS/s, More Is Better libxsmm 2-1.17-3645 M N K: 32 b c a d e 13 26 39 52 65 55.4 55.4 55.8 56.4 56.6 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -pedantic -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2
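For context on the M N K parameters: each run dispatches small matrix multiplications of the stated dimensions, and a single M=N=K=32 multiply amounts to 2·M·N·K = 65,536 floating-point operations, which is how the GFLOPS figure is derived. Below is a plain, deliberately naive sketch of that kernel shape; libxsmm itself JIT-generates vectorized kernels and its API is not shown here.

```cpp
// Naive M=N=K small GEMM, the workload shape behind the libxsmm "M N K: 32"
// result. This is only an illustration of the arithmetic being counted, not
// libxsmm's implementation. Build with: g++ -O3 gemm32.cpp
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const int M = 32, N = 32, K = 32;
    const long reps = 200000;                       // repeat to get a measurable time
    std::vector<double> A(M * K, 1.0), B(K * N, 2.0), C(M * N, 0.0);

    auto t0 = std::chrono::steady_clock::now();
    for (long r = 0; r < reps; ++r) {
        for (int i = 0; i < M; ++i)
            for (int j = 0; j < N; ++j) {
                double acc = C[i * N + j];
                for (int k = 0; k < K; ++k)
                    acc += A[i * K + k] * B[k * N + j];
                C[i * N + j] = acc;
            }
    }
    auto t1 = std::chrono::steady_clock::now();
    double sec = std::chrono::duration<double>(t1 - t0).count();

    // 2*M*N*K flops per multiply: one multiply plus one add per inner iteration.
    double gflops = 2.0 * M * N * K * reps / sec / 1e9;
    std::printf("%.2f GFLOPS (C[0]=%f)\n", gflops, C[0]);
    return 0;
}
```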
OpenBenchmarking.org GFLOP/s, More Is Better HeFFTe - Highly Efficient FFT for Exascale 2.3 Test: c2c - Backend: Stock - Precision: double - X Y Z: 128 a c b e d 1.1826 2.3652 3.5478 4.7304 5.913 4.22813 4.23098 4.23400 5.25237 5.25616 1. (CXX) g++ options: -O3
SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and uses a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Layered Halfspace e d b a c 70 140 210 280 350 314.22 308.79 240.62 238.30 232.39 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Tomographic Model d e c b a 30 60 90 120 150 117.92 117.07 90.71 88.22 87.21 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Mount St. Helens d e a b c 30 60 90 120 150 119.62 116.95 91.74 90.65 89.10 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Homogeneous Halfspace d e a b c 30 60 90 120 150 149.67 148.18 113.86 111.56 111.02 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
OpenBenchmarking.org Seconds, Fewer Is Better SPECFEM3D 4.0 Model: Water-layered Halfspace d e b c a 60 120 180 240 300 292.05 291.60 234.44 233.71 231.64 1. (F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
OpenBenchmarking.org Seconds, Fewer Is Better Z3 Theorem Prover 4.12.1 SMT File: 1.smt2 d e c b a 6 12 18 24 30 27.26 27.07 24.96 24.96 24.80 1. (CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC
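The Z3 numbers above time the prover on one of the test profile's bundled SMT2 files. For readers unfamiliar with the library, here is a minimal sketch of its C++ API, with made-up constraints standing in for the benchmark's SMT2 input:

```cpp
// Minimal Z3 C++ API example: assert two integer constraints and ask for a model.
// The constraints are hypothetical, only illustrating the solver workflow.
// Build with: g++ -std=c++17 z3demo.cpp -lz3
#include <z3++.h>
#include <iostream>

int main() {
    z3::context ctx;
    z3::solver solver(ctx);

    z3::expr x = ctx.int_const("x");
    z3::expr y = ctx.int_const("y");

    solver.add(x + y == 10);     // example constraint
    solver.add(x > y);           // example constraint

    if (solver.check() == z3::sat) {
        z3::model m = solver.get_model();
        std::cout << "x = " << m.eval(x) << ", y = " << m.eval(y) << "\n";
    } else {
        std::cout << "unsat\n";
    }
    return 0;
}
```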
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: CPU Cache e d b a 600K 1200K 1800K 2400K 3000K 2101251.45 2102565.10 2662853.59 2781513.07 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Wide Vector Math d e b a 110K 220K 330K 440K 550K 382456.62 385865.21 534062.66 535128.11 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Context Switching e d b a 800K 1600K 2400K 3200K 4000K 2527236.02 2531776.40 3638488.52 3654011.84 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Glibc C String Functions d e b a 1.5M 3M 4.5M 6M 7.5M 4792580.04 4856545.20 7009619.95 7013916.66 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: System V Message Passing d e b a 3M 6M 9M 12M 15M 9811611.36 9823844.73 13127961.92 13144909.56 1. (CXX) g++ options: -O2 -std=gnu99 -lc
Geekbench This is a benchmark of Geekbench 6 Pro. The test profile automates the execution of Geekbench 6 under the Phoronix Test Suite, assuming you have a valid license key for Geekbench 6 Pro. THIS TEST PROFILE WILL NOT WORK WITHOUT A VALID GEEKBENCH 6 PRO LICENSE KEY; test automation / CLI support is only available with the paid version of Geekbench. Learn more via the OpenBenchmarking.org test page.
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.36 VGR Performance Metric e d b a 30K 60K 90K 120K 150K 88988 89349 129573 130412 1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Fast d e a b 30 60 90 120 150 115.97 116.02 154.04 154.35 1. (CXX) g++ options: -O3 -flto -pthread
OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100, Highest Compression e d c a b 0.9653 1.9306 2.8959 3.8612 4.8265 4.03 4.05 4.28 4.29 4.29 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100, Lossless, Highest Compression d e a b c 0.162 0.324 0.486 0.648 0.81 0.64 0.64 0.71 0.71 0.72 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
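The two WebP settings benchmarked above map onto libwebp's lossy quality factor and its lossless mode. A minimal sketch using libwebp's simple one-call encode API on a synthetic image follows; note that the "Highest Compression" effort setting goes through the library's advanced WebPConfig API and is not shown here.

```cpp
// Minimal libwebp encode sketch covering the two settings benchmarked above:
// lossy at quality 100 and lossless. The image is a synthetic gradient; the
// test profile encodes a real sample image instead.
// Build with: g++ -O2 webp_demo.cpp -lwebp
#include <webp/encode.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const int width = 512, height = 512, stride = width * 3;
    std::vector<uint8_t> rgb(stride * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            rgb[y * stride + 3 * x + 0] = (uint8_t)x;        // R
            rgb[y * stride + 3 * x + 1] = (uint8_t)y;        // G
            rgb[y * stride + 3 * x + 2] = (uint8_t)(x ^ y);  // B
        }

    uint8_t* out = nullptr;
    // Lossy encode at quality factor 100 (the "Quality 100" setting).
    size_t lossy_size = WebPEncodeRGB(rgb.data(), width, height, stride, 100.0f, &out);
    std::printf("lossy q100: %zu bytes\n", lossy_size);
    WebPFree(out);

    // Lossless encode (the "Lossless" setting).
    size_t lossless_size = WebPEncodeLosslessRGB(rgb.data(), width, height, stride, &out);
    std::printf("lossless: %zu bytes\n", lossless_size);
    WebPFree(out);
    return 0;
}
```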
SecureMark SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org marks, More Is Better SecureMark 1.0.4 Benchmark: SecureMark-TLS e d b a c 70K 140K 210K 280K 350K 320661 321452 341164 341310 341479 1. (CC) gcc options: -pedantic -O3
Xmrig Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org H/s, More Is Better Xmrig 6.18.1 Variant: Monero - Hash Count: 1M d e c b a 400 800 1200 1600 2000 1792.1 1798.3 1933.8 1934.8 1946.8 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.18.1 Variant: Wownero - Hash Count: 1M d e b a c 700 1400 2100 2800 3500 2773.3 2809.5 3349.9 3410.3 3418.8 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
miniBUDE MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GFInst/s, More Is Better miniBUDE 20210901 Implementation: OpenMP - Input Deck: BM1 d e c b a 80 160 240 320 400 259.43 260.86 377.48 380.11 381.97 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
OpenBenchmarking.org Billion Interactions/s, More Is Better miniBUDE 20210901 Implementation: OpenMP - Input Deck: BM1 d e c b a 4 8 12 16 20 10.38 10.43 15.10 15.20 15.28 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
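miniBUDE's throughput comes from scoring many independent ligand poses across OpenMP threads. The sketch below is not miniBUDE's kernel, only a generic illustration of the OpenMP parallel-for-with-reduction pattern that the OpenMP implementation referred to above relies on, built with the same -fopenmp flag:

```cpp
// Generic OpenMP parallel-for with a reduction, illustrating the kind of
// thread-level parallelism an OpenMP mini-app depends on. This is not
// miniBUDE's kernel, just the pattern: score many independent items and
// reduce the results. Build with: g++ -O3 -fopenmp omp_demo.cpp
#include <omp.h>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1000000;
    std::vector<double> poses(n);
    for (int i = 0; i < n; ++i) poses[i] = 0.001 * i;

    double total = 0.0;
    #pragma omp parallel for reduction(+ : total) schedule(static)
    for (int i = 0; i < n; ++i) {
        // Stand-in "scoring" work for each independent item.
        total += std::sin(poses[i]) * std::cos(poses[i]);
    }

    std::printf("threads=%d, total=%f\n", omp_get_max_threads(), total);
    return 0;
}
```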
OpenRadioss OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. The solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Bumper Beam e d c b a 60 120 180 240 300 267.00 264.81 211.84 211.14 210.90
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 32 - Model: AlexNet d e b a 20 40 60 80 100 78.27 78.48 96.85 96.89
OpenCV This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better OpenCV 4.7 Test: Core e d a b 13K 26K 39K 52K 65K 59340 58795 55345 55182 1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt
OpenBenchmarking.org ms, Fewer Is Better OpenCV 4.7 Test: Stitching e d a b 60K 120K 180K 240K 300K 290774 290303 269695 269533 1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt
OpenBenchmarking.org ms, Fewer Is Better OpenCV 4.7 Test: Image Processing d e b a 20K 40K 60K 80K 100K 100448 99596 80783 80674 1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt
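OpenCV's built-in performance suite runs many per-function micro-tests inside the core, stitching, and imgproc modules. As a small point of reference for the kind of primitive the "Image Processing" category exercises, here is a minimal example of the library's C++ API (file names are placeholders):

```cpp
// Minimal OpenCV C++ example: blur and edge-detect an image, representative of
// the primitives covered by the imgproc performance tests. "input.png" is a
// placeholder path. Build with:
//   g++ -O2 cv_demo.cpp $(pkg-config --cflags --libs opencv4)
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) {
        std::fprintf(stderr, "could not read input.png\n");
        return 1;
    }

    cv::Mat blurred, edges;
    cv::GaussianBlur(img, blurred, cv::Size(5, 5), 1.5);  // smoothing
    cv::Canny(blurred, edges, 50.0, 150.0);               // edge detection

    cv::imwrite("edges.png", edges);
    std::printf("processed %dx%d image\n", img.cols, img.rows);
    return 0;
}
```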
NCNN NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: mobilenet e d b a 4 8 12 16 20 SE +/- 0.06, N = 3 15.90 15.85 14.02 14.02 MIN: 15.79 / MAX: 17.14 MIN: 15.74 / MAX: 17.68 MIN: 13.9 / MAX: 15.7 MIN: 13.85 / MAX: 16.02 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU-v2-v2 - Model: mobilenet-v2 e d b a 0.8955 1.791 2.6865 3.582 4.4775 SE +/- 0.00, N = 3 3.98 3.96 3.54 3.52 MIN: 3.88 / MAX: 4.25 MIN: 3.84 / MAX: 4.3 MIN: 3.43 / MAX: 3.83 MIN: 3.41 / MAX: 3.81 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU-v3-v3 - Model: mobilenet-v3 e d b a 0.7088 1.4176 2.1264 2.8352 3.544 SE +/- 0.01, N = 3 3.15 3.14 2.85 2.84 MIN: 3.12 / MAX: 3.41 MIN: 3.12 / MAX: 3.34 MIN: 2.82 / MAX: 3.21 MIN: 2.8 / MAX: 3.29 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: shufflenet-v2 d e b a 0.5355 1.071 1.6065 2.142 2.6775 SE +/- 0.01, N = 3 2.38 2.32 2.18 2.18 MIN: 2.28 / MAX: 10.72 MIN: 2.29 / MAX: 2.54 MIN: 2.16 / MAX: 2.43 MIN: 2.15 / MAX: 3.92 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: mnasnet e d b a 0.621 1.242 1.863 2.484 3.105 SE +/- 0.00, N = 3 2.76 2.76 2.54 2.54 MIN: 2.72 / MAX: 2.99 MIN: 2.72 / MAX: 3.22 MIN: 2.51 / MAX: 2.66 MIN: 2.5 / MAX: 2.86 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: efficientnet-b0 e d b a 1.1948 2.3896 3.5844 4.7792 5.974 SE +/- 0.01, N = 3 5.31 5.31 4.30 4.29 MIN: 4.87 / MAX: 5.72 MIN: 4.86 / MAX: 6.33 MIN: 4.26 / MAX: 4.55 MIN: 4.24 / MAX: 5.21 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: blazeface e d a b 0.189 0.378 0.567 0.756 0.945 SE +/- 0.02, N = 3 0.84 0.82 0.79 0.75 MIN: 0.81 / MAX: 1.15 MIN: 0.8 / MAX: 0.88 MIN: 0.74 / MAX: 1.14 MAX: 1.13 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: googlenet d e a b 3 6 9 12 15 SE +/- 0.13, N = 3 11.55 11.51 10.30 10.07 MIN: 11.39 / MAX: 20.77 MIN: 11.39 / MAX: 11.73 MIN: 9.97 / MAX: 10.91 MIN: 9.97 / MAX: 10.38 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: vgg16 d e b a 14 28 42 56 70 SE +/- 0.16, N = 3 61.38 61.36 59.88 59.79 MIN: 61.23 / MAX: 62.51 MIN: 61.17 / MAX: 61.58 MIN: 59.57 / MAX: 60.2 MIN: 59.11 / MAX: 68.65 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: resnet18 e d a b 3 6 9 12 15 SE +/- 0.09, N = 3 9.05 9.03 8.01 7.82 MIN: 8.91 / MAX: 18.59 MIN: 8.94 / MAX: 9.21 MIN: 7.78 / MAX: 16.95 MIN: 7.75 / MAX: 8.12 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: alexnet d e a b 2 4 6 8 10 SE +/- 0.18, N = 3 8.55 8.54 8.06 7.85 MIN: 8.47 / MAX: 10.27 MIN: 8.48 / MAX: 8.73 MIN: 7.76 / MAX: 8.77 MIN: 7.76 / MAX: 8 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: resnet50 e d a b 5 10 15 20 25 SE +/- 0.11, N = 3 20.67 20.65 16.49 16.36 MIN: 20.38 / MAX: 30 MIN: 20.52 / MAX: 21.15 MIN: 16.29 / MAX: 17.14 MIN: 16.25 / MAX: 16.67 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: yolov4-tiny e d b a 6 12 18 24 30 SE +/- 0.04, N = 3 26.40 26.39 23.81 23.76 MIN: 26.26 / MAX: 34.64 MIN: 26.24 / MAX: 35.28 MIN: 23.64 / MAX: 31.75 MIN: 23.59 / MAX: 24.18 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: squeezenet_ssd e d b a 3 6 9 12 15 SE +/- 0.05, N = 3 11.96 11.90 10.41 10.34 MIN: 11.84 / MAX: 13.74 MIN: 11.8 / MAX: 12.29 MIN: 10.31 / MAX: 10.73 MIN: 10.17 / MAX: 19.27 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: regnety_400m e d a b 2 4 6 8 10 SE +/- 0.09, N = 3 7.78 7.72 6.82 6.68 MIN: 7.67 / MAX: 8.12 MIN: 7.61 / MAX: 8.99 MIN: 6.6 / MAX: 7.62 MIN: 6.64 / MAX: 6.98 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: vision_transformer e d b a 20 40 60 80 100 SE +/- 0.12, N = 3 90.07 89.85 72.07 72.05 MIN: 89.54 / MAX: 92.39 MIN: 89.35 / MAX: 91.15 MIN: 71.66 / MAX: 80.49 MIN: 70.65 / MAX: 99.29 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: FastestDet e d a b 0.9 1.8 2.7 3.6 4.5 SE +/- 0.03, N = 3 4.00 3.97 3.52 3.51 MIN: 3.94 / MAX: 4.38 MIN: 3.9 / MAX: 4.21 MIN: 3.35 / MAX: 3.83 MIN: 3.37 / MAX: 3.78 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
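Each NCNN result above is the average forward-pass latency for one exported model. The following is a minimal sketch of the inference flow the benchmark drives, assuming placeholder .param/.bin files and blob names; the real names depend on the exported model.

```cpp
// Minimal NCNN C++ inference sketch. The .param/.bin files and the "input"/
// "output" blob names are placeholders that depend on the exported model
// (the benchmark ships its own mobilenet/resnet/etc. models).
// Build roughly as: g++ -O3 ncnn_demo.cpp -lncnn -fopenmp
#include <ncnn/net.h>   // adjust the include path to your ncnn install
#include <cstdio>
#include <vector>

int main() {
    ncnn::Net net;
    if (net.load_param("model.param") != 0 || net.load_model("model.bin") != 0) {
        std::fprintf(stderr, "failed to load model files\n");
        return 1;
    }

    // Fake 224x224 BGR image data standing in for a real input frame.
    const int w = 224, h = 224;
    std::vector<unsigned char> bgr(w * h * 3, 128);
    ncnn::Mat in = ncnn::Mat::from_pixels(bgr.data(), ncnn::Mat::PIXEL_BGR, w, h);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("input", in);          // blob name depends on the model
    ncnn::Mat out;
    ex.extract("output", out);      // blob name depends on the model

    std::printf("output blob: %d x %d x %d\n", out.w, out.h, out.c);
    return 0;
}
```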
GROMACS The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Ns Per Day, More Is Better GROMACS 2023 Implementation: MPI CPU - Input: water_GMX50_bare e d a b 0.1782 0.3564 0.5346 0.7128 0.891 0.667 0.671 0.788 0.792 1. (CXX) g++ options: -O3
NAMD NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org days/ns, Fewer Is Better NAMD 2.14 ATPase Simulation - 327,506 Atoms d e c b a 0.5972 1.1944 1.7916 2.3888 2.986 2.65403 2.64431 1.84863 1.84751 1.84476
OpenVINO This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Detection FP16 - Device: CPU e d b a 0.3015 0.603 0.9045 1.206 1.5075 0.98 0.99 1.33 1.34 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Detection FP16 - Device: CPU d e b a 900 1800 2700 3600 4500 4033.28 4021.01 2976.94 2973.15 MIN: 3327.28 / MAX: 4221.11 MIN: 3277.03 / MAX: 4252.51 MIN: 2489.57 / MAX: 3136.21 MIN: 2488.55 / MAX: 3154.09 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Face Detection FP16-INT8 - Device: CPU d e b a 3 6 9 12 15 6.51 6.54 9.14 9.16 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Face Detection FP16-INT8 - Device: CPU d e b a 130 260 390 520 650 612.57 610.66 436.28 435.99 MIN: 320.73 / MAX: 641.45 MIN: 445.57 / MAX: 645.59 MIN: 352.34 / MAX: 460.22 MIN: 252.02 / MAX: 599.26 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16-INT8 - Device: CPU e d a b 90 180 270 360 450 309.25 311.26 415.14 419.36 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16-INT8 - Device: CPU e d a b 3 6 9 12 15 12.92 12.83 9.62 9.53 MIN: 7.1 / MAX: 29.13 MIN: 5.69 / MAX: 29.68 MIN: 6.57 / MAX: 24.31 MIN: 5.85 / MAX: 24.48 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Machine Translation EN To DE FP16 - Device: CPU d e b a 6 12 18 24 30 20.62 20.64 27.33 27.40 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Machine Translation EN To DE FP16 - Device: CPU d e b a 40 80 120 160 200 193.94 193.60 146.26 145.90 MIN: 150.32 / MAX: 215.81 MIN: 122.33 / MAX: 206.18 MIN: 121.53 / MAX: 161.59 MIN: 89.12 / MAX: 161.2 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16-INT8 - Device: CPU d e a b 200 400 600 800 1000 647.74 648.80 907.43 914.26 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16-INT8 - Device: CPU d e a b 3 6 9 12 15 9.24 9.23 8.79 8.73 MIN: 4.83 / MAX: 22.73 MIN: 4.94 / MAX: 18.86 MIN: 4.51 / MAX: 68.51 MIN: 4.62 / MAX: 37 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Vehicle Bike Detection FP16 - Device: CPU e d b a 50 100 150 200 250 171.38 172.78 213.56 214.40 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Vehicle Bike Detection FP16 - Device: CPU e d b a 6 12 18 24 30 23.32 23.13 18.71 18.64 MIN: 10.72 / MAX: 42.07 MIN: 10.66 / MAX: 43.31 MIN: 13.18 / MAX: 24.78 MIN: 11.82 / MAX: 40.83 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU e d b a 2K 4K 6K 8K 10K 5049.65 5067.31 8117.06 8184.72 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU e d b a 0.2633 0.5266 0.7899 1.0532 1.3165 1.17 1.16 0.98 0.97 MIN: 0.64 / MAX: 15.77 MIN: 0.53 / MAX: 14.34 MIN: 0.48 / MAX: 13.47 MIN: 0.5 / MAX: 13.7 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
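The OpenVINO throughput and latency pairs above come from its built-in benchmarking harness. Below is a minimal sketch of the underlying API 2.0 flow, read an IR model, compile it for the CPU device, and run one inference, with "model.xml" standing in for the IR files the benchmark downloads and a float32 input assumed:

```cpp
// Minimal OpenVINO 2.x (API 2.0) C++ sketch of the flow the benchmark automates:
// read a model, compile it for CPU, run one inference. "model.xml" is a
// placeholder for an IR file such as the detection models listed above, and the
// input is assumed to be a single float32 tensor.
// Build with: g++ -O2 ov_demo.cpp -lopenvino
#include <openvino/openvino.hpp>
#include <algorithm>
#include <cstdio>
#include <memory>

int main() {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");
    ov::CompiledModel compiled = core.compile_model(model, "CPU");

    ov::InferRequest request = compiled.create_infer_request();

    // Fill the (single) input tensor with zeros just to have valid data.
    ov::Tensor input = request.get_input_tensor();
    std::fill_n(input.data<float>(), input.get_size(), 0.0f);

    request.infer();   // the step the FPS/latency numbers time in a loop

    ov::Tensor output = request.get_output_tensor();
    std::printf("output elements: %zu\n", output.get_size());
    return 0;
}
```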
ASKAP ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), along with some previous ASKAP benchmarks for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Iterations Per Second, More Is Better ASKAP 1.0 Test: Hogbom Clean OpenMP d e b a 40 80 120 160 200 159.49 160.26 164.75 165.02 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.org Million Grid Points Per Second, More Is Better ASKAP 1.0 Test: tConvolve MT - Gridding b a e d 200 400 600 800 1000 964.26 965.57 976.79 978.58 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.org Million Grid Points Per Second, More Is Better ASKAP 1.0 Test: tConvolve MT - Degridding e d a b 300 600 900 1200 1500 1338.53 1340.21 1353.70 1353.70 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.org Mpix/sec, More Is Better ASKAP 1.0 Test: tConvolve MPI - Degridding d e b a 300 600 900 1200 1500 1178.42 1192.71 1474.13 1482.46 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.org Mpix/sec, More Is Better ASKAP 1.0 Test: tConvolve MPI - Gridding d e a b 300 600 900 1200 1500 1222.34 1229.98 1433.86 1433.86 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.org Million Grid Points Per Second, More Is Better ASKAP 1.0 Test: tConvolve OpenMP - Gridding d e a b 200 400 600 800 1000 971.74 975.30 1044.14 1052.40 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.org Million Grid Points Per Second, More Is Better ASKAP 1.0 Test: tConvolve OpenMP - Degridding d e a b 400 800 1200 1600 2000 1613.67 1643.56 1763.28 1799.03 1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
Algebraic Multi-Grid Benchmark AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Figure Of Merit, More Is Better Algebraic Multi-Grid Benchmark 1.2 c b a d e 40M 80M 120M 160M 200M 190589800 190611300 192619900 192808700 192950100 1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi
OpenFOAM OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Mesh Time d e b c a 15 30 45 60 75 67.40 67.38 58.53 58.48 58.46 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lgenericPatchFields -lmeshTools -lsampling -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Execution Time d e b c a 160 320 480 640 800 749.90 749.44 710.27 710.07 708.09 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lgenericPatchFields -lmeshTools -lsampling -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Medium Mesh Size - Mesh Time d e b c a 110 220 330 440 550 496.74 496.10 441.61 441.50 439.65 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lgenericPatchFields -lmeshTools -lsampling -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Medium Mesh Size - Execution Time d e c b a 1300 2600 3900 5200 6500 5889.87 5877.55 5770.77 5770.72 5743.30 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lgenericPatchFields -lmeshTools -lsampling -lOpenFOAM -ldl -lm
QMCPACK QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source, production-level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Total Execution Time - Seconds, Fewer Is Better QMCPACK 3.16 Input: Li2_STO_ae d e a b c 90 180 270 360 450 411.70 408.56 301.65 300.84 296.75 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Xcompact3d Incompact3d 2021-03-11 Input: input.i3d 129 Cells Per Direction d e b c a 11 22 33 44 55 48.47 46.80 44.90 44.89 44.70 1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
OpenBenchmarking.org MB/s, More Is Better Zstd Compression 1.5.4 Compression Level: 19, Long Mode - Decompression Speed d e b a c 400 800 1200 1600 2000 1507.9 1514.3 1636.5 1638.7 1643.8 1. (CC) gcc options: -O3 -pthread -lz -llzma
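The Zstd result above decompresses data that was compressed at level 19 with long mode (long-distance matching) enabled. A minimal sketch of that configuration via zstd's advanced compression-context API, using a synthetic buffer in place of the test corpus:

```cpp
// Minimal zstd sketch of the "level 19, long mode" configuration benchmarked
// above: enable long-distance matching on the compression context, compress,
// then decompress (the decompression side is what the MB/s figure measures).
// Build with: g++ -O2 zstd_demo.cpp -lzstd
#include <zstd.h>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    // Synthetic, fairly compressible input; the test profile uses a real corpus.
    std::string src;
    for (int i = 0; i < 100000; ++i) src += "benchmark payload line\n";

    ZSTD_CCtx* cctx = ZSTD_createCCtx();
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_enableLongDistanceMatching, 1); // "long mode"

    std::vector<char> dst(ZSTD_compressBound(src.size()));
    size_t csize = ZSTD_compress2(cctx, dst.data(), dst.size(),
                                  src.data(), src.size());
    if (ZSTD_isError(csize)) { std::fprintf(stderr, "compress failed\n"); return 1; }

    std::vector<char> round(src.size());
    size_t dsize = ZSTD_decompress(round.data(), round.size(), dst.data(), csize);
    if (ZSTD_isError(dsize)) { std::fprintf(stderr, "decompress failed\n"); return 1; }

    std::printf("%zu -> %zu bytes, round-trip %zu bytes\n",
                src.size(), csize, dsize);
    ZSTD_freeCCtx(cctx);
    return 0;
}
```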
Cpuminer-Opt Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Magi e d a b c 70 140 210 280 350 233.12 237.94 334.52 334.68 334.68 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Deepcoin d e a c b 2K 4K 6K 8K 10K 5748.68 5850.18 8128.92 8152.24 8215.17 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Ringcoin e d c a b 400 800 1200 1600 2000 1190.30 1203.63 1683.88 1685.16 1686.70 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Blake-2 S d e a b c 130K 260K 390K 520K 650K 417080 417510 592320 593650 594430 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Garlicoin e d b a c 800 1600 2400 3200 4000 2493.12 2610.22 3273.74 3305.88 3516.83 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Myriad-Groestl d e c b a 6K 12K 18K 24K 30K 19530 19560 27240 27390 27670 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: LBC, LBRY Credits e d a b c 12K 24K 36K 48K 60K 38470 39570 54340 54340 54450 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Quad SHA-256, Pyrite d e b c a 30K 60K 90K 120K 150K 87680 88000 123930 124050 127140 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Triple SHA-256, Onecoin e d a c b 40K 80K 120K 160K 200K 127470 127480 177750 180730 181190 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Kvazaar This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better Kvazaar 2.2 Video Input: Bosphorus 4K - Video Preset: Medium e d a c b 2 4 6 8 10 4.76 4.77 6.72 6.74 6.79 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.org Frames Per Second, More Is Better Kvazaar 2.2 Video Input: Bosphorus 4K - Video Preset: Very Fast d e a c b 4 8 12 16 20 12.11 12.12 16.91 17.03 17.04 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.org Frames Per Second, More Is Better Kvazaar 2.2 Video Input: Bosphorus 4K - Video Preset: Super Fast d e c a b 5 10 15 20 25 14.96 14.99 21.05 21.14 21.23 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.org Frames Per Second, More Is Better Kvazaar 2.2 Video Input: Bosphorus 4K - Video Preset: Ultra Fast e d c b a 7 14 21 28 35 20.89 20.92 29.31 29.35 29.37 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: VMAF Optimized - Input: Bosphorus 4K e d a b c 10 20 30 40 50 36.22 37.09 42.65 42.84 43.07 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K e d c b a 11 22 33 44 55 40.10 40.38 48.18 48.26 48.42 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: Visual Quality Optimized - Input: Bosphorus 4K e d c a b 9 18 27 36 45 30.41 30.42 39.71 39.72 39.73 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.6 Encoder Mode: Preset 4 - Input: Bosphorus 4K e d b a c 0.709 1.418 2.127 2.836 3.545 2.328 2.332 3.126 3.141 3.151 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better x265 3.4 Video Input: Bosphorus 1080p d e a c b 13 26 39 52 65 50.06 50.47 58.45 59.70 59.97 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 1 - Input: Bosphorus 4K d e a b c 0.3668 0.7336 1.1004 1.4672 1.834 1.12 1.12 1.62 1.62 1.63 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 7 - Input: Bosphorus 4K e d b a c 8 16 24 32 40 23.99 24.34 33.09 33.42 33.42 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 10 - Input: Bosphorus 4K e d c a b 15 30 45 60 75 50.49 50.63 65.60 65.83 65.85 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Fishy Cat - Compute: CPU-Only e d a b 70 140 210 280 350 SE +/- 0.15, N = 3 312.98 312.67 215.45 215.41
FFmpeg This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Live d e c b a 10 20 30 40 50 45.84 45.61 38.25 38.17 38.05 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Live d e c b a 30 60 90 120 150 110.17 110.72 132.03 132.30 132.73 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Upload e d c b a 30 60 90 120 150 138.67 138.66 113.77 113.53 113.49 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Upload d e c b a 5 10 15 20 25 18.21 18.21 22.19 22.24 22.25 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Platform d e b a c 40 80 120 160 200 204.50 204.47 170.46 170.20 169.84 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Platform d e b a c 10 20 30 40 50 37.04 37.05 44.44 44.51 44.60 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Video On Demand d e a b c 40 80 120 160 200 204.80 204.68 170.63 170.37 170.09 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.0 Encoder: libx265 - Scenario: Video On Demand d e a b c 10 20 30 40 50 36.99 37.01 44.39 44.46 44.54 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Frames Per Second, More Is Better uvg266 0.4.1 Video Input: Bosphorus 4K - Video Preset: Medium e d a c b 1.0733 2.1466 3.2199 4.2932 5.3665 3.32 3.34 4.75 4.76 4.77
OpenBenchmarking.org Frames Per Second, More Is Better uvg266 0.4.1 Video Input: Bosphorus 1080p - Video Preset: Super Fast d e c a b 20 40 60 80 100 50.03 50.14 74.48 74.64 74.90
OpenBenchmarking.org Frames Per Second, More Is Better uvg266 0.4.1 Video Input: Bosphorus 1080p - Video Preset: Ultra Fast e d c b a 20 40 60 80 100 61.59 62.01 93.03 95.58 96.33
VVenC VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 4K - Video Preset: Fast d e c b a 0.794 1.588 2.382 3.176 3.97 2.577 2.578 3.513 3.527 3.529 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 4K - Video Preset: Faster e d a b c 2 4 6 8 10 5.637 5.662 7.432 7.458 7.472 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 1080p - Video Preset: Fast e d a b c 3 6 9 12 15 8.306 8.313 11.094 11.236 11.257 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 1080p - Video Preset: Faster e d a c b 6 12 18 24 30 19.47 19.62 25.34 25.42 25.47 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenVKL OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Items / Sec, More Is Better OpenVKL 1.3.1 Benchmark: vklBenchmark ISPC d e a c b 30 60 90 120 150 117 117 154 154 155 MIN: 13 / MAX: 1922 MIN: 13 / MAX: 1914 MIN: 18 / MAX: 2431 MIN: 18 / MAX: 2421 MIN: 18 / MAX: 2419
OpenBenchmarking.org Items / Sec, More Is Better OpenVKL 1.3.1 Benchmark: vklBenchmark Scalar d e a b c 15 30 45 60 75 52 52 69 69 69 MIN: 6 / MAX: 989 MIN: 6 / MAX: 990 MIN: 8 / MAX: 1000 MIN: 8 / MAX: 1000 MIN: 8 / MAX: 1000
LuxCoreRender LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org M samples/sec, More Is Better LuxCoreRender 2.6 Scene: DLSC - Acceleration: CPU d e a c b 0.4118 0.8236 1.2354 1.6472 2.059 1.25 1.27 1.79 1.79 1.83 MIN: 1.16 / MAX: 1.53 MIN: 1.16 / MAX: 1.57 MIN: 1.66 / MAX: 2.06 MIN: 1.66 / MAX: 2.06 MIN: 1.68 / MAX: 2.12
OpenBenchmarking.org M samples/sec, More Is Better LuxCoreRender 2.6 Scene: Danish Mood - Acceleration: CPU e d b c a 0.261 0.522 0.783 1.044 1.305 0.63 0.66 1.08 1.09 1.16 MIN: 0.13 / MAX: 0.87 MIN: 0.14 / MAX: 0.9 MIN: 0.26 / MAX: 1.38 MIN: 0.29 / MAX: 1.39 MIN: 0.32 / MAX: 1.44
OpenBenchmarking.org M samples/sec, More Is Better LuxCoreRender 2.6 Scene: Orange Juice - Acceleration: CPU d e c a b 0.6413 1.2826 1.9239 2.5652 3.2065 1.97 1.97 2.84 2.85 2.85 MIN: 1.87 / MAX: 2.4 MIN: 1.87 / MAX: 2.4 MIN: 2.71 / MAX: 3.28 MIN: 2.71 / MAX: 3.3 MIN: 2.72 / MAX: 3.29
OpenBenchmarking.org M samples/sec, More Is Better LuxCoreRender 2.6 Scene: LuxCore Benchmark - Acceleration: CPU d e b c a 0.2925 0.585 0.8775 1.17 1.4625 0.78 0.78 1.29 1.29 1.30 MIN: 0.19 / MAX: 1.03 MIN: 0.19 / MAX: 1.01 MIN: 0.37 / MAX: 1.58 MIN: 0.37 / MAX: 1.59 MIN: 0.38 / MAX: 1.6
OpenBenchmarking.org M samples/sec, More Is Better LuxCoreRender 2.6 Scene: Rainbow Colors and Prism - Acceleration: CPU e d c a b 2 4 6 8 10 5.07 5.18 7.49 7.63 7.63 MIN: 4.62 / MAX: 6.1 MIN: 4.7 / MAX: 6.12 MIN: 6.78 / MAX: 8.19 MIN: 6.75 / MAX: 8.34 MIN: 6.88 / MAX: 8.28
OSPRay Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/ao/real_time d e a c b 0.8483 1.6966 2.5449 3.3932 4.2415 2.67678 2.67891 3.72924 3.74212 3.77037
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/scivis/real_time d e a c b 0.8454 1.6908 2.5362 3.3816 4.227 2.64427 2.66344 3.75234 3.75544 3.75720
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/pathtracer/real_time d e a c b 30 60 90 120 150 117.07 117.26 148.26 148.50 148.97
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/ao/real_time d e a c b 0.7271 1.4542 2.1813 2.9084 3.6355 2.22151 2.25929 3.17345 3.22055 3.23141
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/scivis/real_time e d a b c 0.7183 1.4366 2.1549 2.8732 3.5915 2.21786 2.22361 3.17488 3.18652 3.19263
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time e d c a b 0.8509 1.7018 2.5527 3.4036 4.2545 2.65809 2.66212 3.77000 3.77067 3.78168
OSPRay Studio Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer d e b a c 800 1600 2400 3200 4000 3894 3884 2704 2696 2691 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer e d c b a 1000 2000 3000 4000 5000 4708 4702 3264 3263 3256 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer e d b c a 14K 28K 42K 56K 70K 65391 65121 43011 42853 42834 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer e d a c b 30K 60K 90K 120K 150K 127177 126949 88765 88742 88710 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer d e a b c 20K 40K 60K 80K 100K 78324 78221 52071 52002 51903 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer e d a b c 30K 60K 90K 120K 150K 153814 153142 107002 106674 106444 1. (CXX) g++ options: -O3 -lm -ldl
Timed Wasmer Compilation This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Timed Wasmer Compilation 2.3 Time To Compile e d b c a 20 40 60 80 100 81.73 81.03 62.11 62.08 61.38 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 32 e d a b 12M 24M 36M 48M 60M 50829000 50850000 54005000 54017000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 57 d e a b 20M 40M 60M 80M 100M 81257000 81272000 86164000 86235000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 32 d e a b 40M 80M 120M 160M 200M 177370000 177750000 199620000 200200000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 57 e d a b 60M 120M 180M 240M 300M 227420000 232090000 260980000 265830000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 32 d e a b 70M 140M 210M 280M 350M 276670000 277180000 331140000 331500000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 57 e d a b 90M 180M 270M 360M 450M 299160000 299170000 409640000 419540000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 512 e d b a 5M 10M 15M 20M 25M 20269000 20307000 21593000 21614000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 32 e d a b 110M 220M 330M 440M 550M 361220000 361340000 502810000 513440000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 57 e d a b 90M 180M 270M 360M 450M 308330000 308360000 427490000 433120000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 512 e d a b 20M 40M 60M 80M 100M 71098000 71110000 79299000 79964000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 512 e d a b 30M 60M 90M 120M 150M 99730000 100700000 131760000 132110000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 512 d e a b 40M 80M 120M 160M 200M 115700000 115710000 162520000 165290000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 32 e d a b 20M 40M 60M 80M 100M SE +/- 90000.00, N = 3 98751000 98855000 104940000 105040000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 57 d e b a 30M 60M 90M 120M 150M SE +/- 57831.17, N = 3 136380000 136420000 144810000 147466667 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 512 d e a b 9M 18M 27M 36M 45M SE +/- 4630.81, N = 3 41039000 41172000 44031333 44040000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
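The Liquid-DSP parameters translate directly into arithmetic: for every 256-sample buffer, each output sample is a complex dot product over the stated number of filter taps. The sketch below is plain C++ rather than liquid-dsp's API and only illustrates what a "Buffer Length: 256, Filter Length: 57" run is measuring:

```cpp
// Plain C++ sketch of the workload behind the Liquid-DSP numbers above:
// FIR-filtering a 256-sample complex buffer with a 57-tap filter. This is not
// liquid-dsp's API, just the arithmetic its filter benchmark measures
// (one complex multiply-accumulate per tap per output sample).
#include <complex>
#include <cstdio>
#include <vector>

int main() {
    const int buf_len = 256;     // "Buffer Length: 256"
    const int h_len = 57;        // "Filter Length: 57"

    std::vector<std::complex<float>> h(h_len, {0.1f, 0.0f});      // filter taps
    std::vector<std::complex<float>> x(buf_len, {1.0f, -1.0f});   // input buffer
    std::vector<std::complex<float>> y(buf_len);

    for (int n = 0; n < buf_len; ++n) {
        std::complex<float> acc = 0.0f;
        for (int k = 0; k < h_len && k <= n; ++k)
            acc += h[k] * x[n - k];      // convolution sum
        y[n] = acc;
    }

    std::printf("y[255] = (%f, %f)\n", y.back().real(), y.back().imag());
    return 0;
}
```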
srsRAN Project srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Mbps, More Is Better srsRAN Project 23.5 Test: Downlink Processor Benchmark e d a b c 200 400 600 800 1000 874.0 877.9 932.7 934.8 936.4 1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
OpenBenchmarking.org Mbps, More Is Better srsRAN Project 23.5 Test: PUSCH Processor Benchmark, Throughput Total d e b c a 200 400 600 800 1000 859.1 893.4 1083.3 1135.6 1159.4 1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
OpenBenchmarking.org Mbps, More Is Better srsRAN Project 23.5 Test: PUSCH Processor Benchmark, Throughput Thread e d a c b 70 140 210 280 350 296.0 299.4 312.5 317.1 317.9 1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
nginx This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 100 d e a b 15K 30K 45K 60K 75K 47001.04 47003.02 68098.55 68779.84 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 200 d e a b 13K 26K 39K 52K 65K 43258.70 43482.87 61494.71 61686.55 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 500 d e a b 11K 22K 33K 44K 55K 40695.40 40726.68 52883.38 53009.35 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 1000 e d a b 10K 20K 30K 40K 50K 38521.51 38750.55 47841.37 48357.65 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.1 Algorithm: SHA256 d e b a c 1600M 3200M 4800M 6400M 8000M 5363079590 5371686670 7571848920 7666522930 7670067840 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.1 Algorithm: SHA512 e d c b a 600M 1200M 1800M 2400M 3000M 2073192130 2090214690 2958467610 2971930440 2982380940 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org sign/s, More Is Better OpenSSL 3.1 Algorithm: RSA4096 d e a c b 1000 2000 3000 4000 5000 3304.7 3353.4 4750.7 4781.2 4803.9 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org verify/s, More Is Better OpenSSL 3.1 Algorithm: RSA4096 d e a c b 30K 60K 90K 120K 150K 109044.0 109117.9 155418.1 155507.5 155963.2 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.1 Algorithm: ChaCha20 e d a b 11000M 22000M 33000M 44000M 55000M 37631001400 37684059340 53275943460 53408719120 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.1 Algorithm: AES-128-GCM d e b a 20000M 40000M 60000M 80000M 100000M 73615414070 74137558490 103818760740 104474526240 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.1 Algorithm: AES-256-GCM d e a b 20000M 40000M 60000M 80000M 100000M 65168081990 65455871040 92098198730 92618860950 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org byte/s, More Is Better OpenSSL 3.1 Algorithm: ChaCha20-Poly1305 d e a b 8000M 16000M 24000M 32000M 40000M 25772756170 25784432780 36073985910 36372738320 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.org Seconds, Fewer Is Better Apache CouchDB 3.3.2 Bulk Size: 300 - Inserts: 1000 - Rounds: 30 e d b a 40 80 120 160 200 184.80 183.66 149.70 148.00 1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD
OpenBenchmarking.org Seconds, Fewer Is Better Apache CouchDB 3.3.2 Bulk Size: 500 - Inserts: 1000 - Rounds: 30 d e b a 60 120 180 240 300 267.83 262.24 212.42 208.65 1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD
Apache IoTDB OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 e d a b 200K 400K 600K 800K 1000K SE +/- 6733.33, N = 8 SE +/- 5696.93, N = 12 671390.07 693352.96 780967.47 783425.31
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 e d a b 4 8 12 16 20 SE +/- 0.18, N = 8 SE +/- 0.18, N = 12 15.45 14.42 11.83 11.82 MAX: 841.24 MAX: 871.6 MAX: 845.33 MAX: 788.57
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 d e b a 300K 600K 900K 1200K 1500K SE +/- 14317.27, N = 6 SE +/- 17375.90, N = 3 1342250.61 1349022.28 1446859.96 1447958.54
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 d e b a 6 12 18 24 30 SE +/- 0.32, N = 6 SE +/- 0.33, N = 3 23.25 22.80 20.85 20.68 MAX: 840.31 MAX: 895.66 MAX: 789.31 MAX: 843.04
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 e d a b 300K 600K 900K 1200K 1500K SE +/- 2691.26, N = 3 SE +/- 1722.69, N = 3 1080456.40 1082993.14 1175572.13 1187104.23
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 d e a b 3 6 9 12 15 SE +/- 0.01, N = 3 SE +/- 0.05, N = 3 11.48 11.39 10.09 9.97 MAX: 654.25 MAX: 661.12 MAX: 670.1 MAX: 637.17
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 d e b a 400K 800K 1200K 1600K 2000K SE +/- 15232.42, N = 3 SE +/- 9885.78, N = 3 1768171.84 1779490.51 1868505.59 1869315.57
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 d e a b 5 10 15 20 25 SE +/- 0.16, N = 3 SE +/- 0.19, N = 3 21.11 21.05 19.82 19.80 MAX: 729.87 MAX: 687.47 MAX: 689.69 MAX: 675.64
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 d e b a 400K 800K 1200K 1600K 2000K SE +/- 14334.70, N = 3 SE +/- 22284.36, N = 3 1609911.22 1699283.73 1767214.52 1810868.13
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 d e b a 3 6 9 12 15 SE +/- 0.10, N = 3 SE +/- 0.14, N = 3 9.44 8.86 8.39 8.08 MAX: 857.31 MAX: 875.98 MAX: 837.13 MAX: 829.62
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 d e b a 500K 1000K 1500K 2000K 2500K SE +/- 22735.34, N = 7 SE +/- 30745.57, N = 4 2139318.30 2166865.16 2554316.08 2558694.08
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 d e a b 5 10 15 20 25 SE +/- 0.22, N = 4 SE +/- 0.15, N = 7 20.38 19.55 16.36 16.29 MAX: 917.41 MAX: 928.08 MAX: 861.67 MAX: 874.18
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 d e a b 7M 14M 21M 28M 35M SE +/- 306044.35, N = 6 SE +/- 309841.40, N = 3 27658814.36 27941009.12 31169657.42 31446324.62
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 d e a b 13 26 39 52 65 SE +/- 0.61, N = 6 SE +/- 0.45, N = 3 56.44 55.42 49.28 48.76 MAX: 1369.78 MAX: 1292.79 MAX: 1102.28 MAX: 1002.85
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 e d a b 8M 16M 24M 32M 40M SE +/- 206780.32, N = 3 SE +/- 40579.48, N = 3 30519342.37 31204403.42 35731058.31 35881779.03
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 e d a b 30 60 90 120 150 SE +/- 0.60, N = 3 SE +/- 0.14, N = 3 144.80 139.93 122.86 122.59 MAX: 1580.7 MAX: 1287.5 MAX: 1107.59 MAX: 1193.3
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 e d a b 7M 14M 21M 28M 35M SE +/- 308105.07, N = 3 SE +/- 236903.12, N = 3 29645835.71 29837035.03 33440678.10 33835269.57
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 d e a b 13 26 39 52 65 SE +/- 0.51, N = 3 SE +/- 0.46, N = 3 58.92 58.71 51.82 51.04 MAX: 1277.7 MAX: 1143 MAX: 1143.96 MAX: 1204.88
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 d e a b 8M 16M 24M 32M 40M SE +/- 244332.82, N = 3 SE +/- 103626.85, N = 3 30570884.25 30665605.41 36535518.62 36624182.35
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 e d a b 30 60 90 120 150 SE +/- 0.77, N = 3 SE +/- 0.82, N = 3 152.54 152.42 128.26 127.24 MAX: 1269.54 MAX: 1686.91 MAX: 1123.23 MAX: 1175.96
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 d e a b 8M 16M 24M 32M 40M SE +/- 243373.38, N = 3 SE +/- 239793.06, N = 3 30418152.44 30686841.47 37703435.51 38375799.87
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 e d a b 14 28 42 56 70 SE +/- 0.09, N = 3 SE +/- 0.32, N = 3 60.80 60.56 48.28 48.04 MAX: 1125.57 MAX: 1126.74 MAX: 1114.29 MAX: 1030.49
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 e d a b 7M 14M 21M 28M 35M SE +/- 306835.51, N = 3 SE +/- 243572.43, N = 3 26463397.35 26725280.14 32700015.96 32929010.53
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 e d a b 40 80 120 160 200 SE +/- 1.26, N = 3 SE +/- 1.53, N = 3 181.42 179.03 145.58 144.40 MAX: 1854.32 MAX: 1288.79 MAX: 1258.59 MAX: 1247.59
Apache Spark This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
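To give a feel for the SHA-512 step timed below, here is a hedged PySpark sketch; the row count, partitioning, column names, and the noop sink are illustrative assumptions and not the pyspark-benchmark code itself.

    # Illustrative PySpark pass similar in spirit to the SHA-512 benchmark step;
    # not the actual pyspark-benchmark workload.
    import time
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, sha2

    spark = SparkSession.builder.appName("sha512-sketch").getOrCreate()
    df = spark.range(1_000_000).repartition(100)          # 1M rows, 100 partitions
    start = time.time()
    hashed = df.withColumn("digest", sha2(col("id").cast("string"), 512))
    hashed.write.format("noop").mode("overwrite").save()  # force evaluation, discard output
    print(f"SHA-512 benchmark time: {time.time() - start:.2f} s")
    spark.stop()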
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time d e b a 1.0058 2.0116 3.0174 4.0232 5.029 4.47 4.36 3.41 3.40
EnCodec EnCodec is Facebook/Meta's AI-based method for compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using its novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time taken to encode the WAV input into the EnCodec format. Learn more via the OpenBenchmarking.org test page.
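The timed operation is roughly the following, shown here as a hedged sketch against the encodec package's documented Python API; the input path is a placeholder rather than the JFK clip the profile actually uses.

    # Hedged sketch of a 24 kbps EnCodec encode; "speech.wav" is a placeholder input.
    import torch
    import torchaudio
    from encodec import EncodecModel
    from encodec.utils import convert_audio

    model = EncodecModel.encodec_model_24khz()
    model.set_target_bandwidth(24.0)                     # 1.5/3/6/12/24 kbps supported

    wav, sr = torchaudio.load("speech.wav")              # placeholder WAV input
    wav = convert_audio(wav, sr, model.sample_rate, model.channels)
    with torch.no_grad():
        encoded_frames = model.encode(wav.unsqueeze(0))  # the step being timed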
OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 24 kbps e d b a 8 16 24 32 40 36.02 35.68 31.85 31.71
Apache Spark This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time e d a b 0.513 1.026 1.539 2.052 2.565 2.28 2.14 1.76 1.66
ClickHouse ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value aggregates the processing times of all the separate queries as a geometric mean. Learn more via the OpenBenchmarking.org test page.
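For clarity on the metric, this small hedged sketch shows one plausible way per-query run times aggregate into a "Queries Per Minute, Geo Mean" figure; the timings are invented for illustration, not measured values.

    # Illustration of a geometric-mean queries-per-minute aggregate; the per-query
    # times here are invented for the example.
    from statistics import geometric_mean

    query_seconds = [0.12, 0.45, 2.30, 0.08, 1.10]     # hypothetical per-query times
    qpm_per_query = [60.0 / t for t in query_seconds]  # each query's rate in queries/min
    print(round(geometric_mean(qpm_per_query), 2))     # aggregate figure of merit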
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.12.3.5 100M Rows Hits Dataset, First Run / Cold Cache d e a b 20 40 60 80 100 95.87 99.39 108.20 108.92 MIN: 3.96 / MAX: 7500 MIN: 3.98 / MAX: 7500 MIN: 5.7 / MAX: 8571.43 MIN: 5.72 / MAX: 8571.43
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.12.3.5 100M Rows Hits Dataset, Second Run e d b a 30 60 90 120 150 102.31 103.13 112.91 113.63 MIN: 4.01 / MAX: 8571.43 MIN: 4 / MAX: 7500 MIN: 5.81 / MAX: 8571.43 MIN: 5.82 / MAX: 8571.43
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.12.3.5 100M Rows Hits Dataset, Third Run d e b a 30 60 90 120 150 103.15 103.64 111.97 113.53 MIN: 4 / MAX: 8571.43 MIN: 4.01 / MAX: 7500 MIN: 5.77 / MAX: 7500 MIN: 5.79 / MAX: 8571.43
Dragonflydb Dragonfly is an open-source database server positioned as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.
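The memtier_benchmark side of these runs looks roughly like the hedged sketch below; the server address, run length, and thread count are assumptions, while the ratio and client knobs mirror the configurations charted here.

    # Hedged memtier_benchmark invocation against a local Dragonfly instance;
    # endpoint, test time, and threads are assumed values.
    import subprocess

    def memtier(ratio: str, clients: int, threads: int = 4) -> str:
        cmd = [
            "memtier_benchmark",
            "--protocol=redis",
            "--server=127.0.0.1", "--port=6379",  # assumed local Dragonfly endpoint
            f"--ratio={ratio}",                   # SET:GET ratio, e.g. 1:5
            f"--clients={clients}",               # clients per thread
            f"--threads={threads}",
            "--test-time=60",                     # assumed run length in seconds
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    print(memtier("1:5", 50))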
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients: 50 - Set To Get Ratio: 1:1 d e b a 700K 1400K 2100K 2800K 3500K 2416447.01 2436674.73 3386413.69 3442545.53 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients: 50 - Set To Get Ratio: 1:5 d e a b 700K 1400K 2100K 2800K 3500K 2447513.39 2452945.72 3390692.31 3407584.22 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients: 50 - Set To Get Ratio: 5:1 d e a b 800K 1600K 2400K 3200K 4000K 2500811.71 2513794.98 3509365.98 3551639.05 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 10 - Set To Get Ratio: 1:5 e d a b 600K 1200K 1800K 2400K 3000K 2031408.76 2037833.52 3000846.79 3012041.19 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 20 - Set To Get Ratio: 1:5 e d a b 800K 1600K 2400K 3200K 4000K 2419203.43 2548641.80 3569222.70 3580150.29 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 50 - Set To Get Ratio: 1:5 e d b a 700K 1400K 2100K 2800K 3500K 2449997.94 2455266.37 3403187.91 3406857.67 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 60 - Set To Get Ratio: 1:5 e d b a 700K 1400K 2100K 2800K 3500K 2411587.54 2412723.51 3331423.51 3339072.06 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 10 - Set To Get Ratio: 1:10 e d a b 600K 1200K 1800K 2400K 3000K 2055869.52 2109010.30 2955396.42 3009409.13 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 20 - Set To Get Ratio: 1:10 e d a b 800K 1600K 2400K 3200K 4000K 2544125.94 2565892.85 3296080.94 3582493.35 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 50 - Set To Get Ratio: 1:10 e d a b 700K 1400K 2100K 2800K 3500K 2466340.57 2468187.63 3310390.02 3418062.57 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 60 - Set To Get Ratio: 1:10 e d a b 700K 1400K 2100K 2800K 3500K 2423927.40 2424431.95 3331550.72 3335026.97 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 10 - Set To Get Ratio: 1:100 d e b a 600K 1200K 1800K 2400K 3000K 2084872.04 2117511.76 3004192.87 3004715.33 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 20 - Set To Get Ratio: 1:100 d e b a 800K 1600K 2400K 3200K 4000K 2571654.79 2595364.56 3457130.11 3654927.90 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 50 - Set To Get Ratio: 1:100 e d a b 700K 1400K 2100K 2800K 3500K 2500272.78 2503115.23 3453155.48 3454075.39 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 60 - Set To Get Ratio: 1:100 d e a b 700K 1400K 2100K 2800K 3500K 2458737.20 2462268.97 3382163.21 3389154.48 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Memcached Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
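The same memtier_benchmark tool drives these Memcached runs, just over the memcache protocol; the sketch below assumes a local server on the default port and a fixed run length.

    # Hedged memtier_benchmark run against a local Memcached; port, ratio, and
    # test time shown are assumptions for illustration.
    import subprocess

    subprocess.run([
        "memtier_benchmark",
        "--protocol=memcache_binary",          # Memcached wire protocol instead of Redis
        "--server=127.0.0.1", "--port=11211",  # assumed default Memcached endpoint
        "--ratio=1:10",                        # one of the set:get ratios charted below
        "--test-time=60",
    ], check=True)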
OpenBenchmarking.org Ops/sec, More Is Better Memcached 1.6.19 Set To Get Ratio: 1:5 d e a b 500K 1000K 1500K 2000K 2500K 1624284.67 1640940.11 2339109.34 2359355.53 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Memcached 1.6.19 Set To Get Ratio: 1:10 d e a b 500K 1000K 1500K 2000K 2500K 1565861.27 1574207.88 2257435.18 2294635.17 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Memcached 1.6.19 Set To Get Ratio: 1:100 d e a b 500K 1000K 1500K 2000K 2500K 1555313.89 1560732.04 2205533.41 2212508.18 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 d e b a 600K 1200K 1800K 2400K 3000K SE +/- 4575.83, N = 3 2145215.42 2188722.61 2475916.43 2630980.74 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 d e b a 600K 1200K 1800K 2400K 3000K SE +/- 27602.28, N = 3 2265510.42 2325921.50 2625534.74 2627627.80 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 d e b a 500K 1000K 1500K 2000K 2500K SE +/- 17758.85, N = 15 2058810.43 2131409.25 2391013.39 2447990.86 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 e d a b 500K 1000K 1500K 2000K 2500K SE +/- 8021.75, N = 3 2209577.35 2218941.46 2544206.44 2565838.51 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 e d b a 500K 1000K 1500K 2000K 2500K SE +/- 13714.16, N = 3 2091970.43 2092415.77 2464830.47 2493512.81 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 500 d e a b 600K 1200K 1800K 2400K 3000K 2633566.50 2637178.25 2854095.50 2857194.00 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Op/s, More Is Better RocksDB 8.0 Test: Update Random d e a b 110K 220K 330K 440K 550K 422807 424647 487163 493105 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better RocksDB 8.0 Test: Read While Writing e d b a 400K 800K 1200K 1600K 2000K 1181409 1195846 1714690 1716478 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better RocksDB 8.0 Test: Read Random Write Random d e a b 300K 600K 900K 1200K 1500K 1133541 1133720 1527745 1541034 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency d e b a 0.3071 0.6142 0.9213 1.2284 1.5355 1.365 1.284 1.154 1.128 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Only d e b a 110K 220K 330K 440K 550K 312923 322751 431809 516884 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency d e b a 0.6111 1.2222 1.8333 2.4444 3.0555 2.716 2.479 1.934 1.900 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Write e d a b 5K 10K 15K 20K 25K 17875 17896 23466 23725 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency d e b a 7 14 21 28 35 28.14 27.97 21.39 21.37 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Write e d b a 4K 8K 12K 16K 20K 13796 13903 17895 17926 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency e d b a 13 26 39 52 65 58.03 57.92 44.71 44.63 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency e d b a 120K 240K 360K 480K 600K 389396 392256 546888 580068 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency d e b a 90K 180K 270K 360K 450K 294566 371978 413607 421150 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency d e b a 5K 10K 15K 20K 25K 17766 17884 23377 23394 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency e d a b 4K 8K 12K 16K 20K 13786 13811 18016 18040 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org Queries Per Second, More Is Better MariaDB 11.0.1 Clients: 128 d e b a 200 400 600 800 1000 842 857 1111 1117 1. (CXX) g++ options: -pie -fPIC -fstack-protector -O3 -lnuma -lpcre2-8 -lcrypt -lz -lm -lssl -lcrypto -lpthread -ldl
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GB/s, More Is Better simdjson 2.0 Throughput Test: Kostya d e a b c 0.9045 1.809 2.7135 3.618 4.5225 3.83 3.83 4.00 4.01 4.02 1. (CXX) g++ options: -O3
PyBench This test profile reports the total of the average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench for 20 rounds each time. Learn more via the OpenBenchmarking.org test page.
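As a toy illustration of the averaging PyBench performs, the hedged sketch below times one micro-benchmark over several rounds and averages it; PyBench itself does this across its whole suite of test functions.

    # Toy PyBench-style round timing; the loop body stands in for one of
    # PyBench's micro-benchmarks such as NestedForLoops.
    import time

    def nested_for_loops(n: int = 200) -> None:
        for _ in range(n):
            for _ in range(n):
                pass

    rounds = 20                                   # the profile runs 20 rounds
    times = []
    for _ in range(rounds):
        start = time.perf_counter()
        nested_for_loops()
        times.append(time.perf_counter() - start)
    print(f"average: {sum(times) / rounds * 1000:.3f} ms")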
OpenBenchmarking.org Milliseconds, Fewer Is Better PyBench 2018-02-16 Total For Average Test Times e d a b 150 300 450 600 750 711 705 664 662
a
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x57 - Thermald 2.4.9
Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 16 August 2023 20:23 by user phoronix.
b Kernel, Compiler, Processor, Java, Python, and Security Notes identical to configuration a above.
Testing initiated at 17 August 2023 10:29 by user phoronix.
c Processor: Intel Xeon E-2388G @ 3.20GHz (8 Cores / 16 Threads), Motherboard: ASRockRack E3C252D4U (1.22 BIOS), Chipset: Intel Tiger Lake-H, Memory: 64GB, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS, Graphics: Intel RocketLake-S [UHD ], Monitor: VA2431, Network: 2 x Intel I210
OS: Ubuntu 22.04, Kernel: 6.2.0-26-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel, Compiler, Processor, Java, Python, and Security Notes identical to configuration a above.
Testing initiated at 18 August 2023 07:42 by user phoronix.
d Kernel, Compiler, Processor, Java, Python, and Security Notes identical to configuration a above.
Testing initiated at 20 August 2023 07:43 by user phoronix.
e Processor: Intel Xeon E-2336 @ 2.90GHz (6 Cores / 12 Threads), Motherboard: ASRockRack E3C252D4U (1.22 BIOS), Chipset: Intel Tiger Lake-H, Memory: 64GB, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS, Graphics: ASPEED, Network: 2 x Intel I210
OS: Ubuntu 22.04, Kernel: 6.2.0-26-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1024x768
Kernel, Compiler, Processor, Java, Python, and Security Notes identical to configuration a above.
Testing initiated at 21 August 2023 05:01 by user phoronix.