Tests for a future article. Intel Xeon E-2336 testing with an ASRockRack E3C252D4U (1.22 BIOS) motherboard and ASPEED graphics on Ubuntu 22.04 via the Phoronix Test Suite.
a Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x57 - Thermald 2.4.9
Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
b c Processor: Intel Xeon E-2388G @ 3.20GHz (8 Cores / 16 Threads), Motherboard: ASRockRack E3C252D4U (1.22 BIOS), Chipset: Intel Tiger Lake-H, Memory: 64GB, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS, Graphics: Intel RocketLake-S [UHD], Monitor: VA2431, Network: 2 x Intel I210
OS: Ubuntu 22.04, Kernel: 6.2.0-26-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1080
d e Processor: Intel Xeon E-2336 @ 2.90GHz (6 Cores / 12 Threads), Motherboard: ASRockRack E3C252D4U (1.22 BIOS), Chipset: Intel Tiger Lake-H, Memory: 64GB, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS, Graphics: ASPEED, Network: 2 x Intel I210
OS: Ubuntu 22.04, Kernel: 6.2.0-26-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1024x768
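For readers who want to sanity-check any of the relative results below, the percentage gap between two figures is easy to compute. A minimal Python sketch, where the helper function is purely illustrative (not part of the Phoronix Test Suite) and the two sample numbers are the configuration a and e values from the BRL-CAD chart further down:

```python
def percent_diff(result, baseline, higher_is_better=True):
    """How much `result` differs from `baseline`, in percent.

    A positive return value means `result` is the better score for the
    stated direction of the metric.
    """
    delta = (result - baseline) / baseline * 100.0
    return delta if higher_is_better else -delta

# Illustrative only: BRL-CAD VGR metric (more is better), config a vs config e.
print(round(percent_diff(130412, 88988), 1))  # ~46.6
```

For "fewer is better" metrics (seconds, milliseconds, latency), pass higher_is_better=False so the sign still reads as better/worse.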
HeFFTe - Highly Efficient FFT for Exascale HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options and currently caters to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.
HeFFTe 2.3, Test: c2c - Backend: FFTW - Precision: double - X Y Z: 128 (GFLOP/s, more is better): e 5.28367, d 5.29165, c 4.25222, b 4.25839, a 4.25314. [(CXX) g++ options: -O3]
libxsmm Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
libxsmm 2-1.17-3645, M N K: 64 (GFLOPS/s, more is better): e 110.4, d 110.0, c 109.5, b 109.4, a 110.2. [(CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -pedantic -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2]
libxsmm 2-1.17-3645, M N K: 32 (GFLOPS/s, more is better): e 56.6, d 56.4, c 55.4, b 55.4, a 55.8. [same g++ options as above]
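As a rough reality check on these GFLOPS figures, and assuming the M N K parameter describes a dense matrix multiplication costing about 2 x M x N x K floating-point operations per call (an assumption about the kernel, not something stated by the test profile), the implied per-call work and call rate look like this:

```python
# Rough sanity check, assuming the libxsmm kernel being timed is a dense GEMM
# with roughly 2*M*N*K floating-point operations per call (an assumption).
M = N = K = 64
flops_per_call = 2 * M * N * K                    # 524,288 FLOPs per call
gflops = 110.4                                    # configuration e, "M N K: 64" chart above
calls_per_second = gflops * 1e9 / flops_per_call
print(f"{flops_per_call} FLOPs/call, ~{calls_per_second:,.0f} calls/s")
# -> 524288 FLOPs/call, ~210,571 calls/s
```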
HeFFTe 2.3, Test: c2c - Backend: Stock - Precision: double - X Y Z: 128 (GFLOP/s, more is better): e 5.25237, d 5.25616, c 4.23098, b 4.23400, a 4.22813. [(CXX) g++ options: -O3]
SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution for SPECFEM3D and uses a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.
SPECFEM3D 4.0, Seconds, fewer is better [(F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz]:
Model: Layered Halfspace: e 314.22, d 308.79, c 232.39, b 240.62, a 238.30
Model: Tomographic Model: e 117.07, d 117.92, c 90.71, b 88.22, a 87.21
Model: Mount St. Helens: e 116.95, d 119.62, c 89.10, b 90.65, a 91.74 (same gfortran options as above)
Model: Homogeneous Halfspace: e 148.18, d 149.67, c 111.02, b 111.56, a 113.86 (same gfortran options as above)
Model: Water-layered Halfspace: e 291.60, d 292.05, c 233.71, b 234.44, a 231.64 (same gfortran options as above)
Z3 Theorem Prover 4.12.1, SMT File: 1.smt2 (Seconds, fewer is better): e 27.07, d 27.26, c 24.96, b 24.96, a 24.80. [(CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC]
Stress-NG 0.16.04, Bogo Ops/s, more is better [(CXX) g++ options: -O2 -std=gnu99 -lc]:
Test: CPU Cache: e 2101251.45, d 2102565.10, b 2662853.59, a 2781513.07
Test: Wide Vector Math: e 385865.21, d 382456.62, b 534062.66, a 535128.11
Test: Context Switching: e 2527236.02, d 2531776.40, b 3638488.52, a 3654011.84
Test: Glibc C String Functions: e 4856545.20, d 4792580.04, b 7009619.95, a 7013916.66
Test: System V Message Passing: e 9823844.73, d 9811611.36, b 13127961.92, a 13144909.56
Geekbench This is a benchmark of Geekbench 6 Pro. The test profile automates the execution of Geekbench 6 under the Phoronix Test Suite, assuming you have a valid license key for Geekbench 6 Pro. Note that this test profile will not work without a valid Geekbench 6 Pro license key; test automation / CLI support is only available with the paid version of Geekbench. Learn more via the OpenBenchmarking.org test page.
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
BRL-CAD 7.36, VGR Performance Metric (more is better): e 88988, d 89349, b 129573, a 130412. [(CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6]
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile covers both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 4.0, Preset: Fast (MT/s, more is better): e 116.02, d 115.97, b 154.35, a 154.04. [(CXX) g++ options: -O3 -flto -pthread]
WebP Image Encode 1.2.4, MP/s, more is better [(CC) gcc options: -fvisibility=hidden -O2 -lm]:
Encode Settings: Quality 100, Highest Compression: e 4.03, d 4.05, c 4.28, b 4.29, a 4.29
Encode Settings: Quality 100, Lossless, Highest Compression: e 0.64, d 0.64, c 0.72, b 0.71, a 0.71
SecureMark SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.
SecureMark 1.0.4, Benchmark: SecureMark-TLS (marks, more is better): e 320661, d 321452, c 341479, b 341164, a 341310. [(CC) gcc options: -pedantic -O3]
Xmrig Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the XMRig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
Xmrig 6.18.1, H/s, more is better [(CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc]:
Variant: Monero - Hash Count: 1M: e 1798.3, d 1792.1, c 1933.8, b 1934.8, a 1946.8
Variant: Wownero - Hash Count: 1M: e 2809.5, d 2773.3, c 3418.8, b 3349.9, a 3410.3
miniBUDE MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
miniBUDE 20210901, Implementation: OpenMP - Input Deck: BM1 [(CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm]:
GFInst/s, more is better: e 260.86, d 259.43, c 377.48, b 380.11, a 381.97
Billion Interactions/s, more is better: e 10.43, d 10.38, c 15.10, b 15.20, a 15.28
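The two miniBUDE charts report the same run in two different units, so dividing them gives the implied number of floating-point instructions per atom-pair interaction, about 25 here, assuming GFInst/s is counting floating-point instructions:

```python
# Configuration a figures from the two miniBUDE charts above.
gfinst_per_s = 381.97          # GFInst/s
ginteractions_per_s = 15.28    # billion interactions per second
print(round(gfinst_per_s / ginteractions_per_s, 1))  # ~25.0 instructions per interaction
```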
OpenRadioss OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
OpenRadioss 2022.10.13, Model: Bumper Beam (Seconds, fewer is better): e 267.00, d 264.81, c 211.84, b 211.14, a 210.90
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
TensorFlow 2.12, Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, more is better): e 78.48, d 78.27, b 96.85, a 96.89
OpenCV This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
OpenCV 4.7, ms, fewer is better [(CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt]:
Test: Core: e 59340, d 58795, b 55182, a 55345
Test: Stitching: e 290774, d 290303, b 269533, a 269695
Test: Image Processing: e 99596, d 100448, b 80783, a 80674
NCNN NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
NCNN 20230517, ms, fewer is better [(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread]:
Target: CPU - Model: mobilenet: e 15.83 (min 15.7 / max 16.1), d 15.85 (min 15.74 / max 17.68), b 14.02 (min 13.9 / max 15.7), a 14.02 (min 13.85 / max 16.02); reported SE +/- 0.06, N = 3
Target: CPU-v2-v2 - Model: mobilenet-v2: e 3.98 (min 3.88 / max 4.25), d 3.96 (min 3.84 / max 4.3), b 3.54 (min 3.43 / max 3.83), a 3.52 (min 3.41 / max 3.81); reported SE +/- 0.00, N = 3
Target: CPU-v3-v3 - Model: mobilenet-v3: e 3.14 (min 3.11 / max 3.39), d 3.14 (min 3.12 / max 3.34), b 2.85 (min 2.82 / max 3.21), a 2.84 (min 2.8 / max 3.29); reported SE +/- 0.01, N = 3
Target: CPU - Model: shufflenet-v2: e 2.30 (min 2.28 / max 2.59), d 2.38 (min 2.28 / max 10.72), b 2.18 (min 2.16 / max 2.43), a 2.18 (min 2.15 / max 3.92); reported SE +/- 0.01, N = 3
Target: CPU - Model: mnasnet: e 2.76 (min 2.72 / max 2.99), d 2.76 (min 2.72 / max 3.22), b 2.54 (min 2.51 / max 2.66), a 2.54 (min 2.5 / max 2.86); reported SE +/- 0.00, N = 3
Target: CPU - Model: efficientnet-b0: e 5.31 (min 4.87 / max 5.72), d 5.31 (min 4.86 / max 6.33), b 4.30 (min 4.26 / max 4.55), a 4.29 (min 4.24 / max 5.21); reported SE +/- 0.01, N = 3
Target: CPU - Model: blazeface: e 0.83 (max 0.95), d 0.82 (min 0.8 / max 0.88), b 0.75 (max 1.13), a 0.79 (min 0.74 / max 1.14); reported SE +/- 0.02, N = 3
Target: CPU - Model: googlenet: e 11.48 (min 11.37 / max 11.79), d 11.55 (min 11.39 / max 20.77), b 10.07 (min 9.97 / max 10.38), a 10.30 (min 9.97 / max 10.91); reported SE +/- 0.13, N = 3
Target: CPU - Model: vgg16: e 61.31 (min 61.13 / max 63.09), d 61.38 (min 61.23 / max 62.51), b 59.88 (min 59.57 / max 60.2), a 59.79 (min 59.11 / max 68.65); reported SE +/- 0.16, N = 3
Target: CPU - Model: resnet18: e 9.00 (min 8.92 / max 9.28), d 9.03 (min 8.94 / max 9.21), b 7.82 (min 7.75 / max 8.12), a 8.01 (min 7.78 / max 16.95); reported SE +/- 0.09, N = 3
Target: CPU - Model: alexnet: e 8.52 (min 8.47 / max 8.81), d 8.55 (min 8.47 / max 10.27), b 7.85 (min 7.76 / max 8), a 8.06 (min 7.76 / max 8.77); reported SE +/- 0.18, N = 3
Target: CPU - Model: resnet50: e 20.67 (min 20.38 / max 30), d 20.65 (min 20.52 / max 21.15), b 16.36 (min 16.25 / max 16.67), a 16.49 (min 16.29 / max 17.14); reported SE +/- 0.11, N = 3
Target: CPU - Model: yolov4-tiny: e 26.35 (min 26.24 / max 27.01), d 26.39 (min 26.24 / max 35.28), b 23.81 (min 23.64 / max 31.75), a 23.76 (min 23.59 / max 24.18); reported SE +/- 0.04, N = 3
Target: CPU - Model: squeezenet_ssd: e 11.92 (min 11.82 / max 12.46), d 11.90 (min 11.8 / max 12.29), b 10.41 (min 10.31 / max 10.73), a 10.34 (min 10.17 / max 19.27); reported SE +/- 0.05, N = 3
Target: CPU - Model: regnety_400m: e 7.75 (min 7.65 / max 8.43), d 7.72 (min 7.61 / max 8.99), b 6.68 (min 6.64 / max 6.98), a 6.82 (min 6.6 / max 7.62); reported SE +/- 0.09, N = 3
Target: CPU - Model: vision_transformer: e 90.07 (min 89.54 / max 92.39), d 89.85 (min 89.35 / max 91.15), b 72.07 (min 71.66 / max 80.49), a 72.05 (min 70.65 / max 99.29); reported SE +/- 0.12, N = 3
Target: CPU - Model: FastestDet: e 4.00 (min 3.94 / max 4.38), d 3.97 (min 3.9 / max 4.21), b 3.51 (min 3.37 / max 3.78), a 3.52 (min 3.35 / max 3.83); reported SE +/- 0.03, N = 3
GROMACS The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
GROMACS 2023, Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better): e 0.667, d 0.671, b 0.792, a 0.788. [(CXX) g++ options: -O3]
NAMD NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better): e 2.64431, d 2.65403, c 1.84863, b 1.84751, a 1.84476
OpenVINO This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenVINO 2022.3 [(CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl]:
Model: Person Detection FP16 - Device: CPU (FPS, more is better): e 0.98, d 0.99, b 1.33, a 1.34
Model: Person Detection FP16 - Device: CPU (ms, fewer is better): e 4021.01 (min 3277.03 / max 4252.51), d 4033.28 (min 3327.28 / max 4221.11), b 2976.94 (min 2489.57 / max 3136.21), a 2973.15 (min 2488.55 / max 3154.09)
Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better): e 6.54, d 6.51, b 9.14, a 9.16
Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better): e 610.66 (min 445.57 / max 645.59), d 612.57 (min 320.73 / max 641.45), b 436.28 (min 352.34 / max 460.22), a 435.99 (min 252.02 / max 599.26)
Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better): e 309.25, d 311.26, b 419.36, a 415.14
Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better): e 12.92 (min 7.1 / max 29.13), d 12.83 (min 5.69 / max 29.68), b 9.53 (min 5.85 / max 24.48), a 9.62 (min 6.57 / max 24.31)
Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better): e 20.64, d 20.62, b 27.33, a 27.40
Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better): e 193.60 (min 122.33 / max 206.18), d 193.94 (min 150.32 / max 215.81), b 146.26 (min 121.53 / max 161.59), a 145.90 (min 89.12 / max 161.2)
Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better): e 648.80, d 647.74, b 914.26, a 907.43
Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better): e 9.23 (min 4.94 / max 18.86), d 9.24 (min 4.83 / max 22.73), b 8.73 (min 4.62 / max 37), a 8.79 (min 4.51 / max 68.51)
Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better): e 171.38, d 172.78, b 213.56, a 214.40
Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better): e 23.32 (min 10.72 / max 42.07), d 23.13 (min 10.66 / max 43.31), b 18.71 (min 13.18 / max 24.78), a 18.64 (min 11.82 / max 40.83)
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better): e 5049.65, d 5067.31, b 8117.06, a 8184.72
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better): e 1.17 (min 0.64 / max 15.77), d 1.16 (min 0.53 / max 14.34), b 0.98 (min 0.48 / max 13.47), a 0.97 (min 0.5 / max 13.7)
ASKAP ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and Convolutional Resampling Benchmark (tConvolve), along with some previous ASKAP benchmarks included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.
ASKAP 1.0 [(CXX) g++ options: -O3 -fstrict-aliasing -fopenmp]:
Test: Hogbom Clean OpenMP (Iterations Per Second, more is better): e 160.26, d 159.49, b 164.75, a 165.02
Test: tConvolve MT - Gridding (Million Grid Points Per Second, more is better): e 976.79, d 978.58, b 964.26, a 965.57
Test: tConvolve MT - Degridding (Million Grid Points Per Second, more is better): e 1338.53, d 1340.21, b 1353.70, a 1353.70
Test: tConvolve MPI - Degridding (Mpix/sec, more is better): e 1192.71, d 1178.42, b 1474.13, a 1482.46
Test: tConvolve MPI - Gridding (Mpix/sec, more is better): e 1229.98, d 1222.34, b 1433.86, a 1433.86
Test: tConvolve OpenMP - Gridding (Million Grid Points Per Second, more is better): e 975.30, d 971.74, b 1052.40, a 1044.14
Test: tConvolve OpenMP - Degridding (Million Grid Points Per Second, more is better): e 1643.56, d 1613.67, b 1799.03, a 1763.28
Algebraic Multi-Grid Benchmark AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.
Algebraic Multi-Grid Benchmark 1.2 (Figure Of Merit, more is better): e 192950100, d 192808700, c 190589800, b 190611300, a 192619900. [(CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi]
OpenFOAM OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenFOAM 10, Seconds, fewer is better [(CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lphysicalProperties -lspecie -lfiniteVolume -lfvModels -lgenericPatchFields -lmeshTools -lsampling -lOpenFOAM -ldl -lm]:
Input: drivaerFastback, Small Mesh Size - Mesh Time: e 67.38, d 67.40, c 58.48, b 58.53, a 58.46
Input: drivaerFastback, Small Mesh Size - Execution Time: e 749.44, d 749.90, c 710.07, b 710.27, a 708.09
Input: drivaerFastback, Medium Mesh Size - Mesh Time: e 496.10, d 496.74, c 441.50, b 441.61, a 439.65
Input: drivaerFastback, Medium Mesh Size - Execution Time: e 5877.55, d 5889.87, c 5770.77, b 5770.72, a 5743.30
QMCPACK QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H2O example code. QMCPACK is an open-source production level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
QMCPACK 3.16, Input: Li2_STO_ae (Total Execution Time - Seconds, fewer is better): e 408.56, d 411.70, c 296.75, b 300.84, a 301.65. [(CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl]
Xcompact3d Incompact3d Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equation and as many scalar transport equations as you need. Learn more via the OpenBenchmarking.org test page.
Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 129 Cells Per Direction (Seconds, fewer is better): e 46.80, d 48.47, c 44.89, b 44.90, a 44.70. [(F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz]
Zstd Compression 1.5.4, Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better): e 1514.3, d 1507.9, c 1643.8, b 1636.5, a 1638.7. [(CC) gcc options: -O3 -pthread -lz -llzma]
Cpuminer-Opt Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a variety of cryptocurrencies. The benchmark reports the CPU hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
Cpuminer-Opt 3.20.3, kH/s, more is better [(CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp]:
Algorithm: Magi: e 233.12, d 237.94, c 334.68, b 334.68, a 334.52
Algorithm: Deepcoin: e 5850.18, d 5748.68, c 8152.24, b 8215.17, a 8128.92
Algorithm: Ringcoin: e 1190.30, d 1203.63, c 1683.88, b 1686.70, a 1685.16
Algorithm: Blake-2 S: e 417510, d 417080, c 594430, b 593650, a 592320
Algorithm: Garlicoin: e 2493.12, d 2610.22, c 3516.83, b 3273.74, a 3305.88
Algorithm: Myriad-Groestl: e 19560, d 19530, c 27240, b 27390, a 27670
Algorithm: LBC, LBRY Credits: e 38470, d 39570, c 54450, b 54340, a 54340
Algorithm: Quad SHA-256, Pyrite: e 88000, d 87680, c 124050, b 123930, a 127140
Algorithm: Triple SHA-256, Onecoin: e 127470, d 127480, c 180730, b 181190, a 177750
Kvazaar This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
Kvazaar 2.2, Frames Per Second, more is better [(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt]:
Video Input: Bosphorus 4K - Video Preset: Medium: e 4.76, d 4.77, c 6.74, b 6.79, a 6.72
Video Input: Bosphorus 4K - Video Preset: Very Fast: e 12.12, d 12.11, c 17.03, b 17.04, a 16.91
Video Input: Bosphorus 4K - Video Preset: Super Fast: e 14.99, d 14.96, c 21.05, b 21.23, a 21.14
Video Input: Bosphorus 4K - Video Preset: Ultra Fast: e 20.89, d 20.92, c 29.31, b 29.35, a 29.37
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
SVT-VP9 0.3, Frames Per Second, more is better [(CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm]:
Tuning: VMAF Optimized - Input: Bosphorus 4K: e 36.22, d 37.09, c 43.07, b 42.84, a 42.65
Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K: e 40.10, d 40.38, c 48.18, b 48.26, a 48.42
Tuning: Visual Quality Optimized - Input: Bosphorus 4K: e 30.41, d 30.42, c 39.71, b 39.73, a 39.72
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-AV1 1.6, Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better): e 2.328, d 2.332, c 3.151, b 3.126, a 3.141. [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]
x265 3.4, Video Input: Bosphorus 1080p (Frames Per Second, more is better): e 50.47, d 50.06, c 59.70, b 59.97, a 58.45. [(CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma]
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-HEVC 1.5.0, Frames Per Second, more is better [(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt]:
Tuning: 1 - Input: Bosphorus 4K: e 1.12, d 1.12, c 1.63, b 1.62, a 1.62
Tuning: 7 - Input: Bosphorus 4K: e 23.99, d 24.34, c 33.42, b 33.09, a 33.42
Tuning: 10 - Input: Bosphorus 4K: e 50.49, d 50.63, c 65.60, b 65.85, a 65.83
Blender 3.6, Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better): e 312.98, d 312.67, b 215.41, a 215.45; reported SE +/- 0.15, N = 3
FFmpeg This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
FFmpeg 6.0, Encoder: libx265 [(CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma]:
Scenario: Live (Seconds, fewer is better): e 45.61, d 45.84, c 38.25, b 38.17, a 38.05
Scenario: Live (FPS, more is better): e 110.72, d 110.17, c 132.03, b 132.30, a 132.73
Scenario: Upload (Seconds, fewer is better): e 138.67, d 138.66, c 113.77, b 113.53, a 113.49
Scenario: Upload (FPS, more is better): e 18.21, d 18.21, c 22.19, b 22.24, a 22.25
Scenario: Platform (Seconds, fewer is better): e 204.47, d 204.50, c 169.84, b 170.46, a 170.20
Scenario: Platform (FPS, more is better): e 37.05, d 37.04, c 44.60, b 44.44, a 44.51
Scenario: Video On Demand (Seconds, fewer is better): e 204.68, d 204.80, c 170.09, b 170.37, a 170.63
Scenario: Video On Demand (FPS, more is better): e 37.01, d 36.99, c 44.54, b 44.46, a 44.39
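The Seconds and FPS charts for each FFmpeg scenario describe the same encode run from two angles, so their product should land on roughly the same total frame count for every configuration. A quick check with the Video On Demand figures above:

```python
# Video On Demand: (seconds, FPS) pairs for configurations a and e from the charts above.
pairs = {"a": (170.63, 44.39), "e": (204.68, 37.01)}
for cfg, (seconds, fps) in pairs.items():
    print(cfg, round(seconds * fps), "frames")  # both come out near 7574-7575 frames
```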
uvg266 0.4.1, Frames Per Second, more is better:
Video Input: Bosphorus 4K - Video Preset: Medium: e 3.32, d 3.34, c 4.76, b 4.77, a 4.75
Video Input: Bosphorus 1080p - Video Preset: Super Fast: e 50.14, d 50.03, c 74.48, b 74.90, a 74.64
Video Input: Bosphorus 1080p - Video Preset: Ultra Fast: e 61.59, d 62.01, c 93.03, b 95.58, a 96.33
VVenC VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
VVenC 1.9, Frames Per Second, more is better [(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto]:
Video Input: Bosphorus 4K - Video Preset: Fast: e 2.578, d 2.577, c 3.513, b 3.527, a 3.529
Video Input: Bosphorus 4K - Video Preset: Faster: e 5.637, d 5.662, c 7.472, b 7.458, a 7.432
Video Input: Bosphorus 1080p - Video Preset: Fast: e 8.306, d 8.313, c 11.257, b 11.236, a 11.094
Video Input: Bosphorus 1080p - Video Preset: Faster: e 19.47, d 19.62, c 25.42, b 25.47, a 25.34
OpenVKL OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenVKL 1.3.1, Items / Sec, more is better:
Benchmark: vklBenchmark ISPC: e 117 (min 13 / max 1914), d 117 (min 13 / max 1922), c 154 (min 18 / max 2421), b 155 (min 18 / max 2419), a 154 (min 18 / max 2431)
Benchmark: vklBenchmark Scalar: e 52 (min 6 / max 990), d 52 (min 6 / max 989), c 69 (min 8 / max 1000), b 69 (min 8 / max 1000), a 69 (min 8 / max 1000)
LuxCoreRender LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.
LuxCoreRender 2.6, M samples/sec, more is better, Acceleration: CPU:
Scene: DLSC: e 1.27 (min 1.16 / max 1.57), d 1.25 (min 1.16 / max 1.53), c 1.79 (min 1.66 / max 2.06), b 1.83 (min 1.68 / max 2.12), a 1.79 (min 1.66 / max 2.06)
Scene: Danish Mood: e 0.63 (min 0.13 / max 0.87), d 0.66 (min 0.14 / max 0.9), c 1.09 (min 0.29 / max 1.39), b 1.08 (min 0.26 / max 1.38), a 1.16 (min 0.32 / max 1.44)
Scene: Orange Juice: e 1.97 (min 1.87 / max 2.4), d 1.97 (min 1.87 / max 2.4), c 2.84 (min 2.71 / max 3.28), b 2.85 (min 2.72 / max 3.29), a 2.85 (min 2.71 / max 3.3)
Scene: LuxCore Benchmark: e 0.78 (min 0.19 / max 1.01), d 0.78 (min 0.19 / max 1.03), c 1.29 (min 0.37 / max 1.59), b 1.29 (min 0.37 / max 1.58), a 1.30 (min 0.38 / max 1.6)
Scene: Rainbow Colors and Prism: e 5.07 (min 4.62 / max 6.1), d 5.18 (min 4.7 / max 6.12), c 7.49 (min 6.78 / max 8.19), b 7.63 (min 6.88 / max 8.28), a 7.63 (min 6.75 / max 8.34)
OSPRay Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OSPRay 2.12, Items Per Second, more is better:
Benchmark: particle_volume/ao/real_time: e 2.67891, d 2.67678, c 3.74212, b 3.77037, a 3.72924
Benchmark: particle_volume/scivis/real_time: e 2.66344, d 2.64427, c 3.75544, b 3.75720, a 3.75234
Benchmark: particle_volume/pathtracer/real_time: e 117.26, d 117.07, c 148.50, b 148.97, a 148.26
Benchmark: gravity_spheres_volume/dim_512/ao/real_time: e 2.25929, d 2.22151, c 3.22055, b 3.23141, a 3.17345
Benchmark: gravity_spheres_volume/dim_512/scivis/real_time: e 2.21786, d 2.22361, c 3.19263, b 3.18652, a 3.17488
Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time: e 2.65809, d 2.66212, c 3.77000, b 3.78168, a 3.77067
OSPRay Studio Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OSPRay Studio 0.11, ms, fewer is better [(CXX) g++ options: -O3 -lm -ldl]:
Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer: e 3884, d 3894, c 2691, b 2704, a 2696
Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer: e 4708, d 4702, c 3264, b 3263, a 3256
Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer: e 65391, d 65121, c 42853, b 43011, a 42834
Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer: e 127177, d 126949, c 88742, b 88710, a 88765
Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer: e 78221, d 78324, c 51903, b 52002, a 52071
Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer: e 153814, d 153142, c 106444, b 106674, a 107002
Timed Wasmer Compilation This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
Timed Wasmer Compilation 2.3, Time To Compile (Seconds, fewer is better): e 81.73, d 81.03, c 62.08, b 62.11, a 61.38. [(CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs]
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
Liquid-DSP 1.6, samples/s, more is better [(CC) gcc options: -O3 -pthread -lm -lc -lliquid]:
Threads: 1 - Buffer Length: 256 - Filter Length: 32: e 50829000, d 50850000, b 54017000, a 54005000
Threads: 1 - Buffer Length: 256 - Filter Length: 57: e 81272000, d 81257000, b 86235000, a 86164000
Threads: 4 - Buffer Length: 256 - Filter Length: 32: e 177750000, d 177370000, b 200200000, a 199620000
Threads: 4 - Buffer Length: 256 - Filter Length: 57: e 227420000, d 232090000, b 265830000, a 260980000
Threads: 8 - Buffer Length: 256 - Filter Length: 32: e 277180000, d 276670000, b 331500000, a 331140000
Threads: 8 - Buffer Length: 256 - Filter Length: 57: e 299160000, d 299170000, b 419540000, a 409640000
Threads: 1 - Buffer Length: 256 - Filter Length: 512: e 20269000, d 20307000, b 21593000, a 21614000
Threads: 16 - Buffer Length: 256 - Filter Length: 32: e 361220000, d 361340000, b 513440000, a 502810000
Threads: 16 - Buffer Length: 256 - Filter Length: 57: e 308330000, d 308360000, b 433120000, a 427490000
Threads: 4 - Buffer Length: 256 - Filter Length: 512: e 71098000, d 71110000, b 79964000, a 79299000
Threads: 8 - Buffer Length: 256 - Filter Length: 512: e 99730000, d 100700000, b 132110000, a 131760000
Threads: 16 - Buffer Length: 256 - Filter Length: 512: e 115710000, d 115700000, b 165290000, a 162520000
Threads: 2 - Buffer Length: 256 - Filter Length: 32: e 98751000, d 98855000, b 105040000, a 104940000; reported SE +/- 90000.00, N = 3
Threads: 2 - Buffer Length: 256 - Filter Length: 57: e 136420000, d 136380000, b 144810000, a 147466667; reported SE +/- 57831.17, N = 3
Threads: 2 - Buffer Length: 256 - Filter Length: 512: e 41172000, d 41039000, b 44040000, a 44031333; reported SE +/- 4630.81, N = 3
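Liquid-DSP is the one test here run at several thread counts, which makes it a convenient place to look at scaling. A quick parallel-efficiency calculation for configuration a (an 8-core / 16-thread part) using the filter length 32 figures above:

```python
# Parallel scaling for configuration a, buffer length 256 / filter length 32.
samples_1t = 54_005_000       # 1 thread
samples_16t = 502_810_000     # 16 threads
threads = 16
speedup = samples_16t / samples_1t
efficiency = speedup / threads
print(f"speedup ~{speedup:.1f}x, efficiency ~{efficiency:.0%} at {threads} threads")
# -> speedup ~9.3x, efficiency ~58% at 16 threads
```

An efficiency well below 100% at 16 threads is expected on an 8-core part once SMT siblings start sharing cores.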
srsRAN Project srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
srsRAN Project 23.5, Mbps, more is better [(CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest]:
Test: Downlink Processor Benchmark: e 874.0, d 877.9, c 936.4, b 934.8, a 932.7
Test: PUSCH Processor Benchmark, Throughput Total: e 893.4, d 859.1, c 1135.6, b 1083.3, a 1159.4
Test: PUSCH Processor Benchmark, Throughput Thread: e 296.0, d 299.4, c 317.1, b 317.9, a 312.5
nginx This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
nginx 1.23.2, Requests Per Second, more is better [(CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2]:
Connections: 100: e 47003.02, d 47001.04, b 68779.84, a 68098.55
Connections: 200: e 43482.87, d 43258.70, b 61686.55, a 61494.71
Connections: 500: e 40726.68, d 40695.40, b 53009.35, a 52883.38
Connections: 1000: e 38521.51, d 38750.55, b 48357.65, a 47841.37
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
OpenSSL 3.1 [(CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl]:
Algorithm: SHA256 (byte/s, more is better): e 5371686670, d 5363079590, c 7670067840, b 7571848920, a 7666522930
Algorithm: SHA512 (byte/s, more is better): e 2073192130, d 2090214690, c 2958467610, b 2971930440, a 2982380940
Algorithm: RSA4096 (sign/s, more is better): e 3353.4, d 3304.7, c 4781.2, b 4803.9, a 4750.7
Algorithm: RSA4096 (verify/s, more is better): e 109117.9, d 109044.0, c 155507.5, b 155963.2, a 155418.1
Algorithm: ChaCha20 (byte/s, more is better): e 37631001400, d 37684059340, b 53408719120, a 53275943460
Algorithm: AES-128-GCM (byte/s, more is better): e 74137558490, d 73615414070, b 103818760740, a 104474526240
Algorithm: AES-256-GCM (byte/s, more is better): e 65455871040, d 65168081990, b 92618860950, a 92098198730
Algorithm: ChaCha20-Poly1305 (byte/s, more is better): e 25784432780, d 25772756170, b 36372738320, a 36073985910
OpenBenchmarking.org Seconds, Fewer Is Better Apache CouchDB 3.3.2 Bulk Size: 300 - Inserts: 1000 - Rounds: 30 e d b a 40 80 120 160 200 184.80 183.66 149.70 148.00 1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD
OpenBenchmarking.org Seconds, Fewer Is Better Apache CouchDB 3.3.2 Bulk Size: 500 - Inserts: 1000 - Rounds: 30 e d b a 60 120 180 240 300 262.24 267.83 212.42 208.65 1. (CXX) g++ options: -std=c++17 -lmozjs-91 -lm -lei -fPIC -MMD
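The Apache CouchDB results above report the seconds to complete rounds of bulk inserts, where "Bulk Size" and "Inserts" appear to correspond to documents per request and requests per round. CouchDB exposes bulk writes through its /{db}/_bulk_docs endpoint; the sketch below is a hedged illustration against an assumed local server, with the credentials, database name, and document payload all placeholders rather than values from the test profile.

    # Minimal bulk-insert sketch using CouchDB's /{db}/_bulk_docs API.
    import base64, json, time
    import urllib.error, urllib.request

    BASE = "http://localhost:5984"
    AUTH = "Basic " + base64.b64encode(b"admin:password").decode()   # hypothetical credentials
    DB, BULK_SIZE, INSERTS = "bench", 300, 1000

    def call(path, method, body=None):
        req = urllib.request.Request(BASE + path, data=body, method=method,
                                     headers={"Content-Type": "application/json",
                                              "Authorization": AUTH})
        return urllib.request.urlopen(req).read()

    try:
        call(f"/{DB}", "PUT")                        # create the database
    except urllib.error.HTTPError:
        pass                                         # e.g. 412 if it already exists

    start = time.perf_counter()
    for i in range(INSERTS):
        docs = [{"batch": i, "n": j, "payload": "x" * 100} for j in range(BULK_SIZE)]
        call(f"/{DB}/_bulk_docs", "POST", json.dumps({"docs": docs}).encode())
    print(f"one round of {INSERTS} bulk requests took {time.perf_counter() - start:.2f} s")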
Apache IoTDB
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 e d b a 200K 400K 600K 800K 1000K SE +/- 5696.93, N = 12 SE +/- 6733.33, N = 8 671390.07 693352.96 783425.31 780967.47
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 e d b a 4 8 12 16 20 SE +/- 0.18, N = 12 SE +/- 0.18, N = 8 15.45 14.42 11.82 11.83 MAX: 841.24 MAX: 871.6 MAX: 788.57 MAX: 845.33
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 e d b a 300K 600K 900K 1200K 1500K SE +/- 14317.27, N = 6 SE +/- 17375.90, N = 3 1349022.28 1342250.61 1446859.96 1447958.54
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 e d b a 6 12 18 24 30 SE +/- 0.32, N = 6 SE +/- 0.33, N = 3 22.80 23.25 20.85 20.68 MAX: 895.66 MAX: 840.31 MAX: 789.31 MAX: 843.04
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 e d b a 300K 600K 900K 1200K 1500K SE +/- 1722.69, N = 3 SE +/- 2691.26, N = 3 1080456.40 1082993.14 1187104.23 1175572.13
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 e d b a 3 6 9 12 15 SE +/- 0.05, N = 3 SE +/- 0.01, N = 3 11.39 11.48 9.97 10.09 MAX: 661.12 MAX: 654.25 MAX: 637.17 MAX: 670.1
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 e d b a 400K 800K 1200K 1600K 2000K SE +/- 15232.42, N = 3 SE +/- 9885.78, N = 3 1779490.51 1768171.84 1868505.59 1869315.57
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 e d b a 5 10 15 20 25 SE +/- 0.19, N = 3 SE +/- 0.16, N = 3 21.05 21.11 19.80 19.82 MAX: 687.47 MAX: 729.87 MAX: 675.64 MAX: 689.69
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 e d b a 400K 800K 1200K 1600K 2000K SE +/- 14334.70, N = 3 SE +/- 22284.36, N = 3 1699283.73 1609911.22 1767214.52 1810868.13
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 e d b a 3 6 9 12 15 SE +/- 0.10, N = 3 SE +/- 0.14, N = 3 8.86 9.44 8.39 8.08 MAX: 875.98 MAX: 857.31 MAX: 837.13 MAX: 829.62
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 e d b a 500K 1000K 1500K 2000K 2500K SE +/- 22735.34, N = 7 SE +/- 30745.57, N = 4 2166865.16 2139318.30 2554316.08 2558694.08
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 e d b a 5 10 15 20 25 SE +/- 0.15, N = 7 SE +/- 0.22, N = 4 19.55 20.38 16.29 16.36 MAX: 928.08 MAX: 917.41 MAX: 874.18 MAX: 861.67
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 e d b a 7M 14M 21M 28M 35M SE +/- 309841.40, N = 3 SE +/- 306044.35, N = 6 27941009.12 27658814.36 31446324.62 31169657.42
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 e d b a 13 26 39 52 65 SE +/- 0.45, N = 3 SE +/- 0.61, N = 6 55.42 56.44 48.76 49.28 MAX: 1292.79 MAX: 1369.78 MAX: 1002.85 MAX: 1102.28
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 e d b a 8M 16M 24M 32M 40M SE +/- 40579.48, N = 3 SE +/- 206780.32, N = 3 30519342.37 31204403.42 35881779.03 35731058.31
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 e d b a 30 60 90 120 150 SE +/- 0.14, N = 3 SE +/- 0.60, N = 3 144.80 139.93 122.59 122.86 MAX: 1580.7 MAX: 1287.5 MAX: 1193.3 MAX: 1107.59
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 e d b a 7M 14M 21M 28M 35M SE +/- 236903.12, N = 3 SE +/- 308105.07, N = 3 29645835.71 29837035.03 33835269.57 33440678.10
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 e d b a 13 26 39 52 65 SE +/- 0.46, N = 3 SE +/- 0.51, N = 3 58.71 58.92 51.04 51.82 MAX: 1143 MAX: 1277.7 MAX: 1204.88 MAX: 1143.96
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 e d b a 8M 16M 24M 32M 40M SE +/- 103626.85, N = 3 SE +/- 244332.82, N = 3 30665605.41 30570884.25 36624182.35 36535518.62
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 e d b a 30 60 90 120 150 SE +/- 0.82, N = 3 SE +/- 0.77, N = 3 152.54 152.42 127.24 128.26 MAX: 1269.54 MAX: 1686.91 MAX: 1175.96 MAX: 1123.23
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 e d b a 8M 16M 24M 32M 40M SE +/- 239793.06, N = 3 SE +/- 243373.38, N = 3 30686841.47 30418152.44 38375799.87 37703435.51
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 e d b a 14 28 42 56 70 SE +/- 0.32, N = 3 SE +/- 0.09, N = 3 60.80 60.56 48.04 48.28 MAX: 1125.57 MAX: 1126.74 MAX: 1030.49 MAX: 1114.29
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 e d b a 7M 14M 21M 28M 35M SE +/- 243572.43, N = 3 SE +/- 306835.51, N = 3 26463397.35 26725280.14 32929010.53 32700015.96
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.1.2 Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 e d b a 40 80 120 160 200 SE +/- 1.53, N = 3 SE +/- 1.26, N = 3 181.42 179.03 144.40 145.58 MAX: 1854.32 MAX: 1288.79 MAX: 1247.59 MAX: 1258.59
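In the IoTDB results the point/sec figure is effectively devices x sensors x batch size divided by elapsed time, with average latency tracked per write request. The sketch below shows only that bookkeeping; write_batch() is a hypothetical placeholder, not the Apache IoTDB client API, and the parameter values simply mirror one of the configurations above.

    # Conceptual throughput/latency bookkeeping for a batched time-series ingest workload.
    import random, time

    DEVICES, SENSORS, BATCH = 100, 200, 100

    def write_batch(device, rows):
        time.sleep(0.0001)          # placeholder for the real network/storage round trip

    latencies, points = [], 0
    start = time.perf_counter()
    for device in range(DEVICES):
        rows = [(ts, [random.random()] * SENSORS) for ts in range(BATCH)]
        t0 = time.perf_counter()
        write_batch(device, rows)
        latencies.append((time.perf_counter() - t0) * 1000.0)   # milliseconds
        points += BATCH * SENSORS
    elapsed = time.perf_counter() - start

    print(f"{points / elapsed:,.0f} point/sec, "
          f"avg latency {sum(latencies) / len(latencies):.2f} ms")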
Apache Spark This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
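The SHA-512 case is essentially a hashing pass over a partitioned DataFrame. The PySpark sketch below has the same shape but is a stand-in for the pyspark-benchmark workload, not its actual code; the row count and partition count mirror the test options, the data is synthetic, and the "noop" data source is used only to force the computation.

    # Time a SHA-512 hashing pass over a repartitioned DataFrame.
    import time
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, sha2

    spark = SparkSession.builder.appName("sha512-sketch").getOrCreate()

    df = (spark.range(1_000_000)
               .withColumn("value", col("id").cast("string"))
               .repartition(100))

    start = time.perf_counter()
    (df.withColumn("digest", sha2(col("value"), 512))
       .write.format("noop").mode("overwrite").save())      # forces the hash to be computed
    print(f"SHA-512 pass: {time.perf_counter() - start:.2f} s")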
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time e d b a 1.0058 2.0116 3.0174 4.0232 5.029 4.36 4.47 3.41 3.40
EnCodec EnCodec is a Facebook/Meta-developed AI method for compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using this novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking, and the performance measurement is the time taken to encode the audio from WAV into the EnCodec format. Learn more via the OpenBenchmarking.org test page.
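The reported seconds are wall-clock time for one encode at the selected bandwidth. Below is a hedged sketch following the encodec Python package's documented usage; the input file name is a placeholder (not the test profile's JFK recording), and the exact API may differ between package versions.

    # Time a 24 kbps EnCodec encode of a WAV file.
    import time
    import torch, torchaudio
    from encodec import EncodecModel
    from encodec.utils import convert_audio

    model = EncodecModel.encodec_model_24khz()
    model.set_target_bandwidth(24.0)

    wav, sr = torchaudio.load("speech.wav")                  # placeholder input file
    wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

    start = time.perf_counter()
    with torch.no_grad():
        frames = model.encode(wav)
    print(f"encode took {time.perf_counter() - start:.2f} s")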
OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 24 kbps e d b a 8 16 24 32 40 36.02 35.68 31.85 31.71
Apache Spark
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time e d b a 0.513 1.026 1.539 2.052 2.565 2.28 2.14 1.66 1.76
ClickHouse ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ and https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100-million-row web analytics dataset. The reported value aggregates the query processing times as the geometric mean of all the separate queries performed. Learn more via the OpenBenchmarking.org test page.
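Since the headline number is a geometric mean expressed as queries per minute, it can be reproduced from the per-query wall times. The short sketch below shows only that arithmetic; the sample timings are made up for illustration, and 60 divided by the geometric mean of the per-query seconds is equivalent to the geometric mean of each query's per-minute rate.

    # Aggregate per-query runtimes (seconds) into a "Queries Per Minute, Geo Mean" figure.
    import math

    query_seconds = [0.42, 1.70, 0.08, 3.90, 0.55]          # illustrative, one entry per query

    geo_mean_seconds = math.exp(sum(math.log(t) for t in query_seconds) / len(query_seconds))
    queries_per_minute = 60.0 / geo_mean_seconds
    print(f"{queries_per_minute:.2f} queries per minute (geometric mean)")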
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.12.3.5 100M Rows Hits Dataset, First Run / Cold Cache e d b a 20 40 60 80 100 99.39 95.87 108.92 108.20 MIN: 3.98 / MAX: 7500 MIN: 3.96 / MAX: 7500 MIN: 5.72 / MAX: 8571.43 MIN: 5.7 / MAX: 8571.43
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.12.3.5 100M Rows Hits Dataset, Second Run e d b a 30 60 90 120 150 102.31 103.13 112.91 113.63 MIN: 4.01 / MAX: 8571.43 MIN: 4 / MAX: 7500 MIN: 5.81 / MAX: 8571.43 MIN: 5.82 / MAX: 8571.43
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.12.3.5 100M Rows Hits Dataset, Third Run e d b a 30 60 90 120 150 103.64 103.15 111.97 113.53 MIN: 4.01 / MAX: 7500 MIN: 4 / MAX: 8571.43 MIN: 5.77 / MAX: 7500 MIN: 5.79 / MAX: 8571.43
Dragonflydb Dragonfly is an open-source database server positioned as a "modern Redis replacement" that aims to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used: a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
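Because Dragonfly speaks the Redis protocol, the set:get ratios above can be mimicked with any Redis client. The sketch below uses redis-py to issue one SET per five GETs on a single connection and report ops/sec; it is a conceptual stand-in for memtier_benchmark (the tool actually used), and the host, port, key space, and payload size are assumptions.

    # Simple 1:5 set:get traffic generator against a Redis-protocol server such as Dragonfly.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)
    r.set("key:0", "x" * 100)

    OPS = 100_000
    start = time.perf_counter()
    for i in range(OPS):
        if i % 6 == 0:                      # one set for every five gets
            r.set(f"key:{i % 1000}", "x" * 100)
        else:
            r.get(f"key:{i % 1000}")
    elapsed = time.perf_counter() - start
    print(f"{OPS / elapsed:,.0f} ops/sec (single connection)")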
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients: 50 - Set To Get Ratio: 1:1 e d b a 700K 1400K 2100K 2800K 3500K 2436674.73 2416447.01 3386413.69 3442545.53 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients: 50 - Set To Get Ratio: 1:5 e d b a 700K 1400K 2100K 2800K 3500K 2452945.72 2447513.39 3407584.22 3390692.31 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients: 50 - Set To Get Ratio: 5:1 e d b a 800K 1600K 2400K 3200K 4000K 2513794.98 2500811.71 3551639.05 3509365.98 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 10 - Set To Get Ratio: 1:5 e d b a 600K 1200K 1800K 2400K 3000K 2031408.76 2037833.52 3012041.19 3000846.79 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 20 - Set To Get Ratio: 1:5 e d b a 800K 1600K 2400K 3200K 4000K 2419203.43 2548641.80 3580150.29 3569222.70 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 50 - Set To Get Ratio: 1:5 e d b a 700K 1400K 2100K 2800K 3500K 2449997.94 2455266.37 3403187.91 3406857.67 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 60 - Set To Get Ratio: 1:5 e d b a 700K 1400K 2100K 2800K 3500K 2411587.54 2412723.51 3331423.51 3339072.06 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 10 - Set To Get Ratio: 1:10 e d b a 600K 1200K 1800K 2400K 3000K 2055869.52 2109010.30 3009409.13 2955396.42 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 20 - Set To Get Ratio: 1:10 e d b a 800K 1600K 2400K 3200K 4000K 2544125.94 2565892.85 3582493.35 3296080.94 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 50 - Set To Get Ratio: 1:10 e d b a 700K 1400K 2100K 2800K 3500K 2466340.57 2468187.63 3418062.57 3310390.02 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 60 - Set To Get Ratio: 1:10 e d b a 700K 1400K 2100K 2800K 3500K 2423927.40 2424431.95 3335026.97 3331550.72 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 10 - Set To Get Ratio: 1:100 e d b a 600K 1200K 1800K 2400K 3000K 2117511.76 2084872.04 3004192.87 3004715.33 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 20 - Set To Get Ratio: 1:100 e d b a 800K 1600K 2400K 3200K 4000K 2595364.56 2571654.79 3457130.11 3654927.90 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 50 - Set To Get Ratio: 1:100 e d b a 700K 1400K 2100K 2800K 3500K 2500272.78 2503115.23 3454075.39 3453155.48 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 1.6.2 Clients Per Thread: 60 - Set To Get Ratio: 1:100 e d b a 700K 1400K 2100K 2800K 3500K 2462268.97 2458737.20 3389154.48 3382163.21 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Memcached Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Ops/sec, More Is Better Memcached 1.6.19 Set To Get Ratio: 1:5 e d b a 500K 1000K 1500K 2000K 2500K 1640940.11 1624284.67 2359355.53 2339109.34 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Memcached 1.6.19 Set To Get Ratio: 1:10 e d b a 500K 1000K 1500K 2000K 2500K 1574207.88 1565861.27 2294635.17 2257435.18 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Memcached 1.6.19 Set To Get Ratio: 1:100 e d b a 500K 1000K 1500K 2000K 2500K 1560732.04 1555313.89 2212508.18 2205533.41 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 e d b a 600K 1200K 1800K 2400K 3000K SE +/- 4575.83, N = 3 2188722.61 2145215.42 2475916.43 2630980.74 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 e d b a 600K 1200K 1800K 2400K 3000K SE +/- 27602.28, N = 3 2325921.50 2265510.42 2625534.74 2627627.80 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 e d b a 500K 1000K 1500K 2000K 2500K SE +/- 17758.85, N = 15 2131409.25 2058810.43 2391013.39 2447990.86 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 e d b a 500K 1000K 1500K 2000K 2500K SE +/- 8021.75, N = 3 2209577.35 2218941.46 2565838.51 2544206.44 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 e d b a 500K 1000K 1500K 2000K 2500K SE +/- 13714.16, N = 3 2091970.43 2092415.77 2464830.47 2493512.81 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 500 e d b a 600K 1200K 1800K 2400K 3000K 2637178.25 2633566.50 2857194.00 2854095.50 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Op/s, More Is Better RocksDB 8.0 Test: Update Random e d b a 110K 220K 330K 440K 550K 424647 422807 493105 487163 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better RocksDB 8.0 Test: Read While Writing e d b a 400K 800K 1200K 1600K 2000K 1181409 1195846 1714690 1716478 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better RocksDB 8.0 Test: Read Random Write Random e d b a 300K 600K 900K 1200K 1500K 1133720 1133541 1541034 1527745 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
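The "Read While Writing" case measures how fast readers can progress while a writer keeps mutating the store. The sketch below is only a conceptual model of that workload shape: a Python dict stands in for RocksDB (which the test profile drives through RocksDB's own db_bench tool), and the thread counts and run time are arbitrary.

    # Conceptual read-while-writing workload: one writer thread updates keys while
    # reader threads count lookups over a fixed window.
    import random, threading, time

    store = {i: b"v" for i in range(100_000)}
    stop = threading.Event()

    def writer():
        while not stop.is_set():
            store[random.randrange(100_000)] = b"new"

    def reader(counts):
        n = 0
        while not stop.is_set():
            store.get(random.randrange(100_000))
            n += 1
        counts.append(n)

    counts = []
    threads = [threading.Thread(target=writer)] + \
              [threading.Thread(target=reader, args=(counts,)) for _ in range(4)]
    for t in threads:
        t.start()
    time.sleep(5.0)
    stop.set()
    for t in threads:
        t.join()
    print(f"{sum(counts) / 5.0:,.0f} reads/sec while writing")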
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency e d b a 0.3071 0.6142 0.9213 1.2284 1.5355 1.260 1.365 1.154 1.128 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Only e d b a 110K 220K 330K 440K 550K 322751 312923 431809 516884 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency e d b a 0.6111 1.2222 1.8333 2.4444 3.0555 2.479 2.557 1.853 1.548 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Write e d b a 5K 10K 15K 20K 25K 17875 17896 23725 23466 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency e d b a 7 14 21 28 35 27.97 27.94 21.08 21.31 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Write e d b a 4K 8K 12K 16K 20K 13796 13903 17895 17926 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency e d b a 13 26 39 52 65 57.99 57.54 44.71 44.63 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency e d b a 120K 240K 360K 480K 600K 389396 392256 546888 580068 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency e d b a 90K 180K 270K 360K 450K 371978 294566 413607 421150 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency e d b a 5K 10K 15K 20K 25K 17884 17766 23377 23394 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency e d b a 4K 8K 12K 16K 20K 13786 13811 18040 18016 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
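The pgbench read-only transaction is essentially a single primary-key lookup against pgbench_accounts, and TPS and average latency are two views of the same measurement. The psycopg2 sketch below illustrates that on one connection; the connection string is a placeholder, and the table comes from initializing pgbench at scale factor 100 (100,000 account rows per scale unit).

    # Minimal read-only pgbench-style loop: time single-row lookups on pgbench_accounts.
    import random, time
    import psycopg2

    conn = psycopg2.connect("dbname=pgbench user=postgres host=localhost")  # placeholder DSN
    cur = conn.cursor()

    TXNS, SCALE = 10_000, 100
    start = time.perf_counter()
    for _ in range(TXNS):
        aid = random.randint(1, SCALE * 100_000)
        cur.execute("SELECT abalance FROM pgbench_accounts WHERE aid = %s", (aid,))
        cur.fetchone()
    elapsed = time.perf_counter() - start

    print(f"{TXNS / elapsed:,.0f} TPS, {1000.0 * elapsed / TXNS:.3f} ms average latency")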
OpenBenchmarking.org Queries Per Second, More Is Better MariaDB 11.0.1 Clients: 128 e d b a 200 400 600 800 1000 857 842 1111 1117 1. (CXX) g++ options: -pie -fPIC -fstack-protector -O3 -lnuma -lpcre2-8 -lcrypt -lz -lm -lssl -lcrypto -lpthread -ldl
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
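The GB/s figure is bytes of JSON parsed per second. The sketch below shows how that throughput is derived, using Python's standard json module as a stand-in for the C++ simdjson library (so absolute numbers will be far lower than the graph); the input path is a placeholder for the Kostya test document bundled with simdjson.

    # Measure JSON parsing throughput in GB/s.
    import json, time

    with open("kostya.json", "rb") as f:       # placeholder path to the test document
        data = f.read()

    iterations = 200
    start = time.perf_counter()
    for _ in range(iterations):
        json.loads(data)
    elapsed = time.perf_counter() - start

    print(f"{iterations * len(data) / elapsed / 1e9:.3f} GB/s")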
OpenBenchmarking.org GB/s, More Is Better simdjson 2.0 Throughput Test: Kostya e d c b a 0.9045 1.809 2.7135 3.618 4.5225 3.83 3.83 4.02 4.01 4.00 1. (CXX) g++ options: -O3
PyBench This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench for 20 rounds each time. Learn more via the OpenBenchmarking.org test page.
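PyBench times many micro-tests over repeated rounds and sums their averages, which is why the headline number is a total of average milliseconds. The toy sketch below shows only that rounds-and-average bookkeeping for a single arbitrary micro-test, not PyBench's actual test suite.

    # Toy version of PyBench-style averaging: time one micro-benchmark for 20 rounds.
    import time

    def nested_for_loops():                     # arbitrary stand-in micro-test
        s = 0
        for i in range(200):
            for j in range(200):
                s += i * j
        return s

    ROUNDS = 20
    times = []
    for _ in range(ROUNDS):
        t0 = time.perf_counter()
        nested_for_loops()
        times.append((time.perf_counter() - t0) * 1000.0)

    print(f"average: {sum(times) / ROUNDS:.2f} ms over {ROUNDS} rounds")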
OpenBenchmarking.org Milliseconds, Fewer Is Better PyBench 2018-02-16 Total For Average Test Times e d b a 150 300 450 600 750 711 705 662 664
a Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x57 - Thermald 2.4.9
Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 16 August 2023 20:23 by user phoronix.
b Kernel, Compiler, Processor, Java, Python, and Security notes: identical to configuration a above.
Testing initiated at 17 August 2023 10:29 by user phoronix.
c Processor: Intel Xeon E-2388G @ 3.20GHz (8 Cores / 16 Threads), Motherboard: ASRockRack E3C252D4U (1.22 BIOS), Chipset: Intel Tiger Lake-H, Memory: 64GB, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS, Graphics: Intel RocketLake-S [UHD ], Monitor: VA2431, Network: 2 x Intel I210
OS: Ubuntu 22.04, Kernel: 6.2.0-26-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel, Compiler, Processor, Java, Python, and Security notes: identical to configuration a above.
Testing initiated at 18 August 2023 07:42 by user phoronix.
d Kernel, Compiler, Processor, Java, Python, and Security notes: identical to configuration a above.
Testing initiated at 20 August 2023 07:43 by user phoronix.
e Processor: Intel Xeon E-2336 @ 2.90GHz (6 Cores / 12 Threads), Motherboard: ASRockRack E3C252D4U (1.22 BIOS), Chipset: Intel Tiger Lake-H, Memory: 64GB, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS, Graphics: ASPEED, Network: 2 x Intel I210
OS: Ubuntu 22.04, Kernel: 6.2.0-26-generic (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server 1.21.1.4, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1024x768
Kernel, Compiler, Processor, Java, Python, and Security notes: identical to configuration a above.
Testing initiated at 21 August 2023 05:01 by user phoronix.