Intel Core i5-12600K testing with a ASUS PRIME Z690-P WIFI D4 (0605 BIOS) and ASUS Intel ADL-S GT1 15GB on Ubuntu 22.04 via the Phoronix Test Suite.
A Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vDisk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x1f - Thermald 2.4.9Java Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)Python Notes: Python 3.10.4Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
B C Processor: Intel Core i5-12600K @ 6.30GHz (10 Cores / 16 Threads), Motherboard: ASUS PRIME Z690-P WIFI D4 (0605 BIOS), Chipset: Intel Device 7aa7, Memory: 16GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0, Graphics: ASUS Intel ADL-S GT1 15GB (1450MHz), Audio: Realtek ALC897, Monitor: ASUS MG28U, Network: Realtek RTL8125 2.5GbE + Intel Device 7af0
OS: Ubuntu 22.04, Kernel: 5.19.0-051900rc6daily20220716-generic (x86_64), Desktop: GNOME Shell 42.1, Display Server: X Server 1.21.1.3 + Wayland, OpenGL: 4.6 Mesa 22.0.1, OpenCL: OpenCL 3.0, Vulkan: 1.2.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vDisk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x1f - Thermald 2.4.9Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)Python Notes: Python 3.10.4Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
12600k sept OpenBenchmarking.org Phoronix Test Suite Intel Core i5-12600K @ 6.30GHz (10 Cores / 16 Threads) ASUS PRIME Z690-P WIFI D4 (0605 BIOS) Intel Device 7aa7 16GB 1000GB Western Digital WDS100T1X0E-00AFY0 ASUS Intel ADL-S GT1 15GB (1450MHz) Realtek ALC897 ASUS MG28U Realtek RTL8125 2.5GbE + Intel Device 7af0 Ubuntu 22.04 5.19.0-051900rc6daily20220716-generic (x86_64) GNOME Shell 42.1 X Server 1.21.1.3 + Wayland 4.6 Mesa 22.0.1 OpenCL 3.0 1.2.204 GCC 11.2.0 ext4 3840x2160 Processor Motherboard Chipset Memory Disk Graphics Audio Monitor Network OS Kernel Desktop Display Server OpenGL OpenCL Vulkan Compiler File-System Screen Resolution 12600k Sept Benchmarks System Logs - Transparent Huge Pages: madvise - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v - NONE / errors=remount-ro,relatime,rw / Block Size: 4096 - Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x1f - Thermald 2.4.9 - A: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1) - B: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1) - C: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04) - Python 3.10.4 - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
A B C Result Overview Phoronix Test Suite 100% 103% 106% 108% 111% Redis OpenVINO memtier_benchmark Mobile Neural Network Natron C-Blosc Aircrack-ng GraphicsMagick Node.js V8 Web Tooling Benchmark LAMMPS Molecular Dynamics Simulator 7-Zip Compression SVT-AV1 Facebook RocksDB Apache Spark NCNN Dragonflydb Timed Erlang/OTP Compilation srsRAN Timed Node.js Compilation Timed Wasmer Compilation BRL-CAD Unpacking The Linux Kernel Unvanquished Timed PHP Compilation Timed CPython Compilation AI Benchmark Alpha Primesieve etcd ASTC Encoder Blender
12600k sept srsran: 4G PHY_DL_Test 100 PRB MIMO 64-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 64-QAM srsran: 4G PHY_DL_Test 100 PRB MIMO 256-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 256-QAM srsran: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM openvino: Face Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP32 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU natron: Spaceship unvanquished: 1920 x 1080 - High unvanquished: 1920 x 1200 - High unvanquished: 2560 x 1440 - High unvanquished: 3840 x 2160 - High unvanquished: 1920 x 1080 - Ultra unvanquished: 1920 x 1200 - Ultra unvanquished: 2560 x 1440 - Ultra unvanquished: 3840 x 2160 - Ultra unvanquished: 1920 x 1080 - Medium unvanquished: 1920 x 1200 - Medium unvanquished: 2560 x 1440 - Medium unvanquished: 3840 x 2160 - Medium svt-av1: Preset 4 - Bosphorus 4K svt-av1: Preset 8 - Bosphorus 4K svt-av1: Preset 10 - Bosphorus 4K svt-av1: Preset 12 - Bosphorus 4K svt-av1: Preset 4 - Bosphorus 1080p svt-av1: Preset 8 - Bosphorus 1080p svt-av1: Preset 10 - Bosphorus 1080p svt-av1: Preset 12 - Bosphorus 1080p graphics-magick: Swirl graphics-magick: Rotate graphics-magick: Sharpen graphics-magick: Enhanced graphics-magick: Resizing graphics-magick: Noise-Gaussian graphics-magick: HWB Color Space aircrack-ng: blosc: blosclz shuffle blosc: blosclz bitshuffle compress-7zip: Compression Rating compress-7zip: Decompression Rating astcenc: Fast astcenc: Medium astcenc: Thorough astcenc: Exhaustive lammps: 20k Atoms lammps: Rhodopsin Protein rocksdb: Rand Fill rocksdb: Rand Read rocksdb: Update Rand rocksdb: Seq Fill rocksdb: Rand Fill Sync rocksdb: Read While Writing rocksdb: Read Rand Write Rand dragonflydb: 50 - 1:1 dragonflydb: 50 - 1:5 dragonflydb: 50 - 5:1 dragonflydb: 200 - 1:1 dragonflydb: 200 - 1:5 dragonflydb: 200 - 5:1 memtier-benchmark: Redis - 50 - 1:1 memtier-benchmark: Redis - 50 - 1:5 memtier-benchmark: Redis - 50 - 5:1 memtier-benchmark: Redis - 100 - 1:1 memtier-benchmark: Redis - 100 - 1:5 memtier-benchmark: Redis - 100 - 5:1 memtier-benchmark: Redis - 50 - 1:10 memtier-benchmark: Redis - 500 - 1:1 memtier-benchmark: Redis - 500 - 1:5 memtier-benchmark: Redis - 500 - 5:1 memtier-benchmark: Redis - 100 - 1:10 memtier-benchmark: Redis - 500 - 1:10 redis: GET - 50 redis: SET - 50 redis: GET - 500 redis: LPOP - 50 redis: SADD - 50 redis: SET - 500 redis: GET - 1000 redis: LPOP - 500 redis: LPUSH - 50 redis: SADD - 500 redis: SET - 1000 redis: LPOP - 1000 redis: LPUSH - 500 redis: SADD - 1000 redis: LPUSH - 1000 etcd: PUT - 50 - 100 etcd: PUT - 100 - 100 etcd: PUT - 50 - 1000 etcd: PUT - 500 - 100 etcd: PUT - 100 - 1000 etcd: PUT - 500 - 1000 etcd: RANGE - 50 - 100 etcd: RANGE - 100 - 100 etcd: RANGE - 50 - 1000 etcd: RANGE - 500 - 100 etcd: RANGE - 100 - 1000 etcd: RANGE - 500 - 1000 node-web-tooling: srsran: OFDM_Test ai-benchmark: Device Inference Score ai-benchmark: Device Training Score ai-benchmark: Device AI Score srsran: 4G PHY_DL_Test 100 PRB MIMO 64-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 64-QAM srsran: 4G PHY_DL_Test 100 PRB MIMO 256-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 256-QAM srsran: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM brl-cad: VGR Performance Metric etcd: PUT - 50 - 100 - Average Latency etcd: PUT - 100 - 100 - Average Latency etcd: PUT - 50 - 1000 - Average Latency etcd: PUT - 500 - 100 - Average Latency etcd: PUT - 100 - 1000 - Average Latency etcd: PUT - 500 - 1000 - Average Latency etcd: RANGE - 50 - 100 - Average Latency etcd: RANGE - 100 - 100 - Average Latency etcd: RANGE - 50 - 1000 - Average Latency etcd: RANGE - 500 - 100 - Average Latency etcd: RANGE - 100 - 1000 - Average Latency etcd: RANGE - 500 - 1000 - Average Latency mnn: nasnet mnn: mobilenetV3 mnn: squeezenetv1.1 mnn: resnet-v2-50 mnn: SqueezeNetV1.0 mnn: MobileNetV2_224 mnn: mobilenet-v1-1.0 mnn: inception-v3 ncnn: CPU - mobilenet ncnn: CPU-v2-v2 - mobilenet-v2 ncnn: CPU-v3-v3 - mobilenet-v3 ncnn: CPU - shufflenet-v2 ncnn: CPU - mnasnet ncnn: CPU - efficientnet-b0 ncnn: CPU - blazeface ncnn: CPU - googlenet ncnn: CPU - vgg16 ncnn: CPU - resnet18 ncnn: CPU - alexnet ncnn: CPU - resnet50 ncnn: CPU - yolov4-tiny ncnn: CPU - squeezenet_ssd ncnn: CPU - regnety_400m ncnn: CPU - vision_transformer ncnn: CPU - FastestDet ncnn: Vulkan GPU - mobilenet ncnn: Vulkan GPU-v2-v2 - mobilenet-v2 ncnn: Vulkan GPU-v3-v3 - mobilenet-v3 ncnn: Vulkan GPU - shufflenet-v2 ncnn: Vulkan GPU - mnasnet ncnn: Vulkan GPU - efficientnet-b0 ncnn: Vulkan GPU - blazeface ncnn: Vulkan GPU - googlenet ncnn: Vulkan GPU - vgg16 ncnn: Vulkan GPU - resnet18 ncnn: Vulkan GPU - alexnet ncnn: Vulkan GPU - resnet50 ncnn: Vulkan GPU - yolov4-tiny ncnn: Vulkan GPU - squeezenet_ssd ncnn: Vulkan GPU - regnety_400m ncnn: Vulkan GPU - vision_transformer ncnn: Vulkan GPU - FastestDet openvino: Face Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP32 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU unpack-linux: linux-5.19.tar.xz build-nodejs: Time To Compile build-php: Time To Compile build-python: Default build-python: Released Build, PGO + LTO Optimized primesieve: 1e12 primesieve: 1e13 build-erlang: Time To Compile build-wasmer: Time To Compile spark: 1000000 - 100 - SHA-512 Benchmark Time spark: 1000000 - 100 - Calculate Pi Benchmark spark: 1000000 - 100 - Calculate Pi Benchmark Using Dataframe spark: 1000000 - 100 - Group By Test Time spark: 1000000 - 100 - Repartition Test Time spark: 1000000 - 100 - Inner Join Test Time spark: 1000000 - 100 - Broadcast Inner Join Test Time spark: 1000000 - 500 - SHA-512 Benchmark Time spark: 1000000 - 500 - Calculate Pi Benchmark spark: 1000000 - 500 - Calculate Pi Benchmark Using Dataframe spark: 1000000 - 500 - Group By Test Time spark: 1000000 - 500 - Repartition Test Time spark: 1000000 - 500 - Inner Join Test Time spark: 1000000 - 500 - Broadcast Inner Join Test Time spark: 1000000 - 1000 - SHA-512 Benchmark Time spark: 1000000 - 1000 - Calculate Pi Benchmark spark: 1000000 - 1000 - Calculate Pi Benchmark Using Dataframe spark: 1000000 - 1000 - Group By Test Time spark: 1000000 - 1000 - Repartition Test Time spark: 1000000 - 1000 - Inner Join Test Time spark: 1000000 - 1000 - Broadcast Inner Join Test Time spark: 1000000 - 2000 - SHA-512 Benchmark Time spark: 1000000 - 2000 - Calculate Pi Benchmark spark: 1000000 - 2000 - Calculate Pi Benchmark Using Dataframe spark: 1000000 - 2000 - Group By Test Time spark: 1000000 - 2000 - Repartition Test Time spark: 1000000 - 2000 - Inner Join Test Time spark: 1000000 - 2000 - Broadcast Inner Join Test Time spark: 10000000 - 100 - SHA-512 Benchmark Time spark: 10000000 - 100 - Calculate Pi Benchmark spark: 10000000 - 100 - Calculate Pi Benchmark Using Dataframe spark: 10000000 - 100 - Group By Test Time spark: 10000000 - 100 - Repartition Test Time spark: 10000000 - 100 - Inner Join Test Time spark: 10000000 - 100 - Broadcast Inner Join Test Time spark: 10000000 - 500 - SHA-512 Benchmark Time spark: 10000000 - 500 - Calculate Pi Benchmark spark: 10000000 - 500 - Calculate Pi Benchmark Using Dataframe spark: 10000000 - 500 - Group By Test Time spark: 10000000 - 500 - Repartition Test Time spark: 10000000 - 500 - Inner Join Test Time spark: 10000000 - 500 - Broadcast Inner Join Test Time spark: 20000000 - 100 - SHA-512 Benchmark Time spark: 20000000 - 100 - Calculate Pi Benchmark spark: 20000000 - 100 - Calculate Pi Benchmark Using Dataframe spark: 20000000 - 100 - Group By Test Time spark: 20000000 - 100 - Repartition Test Time spark: 20000000 - 100 - Inner Join Test Time spark: 20000000 - 100 - Broadcast Inner Join Test Time spark: 20000000 - 500 - SHA-512 Benchmark Time spark: 20000000 - 500 - Calculate Pi Benchmark spark: 20000000 - 500 - Calculate Pi Benchmark Using Dataframe spark: 20000000 - 500 - Group By Test Time spark: 20000000 - 500 - Repartition Test Time spark: 20000000 - 500 - Inner Join Test Time spark: 20000000 - 500 - Broadcast Inner Join Test Time spark: 40000000 - 100 - SHA-512 Benchmark Time spark: 40000000 - 100 - Calculate Pi Benchmark spark: 40000000 - 100 - Calculate Pi Benchmark Using Dataframe spark: 40000000 - 100 - Group By Test Time spark: 40000000 - 100 - Repartition Test Time spark: 40000000 - 100 - Inner Join Test Time spark: 40000000 - 100 - Broadcast Inner Join Test Time spark: 40000000 - 500 - SHA-512 Benchmark Time spark: 40000000 - 500 - Calculate Pi Benchmark spark: 40000000 - 500 - Calculate Pi Benchmark Using Dataframe spark: 40000000 - 500 - Group By Test Time spark: 40000000 - 500 - Repartition Test Time spark: 40000000 - 500 - Inner Join Test Time spark: 40000000 - 500 - Broadcast Inner Join Test Time spark: 10000000 - 1000 - SHA-512 Benchmark Time spark: 10000000 - 1000 - Calculate Pi Benchmark spark: 10000000 - 1000 - Calculate Pi Benchmark Using Dataframe spark: 10000000 - 1000 - Group By Test Time spark: 10000000 - 1000 - Repartition Test Time spark: 10000000 - 1000 - Inner Join Test Time spark: 10000000 - 1000 - Broadcast Inner Join Test Time spark: 10000000 - 2000 - SHA-512 Benchmark Time spark: 10000000 - 2000 - Calculate Pi Benchmark spark: 10000000 - 2000 - Calculate Pi Benchmark Using Dataframe spark: 10000000 - 2000 - Group By Test Time spark: 10000000 - 2000 - Repartition Test Time spark: 10000000 - 2000 - Inner Join Test Time spark: 10000000 - 2000 - Broadcast Inner Join Test Time spark: 20000000 - 1000 - SHA-512 Benchmark Time spark: 20000000 - 1000 - Calculate Pi Benchmark spark: 20000000 - 1000 - Calculate Pi Benchmark Using Dataframe spark: 20000000 - 1000 - Group By Test Time spark: 20000000 - 1000 - Repartition Test Time spark: 20000000 - 1000 - Inner Join Test Time spark: 20000000 - 1000 - Broadcast Inner Join Test Time spark: 20000000 - 2000 - SHA-512 Benchmark Time spark: 20000000 - 2000 - Calculate Pi Benchmark spark: 20000000 - 2000 - Calculate Pi Benchmark Using Dataframe spark: 20000000 - 2000 - Group By Test Time spark: 20000000 - 2000 - Repartition Test Time spark: 20000000 - 2000 - Inner Join Test Time spark: 20000000 - 2000 - Broadcast Inner Join Test Time spark: 40000000 - 1000 - SHA-512 Benchmark Time spark: 40000000 - 1000 - Calculate Pi Benchmark spark: 40000000 - 1000 - Calculate Pi Benchmark Using Dataframe spark: 40000000 - 1000 - Group By Test Time spark: 40000000 - 1000 - Repartition Test Time spark: 40000000 - 1000 - Inner Join Test Time spark: 40000000 - 1000 - Broadcast Inner Join Test Time spark: 40000000 - 2000 - SHA-512 Benchmark Time spark: 40000000 - 2000 - Calculate Pi Benchmark spark: 40000000 - 2000 - Calculate Pi Benchmark Using Dataframe spark: 40000000 - 2000 - Group By Test Time spark: 40000000 - 2000 - Repartition Test Time spark: 40000000 - 2000 - Inner Join Test Time spark: 40000000 - 2000 - Broadcast Inner Join Test Time blender: BMW27 - CPU-Only blender: Classroom - CPU-Only blender: Fishy Cat - CPU-Only blender: Barbershop - CPU-Only blender: Pabellon Barcelona - CPU-Only A B C 523.5 541.2 581.7 594.7 189.4 2.28 1.74 1.74 205.21 10.07 462.34 278.74 31.28 754.73 361.68 5175.16 13292.83 3.6 223.9 204.4 137.2 66.4 114 105.7 73.8 39.4 267.8 243.7 163.7 79.5 2.03 43.531 85.519 121.731 6.359 119.545 260.876 441.12 567 1171 166 270 1148 316 1190 33031.898 16672.5 8941.5 75966 61355 160.1335 60.8457 7.943 0.7483 5.196 5.825 1011444 79117222 577444 1175330 14631 2121137 2080793 4219269.32 4524509.15 4159557.48 3994130.34 4208682.77 3881918.88 2835321.53 2828630.76 2540352.24 2582100.75 2763821.75 2410478.31 2773955.34 2550575.11 2687075.03 2416811.03 2773074.52 2688499.75 5352183.5 3555745.75 4243838 4457927 3638545.75 3388742.5 3881387 5223662.5 2822836.25 3875137.25 3502199 2848729.25 2929168 3297778 2844518.25 117936.3208 112169.9051 168856.1994 112654.9098 162474.2042 126034.683 118377.2467 111362.119 168558.4019 112561.3114 163123.8521 125740.5868 18.57 168600000 1032 1672 2704 151 191.5 172.6 209.8 88 206019 0.8 0.9 5.9 0.9 6.1 7.7 0.8 0.9 5.9 0.9 6.1 7.7 9.296 1.206 3.058 20.05 5.095 2.394 2.454 26.695 11.16 3.46 2.79 3.14 3 5.44 1.02 9.22 37.96 7.51 5.77 14.37 19.86 13.4 8.61 204.96 4.07 482.96 150.34 139.49 79.04 157.2 257.97 32.23 405.89 1906.45 364.07 377.29 940.23 622.75 365.93 188.61 5945.17 79.4 1730.06 2257.35 2257.15 19.48 395.54 8.64 14.34 127.81 13.16 11.05 1.84 0.74 5.688 527.01 52.954 15.506 199.391 20.215 236.721 87.699 51.861 2.69 123.174031753 6.98 2.94 1.75 1.41 1.16 2.89 123.618450656 6.97 3.27 1.87 1.44 1.39 3.07 123.957658703 6.93 3.46 1.98 1.803054303 1.85 3.41 122.348772378 7.00 3.78 2.34 2.34 2.02 12.43 123.758446737 6.99 6.06 8.59 10.22 10.083681032 12.23 122.855103001 6.97 6.51 7.97 9.28 8.86 22.987073933 123.347395163 7.01 9.91 16.078304281 19.615724636 18.546102273 21.98 123.315452123 6.91 9.07 15.07 17.501869869 16.877289487 41.359232589 123.600912032 7.00 24.744527851 30.17 34.20 33.09 42.385326391 123.338861542 6.96 22.05 30.231436134 35.388141288 34.88705537 11.950703479 124.530201879 6.93 6.399788449 8.19 9.48 8.85 12.45 123.853876534 7.00 6.58 8.42 9.99 9.10 21.882513891 124.201627183 6.97 9.44 15.132216491 17.73 17.242279869 21.966948837 123.368306617 6.94 9.43 15.570102078 18.50 16.89 41.970932909 123.980783643 6.88 21.42 29.402919671 34.68 35.698133038 41.86 123.974795224 7.06 21.118961148 29.378328391 35.060375837 36.45 107.38 303.34 154.91 1242.71 377.04 527.8 537.6 578.8 590.7 189.3 2.28 1.74 1.75 206.79 10.06 463.7 192.88 31.37 742.81 359.57 5129.86 13347.46 3.6 222.6 205.6 136.7 66.4 114.1 105.7 73.7 39.4 267.1 245.2 163 79.6 2.016 43.712 86.528 127.041 6.359 120.54 260.782 446.137 573 1210 168 272 1165 319 1239 33109.836 16992.7 9258 77477 61210 160.0289 60.8658 7.9426 0.7482 5.207 5.695 995286 77772723 572446 1163406 14659 2118766 2081251 4209125.68 4489630.35 4136083.87 3972323.46 4208868.69 3825141.56 2555526.92 2696467.92 2336486.4 2511082.41 2649407.05 2512633 2865261.93 2602826.18 3347517.25 2536364.42 2773551.23 2785214.15 3822955.75 3119402.25 4223762 2755629.25 3997500.25 2669989.75 4364222.5 2834518 2851816.25 3854227.75 3273479.5 2840960 2947470.75 3465545.75 2941056.25 118167.755 111324.9806 168241.7701 112419.0915 163025.2241 126006.7027 118320.296 111448.4558 168112.8884 112831.3886 163552.1304 125808.6577 18.89 183000000 1029 1673 2702 151.6 191.2 172.9 209.6 88 205796 0.8 0.9 5.9 0.9 6.1 7.7 0.8 0.9 5.9 0.9 6.1 7.7 9.287 1.2 2.657 20.3 4.928 2.408 2.428 26.695 12.31 3.66 2.9 3.06 3.04 5.44 1.02 9.16 37.99 7.41 5.76 14.4 20 13.6 8.61 204.75 4.03 483.87 150.56 139.27 79.41 157.3 260.09 32.29 405.86 1903.62 365 381.28 940.99 627.21 366.55 189.02 5943.47 79.21 1734.99 2255.03 2252.28 19.33 395.59 8.61 51.76 127.36 13.38 11.11 1.86 0.74 5.699 526.99 53.009 15.489 199.787 20.29 235.757 88.056 51.602 2.84 124.069014404 7.12 2.84 1.88 1.44 1.11 2.95 123.541641397 7.01 3.28 1.90 1.40 1.44 3.07 123.00 6.95 3.52 2.11 1.92 1.54 3.38 124.03 6.98 3.78 2.31 2.36 1.91 12.50 124.093579903 6.98 6.30 8.40 10.20 10.157012655 12.03 124.132910967 6.92 6.29 8.24 8.87 9.09 22.98 123.781975598 6.95 10.03 16.00 19.211734983 19.68 21.60 123.659609155 7.02 9.49 15.30 17.36 17.88 41.57 123.903099348 7.01 24.87 30.88 34.587657908 36.21 41.61 123.436068143 6.91 21.80 30.63 35.60 34.21 12.08 129.233966759 7.30 6.37 8.09 9.92 8.72 12.99 131.922286442 7.36 6.85 8.93 10.57 9.71 21.83999095 124.635599763 6.91 9.13 15.27 17.63 17.05 22.088964217 125.670244459 6.92 9.37 15.33 17.82 16.959085602 41.69 123.944053248 6.962493428 21.00 30.07 34.70 35.82 41.90 125.005596616 7.01 21.05 29.67 34.70 34.61 107.6 303.25 154.74 1241.97 377.7 521.2 534.7 578.7 587.8 188.4 2.28 1.74 1.74 204.23 10.06 463.87 193.22 31.43 741.97 360.51 5136.99 13357.94 3.5 221.4 203.9 136.5 66.4 114 105.8 73.7 39.3 263.3 244.7 162.8 79.6 2.02 43.963 88.558 125.217 6.344 118.967 265.936 457.513 575 1217 167 272 1163 317 1242 33749.547 17013.4 9257.1 77380 59506 160.0589 60.8886 7.9396 0.7486 5.227 5.862 987274 79085341 585584 1195182 14636 2215141 2079557 4206479.32 4460778.29 4107472.56 3982880.38 4176068.34 3864893.24 2476859.81 2468655.32 2415948.89 2427170.31 2679655.73 2106869.82 2806199.11 2966884.56 2738214.13 2382868.01 2813689.56 2670507.06 4344096.5 3243652 4248164.5 2659905 3943910.25 3312414.5 4293396.5 2617574 2718001 4012269 3246355 2747067.5 2168859.75 3752102.5 2918397 118067.3946 111437.66 168534.309 112565.754 163135.4487 125808.3795 118167.1254 111365.0268 168376.887 112498.8743 163558.7114 125868.5806 18.57 185500000 1029 1674 2703 151.4 189.6 173.2 209.7 87 205167 0.8 0.9 5.9 0.9 6.1 7.7 0.8 0.9 5.9 0.9 6.1 7.7 9.375 1.216 3.051 21.969 5.049 2.47 2.444 26.266 11.64 3.49 2.83 3.09 3.02 5.41 1.02 9.53 38.14 7.4 5.73 14.57 19.2 13.31 8.63 221.23 4.67 481.76 150.22 138.77 79.43 156.92 259.64 32.21 405.42 1896.06 364.29 377.61 950.14 623.95 365.07 188.96 5956.11 78.85 1736.86 2256.98 2255.2 19.57 395.44 8.61 51.71 127.17 13.39 11.08 1.85 0.74 5.71 528.234 52.885 15.397 200.43 20.206 236.408 88.369 51.763 3.01 123.94 7.00 2.95 1.76 1.42 1.13 2.83 125.001646515 6.99 3.23 1.89 1.57 1.32 2.98 124.476336981 6.97 3.48 2.02 1.85 1.50 3.37 124.53 6.87 3.78 2.34 2.19 1.99 12.46 125.60 7.00 6.36 8.46 10.06 10.21 11.98 124.29 6.92 6.39 8.07 8.91 8.80 23.02 124.76 6.99 9.61 16.17 19.71 19.17 21.56 124.10 6.90 9.16 15.18 17.87 16.87 41.35 125.164549301 6.98 24.86 31.51 35.46 34.75 41.62 124.09 6.94 21.94 30.67 35.03 34.24 12.23 123.832556311 6.91 6.33 8.15 9.42 8.80 12.44 124.49 6.90 6.62 8.50 9.79 9.13 21.93 124.731757086 6.93 9.41 15.179208759 17.81 16.77 21.90 124.01 6.90 9.24 15.34 18.23 17.58 41.53 124.961603934 6.96 20.95 29.236415558 34.67 33.70 42.09 125.33 6.90 21.07 29.46 34.76 35.53 107.26 303.35 155.23 1242.78 378.41 OpenBenchmarking.org
srsRAN srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM A B C 110 220 330 440 550 523.5 527.8 521.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM A B C 120 240 360 480 600 541.2 537.6 534.7 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM A B C 130 260 390 520 650 581.7 578.8 578.7 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM A B C 130 260 390 520 650 594.7 590.7 587.8 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM A B C 40 80 120 160 200 189.4 189.3 188.4 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenVINO This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Face Detection FP16 - Device: CPU A B C 0.513 1.026 1.539 2.052 2.565 2.28 2.28 2.28 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Detection FP16 - Device: CPU A B C 0.3915 0.783 1.1745 1.566 1.9575 1.74 1.74 1.74 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Detection FP32 - Device: CPU A B C 0.3938 0.7876 1.1814 1.5752 1.969 1.74 1.75 1.74 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU A B C 50 100 150 200 250 205.21 206.79 204.23 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Face Detection FP16-INT8 - Device: CPU A B C 3 6 9 12 15 10.07 10.06 10.06 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU A B C 100 200 300 400 500 462.34 463.70 463.87 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16 - Device: CPU A B C 60 120 180 240 300 278.74 192.88 193.22 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU A B C 7 14 21 28 35 31.28 31.37 31.43 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU A B C 160 320 480 640 800 754.73 742.81 741.97 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU A B C 80 160 240 320 400 361.68 359.57 360.51 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU A B C 1100 2200 3300 4400 5500 5175.16 5129.86 5136.99 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU A B C 3K 6K 9K 12K 15K 13292.83 13347.46 13357.94 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
Unvanquished Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better Unvanquished 0.53 Resolution: 1920 x 1080 - Effects Quality: High A B C 50 100 150 200 250 223.9 222.6 221.4
SVT-AV1 OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 4 - Input: Bosphorus 4K A B C 0.4568 0.9136 1.3704 1.8272 2.284 2.030 2.016 2.020 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 8 - Input: Bosphorus 4K A B C 10 20 30 40 50 43.53 43.71 43.96 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 10 - Input: Bosphorus 4K A B C 20 40 60 80 100 85.52 86.53 88.56 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 12 - Input: Bosphorus 4K A B C 30 60 90 120 150 121.73 127.04 125.22 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 4 - Input: Bosphorus 1080p A B C 2 4 6 8 10 6.359 6.359 6.344 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 8 - Input: Bosphorus 1080p A B C 30 60 90 120 150 119.55 120.54 118.97 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 10 - Input: Bosphorus 1080p A B C 60 120 180 240 300 260.88 260.78 265.94 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 12 - Input: Bosphorus 1080p A B C 100 200 300 400 500 441.12 446.14 457.51 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Rotate A B C 300 600 900 1200 1500 1171 1210 1217 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Sharpen A B C 40 80 120 160 200 166 168 167 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Enhanced A B C 60 120 180 240 300 270 272 272 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Resizing A B C 300 600 900 1200 1500 1148 1165 1163 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Noise-Gaussian A B C 70 140 210 280 350 316 319 317 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: HWB Color Space A B C 300 600 900 1200 1500 1190 1239 1242 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
Facebook RocksDB OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Fill A B C 200K 400K 600K 800K 1000K 1011444 995286 987274 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Read A B C 20M 40M 60M 80M 100M 79117222 77772723 79085341 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Update Random A B C 130K 260K 390K 520K 650K 577444 572446 585584 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Sequential Fill A B C 300K 600K 900K 1200K 1500K 1175330 1163406 1195182 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Read While Writing A B C 500K 1000K 1500K 2000K 2500K 2121137 2118766 2215141 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Read Random Write Random A B C 400K 800K 1200K 1600K 2000K 2080793 2081251 2079557 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Dragonflydb Dragonfly is an open-source database server that is a "modern Redis replacement" that aims to be the fastest memory store while being compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, Memtier_benchmark is used as a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 1:1 A B C 900K 1800K 2700K 3600K 4500K 4219269.32 4209125.68 4206479.32 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 1:5 A B C 1000K 2000K 3000K 4000K 5000K 4524509.15 4489630.35 4460778.29 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 5:1 A B C 900K 1800K 2700K 3600K 4500K 4159557.48 4136083.87 4107472.56 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 1:1 A B C 900K 1800K 2700K 3600K 4500K 3994130.34 3972323.46 3982880.38 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 1:5 A B C 900K 1800K 2700K 3600K 4500K 4208682.77 4208868.69 4176068.34 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 5:1 A B C 800K 1600K 2400K 3200K 4000K 3881918.88 3825141.56 3864893.24 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 A B C 600K 1200K 1800K 2400K 3000K 2828630.76 2696467.92 2468655.32 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1 A B C 500K 1000K 1500K 2000K 2500K 2540352.24 2336486.40 2415948.89 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1 A B C 600K 1200K 1800K 2400K 3000K 2582100.75 2511082.41 2427170.31 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 A B C 600K 1200K 1800K 2400K 3000K 2763821.75 2649407.05 2679655.73 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1 A B C 500K 1000K 1500K 2000K 2500K 2410478.31 2512633.00 2106869.82 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 A B C 600K 1200K 1800K 2400K 3000K 2773955.34 2865261.93 2806199.11 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1 A B C 600K 1200K 1800K 2400K 3000K 2550575.11 2602826.18 2966884.56 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 A B C 700K 1400K 2100K 2800K 3500K 2687075.03 3347517.25 2738214.13 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1 A B C 500K 1000K 1500K 2000K 2500K 2416811.03 2536364.42 2382868.01 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 A B C 600K 1200K 1800K 2400K 3000K 2773074.52 2773551.23 2813689.56 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 A B C 600K 1200K 1800K 2400K 3000K 2688499.75 2785214.15 2670507.06 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 50 A B C 800K 1600K 2400K 3200K 4000K 3555745.75 3119402.25 3243652.00 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: GET - Parallel Connections: 500 A B C 900K 1800K 2700K 3600K 4500K 4243838.0 4223762.0 4248164.5 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: LPOP - Parallel Connections: 50 A B C 1000K 2000K 3000K 4000K 5000K 4457927.00 2755629.25 2659905.00 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SADD - Parallel Connections: 50 A B C 900K 1800K 2700K 3600K 4500K 3638545.75 3997500.25 3943910.25 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 500 A B C 700K 1400K 2100K 2800K 3500K 3388742.50 2669989.75 3312414.50 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: GET - Parallel Connections: 1000 A B C 900K 1800K 2700K 3600K 4500K 3881387.0 4364222.5 4293396.5 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: LPOP - Parallel Connections: 500 A B C 1.1M 2.2M 3.3M 4.4M 5.5M 5223662.5 2834518.0 2617574.0 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: LPUSH - Parallel Connections: 50 A B C 600K 1200K 1800K 2400K 3000K 2822836.25 2851816.25 2718001.00 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SADD - Parallel Connections: 500 A B C 900K 1800K 2700K 3600K 4500K 3875137.25 3854227.75 4012269.00 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 1000 A B C 800K 1600K 2400K 3200K 4000K 3502199.0 3273479.5 3246355.0 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: LPOP - Parallel Connections: 1000 A B C 600K 1200K 1800K 2400K 3000K 2848729.25 2840960.00 2747067.50 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: LPUSH - Parallel Connections: 500 A B C 600K 1200K 1800K 2400K 3000K 2929168.00 2947470.75 2168859.75 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SADD - Parallel Connections: 1000 A B C 800K 1600K 2400K 3200K 4000K 3297778.00 3465545.75 3752102.50 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: LPUSH - Parallel Connections: 1000 A B C 600K 1200K 1800K 2400K 3000K 2844518.25 2941056.25 2918397.00 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
etcd Etcd is a distributed, reliable key-value store intended for critical data of a distributed system. Etcd is written in Golang and part of the Cloud Native Computing Foundation (CNCF) and used by Kubernetes, Rook, CoreDNS, and other open-source software. This test profile uses Etcd's built-in benchmark to stress the PUT and RANGE performance of a single node / local system. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Requests/sec, More Is Better etcd 3.5.4 Test: PUT - Connections: 50 - Clients: 100 A B C 30K 60K 90K 120K 150K 117936.32 118167.76 118067.39
OpenBenchmarking.org Requests/sec, More Is Better etcd 3.5.4 Test: RANGE - Connections: 50 - Clients: 1000 A B C 40K 80K 120K 160K 200K 168558.40 168112.89 168376.89
OpenBenchmarking.org Requests/sec, More Is Better etcd 3.5.4 Test: RANGE - Connections: 100 - Clients: 1000 A B C 40K 80K 120K 160K 200K 163123.85 163552.13 163558.71
OpenBenchmarking.org Requests/sec, More Is Better etcd 3.5.4 Test: RANGE - Connections: 500 - Clients: 1000 A B C 30K 60K 90K 120K 150K 125740.59 125808.66 125868.58
srsRAN srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Samples / Second, More Is Better srsRAN 22.04.1 Test: OFDM_Test A B C 40M 80M 120M 160M 200M 168600000 183000000 185500000 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
srsRAN srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM A B C 30 60 90 120 150 151.0 151.6 151.4 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM A B C 40 80 120 160 200 191.5 191.2 189.6 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM A B C 40 80 120 160 200 172.6 172.9 173.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM A B C 50 100 150 200 250 209.8 209.6 209.7 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM A B C 20 40 60 80 100 88 88 87 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.32.6 VGR Performance Metric A B C 40K 80K 120K 160K 200K 206019 205796 205167 1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm
etcd Etcd is a distributed, reliable key-value store intended for critical data of a distributed system. Etcd is written in Golang and part of the Cloud Native Computing Foundation (CNCF) and used by Kubernetes, Rook, CoreDNS, and other open-source software. This test profile uses Etcd's built-in benchmark to stress the PUT and RANGE performance of a single node / local system. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better etcd 3.5.4 Test: PUT - Connections: 50 - Clients: 100 - Average Latency A B C 0.18 0.36 0.54 0.72 0.9 0.8 0.8 0.8
OpenBenchmarking.org ms, Fewer Is Better etcd 3.5.4 Test: RANGE - Connections: 100 - Clients: 100 - Average Latency A B C 0.2025 0.405 0.6075 0.81 1.0125 0.9 0.9 0.9
OpenBenchmarking.org ms, Fewer Is Better etcd 3.5.4 Test: RANGE - Connections: 50 - Clients: 1000 - Average Latency A B C 1.3275 2.655 3.9825 5.31 6.6375 5.9 5.9 5.9
OpenBenchmarking.org ms, Fewer Is Better etcd 3.5.4 Test: RANGE - Connections: 500 - Clients: 100 - Average Latency A B C 0.2025 0.405 0.6075 0.81 1.0125 0.9 0.9 0.9
Mobile Neural Network MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile is building the OpenMP / CPU threaded version for processor benchmarking and not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: nasnet A B C 3 6 9 12 15 9.296 9.287 9.375 MIN: 9.25 / MAX: 15.62 MIN: 9.25 / MAX: 11.36 MIN: 9.34 / MAX: 9.98 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenetV3 A B C 0.2736 0.5472 0.8208 1.0944 1.368 1.206 1.200 1.216 MIN: 1.19 / MAX: 1.51 MIN: 1.19 / MAX: 1.33 MIN: 1.2 / MAX: 1.45 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: squeezenetv1.1 A B C 0.6881 1.3762 2.0643 2.7524 3.4405 3.058 2.657 3.051 MIN: 3.04 / MAX: 3.85 MIN: 2.64 / MAX: 6.83 MIN: 3.03 / MAX: 4.35 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: resnet-v2-50 A B C 5 10 15 20 25 20.05 20.30 21.97 MIN: 19.99 / MAX: 21.33 MIN: 20.23 / MAX: 26.54 MIN: 21.9 / MAX: 27.98 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: SqueezeNetV1.0 A B C 1.1464 2.2928 3.4392 4.5856 5.732 5.095 4.928 5.049 MIN: 5.06 / MAX: 6.27 MIN: 4.89 / MAX: 6.13 MIN: 5.02 / MAX: 5.87 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: MobileNetV2_224 A B C 0.5558 1.1116 1.6674 2.2232 2.779 2.394 2.408 2.470 MIN: 2.38 / MAX: 2.61 MIN: 2.39 / MAX: 2.65 MIN: 2.38 / MAX: 9.06 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenet-v1-1.0 A B C 0.5522 1.1044 1.6566 2.2088 2.761 2.454 2.428 2.444 MIN: 2.43 / MAX: 3.38 MIN: 2.4 / MAX: 3.34 MIN: 2.42 / MAX: 3.26 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: inception-v3 A B C 6 12 18 24 30 26.70 26.70 26.27 MIN: 26.6 / MAX: 32.72 MIN: 26.61 / MAX: 32.85 MIN: 26.17 / MAX: 32.34 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v2-v2 - Model: mobilenet-v2 A B C 0.8235 1.647 2.4705 3.294 4.1175 3.46 3.66 3.49 MIN: 3.4 / MAX: 4.32 MIN: 3.61 / MAX: 4.88 MIN: 3.43 / MAX: 4.35 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v3-v3 - Model: mobilenet-v3 A B C 0.6525 1.305 1.9575 2.61 3.2625 2.79 2.90 2.83 MIN: 2.75 / MAX: 3.65 MIN: 2.86 / MAX: 4.11 MIN: 2.79 / MAX: 3.63 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: shufflenet-v2 A B C 0.7065 1.413 2.1195 2.826 3.5325 3.14 3.06 3.09 MIN: 3.08 / MAX: 3.89 MIN: 3.03 / MAX: 4.06 MIN: 3.06 / MAX: 3.82 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: mnasnet A B C 0.684 1.368 2.052 2.736 3.42 3.00 3.04 3.02 MIN: 2.95 / MAX: 3.84 MIN: 2.98 / MAX: 4.18 MIN: 2.96 / MAX: 3.8 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: efficientnet-b0 A B C 1.224 2.448 3.672 4.896 6.12 5.44 5.44 5.41 MIN: 5.37 / MAX: 6.29 MIN: 5.38 / MAX: 6.65 MIN: 5.34 / MAX: 9.03 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: blazeface A B C 0.2295 0.459 0.6885 0.918 1.1475 1.02 1.02 1.02 MIN: 0.99 / MAX: 1.85 MIN: 1 / MAX: 1.32 MIN: 0.99 / MAX: 1.8 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: googlenet A B C 3 6 9 12 15 9.22 9.16 9.53 MIN: 9.07 / MAX: 10.19 MIN: 9.03 / MAX: 10.05 MIN: 9.36 / MAX: 10.48 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vgg16 A B C 9 18 27 36 45 37.96 37.99 38.14 MIN: 37.71 / MAX: 39.27 MIN: 37.73 / MAX: 39.19 MIN: 37.78 / MAX: 39.62 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: resnet18 A B C 2 4 6 8 10 7.51 7.41 7.40 MIN: 7.37 / MAX: 8.45 MIN: 7.23 / MAX: 8.33 MIN: 7.24 / MAX: 8.41 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: alexnet A B C 1.2983 2.5966 3.8949 5.1932 6.4915 5.77 5.76 5.73 MIN: 5.68 / MAX: 6.67 MIN: 5.66 / MAX: 6.99 MIN: 5.62 / MAX: 6.64 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: resnet50 A B C 4 8 12 16 20 14.37 14.40 14.57 MIN: 14.18 / MAX: 20.35 MIN: 14.23 / MAX: 15.65 MIN: 14.41 / MAX: 16 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: yolov4-tiny A B C 5 10 15 20 25 19.86 20.00 19.20 MIN: 19.65 / MAX: 20.15 MIN: 19.86 / MAX: 20.46 MIN: 19.05 / MAX: 19.44 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: squeezenet_ssd A B C 3 6 9 12 15 13.40 13.60 13.31 MIN: 13.28 / MAX: 14.44 MIN: 13.43 / MAX: 14.86 MIN: 13.15 / MAX: 14.36 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: regnety_400m A B C 2 4 6 8 10 8.61 8.61 8.63 MIN: 8.53 / MAX: 9.53 MIN: 8.54 / MAX: 9.56 MIN: 8.54 / MAX: 9.9 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vision_transformer A B C 50 100 150 200 250 204.96 204.75 221.23 MIN: 204.39 / MAX: 232.99 MIN: 204.28 / MAX: 211.49 MIN: 220.87 / MAX: 226.29 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: FastestDet A B C 1.0508 2.1016 3.1524 4.2032 5.254 4.07 4.03 4.67 MIN: 4.04 / MAX: 4.31 MIN: 4 / MAX: 4.13 MIN: 4.63 / MAX: 4.76 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: mobilenet A B C 100 200 300 400 500 482.96 483.87 481.76 MIN: 434.99 / MAX: 553.44 MIN: 434 / MAX: 553.55 MIN: 433.22 / MAX: 545.28 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 A B C 30 60 90 120 150 150.34 150.56 150.22 MIN: 140.63 / MAX: 189.41 MIN: 136.95 / MAX: 176.94 MIN: 137.58 / MAX: 175.46 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 A B C 30 60 90 120 150 139.49 139.27 138.77 MIN: 121.25 / MAX: 175.09 MIN: 119.74 / MAX: 177.41 MIN: 121.34 / MAX: 170.96 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: shufflenet-v2 A B C 20 40 60 80 100 79.04 79.41 79.43 MIN: 74.66 / MAX: 84.71 MIN: 74.65 / MAX: 83.37 MIN: 75.37 / MAX: 84.34 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: mnasnet A B C 30 60 90 120 150 157.20 157.30 156.92 MIN: 147.79 / MAX: 176.36 MIN: 149.12 / MAX: 176.82 MIN: 146.37 / MAX: 173.33 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: efficientnet-b0 A B C 60 120 180 240 300 257.97 260.09 259.64 MIN: 222.39 / MAX: 299.24 MIN: 219.09 / MAX: 293.03 MIN: 219.28 / MAX: 294.88 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: blazeface A B C 7 14 21 28 35 32.23 32.29 32.21 MIN: 30.58 / MAX: 34.5 MIN: 29.89 / MAX: 34.33 MIN: 30.38 / MAX: 34.64 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: googlenet A B C 90 180 270 360 450 405.89 405.86 405.42 MIN: 383.82 / MAX: 441.86 MIN: 386.45 / MAX: 442.41 MIN: 386.26 / MAX: 433.31 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: vgg16 A B C 400 800 1200 1600 2000 1906.45 1903.62 1896.06 MIN: 1832.08 / MAX: 2103 MIN: 1832.38 / MAX: 2111.8 MIN: 1830.58 / MAX: 2121.95 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: resnet18 A B C 80 160 240 320 400 364.07 365.00 364.29 MIN: 334.5 / MAX: 411.23 MIN: 340.04 / MAX: 405.57 MIN: 336.97 / MAX: 405.48 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: alexnet A B C 80 160 240 320 400 377.29 381.28 377.61 MIN: 361.87 / MAX: 430.07 MIN: 361.67 / MAX: 447.07 MIN: 361.66 / MAX: 442.69 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: resnet50 A B C 200 400 600 800 1000 940.23 940.99 950.14 MIN: 892.01 / MAX: 1080.9 MIN: 895.61 / MAX: 1068.31 MIN: 896.75 / MAX: 1066.4 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: yolov4-tiny A B C 140 280 420 560 700 622.75 627.21 623.95 MIN: 584.8 / MAX: 719.01 MIN: 581.6 / MAX: 704.84 MIN: 583.85 / MAX: 708.2 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: squeezenet_ssd A B C 80 160 240 320 400 365.93 366.55 365.07 MIN: 348.21 / MAX: 406.24 MIN: 349.53 / MAX: 408.08 MIN: 342.75 / MAX: 404.41 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: regnety_400m A B C 40 80 120 160 200 188.61 189.02 188.96 MIN: 180.25 / MAX: 207.14 MIN: 180.67 / MAX: 211.75 MIN: 179.54 / MAX: 213 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: vision_transformer A B C 1300 2600 3900 5200 6500 5945.17 5943.47 5956.11 MIN: 5637.6 / MAX: 6292.95 MIN: 5684.57 / MAX: 6227.11 MIN: 5649.52 / MAX: 6345.98 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: FastestDet A B C 20 40 60 80 100 79.40 79.21 78.85 MIN: 72.66 / MAX: 98.15 MIN: 70.61 / MAX: 100.1 MIN: 71.5 / MAX: 93.15 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenVINO This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Face Detection FP16 - Device: CPU A B C 400 800 1200 1600 2000 1730.06 1734.99 1736.86 MIN: 1709.42 / MAX: 1761.16 MIN: 1708.96 / MAX: 1773.08 MIN: 1712.2 / MAX: 1770.88 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Person Detection FP16 - Device: CPU A B C 500 1000 1500 2000 2500 2257.35 2255.03 2256.98 MIN: 1892.63 / MAX: 2810.33 MIN: 1890.66 / MAX: 2807.42 MIN: 1901.03 / MAX: 2807.39 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Person Detection FP32 - Device: CPU A B C 500 1000 1500 2000 2500 2257.15 2252.28 2255.20 MIN: 1889.93 / MAX: 2814.63 MIN: 1918.36 / MAX: 2814.7 MIN: 1893.65 / MAX: 2806.3 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU A B C 5 10 15 20 25 19.48 19.33 19.57 MIN: 15.62 / MAX: 26.77 MIN: 13.45 / MAX: 26.38 MIN: 15.33 / MAX: 26.78 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Face Detection FP16-INT8 - Device: CPU A B C 90 180 270 360 450 395.54 395.59 395.44 MIN: 332.95 / MAX: 848.36 MIN: 331.26 / MAX: 849.69 MIN: 328.43 / MAX: 849.76 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU A B C 2 4 6 8 10 8.64 8.61 8.61 MIN: 7.42 / MAX: 28.74 MIN: 7.45 / MAX: 18.65 MIN: 7.46 / MAX: 19.12 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16 - Device: CPU A B C 12 24 36 48 60 14.34 51.76 51.71 MIN: 12.92 / MAX: 25.33 MIN: 51.27 / MAX: 52.27 MIN: 51.25 / MAX: 52.3 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU A B C 30 60 90 120 150 127.81 127.36 127.17 MIN: 114.95 / MAX: 185.69 MIN: 109.38 / MAX: 183.36 MIN: 105.34 / MAX: 184.06 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU A B C 3 6 9 12 15 13.16 13.38 13.39 MIN: 10.33 / MAX: 13.9 MIN: 8.15 / MAX: 21.54 MIN: 7.89 / MAX: 16.11 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU A B C 3 6 9 12 15 11.05 11.11 11.08 MIN: 9.3 / MAX: 19.19 MIN: 9.45 / MAX: 24.64 MIN: 9.4 / MAX: 18.35 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU A B C 0.4185 0.837 1.2555 1.674 2.0925 1.84 1.86 1.85 MIN: 1.02 / MAX: 3 MIN: 1.02 / MAX: 3.04 MIN: 1.07 / MAX: 3.67 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU A B C 0.1665 0.333 0.4995 0.666 0.8325 0.74 0.74 0.74 MIN: 0.51 / MAX: 2.51 MIN: 0.51 / MAX: 2.09 MIN: 0.51 / MAX: 2.08 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
Timed Wasmer Compilation This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepast compiler features enabled. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Timed Wasmer Compilation 2.3 Time To Compile A B C 12 24 36 48 60 51.86 51.60 51.76 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
Apache Spark This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmars the Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating of test data and various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time A B C 0.6773 1.3546 2.0319 2.7092 3.3865 2.69 2.84 3.01
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 1000 - Inner Join Test Time A B C 0.432 0.864 1.296 1.728 2.16 1.803054303 1.920000000 1.850000000