Intel Core i5-12600K testing with an ASUS PRIME Z690-P WIFI D4 (0605 BIOS) and ASUS Intel ADL-S GT1 15GB on Ubuntu 22.04 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2209087-PTS-12600KSE38
HTML result view exported from: https://openbenchmarking.org/result/2209087-PTS-12600KSE38&sro&grw&export=pdf
12600k sept - System Details (runs A, B, C)

Processor: Intel Core i5-12600K @ 6.30GHz (10 Cores / 16 Threads)
Motherboard: ASUS PRIME Z690-P WIFI D4 (0605 BIOS)
Chipset: Intel Device 7aa7
Memory: 16GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: ASUS Intel ADL-S GT1 15GB (1450MHz)
Audio: Realtek ALC897
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE + Intel Device 7af0
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc6daily20220716-generic (x86_64)
Desktop: GNOME Shell 42.1
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.0.1
OpenCL: OpenCL 3.0
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0x1f; Thermald 2.4.9
Java Details: A: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1); B: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1); C: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling; srbds: Not affected; tsx_async_abort: Not affected
Benchmark Overview

Configurations A, B, and C were run through the following test profiles (the flattened summary columns of raw values from the original export are omitted here; per-test results with values are presented below): Unvanquished, BRL-CAD, ASTC Encoder, AI Benchmark Alpha, Mobile Neural Network (MNN), NCNN (CPU and Vulkan GPU targets), LAMMPS, OpenVINO, Aircrack-ng, Primesieve, 7-Zip Compression, Timed PHP Compilation, GraphicsMagick, SVT-AV1, Blender, Natron, Timed Python/Erlang/Node.js/Wasmer Compilation, C-Blosc, srsRAN, Apache Spark, DragonflyDB, etcd, memtier_benchmark (Redis), Redis, RocksDB, Node.js Web Tooling, and Linux kernel tarball extraction.
Unvanquished 0.53 (Frames Per Second, More Is Better)

Resolution - Effects Quality     A       B       C
2560 x 1440 - Medium           163.7   163.0   162.8
3840 x 2160 - Medium            79.5    79.6    79.6
1920 x 1080 - Medium           267.8   267.1   263.3
1920 x 1200 - Medium           243.7   245.2   244.7
2560 x 1440 - Ultra             73.8    73.7    73.7
3840 x 2160 - Ultra             39.4    39.4    39.3
1920 x 1080 - Ultra            114.0   114.1   114.0
1920 x 1200 - Ultra            105.7   105.7   105.8
3840 x 2160 - High              66.4    66.4    66.4
2560 x 1440 - High             137.2   136.7   136.5
1920 x 1200 - High             204.4   205.6   203.9
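The three runs track one another closely at every setting. A small Python sketch (using values copied from the Unvanquished results above) quantifies the run-to-run spread:

```python
# Run-to-run spread of the Unvanquished results across configurations A, B, C.
# Values are taken from the result tables in this file.
runs = {
    "2560 x 1440 - Medium": (163.7, 163.0, 162.8),
    "3840 x 2160 - Ultra": (39.4, 39.4, 39.3),
}
for name, values in runs.items():
    mean = sum(values) / len(values)
    spread_pct = (max(values) - min(values)) / mean * 100
    print(f"{name}: mean {mean:.1f} FPS, spread {spread_pct:.2f}%")
```

Both settings stay well under a 1% spread, so the small A/B/C deltas here look like normal run-to-run noise rather than a real configuration difference.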
BRL-CAD 7.32.6, VGR Performance Metric (More Is Better)

  A: 206019    B: 205796    C: 205167

1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm
ASTC Encoder 4.0 (MT/s, More Is Better)

Preset         A         B         C
Fast         160.13    160.03    160.06
Medium        60.85     60.87     60.89
Thorough      7.9430    7.9426    7.9396
Exhaustive    0.7483    0.7482    0.7486

1. (CXX) g++ options: -O3 -flto -pthread
AI Benchmark Alpha 0.1.2 (Score, More Is Better)

Metric                    A      B      C
Device Inference Score   1032   1029   1029
Device Training Score    1672   1673   1674
Device AI Score          2704   2702   2703
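In each run the Device AI Score matches the sum of the inference and training scores, which is easy to check from the values above:

```python
# AI Benchmark Alpha scores from the table above: (inference, training, combined AI score).
scores = {
    "A": (1032, 1672, 2704),
    "B": (1029, 1673, 2702),
    "C": (1029, 1674, 2703),
}
for run, (inference, training, ai_score) in scores.items():
    # The combined AI score is the sum of the two component scores.
    assert inference + training == ai_score
    print(f"{run}: {inference} + {training} = {ai_score}")
```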
Mobile Neural Network 2.1 (ms, Fewer Is Better; per-run min-max in parentheses)

Model               A                      B                      C
nasnet              9.296 (9.25-15.62)     9.287 (9.25-11.36)     9.375 (9.34-9.98)
mobilenetV3         1.206 (1.19-1.51)      1.200 (1.19-1.33)      1.216 (1.20-1.45)
squeezenetv1.1      3.058 (3.04-3.85)      2.657 (2.64-6.83)      3.051 (3.03-4.35)
resnet-v2-50       20.05  (19.99-21.33)   20.30  (20.23-26.54)   21.97  (21.90-27.98)
SqueezeNetV1.0      5.095 (5.06-6.27)      4.928 (4.89-6.13)      5.049 (5.02-5.87)
MobileNetV2_224     2.394 (2.38-2.61)      2.408 (2.39-2.65)      2.470 (2.38-9.06)
mobilenet-v1-1.0    2.454 (2.43-3.38)      2.428 (2.40-3.34)      2.444 (2.42-3.26)
inception-v3       26.70  (26.60-32.72)   26.70  (26.61-32.85)   26.27  (26.17-32.34)

1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
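The MNN figures are per-inference latencies in milliseconds, so they can be inverted to an approximate single-stream throughput. A quick sketch using the run A latencies reported above:

```python
# Convert per-inference latency (ms) into approximate inferences per second.
# Latencies are the run A values from the Mobile Neural Network results above.
latencies_ms = {
    "mobilenetV3": 1.206,
    "nasnet": 9.296,
    "inception-v3": 26.70,
}
for model, ms in latencies_ms.items():
    throughput = 1000.0 / ms  # single-stream estimate; ignores batching and warm-up
    print(f"{model}: ~{throughput:.0f} inferences/sec")
```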
NCNN Target: CPU - Model: mobilenet OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: mobilenet A B C 3 6 9 12 15 11.16 12.31 11.64 MIN: 11.07 / MAX: 11.31 MIN: 12.19 / MAX: 17.29 MIN: 11.49 / MAX: 18.31 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN Target: CPU-v2-v2 - Model: mobilenet-v2 OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v2-v2 - Model: mobilenet-v2 A B C 0.8235 1.647 2.4705 3.294 4.1175 3.46 3.66 3.49 MIN: 3.4 / MAX: 4.32 MIN: 3.61 / MAX: 4.88 MIN: 3.43 / MAX: 4.35 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN Target: CPU-v3-v3 - Model: mobilenet-v3 OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v3-v3 - Model: mobilenet-v3 A B C 0.6525 1.305 1.9575 2.61 3.2625 2.79 2.90 2.83 MIN: 2.75 / MAX: 3.65 MIN: 2.86 / MAX: 4.11 MIN: 2.79 / MAX: 3.63 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20220729 - Target: CPU (ms, fewer is better)

Model                 A (min-max)               B (min-max)               C (min-max)
shufflenet-v2         3.14 (3.08-3.89)          3.06 (3.03-4.06)          3.09 (3.06-3.82)
mnasnet               3.00 (2.95-3.84)          3.04 (2.98-4.18)          3.02 (2.96-3.80)
efficientnet-b0       5.44 (5.37-6.29)          5.44 (5.38-6.65)          5.41 (5.34-9.03)
blazeface             1.02 (0.99-1.85)          1.02 (1.00-1.32)          1.02 (0.99-1.80)
googlenet             9.22 (9.07-10.19)         9.16 (9.03-10.05)         9.53 (9.36-10.48)
vgg16                 37.96 (37.71-39.27)       37.99 (37.73-39.19)       38.14 (37.78-39.62)
resnet18              7.51 (7.37-8.45)          7.41 (7.23-8.33)          7.40 (7.24-8.41)
alexnet               5.77 (5.68-6.67)          5.76 (5.66-6.99)          5.73 (5.62-6.64)
resnet50              14.37 (14.18-20.35)       14.40 (14.23-15.65)       14.57 (14.41-16.00)
yolov4-tiny           19.86 (19.65-20.15)       20.00 (19.86-20.46)       19.20 (19.05-19.44)
squeezenet_ssd        13.40 (13.28-14.44)       13.60 (13.43-14.86)       13.31 (13.15-14.36)
regnety_400m          8.61 (8.53-9.53)          8.61 (8.54-9.56)          8.63 (8.54-9.90)
vision_transformer    204.96 (204.39-232.99)    204.75 (204.28-211.49)    221.23 (220.87-226.29)
FastestDet            4.07 (4.04-4.31)          4.03 (4.00-4.13)          4.67 (4.63-4.76)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
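The A, B, and C columns are repeated runs on the same system, so their dispersion indicates run-to-run noise rather than a configuration difference. A minimal sketch of that check (the per-model values are copied from the NCNN CPU results above; the 5% threshold is an arbitrary choice for flagging):

```python
# Run-to-run spread check for the NCNN CPU results above. The three
# values per model are the A/B/C averages (ms) copied from the table;
# the 5% threshold for flagging is an arbitrary choice.
results = {
    "shufflenet-v2": (3.14, 3.06, 3.09),
    "vgg16": (37.96, 37.99, 38.14),
    "vision_transformer": (204.96, 204.75, 221.23),
    "FastestDet": (4.07, 4.03, 4.67),
}

def spread(values):
    """Relative spread across runs: (max - min) / mean."""
    mean = sum(values) / len(values)
    return (max(values) - min(values)) / mean

flagged = {m: round(spread(v), 3) for m, v in results.items() if spread(v) > 0.05}
print(flagged)  # models where the runs disagree by more than 5%
```

By this measure vision_transformer and FastestDet stand out: run C is several percent slower than A and B, while most models agree within 1-2%.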
NCNN 20220729 - Target: Vulkan GPU (ms, fewer is better)

Model                 A (min-max)                  B (min-max)                  C (min-max)
mobilenet             482.96 (434.99-553.44)       483.87 (434.00-553.55)       481.76 (433.22-545.28)
mobilenet-v2          150.34 (140.63-189.41)       150.56 (136.95-176.94)       150.22 (137.58-175.46)
mobilenet-v3          139.49 (121.25-175.09)       139.27 (119.74-177.41)       138.77 (121.34-170.96)
shufflenet-v2         79.04 (74.66-84.71)          79.41 (74.65-83.37)          79.43 (75.37-84.34)
mnasnet               157.20 (147.79-176.36)       157.30 (149.12-176.82)       156.92 (146.37-173.33)
efficientnet-b0       257.97 (222.39-299.24)       260.09 (219.09-293.03)       259.64 (219.28-294.88)
blazeface             32.23 (30.58-34.50)          32.29 (29.89-34.33)          32.21 (30.38-34.64)
googlenet             405.89 (383.82-441.86)       405.86 (386.45-442.41)       405.42 (386.26-433.31)
vgg16                 1906.45 (1832.08-2103.00)    1903.62 (1832.38-2111.80)    1896.06 (1830.58-2121.95)
resnet18              364.07 (334.50-411.23)       365.00 (340.04-405.57)       364.29 (336.97-405.48)
alexnet               377.29 (361.87-430.07)       381.28 (361.67-447.07)       377.61 (361.66-442.69)
resnet50              940.23 (892.01-1080.90)      940.99 (895.61-1068.31)      950.14 (896.75-1066.40)
yolov4-tiny           622.75 (584.80-719.01)       627.21 (581.60-704.84)       623.95 (583.85-708.20)
squeezenet_ssd        365.93 (348.21-406.24)       366.55 (349.53-408.08)       365.07 (342.75-404.41)
regnety_400m          188.61 (180.25-207.14)       189.02 (180.67-211.75)       188.96 (179.54-213.00)
vision_transformer    5945.17 (5637.60-6292.95)    5943.47 (5684.57-6227.11)    5956.11 (5649.52-6345.98)
FastestDet            79.40 (72.66-98.15)          79.21 (70.61-100.10)         78.85 (71.50-93.15)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
LAMMPS Molecular Dynamics Simulator 23Jun2022 (ns/day, more is better)

Model                 A        B        C
20k Atoms             5.196    5.207    5.227
Rhodopsin Protein     5.825    5.695    5.862
1. (CXX) g++ options: -O3 -lm -ldl
OpenVINO 2022.2.dev - Device: CPU

Throughput (FPS, more is better)
Model                                           A           B           C
Face Detection FP16                             2.28        2.28        2.28
Person Detection FP16                           1.74        1.74        1.74
Person Detection FP32                           1.74        1.75        1.74
Vehicle Detection FP16                          205.21      206.79      204.23
Face Detection FP16-INT8                        10.07       10.06       10.06
Vehicle Detection FP16-INT8                     462.34      463.70      463.87
Weld Porosity Detection FP16                    278.74      192.88      193.22
Machine Translation EN To DE FP16               31.28       31.37       31.43
Weld Porosity Detection FP16-INT8               754.73      742.81      741.97
Person Vehicle Bike Detection FP16              361.68      359.57      360.51
Age Gender Recognition Retail 0013 FP16         5175.16     5129.86     5136.99
Age Gender Recognition Retail 0013 FP16-INT8    13292.83    13347.46    13357.94

Average latency (ms, fewer is better; min-max in parentheses)
Model                                           A                            B                            C
Face Detection FP16                             1730.06 (1709.42-1761.16)    1734.99 (1708.96-1773.08)    1736.86 (1712.20-1770.88)
Person Detection FP16                           2257.35 (1892.63-2810.33)    2255.03 (1890.66-2807.42)    2256.98 (1901.03-2807.39)
Person Detection FP32                           2257.15 (1889.93-2814.63)    2252.28 (1918.36-2814.70)    2255.20 (1893.65-2806.30)
Vehicle Detection FP16                          19.48 (15.62-26.77)          19.33 (13.45-26.38)          19.57 (15.33-26.78)
Face Detection FP16-INT8                        395.54 (332.95-848.36)       395.59 (331.26-849.69)       395.44 (328.43-849.76)
Vehicle Detection FP16-INT8                     8.64 (7.42-28.74)            8.61 (7.45-18.65)            8.61 (7.46-19.12)
Weld Porosity Detection FP16                    14.34 (12.92-25.33)          51.76 (51.27-52.27)          51.71 (51.25-52.30)
Machine Translation EN To DE FP16               127.81 (114.95-185.69)       127.36 (109.38-183.36)       127.17 (105.34-184.06)
Weld Porosity Detection FP16-INT8               13.16 (10.33-13.90)          13.38 (8.15-21.54)           13.39 (7.89-16.11)
Person Vehicle Bike Detection FP16              11.05 (9.30-19.19)           11.11 (9.45-24.64)           11.08 (9.40-18.35)
Age Gender Recognition Retail 0013 FP16         1.84 (1.02-3.00)             1.86 (1.02-3.04)             1.85 (1.07-3.67)
Age Gender Recognition Retail 0013 FP16-INT8    0.74 (0.51-2.51)             0.74 (0.51-2.09)             0.74 (0.51-2.08)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared
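The OpenVINO throughput and latency columns can be cross-checked against each other: for a pipeline that keeps N inference requests in flight, FPS multiplied by average latency (in seconds) approximates N. A minimal sketch using run A's numbers from the tables above (the inferred in-flight count is an estimate; the benchmark's actual stream/request configuration is not reported in these results):

```python
# Consistency check on the OpenVINO numbers above: for a pipeline that
# keeps N inference requests in flight, FPS * latency (seconds) ~= N.
# Values are run A from the tables; the inferred in-flight count is an
# estimate, since the stream configuration is not reported here.
run_a = {  # model: (FPS, average latency in ms)
    "Face Detection FP16": (2.28, 1730.06),
    "Vehicle Detection FP16": (205.21, 19.48),
    "Age Gender Recognition Retail 0013 FP16-INT8": (13292.83, 0.74),
}

inflight = {m: round(fps * ms / 1000.0) for m, (fps, ms) in run_a.items()}
print(inflight)  # implied number of concurrent requests per model
```

The product is close to a small integer for each model, which is consistent with the FPS and latency columns describing the same run rather than independent measurements.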
Aircrack-ng 1.7 (k/s, more is better)
A: 33031.90    B: 33109.84    C: 33749.55
1. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread
Primesieve 8.0 (seconds, fewer is better)

Length    A         B         C
1e12      20.22     20.29     20.21
1e13      236.72    235.76    236.41
1. (CXX) g++ options: -O3
7-Zip Compression 22.01 (MIPS, more is better)

Test                    A        B        C
Compression Rating      75966    77477    77380
Decompression Rating    61355    61210    59506
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Timed PHP Compilation 8.1.9 - Time To Compile (seconds, fewer is better)
A: 52.95    B: 53.01    C: 52.89
GraphicsMagick 1.3.38 (iterations per minute, more is better)

Operation          A       B       C
Swirl              567     573     575
Rotate             1171    1210    1217
Sharpen            166     168     167
Enhanced           270     272     272
Resizing           1148    1165    1163
Noise-Gaussian     316     319     317
HWB Color Space    1190    1239    1242
1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
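A geometric mean is the usual way to collapse per-operation rates like these into one figure per run, since it weights relative rather than absolute differences equally across fast and slow operations. A sketch using the GraphicsMagick values above:

```python
from statistics import geometric_mean

# GraphicsMagick iterations/minute per run, copied from the table above
# (Swirl, Rotate, Sharpen, Enhanced, Resizing, Noise-Gaussian, HWB Color Space).
runs = {
    "A": [567, 1171, 166, 270, 1148, 316, 1190],
    "B": [573, 1210, 168, 272, 1165, 319, 1239],
    "C": [575, 1217, 167, 272, 1163, 317, 1242],
}

gm = {run: geometric_mean(vals) for run, vals in runs.items()}
for run, value in sorted(gm.items()):
    print(f"run {run}: geomean {value:.1f} iterations/minute")
```

By this summary, runs B and C come out slightly ahead of run A overall, mirroring their small leads in most individual operations.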
SVT-AV1 1.2 (frames per second, more is better)

Encoder Mode / Input           A         B         C
Preset 4 - Bosphorus 4K        2.030     2.016     2.020
Preset 8 - Bosphorus 4K        43.53     43.71     43.96
Preset 10 - Bosphorus 4K       85.52     86.53     88.56
Preset 12 - Bosphorus 4K       121.73    127.04    125.22
Preset 4 - Bosphorus 1080p     6.359     6.359     6.344
Preset 8 - Bosphorus 1080p     119.55    120.54    118.97
Preset 10 - Bosphorus 1080p    260.88    260.78    265.94
Preset 12 - Bosphorus 1080p    441.12    446.14    457.51
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
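Higher SVT-AV1 preset numbers trade compression efficiency for encoding speed, and the span of that trade-off on this CPU can be read directly from the table as the preset 12 over preset 4 throughput ratio. A sketch using run A's values only:

```python
# SVT-AV1 run A throughput (FPS) from the table above, keyed by preset number.
fps_4k = {4: 2.030, 8: 43.53, 10: 85.52, 12: 121.73}
fps_1080p = {4: 6.359, 8: 119.55, 10: 260.88, 12: 441.12}

speedup = {
    "Bosphorus 4K": fps_4k[12] / fps_4k[4],
    "Bosphorus 1080p": fps_1080p[12] / fps_1080p[4],
}
for inp, ratio in speedup.items():
    print(f"{inp}: preset 12 is {ratio:.0f}x faster than preset 4")
```

The fastest preset is roughly 60-70x quicker than the slowest here, which is why the preset choice dominates any run-to-run differences between A, B, and C.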
Blender 3.3 - Compute: CPU-Only (seconds, fewer is better)

Blend File            A          B          C
BMW27                 107.38     107.60     107.26
Classroom             303.34     303.25     303.35
Fishy Cat             154.91     154.74     155.23
Barbershop            1242.71    1241.97    1242.78
Pabellon Barcelona    377.04     377.70     378.41
Natron 2.4.3 - Input: Spaceship (FPS, more is better)
A: 3.6    B: 3.6    C: 3.5
Timed CPython Compilation 3.10.6 (seconds, fewer is better)

Build Configuration                    A         B         C
Default                                15.51     15.49     15.40
Released Build, PGO + LTO Optimized    199.39    199.79    200.43
Timed Erlang/OTP Compilation 25.0 - Time To Compile (seconds, fewer is better)
A: 87.70    B: 88.06    C: 88.37
Timed Node.js Compilation 18.8 - Time To Compile (seconds, fewer is better)
A: 527.01    B: 526.99    C: 528.23
Timed Wasmer Compilation 2.3 - Time To Compile (seconds, fewer is better)
A: 51.86    B: 51.60    C: 51.76
1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
C-Blosc 2.3 (MB/s, more is better)

Test                  A          B          C
blosclz shuffle       16672.5    16992.7    17013.4
blosclz bitshuffle    8941.5     9258.0     9257.1
1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm
srsRAN 22.04.1

Test: OFDM_Test (samples/second, more is better)
A: 168600000    B: 183000000    C: 185500000

PHY downlink tests (eNb and UE throughput in Mb/s, more is better)
Test                                     eNb A    eNb B    eNb C    UE A     UE B     UE C
4G PHY_DL_Test 100 PRB MIMO 64-QAM       523.5    527.8    521.2    151.0    151.6    151.4
4G PHY_DL_Test 100 PRB SISO 64-QAM       541.2    537.6    534.7    191.5    191.2    189.6
4G PHY_DL_Test 100 PRB MIMO 256-QAM      581.7    578.8    578.7    172.6    172.9    173.2
4G PHY_DL_Test 100 PRB SISO 256-QAM      594.7    590.7    587.8    209.8    209.6    209.7
5G PHY_DL_NR Test 52 PRB SISO 64-QAM     189.4    189.3    188.4    88       88       87
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
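256-QAM carries 8 bits per symbol versus 6 for 64-QAM, so the modulation change alone caps the downlink gain at 8/6, roughly 1.33x. The run A eNb rates above show how much of that headroom this CPU-bound PHY actually realizes. A sketch with values copied from the table (the 8/6 bound is the modulation-order ratio, not something reported by srsRAN):

```python
# srsRAN run A eNb throughput (Mb/s) from the 4G PHY_DL_Test 100 PRB results above.
rates = {
    "MIMO": {"64-QAM": 523.5, "256-QAM": 581.7},
    "SISO": {"64-QAM": 541.2, "256-QAM": 594.7},
}

MODULATION_CEILING = 8 / 6  # bits per symbol: 256-QAM vs 64-QAM

for antenna, r in rates.items():
    gain = r["256-QAM"] / r["64-QAM"]
    print(f"{antenna}: 256-QAM gain {gain:.2f}x of a {MODULATION_CEILING:.2f}x ceiling")
```

In both antenna configurations the measured gain is around 1.10x, well under the 1.33x ceiling, suggesting the signal-processing workload rather than the modulation order limits throughput here.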
Apache Spark Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time A B C 0.6773 1.3546 2.0319 2.7092 3.3865 2.69 2.84 3.01
Apache Spark Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark A B C 30 60 90 120 150 123.17 124.07 123.94
Apache Spark Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe A B C 2 4 6 8 10 6.98 7.12 7.00
Apache Spark Row Count: 1000000 - Partitions: 100 - Group By Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Group By Test Time A B C 0.6638 1.3276 1.9914 2.6552 3.319 2.94 2.84 2.95
Apache Spark Row Count: 1000000 - Partitions: 100 - Repartition Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Repartition Test Time A B C 0.423 0.846 1.269 1.692 2.115 1.75 1.88 1.76
Apache Spark Row Count: 1000000 - Partitions: 100 - Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Inner Join Test Time A B C 0.324 0.648 0.972 1.296 1.62 1.41 1.44 1.42
Apache Spark Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time A B C 0.261 0.522 0.783 1.044 1.305 1.16 1.11 1.13
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                      2.89      2.95      2.83
  Calculate Pi Benchmark                    123.62    123.54    125.00
  Calculate Pi Benchmark Using Dataframe      6.97      7.01      6.99
  Group By Test Time                          3.27      3.28      3.23
  Repartition Test Time                       1.87      1.90      1.89
  Inner Join Test Time                        1.44      1.40      1.57
  Broadcast Inner Join Test Time              1.39      1.44      1.32

Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, More Is Better)

  A: 223.9   B: 222.6   C: 221.4
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 1000 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                      3.07      3.07      2.98
  Calculate Pi Benchmark                    123.96    123.00    124.48
  Calculate Pi Benchmark Using Dataframe      6.93      6.95      6.97
  Group By Test Time                          3.46      3.52      3.48
  Repartition Test Time                       1.98      2.11      2.02
  Inner Join Test Time                        1.80      1.92      1.85
  Broadcast Inner Join Test Time              1.85      1.54      1.50
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                      3.41      3.38      3.37
  Calculate Pi Benchmark                    122.35    124.03    124.53
  Calculate Pi Benchmark Using Dataframe      7.00      6.98      6.87
  Group By Test Time                          3.78      3.78      3.78
  Repartition Test Time                       2.34      2.31      2.34
  Inner Join Test Time                        2.34      2.36      2.19
  Broadcast Inner Join Test Time              2.02      1.91      1.99
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 100 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                     12.43     12.50     12.46
  Calculate Pi Benchmark                    123.76    124.09    125.60
  Calculate Pi Benchmark Using Dataframe      6.99      6.98      7.00
  Group By Test Time                          6.06      6.30      6.36
  Repartition Test Time                       8.59      8.40      8.46
  Inner Join Test Time                       10.22     10.20     10.06
  Broadcast Inner Join Test Time             10.08     10.16     10.21
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 500 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                     12.23     12.03     11.98
  Calculate Pi Benchmark                    122.86    124.13    124.29
  Calculate Pi Benchmark Using Dataframe      6.97      6.92      6.92
  Group By Test Time                          6.51      6.29      6.39
  Repartition Test Time                       7.97      8.24      8.07
  Inner Join Test Time                        9.28      8.87      8.91
  Broadcast Inner Join Test Time              8.86      9.09      8.80
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 100 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                     22.99     22.98     23.02
  Calculate Pi Benchmark                    123.35    123.78    124.76
  Calculate Pi Benchmark Using Dataframe      7.01      6.95      6.99
  Group By Test Time                          9.91     10.03      9.61
  Repartition Test Time                      16.08     16.00     16.17
  Inner Join Test Time                       19.62     19.21     19.71
  Broadcast Inner Join Test Time             18.55     19.68     19.17
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 500 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                     21.98     21.60     21.56
  Calculate Pi Benchmark                    123.32    123.66    124.10
  Calculate Pi Benchmark Using Dataframe      6.91      7.02      6.90
  Group By Test Time                          9.07      9.49      9.16
  Repartition Test Time                      15.07     15.30     15.18
  Inner Join Test Time                       17.50     17.36     17.87
  Broadcast Inner Join Test Time             16.88     17.88     16.87
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                     41.36     41.57     41.35
  Calculate Pi Benchmark                    123.60    123.90    125.16
  Calculate Pi Benchmark Using Dataframe      7.00      7.01      6.98
  Group By Test Time                         24.74     24.87     24.86
  Repartition Test Time                      30.17     30.88     31.51
  Inner Join Test Time                       34.20     34.59     35.46
  Broadcast Inner Join Test Time             33.09     36.21     34.75
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                     42.39     41.61     41.62
  Calculate Pi Benchmark                    123.34    123.44    124.09
  Calculate Pi Benchmark Using Dataframe      6.96      6.91      6.94
  Group By Test Time                         22.05     21.80     21.94
  Repartition Test Time                      30.23     30.63     30.67
  Inner Join Test Time                       35.39     35.60     35.03
  Broadcast Inner Join Test Time             34.89     34.21     34.24
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 1000 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                     11.95     12.08     12.23
  Calculate Pi Benchmark                    124.53    129.23    123.83
  Calculate Pi Benchmark Using Dataframe      6.93      7.30      6.91
  Group By Test Time                          6.40      6.37      6.33
  Repartition Test Time                       8.19      8.09      8.15
  Inner Join Test Time                        9.48      9.92      9.42
  Broadcast Inner Join Test Time              8.85      8.72      8.80
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 2000 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                     12.45     12.99     12.44
  Calculate Pi Benchmark                    123.85    131.92    124.49
  Calculate Pi Benchmark Using Dataframe      7.00      7.36      6.90
  Group By Test Time                          6.58      6.85      6.62
  Repartition Test Time                       8.42      8.93      8.50
  Inner Join Test Time                        9.99     10.57      9.79
  Broadcast Inner Join Test Time              9.10      9.71      9.13
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 1000 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                     21.88     21.84     21.93
  Calculate Pi Benchmark                    124.20    124.64    124.73
  Calculate Pi Benchmark Using Dataframe      6.97      6.91      6.93
  Group By Test Time                          9.44      9.13      9.41
  Repartition Test Time                      15.13     15.27     15.18
  Inner Join Test Time                       17.73     17.63     17.81
  Broadcast Inner Join Test Time             17.24     17.05     16.77
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 2000 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                     21.97     22.09     21.90
  Calculate Pi Benchmark                    123.37    125.67    124.01
  Calculate Pi Benchmark Using Dataframe      6.94      6.92      6.90
  Group By Test Time                          9.43      9.37      9.24
  Repartition Test Time                      15.57     15.33     15.34
  Inner Join Test Time                       18.50     17.82     18.23
  Broadcast Inner Join Test Time             16.89     16.96     17.58
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                     41.97     41.69     41.53
  Calculate Pi Benchmark                    123.98    123.94    124.96
  Calculate Pi Benchmark Using Dataframe      6.88      6.96      6.96
  Group By Test Time                         21.42     21.00     20.95
  Repartition Test Time                      29.40     30.07     29.24
  Inner Join Test Time                       34.68     34.70     34.67
  Broadcast Inner Join Test Time             35.70     35.82     33.70
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 (Seconds, Fewer Is Better)

  Test                                          A         B         C
  SHA-512 Benchmark Time                     41.86     41.90     42.09
  Calculate Pi Benchmark                    123.97    125.01    125.33
  Calculate Pi Benchmark Using Dataframe      7.06      7.01      6.90
  Group By Test Time                         21.12     21.05     21.07
  Repartition Test Time                      29.38     29.67     29.46
  Inner Join Test Time                       35.06     34.70     34.76
  Broadcast Inner Join Test Time             36.45     34.61     35.53
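The three Spark runs agree closely on most subtests, while the join and repartition timings wander a little more. A quick way to quantify run-to-run variance is the relative spread, (max - min) / mean. The sketch below applies it to two rows copied verbatim from the tables above; the helper name `rel_spread_pct` is ours, not part of the Phoronix Test Suite:

```python
def rel_spread_pct(values):
    """Relative spread across benchmark runs: (max - min) / mean, in percent."""
    return (max(values) - min(values)) / (sum(values) / len(values)) * 100

# Calculate Pi Benchmark, Row Count: 40000000 - Partitions: 2000 (runs A, B, C)
pi_runs = [123.97, 125.01, 125.33]
# Broadcast Inner Join Test Time, same configuration
join_runs = [36.45, 34.61, 35.53]

print(round(rel_spread_pct(pi_runs), 1))    # ~1.1 (percent)
print(round(rel_spread_pct(join_runs), 1))  # ~5.2 (percent)
```

The Pi benchmark is nearly pure compute and stays within about 1% between runs, while the broadcast join involves shuffles and memory pressure and varies by roughly 5%.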
Dragonflydb 0.6 (Ops/sec, More Is Better)

  Clients   Set:Get Ratio            A               B               C
  50        1:1               4219269.32      4209125.68      4206479.32
  50        1:5               4524509.15      4489630.35      4460778.29
  50        5:1               4159557.48      4136083.87      4107472.56
  200       1:1               3994130.34      3972323.46      3982880.38
  200       1:5               4208682.77      4208868.69      4176068.34
  200       5:1               3881918.88      3825141.56      3864893.24

  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
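The Dragonflydb runs are tightly clustered: for the 50-client 1:1 case the gap between the best and worst run is about 0.3% of the mean. A minimal check, using the values from the table above:

```python
# Dragonflydb 0.6, 50 clients, 1:1 set:get ratio (runs A, B, C from the table above)
runs = [4219269.32, 4209125.68, 4206479.32]

mean = sum(runs) / len(runs)
spread = (max(runs) - min(runs)) / mean * 100  # percent

print(round(spread, 2))  # ~0.3 (percent)
```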
etcd Test: PUT - Connections: 50 - Clients: 100 OpenBenchmarking.org Requests/sec, More Is Better etcd 3.5.4 Test: PUT - Connections: 50 - Clients: 100 A B C 30K 60K 90K 120K 150K 117936.32 118167.76 118067.39
etcd Test: PUT - Connections: 50 - Clients: 100 - Average Latency OpenBenchmarking.org ms, Fewer Is Better etcd 3.5.4 Test: PUT - Connections: 50 - Clients: 100 - Average Latency A B C 0.18 0.36 0.54 0.72 0.9 0.8 0.8 0.8
etcd Test: PUT - Connections: 100 - Clients: 100 OpenBenchmarking.org Requests/sec, More Is Better etcd 3.5.4 Test: PUT - Connections: 100 - Clients: 100 A B C 20K 40K 60K 80K 100K 112169.91 111324.98 111437.66
etcd Test: PUT - Connections: 100 - Clients: 100 - Average Latency OpenBenchmarking.org ms, Fewer Is Better etcd 3.5.4 Test: PUT - Connections: 100 - Clients: 100 - Average Latency A B C 0.2025 0.405 0.6075 0.81 1.0125 0.9 0.9 0.9
etcd Test: PUT - Connections: 50 - Clients: 1000 OpenBenchmarking.org Requests/sec, More Is Better etcd 3.5.4 Test: PUT - Connections: 50 - Clients: 1000 A B C 40K 80K 120K 160K 200K 168856.20 168241.77 168534.31
etcd Test: PUT - Connections: 50 - Clients: 1000 - Average Latency OpenBenchmarking.org ms, Fewer Is Better etcd 3.5.4 Test: PUT - Connections: 50 - Clients: 1000 - Average Latency A B C 1.3275 2.655 3.9825 5.31 6.6375 5.9 5.9 5.9
etcd Test: PUT - Connections: 500 - Clients: 100 OpenBenchmarking.org Requests/sec, More Is Better etcd 3.5.4 Test: PUT - Connections: 500 - Clients: 100 A B C 20K 40K 60K 80K 100K 112654.91 112419.09 112565.75
etcd Test: PUT - Connections: 500 - Clients: 100 - Average Latency OpenBenchmarking.org ms, Fewer Is Better etcd 3.5.4 Test: PUT - Connections: 500 - Clients: 100 - Average Latency A B C 0.2025 0.405 0.6075 0.81 1.0125 0.9 0.9 0.9
etcd Test: PUT - Connections: 100 - Clients: 1000 OpenBenchmarking.org Requests/sec, More Is Better etcd 3.5.4 Test: PUT - Connections: 100 - Clients: 1000 A B C 30K 60K 90K 120K 150K 162474.20 163025.22 163135.45
etcd Test: PUT - Connections: 100 - Clients: 1000 - Average Latency OpenBenchmarking.org ms, Fewer Is Better etcd 3.5.4 Test: PUT - Connections: 100 - Clients: 1000 - Average Latency A B C 2 4 6 8 10 6.1 6.1 6.1
etcd Test: PUT - Connections: 500 - Clients: 1000 OpenBenchmarking.org Requests/sec, More Is Better etcd 3.5.4 Test: PUT - Connections: 500 - Clients: 1000 A B C 30K 60K 90K 120K 150K 126034.68 126006.70 125808.38
etcd Test: PUT - Connections: 500 - Clients: 1000 - Average Latency OpenBenchmarking.org ms, Fewer Is Better etcd 3.5.4 Test: PUT - Connections: 500 - Clients: 1000 - Average Latency A B C 2 4 6 8 10 7.7 7.7 7.7
etcd Test: RANGE - Connections: 50 - Clients: 100 OpenBenchmarking.org Requests/sec, More Is Better etcd 3.5.4 Test: RANGE - Connections: 50 - Clients: 100 A B C 30K 60K 90K 120K 150K 118377.25 118320.30 118167.13
etcd Test: RANGE - Connections: 50 - Clients: 100 - Average Latency OpenBenchmarking.org ms, Fewer Is Better etcd 3.5.4 Test: RANGE - Connections: 50 - Clients: 100 - Average Latency A B C 0.18 0.36 0.54 0.72 0.9 0.8 0.8 0.8
etcd 3.5.4 - Test: RANGE
Throughput in Requests/sec (more is better); average latency in ms (fewer is better).

Connections  Clients   A (req/s)   B (req/s)   C (req/s)   Avg Latency A/B/C (ms)
100          100       111362.12   111448.46   111365.03   0.9 / 0.9 / 0.9
50           1000      168558.40   168112.89   168376.89   5.9 / 5.9 / 5.9
500          100       112561.31   112831.39   112498.87   0.9 / 0.9 / 0.9
100          1000      163123.85   163552.13   163558.71   6.1 / 6.1 / 6.1
500          1000      125740.59   125808.66   125868.58   7.7 / 7.7 / 7.7
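The etcd throughput and average-latency pairs are internally consistent with Little's law (in-flight requests are roughly throughput times latency): for example, run A at 50 connections / 1000 clients sustains about 168558 req/s at 5.9 ms, which implies close to 1000 requests in flight. A minimal sketch of that sanity check, using the run A values from the table above:

```python
# Little's law sanity check: in-flight requests ~= throughput * latency.
# Throughput (req/s) and average latency (ms) copied from run A above.
results = {
    # (connections, clients): (requests_per_sec, avg_latency_ms)
    (100, 100): (111362.12, 0.9),
    (50, 1000): (168558.40, 5.9),
    (500, 100): (112561.31, 0.9),
    (100, 1000): (163123.85, 6.1),
    (500, 1000): (125740.59, 7.7),
}

for (conns, clients), (rps, lat_ms) in results.items():
    in_flight = rps * lat_ms / 1000.0  # convert ms to seconds
    print(f"conns={conns} clients={clients}: "
          f"~{in_flight:.0f} in-flight (client count is {clients})")
```

Every row lands within a few percent of its client count, which suggests the benchmark clients were saturated (each client keeps exactly one request outstanding) rather than the server.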
memtier_benchmark 1.4 - Protocol: Redis (Ops/sec, more is better)
Compiled with: g++ -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Clients  Set:Get Ratio   A            B            C
50       1:1             2835321.53   2555526.92   2476859.81
50       1:5             2828630.76   2696467.92   2468655.32
50       5:1             2540352.24   2336486.40   2415948.89
100      1:1             2582100.75   2511082.41   2427170.31
100      1:5             2763821.75   2649407.05   2679655.73
100      5:1             2410478.31   2512633.00   2106869.82
50       1:10            2773955.34   2865261.93   2806199.11
500      1:1             2550575.11   2602826.18   2966884.56
500      1:5             2687075.03   3347517.25   2738214.13
500      5:1             2416811.03   2536364.42   2382868.01
100      1:10            2773074.52   2773551.23   2813689.56
500      1:10            2688499.75   2785214.15   2670507.06
Redis 7.0.4 (Requests Per Second, more is better)
Compiled with: g++ -g3 -fvisibility=hidden -O3

Test    Parallel Connections   A            B            C
GET     50                     5352183.50   3822955.75   4344096.50
SET     50                     3555745.75   3119402.25   3243652.00
GET     500                    4243838.0    4223762.0    4248164.5
LPOP    50                     4457927.00   2755629.25   2659905.00
SADD    50                     3638545.75   3997500.25   3943910.25
SET     500                    3388742.50   2669989.75   3312414.50
GET     1000                   3881387.0    4364222.5    4293396.5
LPOP    500                    5223662.5    2834518.0    2617574.0
LPUSH   50                     2822836.25   2851816.25   2718001.00
SADD    500                    3875137.25   3854227.75   4012269.00
SET     1000                   3502199.0    3273479.5    3246355.0
LPOP    1000                   2848729.25   2840960.00   2747067.50
LPUSH   500                    2929168.00   2947470.75   2168859.75
LPUSH   1000                   2844518.25   2941056.25   2918397.00
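Unlike the etcd results, several Redis tests show large run-to-run spread across the A/B/C runs, LPOP at 50 and 500 connections in particular. A small sketch that quantifies the spread as the max/min ratio across the three runs, using the LPOP rows copied from the table above:

```python
# Run-to-run spread (max/min across runs A, B, C) for the Redis LPOP
# results above; values in requests per second.
lpop = {
    50:   [4457927.00, 2755629.25, 2659905.00],
    500:  [5223662.5, 2834518.0, 2617574.0],
    1000: [2848729.25, 2840960.00, 2747067.50],
}

for conns, runs in lpop.items():
    spread = max(runs) / min(runs)
    print(f"LPOP, {conns} connections: {spread:.2f}x spread across A/B/C")
```

The spread reaches roughly 2x at 500 connections, so single-run differences between LPOP results here are well within noise.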
Facebook RocksDB 7.5.3 (Op/s, more is better)
Compiled with: g++ -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Test                       A          B          C
Random Fill                1011444    995286     987274
Random Read                79117222   77772723   79085341
Update Random              577444     572446     585584
Sequential Fill            1175330    1163406    1195182
Random Fill Sync           14631      14659      14636
Read While Writing         2121137    2118766    2215141
Read Random Write Random   2080793    2081251    2079557
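The Random Fill versus Random Fill Sync rows illustrate the cost of syncing every write to the WD SN850 NVMe drive: run A drops from 1011444 op/s with buffered writes to 14631 op/s with a sync per write. Quantified from the run A values above:

```python
# Cost of synchronous writes in the RocksDB results above (run A, op/s).
random_fill = 1011444       # buffered random fill
random_fill_sync = 14631    # random fill with a sync on every write

slowdown = random_fill / random_fill_sync
print(f"Sync writes are ~{slowdown:.0f}x slower than buffered writes")
```

That is roughly a 69x slowdown, and the Random Fill Sync results are also by far the most stable across the three runs, consistent with being bounded by device flush latency rather than CPU.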
Node.js V8 Web Tooling Benchmark (runs/s, more is better)
A: 18.57   B: 18.89   C: 18.57
Unpacking The Linux Kernel 5.19 - linux-5.19.tar.xz (Seconds, fewer is better)
A: 5.688   B: 5.699   C: 5.710
Redis 7.0.4 - Test: SADD - Parallel Connections: 1000 (Requests Per Second, more is better)
A: 3297778.00   B: 3465545.75   C: 3752102.50
Phoronix Test Suite v10.8.4