Intel Core i5-12600K testing with an ASUS PRIME Z690-P WIFI D4 (0605 BIOS) and ASUS Intel ADL-S GT1 15GB on Ubuntu 22.04 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
    phoronix-test-suite benchmark 2209087-PTS-12600KSE38
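The comparison command above can be wrapped in a small guard script (a sketch; it assumes the phoronix-test-suite executable is installed and on PATH, e.g. via the Ubuntu package of the same name):

```shell
#!/bin/sh
# Compare this machine against the published OpenBenchmarking.org result.
# RESULT_ID is the result identifier from the URL below.
RESULT_ID="2209087-PTS-12600KSE38"

if command -v phoronix-test-suite >/dev/null 2>&1; then
    # Runs the same test selection and uploads/prints a side-by-side comparison.
    phoronix-test-suite benchmark "$RESULT_ID"
else
    echo "phoronix-test-suite not found; install it first" >&2
fi
```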
HTML result view exported from: https://openbenchmarking.org/result/2209087-PTS-12600KSE38&grr&export=txt&rdt&rro.
12600k sept

System details (identical for runs A, B, C except where noted):

Processor: Intel Core i5-12600K @ 6.30GHz (10 Cores / 16 Threads)
Motherboard: ASUS PRIME Z690-P WIFI D4 (0605 BIOS)
Chipset: Intel Device 7aa7
Memory: 16GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: ASUS Intel ADL-S GT1 15GB (1450MHz)
Audio: Realtek ALC897
Monitor: ASUS MG28U
Network: Realtek RTL8125 2.5GbE + Intel Device 7af0
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc6daily20220716-generic (x86_64)
Desktop: GNOME Shell 42.1
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.0.1
OpenCL: OpenCL 3.0
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details - NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details - Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x1f - Thermald 2.4.9
Java Details - A: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1) - B: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1) - C: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details - Python 3.10.4
Security Details - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Tests in this comparison (the export flattened the summary table; labeled results for runs A, B and C appear in the per-test sections below):

- ncnn 20220729: 17 models (FastestDet, vision_transformer, regnety_400m, squeezenet_ssd, yolov4-tiny, resnet50, alexnet, resnet18, vgg16, googlenet, blazeface, efficientnet-b0, mnasnet, shufflenet-v2, mobilenet-v3, mobilenet-v2, mobilenet) on both Vulkan GPU and CPU targets
- lammps: 20k Atoms; Rhodopsin Protein
- blender: Barbershop, Pabellon Barcelona, Classroom, Fishy Cat, BMW27 (all CPU-Only)
- ai-benchmark: Device AI Score, Device Training Score, Device Inference Score
- brl-cad: VGR Performance Metric
- build-nodejs, build-erlang, build-php, build-wasmer: Time To Compile; build-python: Default and Released Build, PGO + LTO Optimized
- spark: row counts 1000000, 10000000, 20000000, 40000000 x partitions 100, 500, 1000, 2000 x seven sub-tests (Broadcast Inner Join, Inner Join, Repartition, Group By, Calculate Pi Benchmark, Calculate Pi Benchmark Using Dataframe, SHA-512 Benchmark)
- primesieve: 1e12, 1e13
- unvanquished: 1920 x 1080, 1920 x 1200, 2560 x 1440, 3840 x 2160 at Medium, High, Ultra
- mnn: inception-v3, mobilenet-v1-1.0, MobileNetV2_224, SqueezeNetV1.0, resnet-v2-50, squeezenetv1.1, mobilenetV3, nasnet
- svt-av1: Presets 4, 8, 10, 12 at Bosphorus 4K and Bosphorus 1080p
- memtier-benchmark (Redis, 50/100/500 clients), dragonflydb (50/200 clients) at various set:get ratios; redis: LPUSH, LPOP, SET, SADD, GET at 50, 500, 1000 clients
- etcd: RANGE and PUT at 50, 100, 500 connections with 100 and 1000 clients (throughput and average latency)
- rocksdb: Rand Fill, Rand Fill Sync, Update Rand, Read Rand Write Rand, Read While Writing, Rand Read, Seq Fill
- openvino: Face Detection, Person Detection, Machine Translation EN To DE, Person Vehicle Bike Detection, Weld Porosity Detection, Vehicle Detection, Age Gender Recognition Retail 0013 at FP16, FP32 and FP16-INT8
- graphics-magick: Sharpen, Enhanced, Swirl, Noise-Gaussian, Resizing, Rotate, HWB Color Space
- srsran: 4G PHY_DL_Test and 5G PHY_DL_NR Test variants; OFDM_Test
- astcenc: Fast, Medium, Thorough, Exhaustive
- blosc: blosclz bitshuffle, blosclz shuffle
- also: aircrack-ng, natron (Spaceship), compress-7zip, node-web-tooling, unpack-linux
NCNN 20220729 - Target: Vulkan GPU - Model: FastestDet (ms, Fewer Is Better)
C: 78.85 (MIN: 71.5 / MAX: 93.15)
B: 79.21 (MIN: 70.61 / MAX: 100.1)
A: 79.40 (MIN: 72.66 / MAX: 98.15)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: vision_transformer (ms, Fewer Is Better)
C: 5956.11 (MIN: 5649.52 / MAX: 6345.98)
B: 5943.47 (MIN: 5684.57 / MAX: 6227.11)
A: 5945.17 (MIN: 5637.6 / MAX: 6292.95)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: regnety_400m (ms, Fewer Is Better)
C: 188.96 (MIN: 179.54 / MAX: 213)
B: 189.02 (MIN: 180.67 / MAX: 211.75)
A: 188.61 (MIN: 180.25 / MAX: 207.14)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, Fewer Is Better)
C: 365.07 (MIN: 342.75 / MAX: 404.41)
B: 366.55 (MIN: 349.53 / MAX: 408.08)
A: 365.93 (MIN: 348.21 / MAX: 406.24)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: yolov4-tiny (ms, Fewer Is Better)
C: 623.95 (MIN: 583.85 / MAX: 708.2)
B: 627.21 (MIN: 581.6 / MAX: 704.84)
A: 622.75 (MIN: 584.8 / MAX: 719.01)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: resnet50 (ms, Fewer Is Better)
C: 950.14 (MIN: 896.75 / MAX: 1066.4)
B: 940.99 (MIN: 895.61 / MAX: 1068.31)
A: 940.23 (MIN: 892.01 / MAX: 1080.9)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: alexnet (ms, Fewer Is Better)
C: 377.61 (MIN: 361.66 / MAX: 442.69)
B: 381.28 (MIN: 361.67 / MAX: 447.07)
A: 377.29 (MIN: 361.87 / MAX: 430.07)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: resnet18 (ms, Fewer Is Better)
C: 364.29 (MIN: 336.97 / MAX: 405.48)
B: 365.00 (MIN: 340.04 / MAX: 405.57)
A: 364.07 (MIN: 334.5 / MAX: 411.23)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: vgg16 (ms, Fewer Is Better)
C: 1896.06 (MIN: 1830.58 / MAX: 2121.95)
B: 1903.62 (MIN: 1832.38 / MAX: 2111.8)
A: 1906.45 (MIN: 1832.08 / MAX: 2103)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: googlenet (ms, Fewer Is Better)
C: 405.42 (MIN: 386.26 / MAX: 433.31)
B: 405.86 (MIN: 386.45 / MAX: 442.41)
A: 405.89 (MIN: 383.82 / MAX: 441.86)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: blazeface (ms, Fewer Is Better)
C: 32.21 (MIN: 30.38 / MAX: 34.64)
B: 32.29 (MIN: 29.89 / MAX: 34.33)
A: 32.23 (MIN: 30.58 / MAX: 34.5)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, Fewer Is Better)
C: 259.64 (MIN: 219.28 / MAX: 294.88)
B: 260.09 (MIN: 219.09 / MAX: 293.03)
A: 257.97 (MIN: 222.39 / MAX: 299.24)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: mnasnet (ms, Fewer Is Better)
C: 156.92 (MIN: 146.37 / MAX: 173.33)
B: 157.30 (MIN: 149.12 / MAX: 176.82)
A: 157.20 (MIN: 147.79 / MAX: 176.36)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, Fewer Is Better)
C: 79.43 (MIN: 75.37 / MAX: 84.34)
B: 79.41 (MIN: 74.65 / MAX: 83.37)
A: 79.04 (MIN: 74.66 / MAX: 84.71)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
C: 138.77 (MIN: 121.34 / MAX: 170.96)
B: 139.27 (MIN: 119.74 / MAX: 177.41)
A: 139.49 (MIN: 121.25 / MAX: 175.09)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
C: 150.22 (MIN: 137.58 / MAX: 175.46)
B: 150.56 (MIN: 136.95 / MAX: 176.94)
A: 150.34 (MIN: 140.63 / MAX: 189.41)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: Vulkan GPU - Model: mobilenet (ms, Fewer Is Better)
C: 481.76 (MIN: 433.22 / MAX: 545.28)
B: 483.87 (MIN: 434 / MAX: 553.55)
A: 482.96 (MIN: 434.99 / MAX: 553.44)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, More Is Better)
C: 5.227
B: 5.207
A: 5.196
1. (CXX) g++ options: -O3 -lm -ldl

Blender 3.3 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
C: 1242.78
B: 1241.97
A: 1242.71

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, More Is Better)
C: 2703
B: 2702
A: 2704

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, More Is Better)
C: 1674
B: 1673
A: 1672

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, More Is Better)
C: 1029
B: 1029
A: 1032

BRL-CAD 7.32.6 - VGR Performance Metric (More Is Better)
C: 205167
B: 205796
A: 206019
1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm

Timed Node.js Compilation 18.8 - Time To Compile (Seconds, Fewer Is Better)
C: 528.23
B: 526.99
A: 527.01

Blender 3.3 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
C: 378.41
B: 377.70
A: 377.04
Apache Spark Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time C B A 8 16 24 32 40 34.75 36.21 33.09
Apache Spark Row Count: 40000000 - Partitions: 100 - Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - Inner Join Test Time C B A 8 16 24 32 40 35.46 34.59 34.20
Apache Spark Row Count: 40000000 - Partitions: 100 - Repartition Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - Repartition Test Time C B A 7 14 21 28 35 31.51 30.88 30.17
Apache Spark Row Count: 40000000 - Partitions: 100 - Group By Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - Group By Test Time C B A 6 12 18 24 30 24.86 24.87 24.74
Apache Spark Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe C B A 2 4 6 8 10 6.98 7.01 7.00
Apache Spark Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark C B A 30 60 90 120 150 125.16 123.90 123.60
Apache Spark Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time C B A 9 18 27 36 45 41.35 41.57 41.36
Blender Blend File: Classroom - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.3 Blend File: Classroom - Compute: CPU-Only C B A 70 140 210 280 350 303.35 303.25 303.34
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 35.53  | B: 34.61  | A: 36.45
    Inner Join Test Time:                      C: 34.76  | B: 34.70  | A: 35.06
    Repartition Test Time:                     C: 29.46  | B: 29.67  | A: 29.38
    Group By Test Time:                        C: 21.07  | B: 21.05  | A: 21.12
    Calculate Pi Benchmark Using Dataframe:    C: 6.90   | B: 7.01   | A: 7.06
    Calculate Pi Benchmark:                    C: 125.33 | B: 125.01 | A: 123.97
    SHA-512 Benchmark Time:                    C: 42.09  | B: 41.90  | A: 41.86
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 34.24  | B: 34.21  | A: 34.89
    Inner Join Test Time:                      C: 35.03  | B: 35.60  | A: 35.39
    Repartition Test Time:                     C: 30.67  | B: 30.63  | A: 30.23
    Group By Test Time:                        C: 21.94  | B: 21.80  | A: 22.05
    Calculate Pi Benchmark Using Dataframe:    C: 6.94   | B: 6.91   | A: 6.96
    Calculate Pi Benchmark:                    C: 124.09 | B: 123.44 | A: 123.34
    SHA-512 Benchmark Time:                    C: 41.62  | B: 41.61  | A: 42.39
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 33.70  | B: 35.82  | A: 35.70
    Inner Join Test Time:                      C: 34.67  | B: 34.70  | A: 34.68
    Repartition Test Time:                     C: 29.24  | B: 30.07  | A: 29.40
    Group By Test Time:                        C: 20.95  | B: 21.00  | A: 21.42
    Calculate Pi Benchmark Using Dataframe:    C: 6.96   | B: 6.96   | A: 6.88
    Calculate Pi Benchmark:                    C: 124.96 | B: 123.94 | A: 123.98
    SHA-512 Benchmark Time:                    C: 41.53  | B: 41.69  | A: 41.97
Primesieve 8.0 - Length: 1e13 (Seconds, Fewer Is Better)
    C: 236.41 | B: 235.76 | A: 236.72
    1. (CXX) g++ options: -O3
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 100 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 19.17  | B: 19.68  | A: 18.55
    Inner Join Test Time:                      C: 19.71  | B: 19.21  | A: 19.62
    Repartition Test Time:                     C: 16.17  | B: 16.00  | A: 16.08
    Group By Test Time:                        C: 9.61   | B: 10.03  | A: 9.91
    Calculate Pi Benchmark Using Dataframe:    C: 6.99   | B: 6.95   | A: 7.01
    Calculate Pi Benchmark:                    C: 124.76 | B: 123.78 | A: 123.35
    SHA-512 Benchmark Time:                    C: 23.02  | B: 22.98  | A: 22.99
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 2000 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 17.58  | B: 16.96  | A: 16.89
    Inner Join Test Time:                      C: 18.23  | B: 17.82  | A: 18.50
    Repartition Test Time:                     C: 15.34  | B: 15.33  | A: 15.57
    Group By Test Time:                        C: 9.24   | B: 9.37   | A: 9.43
    Calculate Pi Benchmark Using Dataframe:    C: 6.90   | B: 6.92   | A: 6.94
    Calculate Pi Benchmark:                    C: 124.01 | B: 125.67 | A: 123.37
    SHA-512 Benchmark Time:                    C: 21.90  | B: 22.09  | A: 21.97
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 1000 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 16.77  | B: 17.05  | A: 17.24
    Inner Join Test Time:                      C: 17.81  | B: 17.63  | A: 17.73
    Repartition Test Time:                     C: 15.18  | B: 15.27  | A: 15.13
    Group By Test Time:                        C: 9.41   | B: 9.13   | A: 9.44
    Calculate Pi Benchmark Using Dataframe:    C: 6.93   | B: 6.91   | A: 6.97
    Calculate Pi Benchmark:                    C: 124.73 | B: 124.64 | A: 124.20
    SHA-512 Benchmark Time:                    C: 21.93  | B: 21.84  | A: 21.88
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 500 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 16.87  | B: 17.88  | A: 16.88
    Inner Join Test Time:                      C: 17.87  | B: 17.36  | A: 17.50
    Repartition Test Time:                     C: 15.18  | B: 15.30  | A: 15.07
    Group By Test Time:                        C: 9.16   | B: 9.49   | A: 9.07
    Calculate Pi Benchmark Using Dataframe:    C: 6.90   | B: 7.02   | A: 6.91
    Calculate Pi Benchmark:                    C: 124.10 | B: 123.66 | A: 123.32
    SHA-512 Benchmark Time:                    C: 21.56  | B: 21.60  | A: 21.98
Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, Fewer Is Better)
    C: 200.43 | B: 199.79 | A: 199.39
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 2000 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 9.13   | B: 9.71   | A: 9.10
    Inner Join Test Time:                      C: 9.79   | B: 10.57  | A: 9.99
    Repartition Test Time:                     C: 8.50   | B: 8.93   | A: 8.42
    Group By Test Time:                        C: 6.62   | B: 6.85   | A: 6.58
    Calculate Pi Benchmark Using Dataframe:    C: 6.90   | B: 7.36   | A: 7.00
    Calculate Pi Benchmark:                    C: 124.49 | B: 131.92 | A: 123.85
    SHA-512 Benchmark Time:                    C: 12.44  | B: 12.99  | A: 12.45
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 100 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 10.21  | B: 10.16  | A: 10.08
    Inner Join Test Time:                      C: 10.06  | B: 10.20  | A: 10.22
    Repartition Test Time:                     C: 8.46   | B: 8.40   | A: 8.59
    Group By Test Time:                        C: 6.36   | B: 6.30   | A: 6.06
    Calculate Pi Benchmark Using Dataframe:    C: 7.00   | B: 6.98   | A: 6.99
    Calculate Pi Benchmark:                    C: 125.60 | B: 124.09 | A: 123.76
    SHA-512 Benchmark Time:                    C: 12.46  | B: 12.50  | A: 12.43
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 1000 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 8.80   | B: 8.72   | A: 8.85
    Inner Join Test Time:                      C: 9.42   | B: 9.92   | A: 9.48
    Repartition Test Time:                     C: 8.15   | B: 8.09   | A: 8.19
    Group By Test Time:                        C: 6.33   | B: 6.37   | A: 6.40
    Calculate Pi Benchmark Using Dataframe:    C: 6.91   | B: 7.30   | A: 6.93
    Calculate Pi Benchmark:                    C: 123.83 | B: 129.23 | A: 124.53
    SHA-512 Benchmark Time:                    C: 12.23  | B: 12.08  | A: 11.95
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 500 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 8.80   | B: 9.09   | A: 8.86
    Inner Join Test Time:                      C: 8.91   | B: 8.87   | A: 9.28
    Repartition Test Time:                     C: 8.07   | B: 8.24   | A: 7.97
    Group By Test Time:                        C: 6.39   | B: 6.29   | A: 6.51
    Calculate Pi Benchmark Using Dataframe:    C: 6.92   | B: 6.92   | A: 6.97
    Calculate Pi Benchmark:                    C: 124.29 | B: 124.13 | A: 122.86
    SHA-512 Benchmark Time:                    C: 11.98  | B: 12.03  | A: 12.23
Unvanquished 0.53 - Resolution: 3840 x 2160 - Effects Quality: Ultra (Frames Per Second, More Is Better)
    C: 39.3 | B: 39.4 | A: 39.4
Blender 3.3 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
    C: 155.23 | B: 154.74 | A: 154.91
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 1.99   | B: 1.91   | A: 2.02
    Inner Join Test Time:                      C: 2.19   | B: 2.36   | A: 2.34
    Repartition Test Time:                     C: 2.34   | B: 2.31   | A: 2.34
    Group By Test Time:                        C: 3.78   | B: 3.78   | A: 3.78
    Calculate Pi Benchmark Using Dataframe:    C: 6.87   | B: 6.98   | A: 7.00
    Calculate Pi Benchmark:                    C: 124.53 | B: 124.03 | A: 122.35
    SHA-512 Benchmark Time:                    C: 3.37   | B: 3.38   | A: 3.41
Mobile Neural Network 2.1 (ms, Fewer Is Better; MIN / MAX per run in parentheses)
    inception-v3:        C: 26.27 (26.17 / 32.34)  | B: 26.70 (26.61 / 32.85)  | A: 26.70 (26.6 / 32.72)
    mobilenet-v1-1.0:    C: 2.444 (2.42 / 3.26)    | B: 2.428 (2.4 / 3.34)     | A: 2.454 (2.43 / 3.38)
    MobileNetV2_224:     C: 2.470 (2.38 / 9.06)    | B: 2.408 (2.39 / 2.65)    | A: 2.394 (2.38 / 2.61)
    SqueezeNetV1.0:      C: 5.049 (5.02 / 5.87)    | B: 4.928 (4.89 / 6.13)    | A: 5.095 (5.06 / 6.27)
    resnet-v2-50:        C: 21.97 (21.9 / 27.98)   | B: 20.30 (20.23 / 26.54)  | A: 20.05 (19.99 / 21.33)
    squeezenetv1.1:      C: 3.051 (3.03 / 4.35)    | B: 2.657 (2.64 / 6.83)    | A: 3.058 (3.04 / 3.85)
    mobilenetV3:         C: 1.216 (1.2 / 1.45)     | B: 1.200 (1.19 / 1.33)    | A: 1.206 (1.19 / 1.51)
    nasnet:              C: 9.375 (9.34 / 9.98)    | B: 9.287 (9.25 / 11.36)   | A: 9.296 (9.25 / 15.62)
    1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 1000 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 1.50   | B: 1.54   | A: 1.85
    Inner Join Test Time:                      C: 1.85   | B: 1.92   | A: 1.80
    Repartition Test Time:                     C: 2.02   | B: 2.11   | A: 1.98
    Group By Test Time:                        C: 3.48   | B: 3.52   | A: 3.46
    Calculate Pi Benchmark Using Dataframe:    C: 6.97   | B: 6.95   | A: 6.93
    Calculate Pi Benchmark:                    C: 124.48 | B: 123.00 | A: 123.96
    SHA-512 Benchmark Time:                    C: 2.98   | B: 3.07   | A: 3.07
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 1.32   | B: 1.44   | A: 1.39
    Inner Join Test Time:                      C: 1.57   | B: 1.40   | A: 1.44
    Repartition Test Time:                     C: 1.89   | B: 1.90   | A: 1.87
    Group By Test Time:                        C: 3.23   | B: 3.28   | A: 3.27
    Calculate Pi Benchmark Using Dataframe:    C: 6.99   | B: 7.01   | A: 6.97
    Calculate Pi Benchmark:                    C: 125.00 | B: 123.54 | A: 123.62
    SHA-512 Benchmark Time:                    C: 2.83   | B: 2.95   | A: 2.89
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 (Seconds, Fewer Is Better)
    Broadcast Inner Join Test Time:            C: 1.13   | B: 1.11   | A: 1.16
    Inner Join Test Time:                      C: 1.42   | B: 1.44   | A: 1.41
    Repartition Test Time:                     C: 1.76   | B: 1.88   | A: 1.75
    Group By Test Time:                        C: 2.95   | B: 2.84   | A: 2.94
    Calculate Pi Benchmark Using Dataframe:    C: 7.00   | B: 7.12   | A: 6.98
    Calculate Pi Benchmark:                    C: 123.94 | B: 124.07 | A: 123.17
    SHA-512 Benchmark Time:                    C: 3.01   | B: 2.84   | A: 2.69
NCNN Target: CPU - Model: FastestDet OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: FastestDet C B A 1.0508 2.1016 3.1524 4.2032 5.254 4.67 4.03 4.07 MIN: 4.63 / MAX: 4.76 MIN: 4 / MAX: 4.13 MIN: 4.04 / MAX: 4.31 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN Target: CPU - Model: vision_transformer OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vision_transformer C B A 50 100 150 200 250 221.23 204.75 204.96 MIN: 220.87 / MAX: 226.29 MIN: 204.28 / MAX: 211.49 MIN: 204.39 / MAX: 232.99 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN Target: CPU - Model: regnety_400m OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: regnety_400m C B A 2 4 6 8 10 8.63 8.61 8.61 MIN: 8.54 / MAX: 9.9 MIN: 8.54 / MAX: 9.56 MIN: 8.53 / MAX: 9.53 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
NCNN 20220729 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better): C: 13.31 (min 13.15 / max 14.36), B: 13.60 (min 13.43 / max 14.86), A: 13.40 (min 13.28 / max 14.44) [all NCNN tests compiled with g++ -O3 -rdynamic -lgomp -lpthread]
NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, fewer is better): C: 19.20 (min 19.05 / max 19.44), B: 20.00 (min 19.86 / max 20.46), A: 19.86 (min 19.65 / max 20.15)
NCNN 20220729 - Target: CPU - Model: resnet50 (ms, fewer is better): C: 14.57 (min 14.41 / max 16), B: 14.40 (min 14.23 / max 15.65), A: 14.37 (min 14.18 / max 20.35)
NCNN 20220729 - Target: CPU - Model: alexnet (ms, fewer is better): C: 5.73 (min 5.62 / max 6.64), B: 5.76 (min 5.66 / max 6.99), A: 5.77 (min 5.68 / max 6.67)
NCNN 20220729 - Target: CPU - Model: resnet18 (ms, fewer is better): C: 7.40 (min 7.24 / max 8.41), B: 7.41 (min 7.23 / max 8.33), A: 7.51 (min 7.37 / max 8.45)
NCNN 20220729 - Target: CPU - Model: vgg16 (ms, fewer is better): C: 38.14 (min 37.78 / max 39.62), B: 37.99 (min 37.73 / max 39.19), A: 37.96 (min 37.71 / max 39.27)
NCNN 20220729 - Target: CPU - Model: googlenet (ms, fewer is better): C: 9.53 (min 9.36 / max 10.48), B: 9.16 (min 9.03 / max 10.05), A: 9.22 (min 9.07 / max 10.19)
NCNN 20220729 - Target: CPU - Model: blazeface (ms, fewer is better): C: 1.02 (min 0.99 / max 1.8), B: 1.02 (min 1 / max 1.32), A: 1.02 (min 0.99 / max 1.85)
NCNN 20220729 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better): C: 5.41 (min 5.34 / max 9.03), B: 5.44 (min 5.38 / max 6.65), A: 5.44 (min 5.37 / max 6.29)
NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better): C: 3.02 (min 2.96 / max 3.8), B: 3.04 (min 2.98 / max 4.18), A: 3.00 (min 2.95 / max 3.84)
NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better): C: 3.09 (min 3.06 / max 3.82), B: 3.06 (min 3.03 / max 4.06), A: 3.14 (min 3.08 / max 3.89)
NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): C: 2.83 (min 2.79 / max 3.63), B: 2.90 (min 2.86 / max 4.11), A: 2.79 (min 2.75 / max 3.65)
NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): C: 3.49 (min 3.43 / max 4.35), B: 3.66 (min 3.61 / max 4.88), A: 3.46 (min 3.4 / max 4.32)
NCNN 20220729 - Target: CPU - Model: mobilenet (ms, fewer is better): C: 11.64 (min 11.49 / max 18.31), B: 12.31 (min 12.19 / max 17.29), A: 11.16 (min 11.07 / max 11.31)
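A, B, and C are three runs of the same i5-12600K system, so the per-model differences above are essentially run-to-run variance. As an illustration (not part of the Phoronix Test Suite output), a short Python sketch computes the relative spread across the three runs for a few of the NCNN results, using values taken from the table:

```python
# Run-to-run spread for selected NCNN CPU results above (ms per
# inference, lower is better; values copied from the result table).
results = {
    "squeezenet_ssd": {"A": 13.40, "B": 13.60, "C": 13.31},
    "yolov4-tiny":    {"A": 19.86, "B": 20.00, "C": 19.20},
    "mobilenet":      {"A": 11.16, "B": 12.31, "C": 11.64},
}

def spread_pct(runs):
    """Relative spread across runs: (max - min) / mean, in percent."""
    vals = list(runs.values())
    mean = sum(vals) / len(vals)
    return 100.0 * (max(vals) - min(vals)) / mean

for model, runs in results.items():
    print(f"{model}: {spread_pct(runs):.1f}% spread across runs")
```

By this measure mobilenet is the noisiest of the three models shown, with roughly a 10% gap between its fastest and slowest run.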
Blender 3.3 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better): C: 107.26, B: 107.60, A: 107.38
Unvanquished 0.53 - Resolution: 3840 x 2160 - Effects Quality: High (Frames Per Second, more is better): C: 66.4, B: 66.4, A: 66.4
Unvanquished 0.53 - Resolution: 2560 x 1440 - Effects Quality: Ultra (Frames Per Second, more is better): C: 73.7, B: 73.7, A: 73.8
Timed Erlang/OTP Compilation 25.0 - Time To Compile (Seconds, fewer is better): C: 88.37, B: 88.06, A: 87.70
Unvanquished 0.53 - Resolution: 3840 x 2160 - Effects Quality: Medium (Frames Per Second, more is better): C: 79.6, B: 79.6, A: 79.5
SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better): C: 2.020, B: 2.016, A: 2.030 [compiled with g++ -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]
memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1 (Ops/sec, more is better): C: 2966884.56, B: 2602826.18, A: 2550575.11 [memtier_benchmark and Dragonflydb compiled with g++ -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre]
memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 (Ops/sec, more is better): C: 2738214.13, B: 3347517.25, A: 2687075.03
memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1 (Ops/sec, more is better): C: 2382868.01, B: 2536364.42, A: 2416811.03
memtier_benchmark 1.4 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 (Ops/sec, more is better): C: 2670507.06, B: 2785214.15, A: 2688499.75
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:1 (Ops/sec, more is better): C: 3982880.38, B: 3972323.46, A: 3994130.34
Unvanquished 0.53 - Resolution: 1920 x 1200 - Effects Quality: Ultra (Frames Per Second, more is better): C: 105.8, B: 105.7, A: 105.7
memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 (Ops/sec, more is better): C: 2679655.73, B: 2649407.05, A: 2763821.75 [memtier_benchmark and Dragonflydb compiled with g++ -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre]
memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1 (Ops/sec, more is better): C: 2106869.82, B: 2512633.00, A: 2410478.31
memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1 (Ops/sec, more is better): C: 2427170.31, B: 2511082.41, A: 2582100.75
memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 (Ops/sec, more is better): C: 2813689.56, B: 2773551.23, A: 2773074.52
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 1:5 (Ops/sec, more is better): C: 4176068.34, B: 4208868.69, A: 4208682.77
Dragonflydb 0.6 - Clients: 200 - Set To Get Ratio: 5:1 (Ops/sec, more is better): C: 3864893.24, B: 3825141.56, A: 3881918.88
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, more is better): C: 2415948.89, B: 2336486.40, A: 2540352.24 [memtier_benchmark and Dragonflydb compiled with g++ -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre]
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, more is better): C: 2476859.81, B: 2555526.92, A: 2835321.53
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, more is better): C: 2806199.11, B: 2865261.93, A: 2773955.34
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, more is better): C: 2468655.32, B: 2696467.92, A: 2828630.76
Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, more is better): C: 4107472.56, B: 4136083.87, A: 4159557.48
Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, more is better): C: 4460778.29, B: 4489630.35, A: 4524509.15
Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, more is better): C: 4206479.32, B: 4209125.68, A: 4219269.32
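The memtier_benchmark numbers swing noticeably between runs; in the 500-client 1:5 test, for example, run B lands well above A and C. As an illustration (not part of the result file), a small Python sketch quantifies that run-to-run spread using the 500-client memtier values above:

```python
# Relative spread across runs A/B/C for the memtier_benchmark
# Redis 500-client results above (Ops/sec; values from the table).
results = {
    "1:1":  {"A": 2550575.11, "B": 2602826.18, "C": 2966884.56},
    "1:5":  {"A": 2687075.03, "B": 3347517.25, "C": 2738214.13},
    "5:1":  {"A": 2416811.03, "B": 2536364.42, "C": 2382868.01},
    "1:10": {"A": 2688499.75, "B": 2785214.15, "C": 2670507.06},
}

def spread_pct(runs):
    """Relative spread across runs: (max - min) / mean, in percent."""
    vals = list(runs.values())
    return 100.0 * (max(vals) - min(vals)) / (sum(vals) / len(vals))

for ratio, runs in results.items():
    print(f"set:get {ratio}: {spread_pct(runs):.1f}% spread across runs")
```

The 1:5 ratio shows a spread above 20% while 1:10 stays under 5%, so single-run deltas in these Redis tests should be read with caution.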
ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, more is better): C: 0.7486, B: 0.7482, A: 0.7483 [compiled with g++ -O3 -flto -pthread]
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, fewer is better): C: 1736.86 (min 1712.2 / max 1770.88), B: 1734.99 (min 1708.96 / max 1773.08), A: 1730.06 (min 1709.42 / max 1761.16) [all OpenVINO tests compiled with g++ -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared]
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, more is better): C: 2.28, B: 2.28, A: 2.28
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, fewer is better): C: 2256.98 (min 1901.03 / max 2807.39), B: 2255.03 (min 1890.66 / max 2807.42), A: 2257.35 (min 1892.63 / max 2810.33)
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, more is better): C: 1.74, B: 1.74, A: 1.74
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, fewer is better): C: 2255.20 (min 1893.65 / max 2806.3), B: 2252.28 (min 1918.36 / max 2814.7), A: 2257.15 (min 1889.93 / max 2814.63)
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, more is better): C: 1.74, B: 1.75, A: 1.74
Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, more is better): C: 114.0, B: 114.1, A: 114.0
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better): C: 395.44 (min 328.43 / max 849.76), B: 395.59 (min 331.26 / max 849.69), A: 395.54 (min 332.95 / max 848.36) [all OpenVINO tests compiled with g++ -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared]
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better): C: 10.06, B: 10.06, A: 10.07
OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better): C: 127.17 (min 105.34 / max 184.06), B: 127.36 (min 109.38 / max 183.36), A: 127.81 (min 114.95 / max 185.69)
OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better): C: 31.43, B: 31.37, A: 31.28
OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, fewer is better): C: 11.08 (min 9.4 / max 18.35), B: 11.11 (min 9.45 / max 24.64), A: 11.05 (min 9.3 / max 19.19) [all OpenVINO tests compiled with g++ -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared]
OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better): C: 360.51, B: 359.57, A: 361.68
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better): C: 51.71 (min 51.25 / max 52.3), B: 51.76 (min 51.27 / max 52.27), A: 14.34 (min 12.92 / max 25.33)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better): C: 193.22, B: 192.88, A: 278.74
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better): C: 8.61 (min 7.46 / max 19.12), B: 8.61 (min 7.45 / max 18.65), A: 8.64 (min 7.42 / max 28.74)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better): C: 463.87, B: 463.70, A: 462.34
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better): C: 19.57 (min 15.33 / max 26.78), B: 19.33 (min 13.45 / max 26.38), A: 19.48 (min 15.62 / max 26.77)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better): C: 204.23, B: 206.79, A: 205.21
GraphicsMagick 1.3.38 - Operation: Sharpen (Iterations Per Minute, more is better): C: 167, B: 168, A: 166 [all GraphicsMagick tests compiled with gcc -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread]
GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, more is better): C: 272, B: 272, A: 270
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better): C: 13.39 (min 7.89 / max 16.11), B: 13.38 (min 8.15 / max 21.54), A: 13.16 (min 10.33 / max 13.9) [all OpenVINO tests compiled with g++ -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -flto -shared]
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better): C: 741.97, B: 742.81, A: 754.73
Facebook RocksDB 7.5.3 - Test: Random Fill Sync (Op/s, more is better): C: 14636, B: 14659, A: 14631 [all RocksDB tests compiled with g++ -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread]
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better): C: 1.85 (min 1.07 / max 3.67), B: 1.86 (min 1.02 / max 3.04), A: 1.84 (min 1.02 / max 3)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better): C: 5136.99, B: 5129.86, A: 5175.16
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better): C: 0.74 (min 0.51 / max 2.08), B: 0.74 (min 0.51 / max 2.09), A: 0.74 (min 0.51 / max 2.51)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better): C: 13357.94, B: 13347.46, A: 13292.83
Facebook RocksDB 7.5.3 - Test: Random Fill (Op/s, more is better): C: 987274, B: 995286, A: 1011444 [all RocksDB tests compiled with g++ -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread]
GraphicsMagick 1.3.38 - Operation: Swirl (Iterations Per Minute, more is better): C: 575, B: 573, A: 567 [all GraphicsMagick tests compiled with gcc -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread]
Facebook RocksDB 7.5.3 - Test: Update Random (Op/s, more is better): C: 585584, B: 572446, A: 577444
GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute, more is better): C: 317, B: 319, A: 316
Facebook RocksDB 7.5.3 - Test: Read Random Write Random (Op/s, more is better): C: 2079557, B: 2081251, A: 2080793
Facebook RocksDB 7.5.3 - Test: Read While Writing (Op/s, more is better): C: 2215141, B: 2118766, A: 2121137
GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute, more is better): C: 1163, B: 1165, A: 1148
Facebook RocksDB 7.5.3 - Test: Random Read (Op/s, more is better): C: 79085341, B: 77772723, A: 79117222
GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, more is better): C: 1217, B: 1210, A: 1171
GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, more is better): C: 1242, B: 1239, A: 1190
Node.js V8 Web Tooling Benchmark (runs/s, more is better): C: 18.57, B: 18.89, A: 18.57
Unvanquished 0.53 - Resolution: 2560 x 1440 - Effects Quality: High (Frames Per Second, more is better): C: 136.5, B: 136.7, A: 137.2
Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, fewer is better): C: 52.89, B: 53.01, A: 52.95
Timed Wasmer Compilation 2.3 - Time To Compile (Seconds, fewer is better): C: 51.76, B: 51.60, A: 51.86 [compiled with gcc -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs]
Unvanquished 0.53 - Resolution: 2560 x 1440 - Effects Quality: Medium (Frames Per Second, more is better): C: 162.8, B: 163.0, A: 163.7
etcd 3.5.4 - Test: RANGE - Connections: 100 - Clients: 100 - Average Latency (ms, fewer is better): C: 0.9, B: 0.9, A: 0.9
etcd 3.5.4 - Test: RANGE - Connections: 100 - Clients: 100 (Requests/sec, more is better): C: 111365.03, B: 111448.46, A: 111362.12
etcd 3.5.4 - Test: PUT - Connections: 100 - Clients: 100 - Average Latency (ms, fewer is better): C: 0.9, B: 0.9, A: 0.9
etcd 3.5.4 - Test: PUT - Connections: 100 - Clients: 100 (Requests/sec, more is better): C: 111437.66, B: 111324.98, A: 112169.91
etcd 3.5.4 - Test: PUT - Connections: 500 - Clients: 100 - Average Latency (ms, fewer is better): C: 0.9, B: 0.9, A: 0.9
etcd 3.5.4 - Test: PUT - Connections: 500 - Clients: 100 (Requests/sec, more is better): C: 112565.75, B: 112419.09, A: 112654.91
etcd 3.5.4 - Test: RANGE - Connections: 500 - Clients: 100 - Average Latency (ms, fewer is better): C: 0.9, B: 0.9, A: 0.9
etcd 3.5.4 - Test: RANGE - Connections: 500 - Clients: 100 (Requests/sec, more is better): C: 112498.87, B: 112831.39, A: 112561.31
etcd 3.5.4 - Test: PUT - Connections: 50 - Clients: 100 - Average Latency (ms, fewer is better): C: 0.8, B: 0.8, A: 0.8
etcd 3.5.4 - Test: PUT - Connections: 50 - Clients: 100 (Requests/sec, more is better): C: 118067.39, B: 118167.76, A: 117936.32
etcd 3.5.4 - Test: RANGE - Connections: 50 - Clients: 100 - Average Latency (ms, fewer is better): C: 0.8, B: 0.8, A: 0.8
etcd 3.5.4 - Test: RANGE - Connections: 50 - Clients: 100 (Requests/sec, more is better): C: 118167.13, B: 118320.30, A: 118377.25
etcd 3.5.4 - Test: RANGE - Connections: 500 - Clients: 1000 - Average Latency (ms, fewer is better): C: 7.7, B: 7.7, A: 7.7
etcd 3.5.4 - Test: RANGE - Connections: 500 - Clients: 1000 (Requests/sec, more is better): C: 125868.58, B: 125808.66, A: 125740.59
etcd 3.5.4 - Test: PUT - Connections: 500 - Clients: 1000 - Average Latency (ms, fewer is better): C: 7.7, B: 7.7, A: 7.7
etcd 3.5.4 - Test: PUT - Connections: 500 - Clients: 1000 (Requests/sec, more is better): C: 125808.38, B: 126006.70, A: 126034.68
Unvanquished 0.53 - Resolution: 1920 x 1200 - Effects Quality: High (Frames Per Second, more is better): C: 203.9, B: 205.6, A: 204.4
Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, more is better): C: 221.4, B: 222.6, A: 223.9
ASTC Encoder 4.0 - Preset: Thorough (MT/s, more is better): C: 7.9396, B: 7.9426, A: 7.9430 [compiled with g++ -O3 -flto -pthread]
Unvanquished 0.53 - Resolution: 1920 x 1200 - Effects Quality: Medium (Frames Per Second, more is better): C: 244.7, B: 245.2, A: 243.7
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, more is better): C: 173.2, B: 172.9, A: 172.6 [all srsRAN tests compiled with g++ -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm]
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, more is better): C: 578.7, B: 578.8, A: 581.7
etcd 3.5.4 - Test: PUT - Connections: 100 - Clients: 1000 - Average Latency (ms, fewer is better): C: 6.1, B: 6.1, A: 6.1
etcd 3.5.4 - Test: PUT - Connections: 100 - Clients: 1000 (Requests/sec, more is better): C: 163135.45, B: 163025.22, A: 162474.20
etcd 3.5.4 - Test: RANGE - Connections: 100 - Clients: 1000 - Average Latency (ms, fewer is better): C: 6.1, B: 6.1, A: 6.1
etcd 3.5.4 - Test: RANGE - Connections: 100 - Clients: 1000 (Requests/sec, more is better): C: 163558.71, B: 163552.13, A: 163123.85
Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Medium (Frames Per Second, more is better): C: 263.3, B: 267.1, A: 267.8
etcd 3.5.4 - Test: RANGE - Connections: 50 - Clients: 1000 - Average Latency (ms, fewer is better): C: 5.9, B: 5.9, A: 5.9
etcd 3.5.4 - Test: RANGE - Connections: 50 - Clients: 1000 (Requests/sec, more is better): C: 168376.89, B: 168112.89, A: 168558.40
etcd 3.5.4 - Test: PUT - Connections: 50 - Clients: 1000 - Average Latency (ms, fewer is better): C: 5.9, B: 5.9, A: 5.9
etcd 3.5.4 - Test: PUT - Connections: 50 - Clients: 1000 (Requests/sec, more is better): C: 168534.31, B: 168241.77, A: 168856.20
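The etcd results report average latency and throughput as separate charts, but for a closed-loop load generator with a fixed number of in-flight clients the two are tied together by Little's law (concurrency ≈ throughput × latency). As an illustrative, approximate cross-check (not part of the result file), using the 50-connection / 1000-client PUT numbers above:

```python
# Little's law sanity check on the etcd results above: with a fixed
# number of in-flight clients, latency ≈ clients / throughput.
# Values copied from the table; the comparison is only approximate.
clients = 1000
throughput_rps = 168856.20   # PUT - Connections: 50 - Clients: 1000, run A
reported_latency_ms = 5.9    # reported average latency for that test

implied_latency_ms = clients / throughput_rps * 1000.0
print(f"implied {implied_latency_ms:.2f} ms vs reported {reported_latency_ms} ms")
```

The implied value comes out at roughly 5.9 ms, matching the reported average latency, which suggests the benchmark's two charts are consistent with a fully saturated closed loop.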
Aircrack-ng 1.7 (k/s, more is better): C: 33749.55, B: 33109.84, A: 33031.90 [compiled with g++ -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread]
Natron 2.4.3 - Input: Spaceship (FPS, more is better): C: 3.5, B: 3.6, A: 3.6
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, more is better): C: 151.4, B: 151.6, A: 151.0 [all srsRAN tests compiled with g++ -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm]
srsRAN 22.04.1 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, more is better): C: 521.2, B: 527.8, A: 523.5
7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better): C: 59506, B: 61210, A: 61355 [compiled with g++ -lpthread -ldl -O2 -fPIC]
7-Zip Compression 22.01 - Test: Compression Rating (MIPS, more is better): C: 77380, B: 77477, A: 75966
SVT-AV1 Encoder Mode: Preset 4 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 4 - Input: Bosphorus 1080p C B A 2 4 6 8 10 6.344 6.359 6.359 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Redis Test: LPUSH - Parallel Connections: 500 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: LPUSH - Parallel Connections: 500 C B A 600K 1200K 1800K 2400K 3000K 2168859.75 2947470.75 2929168.00 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
srsRAN Test: OFDM_Test OpenBenchmarking.org Samples / Second, More Is Better srsRAN 22.04.1 Test: OFDM_Test C B A 40M 80M 120M 160M 200M 185500000 183000000 168600000 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
Redis Test: LPOP - Parallel Connections: 1000 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: LPOP - Parallel Connections: 1000 C B A 600K 1200K 1800K 2400K 3000K 2747067.50 2840960.00 2848729.25 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Redis Test: LPUSH - Parallel Connections: 1000 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: LPUSH - Parallel Connections: 1000 C B A 600K 1200K 1800K 2400K 3000K 2918397.00 2941056.25 2844518.25 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Redis Test: LPUSH - Parallel Connections: 50 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: LPUSH - Parallel Connections: 50 C B A 600K 1200K 1800K 2400K 3000K 2718001.00 2851816.25 2822836.25 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
ASTC Encoder Preset: Fast OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Fast C B A 40 80 120 160 200 160.06 160.03 160.13 1. (CXX) g++ options: -O3 -flto -pthread
Redis Test: SET - Parallel Connections: 500 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 500 C B A 700K 1400K 2100K 2800K 3500K 3312414.50 2669989.75 3388742.50 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Redis Test: LPOP - Parallel Connections: 50 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: LPOP - Parallel Connections: 50 C B A 1000K 2000K 3000K 4000K 5000K 2659905.00 2755629.25 4457927.00 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Redis Test: SET - Parallel Connections: 1000 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 1000 C B A 800K 1600K 2400K 3200K 4000K 3246355.0 3273479.5 3502199.0 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Redis Test: LPOP - Parallel Connections: 500 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: LPOP - Parallel Connections: 500 C B A 1.1M 2.2M 3.3M 4.4M 5.5M 2617574.0 2834518.0 5223662.5 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Redis Test: SADD - Parallel Connections: 1000 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SADD - Parallel Connections: 1000 C B A 800K 1600K 2400K 3200K 4000K 3752102.50 3465545.75 3297778.00 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Redis Test: SET - Parallel Connections: 50 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 50 C B A 800K 1600K 2400K 3200K 4000K 3243652.00 3119402.25 3555745.75 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Primesieve Length: 1e12 OpenBenchmarking.org Seconds, Fewer Is Better Primesieve 8.0 Length: 1e12 C B A 5 10 15 20 25 20.21 20.29 20.22 1. (CXX) g++ options: -O3
Redis Test: SADD - Parallel Connections: 500 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SADD - Parallel Connections: 500 C B A 900K 1800K 2700K 3600K 4500K 4012269.00 3854227.75 3875137.25 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Redis Test: SADD - Parallel Connections: 50 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SADD - Parallel Connections: 50 C B A 900K 1800K 2700K 3600K 4500K 3943910.25 3997500.25 3638545.75 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Redis Test: GET - Parallel Connections: 1000 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: GET - Parallel Connections: 1000 C B A 900K 1800K 2700K 3600K 4500K 4293396.5 4364222.5 3881387.0 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Redis Test: GET - Parallel Connections: 500 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: GET - Parallel Connections: 500 C B A 900K 1800K 2700K 3600K 4500K 4248164.5 4223762.0 4243838.0 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Redis Test: GET - Parallel Connections: 50 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: GET - Parallel Connections: 50 C B A 1.1M 2.2M 3.3M 4.4M 5.5M 4344096.50 3822955.75 5352183.50 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Timed CPython Compilation Build Configuration: Default OpenBenchmarking.org Seconds, Fewer Is Better Timed CPython Compilation 3.10.6 Build Configuration: Default C B A 4 8 12 16 20 15.40 15.49 15.51
srsRAN Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM C B A 20 40 60 80 100 87 88 88 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
srsRAN Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM C B A 40 80 120 160 200 188.4 189.3 189.4 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
SVT-AV1 Encoder Mode: Preset 8 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 8 - Input: Bosphorus 4K C B A 10 20 30 40 50 43.96 43.71 43.53 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM C B A 50 100 150 200 250 209.7 209.6 209.8 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM C B A 130 260 390 520 650 587.8 590.7 594.7 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
Facebook RocksDB Test: Sequential Fill OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Sequential Fill C B A 300K 600K 900K 1200K 1500K 1195182 1163406 1175330 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM C B A 40 80 120 160 200 189.6 191.2 191.5 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
srsRAN Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM C B A 120 240 360 480 600 534.7 537.6 541.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
ASTC Encoder Preset: Medium OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Medium C B A 14 28 42 56 70 60.89 60.87 60.85 1. (CXX) g++ options: -O3 -flto -pthread
C-Blosc Test: blosclz bitshuffle OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.3 Test: blosclz bitshuffle C B A 2K 4K 6K 8K 10K 9257.1 9258.0 8941.5 1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm
SVT-AV1 Encoder Mode: Preset 10 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 10 - Input: Bosphorus 4K C B A 20 40 60 80 100 88.56 86.53 85.52 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Unpacking The Linux Kernel linux-5.19.tar.xz OpenBenchmarking.org Seconds, Fewer Is Better Unpacking The Linux Kernel 5.19 linux-5.19.tar.xz C B A 1.2848 2.5696 3.8544 5.1392 6.424 5.710 5.699 5.688
SVT-AV1 Encoder Mode: Preset 12 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 12 - Input: Bosphorus 4K C B A 30 60 90 120 150 125.22 127.04 121.73 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
C-Blosc Test: blosclz shuffle OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.3 Test: blosclz shuffle C B A 4K 8K 12K 16K 20K 17013.4 16992.7 16672.5 1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm
SVT-AV1 Encoder Mode: Preset 8 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 8 - Input: Bosphorus 1080p C B A 30 60 90 120 150 118.97 120.54 119.55 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
LAMMPS Molecular Dynamics Simulator Model: Rhodopsin Protein OpenBenchmarking.org ns/day, More Is Better LAMMPS Molecular Dynamics Simulator 23Jun2022 Model: Rhodopsin Protein C B A 1.319 2.638 3.957 5.276 6.595 5.862 5.695 5.825 1. (CXX) g++ options: -O3 -lm -ldl
SVT-AV1 Encoder Mode: Preset 10 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 10 - Input: Bosphorus 1080p C B A 60 120 180 240 300 265.94 260.78 260.88 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-AV1 Encoder Mode: Preset 12 - Input: Bosphorus 1080p OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 12 - Input: Bosphorus 1080p C B A 100 200 300 400 500 457.51 446.14 441.12 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Phoronix Test Suite v10.8.4