sfsd: Intel Core i7-8700K testing with an ASUS TUF Z370-PLUS GAMING (2001 BIOS) and ASUS Intel UHD 630 CFL GT2 16GB on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2209060-NE-SFSD7970842&sro&grr.
System Details (runs A, B, and C used the same configuration)

Processor: Intel Core i7-8700K @ 4.70GHz (6 Cores / 12 Threads)
Motherboard: ASUS TUF Z370-PLUS GAMING (2001 BIOS)
Chipset: Intel 8th Gen Core
Memory: 16GB
Disk: 128GB Toshiba THNSN5128GPU7
Graphics: ASUS Intel UHD 630 CFL GT2 16GB (1200MHz)
Audio: Realtek ALC887-VD
Monitor: DELL S2409W
Network: Intel I219-V
OS: Ubuntu 22.04
Kernel: 5.19.0-rc6-phx-retbleed (x86_64)
Desktop: GNOME Shell 42.4
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details - NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details - Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xf0 - Thermald 2.4.9
Java Details - OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details - Python 3.10.4
Security Details:
- itlb_multihit: KVM: Mitigation of VMX disabled
- l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable
- mds: Mitigation of Clear buffers; SMT vulnerable
- meltdown: Mitigation of PTI
- mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable
- retbleed: Mitigation of IBRS
- spec_store_bypass: Mitigation of SSB disabled via prctl
- spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
- spectre_v2: Mitigation of IBRS; IBPB: conditional; RSB filling
- srbds: Mitigation of Microcode
- tsx_async_abort: Mitigation of TSX disabled
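On Linux, mitigation strings like the ones listed above are exposed by the kernel under `/sys/devices/system/cpu/vulnerabilities`, one file per issue. A minimal sketch of collecting them (a hypothetical helper, not Phoronix Test Suite code; it returns an empty mapping where that sysfs tree is absent):

```python
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def read_mitigations(vuln_dir=VULN_DIR):
    """Map each CPU vulnerability name to its kernel-reported status,
    e.g. {'meltdown': 'Mitigation: PTI'}. Empty dict if unsupported."""
    if not vuln_dir.is_dir():
        return {}
    return {f.name: f.read_text().strip() for f in sorted(vuln_dir.iterdir())}

for name, status in read_mitigations().items():
    print(f"{name}: {status}")
```

Note the raw kernel format is "Mitigation: PTI"; the report above renders it as "Mitigation of PTI".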
[Results overview table: side-by-side raw values for runs A, B, and C across all tests - Timed Node.js Compilation; AI Benchmark Alpha; Apache Spark 3.3 (row counts 1000000 / 10000000 / 20000000 / 40000000 at partitions 100 / 500 / 1000 / 2000: Broadcast Inner Join, Inner Join, Repartition, Group By, Calculate Pi, Calculate Pi Using Dataframe, SHA-512); Primesieve; ClickHouse; MNN; Timed Python / Erlang / PHP / Wasmer builds; NCNN; SVT-AV1; Redis; Unvanquished; ASTC Encoder; Node.js web-tooling; Dragonflydb; memtier-benchmark; OpenVINO; GraphicsMagick; Natron; 7-Zip; Aircrack-ng; Inkscape; Blosc; Linux kernel unpacking; LAMMPS. The individual per-test results with error estimates follow below.]
Timed Node.js Compilation 18.8 - Time To Compile (Seconds, Fewer Is Better; SE +/- 0.84, N = 3): A: 957.97 / B: 957.36 / C: 955.95
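Each result above carries an "SE +/- x, N = y" annotation: the standard error of the mean over N runs. A minimal sketch of how such a figure can be derived (illustrative only; the exact aggregation is done internally by the Phoronix Test Suite, so the value need not match the report):

```python
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    n = len(samples)
    return statistics.stdev(samples) / (n ** 0.5)

# The three Node.js compile times (seconds) from the result above.
runs = [957.97, 957.36, 955.95]
print(f"mean = {statistics.mean(runs):.2f} s, "
      f"SE +/- {standard_error(runs):.2f}, N = {len(runs)}")
```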
AI Benchmark Alpha 0.1.2 - Device AI Score (Score, More Is Better): A: 1948 / B: 1950 / C: 1933
AI Benchmark Alpha 0.1.2 - Device Training Score (Score, More Is Better): A: 986 / B: 985 / C: 984
AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, More Is Better): A: 962 / B: 965 / C: 949
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.06, N = 9): A: 3.27 / B: 3.55 / C: 3.38
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.039029208, N = 9): A: 3.598246173 / B: 3.70 / C: 3.62
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Repartition Test Time (Seconds, Fewer Is Better; SE +/- 0.05, N = 9): A: 4.06 / B: 3.92 / C: 4.03
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Group By Test Time (Seconds, Fewer Is Better; SE +/- 0.03, N = 9): A: 5.23 / B: 5.26 / C: 5.03
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better; SE +/- 0.10, N = 9): A: 14.04 / B: 14.35 / C: 14.07
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, Fewer Is Better; SE +/- 0.16, N = 9): A: 263.65 / B: 263.96 / C: 265.81
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time (Seconds, Fewer Is Better; SE +/- 0.05, N = 9): A: 4.94 / B: 5.31 / C: 5.32
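The "Broadcast Inner Join" tests exercise a join strategy where the smaller table is shipped whole to every executor and joined via a hash lookup, avoiding a shuffle of the large side. A toy single-process sketch of the idea with hypothetical data (real Spark distributes this across executors, e.g. via a `broadcast()` hint):

```python
from collections import defaultdict

def broadcast_inner_join(large, small, key):
    """Hash join: build a lookup table from the small ('broadcast') side,
    then stream the large side through it. Rows without a match are dropped
    (inner-join semantics)."""
    lookup = defaultdict(list)
    for row in small:
        lookup[row[key]].append(row)
    joined = []
    for row in large:
        for match in lookup.get(row[key], []):
            joined.append({**row, **match})
    return joined

# Hypothetical tables: six orders, two known users.
orders = [{"id": i, "user": i % 3} for i in range(6)]
users = [{"user": 0, "name": "a"}, {"user": 1, "name": "b"}]
result = broadcast_inner_join(orders, users, "user")
```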
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.11, N = 9): A: 2.23 / B: 2.34 / C: 2.37
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.03, N = 9): A: 2.57 / B: 2.45 / C: 2.62
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Repartition Test Time (Seconds, Fewer Is Better; SE +/- 0.02, N = 9): A: 3.57 / B: 3.56 / C: 3.41
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Group By Test Time (Seconds, Fewer Is Better; SE +/- 0.02, N = 9): A: 4.48 / B: 4.30 / C: 4.37
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better; SE +/- 0.03, N = 9): A: 13.91 / B: 13.78 / C: 14.00
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark (Seconds, Fewer Is Better; SE +/- 0.36, N = 9): A: 264.23 / B: 264.14 / C: 266.50
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time (Seconds, Fewer Is Better; SE +/- 0.05, N = 9): A: 4.67 / B: 4.51 / C: 4.69
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.97, N = 4): A: 57.02 / B: 56.92 / C: 58.75
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 1.35, N = 4): A: 58.60 / B: 58.34 / C: 57.94
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Repartition Test Time (Seconds, Fewer Is Better; SE +/- 0.71, N = 4): A: 51.92 / B: 57.92 / C: 50.03
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Group By Test Time (Seconds, Fewer Is Better; SE +/- 0.10, N = 4): A: 35.68 / B: 35.08 / C: 36.32
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better; SE +/- 0.26, N = 4): A: 13.94 / B: 14.02 / C: 14.19
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, Fewer Is Better; SE +/- 0.20, N = 4): A: 263.32 / B: 263.85 / C: 266.04
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, Fewer Is Better; SE +/- 0.79, N = 4): A: 64.39 / B: 64.85 / C: 65.48
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.02, N = 9): A: 1.86 / B: 1.93 / C: 1.91
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.02, N = 9): A: 2.34 / B: 2.28 / C: 2.28
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Repartition Test Time (Seconds, Fewer Is Better; SE +/- 0.02, N = 9): A: 3.31 / B: 3.10 / C: 3.28
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Group By Test Time (Seconds, Fewer Is Better; SE +/- 0.04, N = 9): A: 3.82 / B: 3.99 / C: 3.90
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better; SE +/- 0.09, N = 9): A: 14.04 / B: 13.88 / C: 14.10
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, Fewer Is Better; SE +/- 0.43, N = 9): A: 265.15 / B: 263.54 / C: 265.96
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, Fewer Is Better; SE +/- 0.06, N = 9): A: 4.41 / B: 3.98 / C: 4.24
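The "Calculate Pi" workloads are in the spirit of the classic Monte Carlo SparkPi example: sample random points in the unit square and count how many fall inside the quarter circle. A serial sketch of the same estimator (illustrative only; the benchmark's actual Spark code and sample counts are not shown in this report):

```python
import random

def estimate_pi(samples, seed=42):
    """Monte Carlo estimate of pi: 4 * (fraction of uniform points in the
    unit square that land inside the unit quarter-circle)."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

# Accuracy improves roughly as 1/sqrt(samples); Spark parallelizes the
# sampling across partitions, which is why partition count matters above.
pi_estimate = estimate_pi(200_000)
```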
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.68, N = 3): A: 55.74 / B: 55.24 / C: 54.84
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.81, N = 3): A: 59.15 / B: 60.76 / C: 58.05
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Repartition Test Time (Seconds, Fewer Is Better; SE +/- 0.42, N = 3): A: 50.01 / B: 51.19 / C: 47.48
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Group By Test Time (Seconds, Fewer Is Better; SE +/- 0.10, N = 3): A: 33.32 / B: 33.01 / C: 33.89
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better; SE +/- 0.06, N = 3): A: 13.86 / B: 14.87 / C: 13.88
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, Fewer Is Better; SE +/- 0.18, N = 3): A: 263.77 / B: 263.76 / C: 266.18
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time (Seconds, Fewer Is Better; SE +/- 0.31, N = 3): A: 61.26 / B: 61.72 / C: 61.87
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 1.23, N = 3): A: 57.41 / B: 55.53 / C: 54.76
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.91, N = 3): A: 55.31 / B: 56.99 / C: 57.19
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Repartition Test Time (Seconds, Fewer Is Better; SE +/- 0.93, N = 3): A: 48.58 / B: 47.22 / C: 48.18
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Group By Test Time (Seconds, Fewer Is Better; SE +/- 0.40, N = 3): A: 34.58 / B: 34.54 / C: 34.29
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better; SE +/- 0.02, N = 3): A: 13.99 / B: 13.96 / C: 13.97
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark (Seconds, Fewer Is Better; SE +/- 0.59, N = 3): A: 264.98 / B: 263.96 / C: 266.71
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time (Seconds, Fewer Is Better; SE +/- 0.45, N = 3): A: 62.29 / B: 62.61 / C: 62.43
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 1.93, N = 3): A: 58.10 / B: 56.38 / C: 55.67
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 - Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.99, N = 3): A: 59.05 / B: 55.65 / C: 55.99
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 - Repartition Test Time (Seconds, Fewer Is Better; SE +/- 0.26, N = 3): A: 46.66 / B: 50.60 / C: 47.20
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 - Group By Test Time (Seconds, Fewer Is Better; SE +/- 0.03, N = 3): A: 32.95 / B: 33.87 / C: 33.18
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better; SE +/- 0.07, N = 3): A: 14.00 / B: 13.99 / C: 13.95
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark (Seconds, Fewer Is Better; SE +/- 0.27, N = 3): A: 264.01 / B: 262.92 / C: 266.48
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 - SHA-512 Benchmark Time (Seconds, Fewer Is Better; SE +/- 0.13, N = 3): A: 62.08 / B: 61.83 / C: 62.14
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.02035353, N = 6): A: 2.56956962 / B: 2.40 / C: 2.52
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 1000 - Inner Join Test Time (Seconds, Fewer Is Better; SE +/- 0.03, N = 6): A: 2.80 / B: 3.00 / C: 2.94
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 1000 - Repartition Test Time (Seconds, Fewer Is Better; SE +/- 0.02, N = 6): A: 3.66 / B: 3.63 / C: 3.66
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 1000 - Group By Test Time (Seconds, Fewer Is Better; SE +/- 0.06, N = 6): A: 4.80 / B: 4.48 / C: 4.65
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better; SE +/- 0.14, N = 6): A: 14.07 / B: 14.10 / C: 14.11
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark (Seconds, Fewer Is Better; SE +/- 0.19, N = 6): A: 264.05 / B: 263.78 / C: 266.03
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark Time (Seconds, Fewer Is Better; SE +/- 0.083377130, N = 6): A: 4.664221858 / B: 4.64 / C: 4.76
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 100 (Seconds, Fewer Is Better)
  Broadcast Inner Join Test Time: A: 31.60 / B: 29.25 / C: 31.18 (SE +/- 0.32, N = 3)
  Inner Join Test Time: A: 29.61 / B: 28.94 / C: 29.71 (SE +/- 0.45, N = 3)
  Repartition Test Time: A: 25.99 / B: 25.17 / C: 26.74 (SE +/- 0.16, N = 3)
  Group By Test Time: A: 14.97 / B: 15.19 / C: 15.35 (SE +/- 0.19, N = 3)
  Calculate Pi Benchmark Using Dataframe: A: 14.05 / B: 14.09 / C: 13.99 (SE +/- 0.03, N = 3)
  Calculate Pi Benchmark: A: 264.38 / B: 265.85 / C: 265.83 (SE +/- 0.45, N = 3)
  SHA-512 Benchmark Time: A: 35.44 / B: 34.93 / C: 34.57 (SE +/- 0.15, N = 3)
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 2000 (Seconds, Fewer Is Better)
  Broadcast Inner Join Test Time: A: 27.95 / B: 28.79 / C: 27.65 (SE +/- 0.38, N = 3)
  Inner Join Test Time: A: 29.12 / B: 28.99 / C: 29.13 (SE +/- 0.30, N = 3)
  Repartition Test Time: A: 25.23 / B: 24.87 / C: 25.41 (SE +/- 0.18, N = 3)
  Group By Test Time: A: 15.34 / B: 14.74 / C: 15.30 (SE +/- 0.33, N = 3)
  Calculate Pi Benchmark Using Dataframe: A: 13.92 / B: 13.99 / C: 13.93 (SE +/- 0.08, N = 3)
  Calculate Pi Benchmark: A: 265.64 / B: 266.23 / C: 266.71 (SE +/- 0.31, N = 3)
  SHA-512 Benchmark Time: A: 33.48 / B: 33.37 / C: 33.54 (SE +/- 0.08, N = 3)
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 500 (Seconds, Fewer Is Better)
  Broadcast Inner Join Test Time: A: 28.70 / B: 28.75 / C: 27.69 (SE +/- 0.24, N = 3)
  Inner Join Test Time: A: 29.19 / B: 28.79 / C: 28.32 (SE +/- 0.16, N = 3)
  Repartition Test Time: A: 24.88 / B: 26.84 / C: 24.56 (SE +/- 0.08, N = 3)
  Group By Test Time: A: 14.63 / B: 15.19 / C: 14.78 (SE +/- 0.19, N = 3)
  Calculate Pi Benchmark Using Dataframe: A: 14.10 / B: 13.96 / C: 13.89 (SE +/- 0.08, N = 3)
  Calculate Pi Benchmark: A: 263.64 / B: 263.11 / C: 265.92 (SE +/- 0.37, N = 3)
  SHA-512 Benchmark Time: A: 33.70 / B: 33.75 / C: 33.71 (SE +/- 0.27, N = 3)
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 1000 (Seconds, Fewer Is Better)
  Broadcast Inner Join Test Time: A: 27.33 / B: 26.95 / C: 27.76 (SE +/- 0.49, N = 3)
  Inner Join Test Time: A: 29.20 / B: 28.25 / C: 27.81 (SE +/- 0.22, N = 3)
  Repartition Test Time: A: 24.35 / B: 23.97 / C: 24.45 (SE +/- 0.06, N = 3)
  Group By Test Time: A: 14.51 / B: 14.24 / C: 14.58 (SE +/- 0.40, N = 3)
  Calculate Pi Benchmark Using Dataframe: A: 13.91 / B: 13.80 / C: 14.12 (SE +/- 0.30, N = 3)
  Calculate Pi Benchmark: A: 264.54 / B: 264.30 / C: 265.99 (SE +/- 0.39, N = 3)
  SHA-512 Benchmark Time: A: 33.09 / B: 32.99 / C: 32.77 (SE +/- 0.23, N = 3)
Primesieve 8.0 - Length: 1e13 (Seconds, Fewer Is Better): A: 369.55 / B: 373.92 / C: 382.86 (SE +/- 1.09, N = 3). 1. (CXX) g++ options: -O3
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 2000 (Seconds, Fewer Is Better)
  Broadcast Inner Join Test Time: A: 14.59 / B: 14.71 / C: 14.57 (SE +/- 0.04, N = 3)
  Inner Join Test Time: A: 15.87 / B: 16.04 / C: 16.60 (SE +/- 0.34, N = 3)
  Repartition Test Time: A: 13.67 / B: 13.78 / C: 13.90 (SE +/- 0.18, N = 3)
  Group By Test Time: A: 9.60 / B: 10.10 / C: 10.15 (SE +/- 0.09, N = 3)
  Calculate Pi Benchmark Using Dataframe: A: 14.20 / B: 13.80 / C: 13.98 (SE +/- 0.09, N = 3)
  Calculate Pi Benchmark: A: 266.10 / B: 263.79 / C: 265.85 (SE +/- 0.20, N = 3)
  SHA-512 Benchmark Time: A: 18.73 / B: 18.90 / C: 18.80 (SE +/- 0.01, N = 3)
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 1000 (Seconds, Fewer Is Better)
  Broadcast Inner Join Test Time: A: 14.51 / B: 14.59 / C: 14.23 (SE +/- 0.14, N = 3)
  Inner Join Test Time: A: 15.05 / B: 15.16 / C: 15.10 (SE +/- 0.25, N = 3)
  Repartition Test Time: A: 13.53 / B: 13.69 / C: 13.59 (SE +/- 0.10, N = 3)
  Group By Test Time: A: 9.96 / B: 10.07 / C: 9.82 (SE +/- 0.21, N = 3)
  Calculate Pi Benchmark Using Dataframe: A: 13.75 / B: 14.02 / C: 14.80 (SE +/- 0.46, N = 3)
  Calculate Pi Benchmark: A: 263.35 / B: 264.02 / C: 266.07 (SE +/- 0.24, N = 3)
  SHA-512 Benchmark Time: A: 18.77 / B: 18.98 / C: 18.81 (SE +/- 0.15, N = 3)
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 100 (Seconds, Fewer Is Better)
  Broadcast Inner Join Test Time: A: 14.96 / B: 14.63 / C: 15.68 (SE +/- 0.54, N = 3)
  Inner Join Test Time: A: 15.78 / B: 15.50 / C: 15.85 (SE +/- 0.27, N = 3)
  Repartition Test Time: A: 13.81 / B: 13.47 / C: 13.90 (SE +/- 0.19, N = 3)
  Group By Test Time: A: 9.47 / B: 9.06 / C: 9.34 (SE +/- 0.06, N = 3)
  Calculate Pi Benchmark Using Dataframe: A: 14.05 / B: 14.05 / C: 13.96 (SE +/- 0.09, N = 3)
  Calculate Pi Benchmark: A: 263.65 / B: 263.14 / C: 265.63 (SE +/- 0.13, N = 3)
  SHA-512 Benchmark Time: A: 18.72 / B: 18.70 / C: 18.64 (SE +/- 0.13, N = 3)
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 500 (Seconds, Fewer Is Better)
  Broadcast Inner Join Test Time: A: 13.44 / B: 13.73 / C: 14.14 (SE +/- 0.50, N = 3)
  Inner Join Test Time: A: 14.40 / B: 14.94 / C: 14.96 (SE +/- 0.31, N = 3)
  Repartition Test Time: A: 13.14 / B: 13.26 / C: 13.29 (SE +/- 0.08, N = 3)
  Group By Test Time: A: 9.05 / B: 8.99 / C: 9.35 (SE +/- 0.04, N = 3)
  Calculate Pi Benchmark Using Dataframe: A: 14.02 / B: 14.04 / C: 14.00 (SE +/- 0.06, N = 3)
  Calculate Pi Benchmark: A: 263.75 / B: 263.97 / C: 266.28 (SE +/- 0.09, N = 3)
  SHA-512 Benchmark Time: A: 17.86 / B: 17.76 / C: 17.89 (SE +/- 0.14, N = 3)
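Each result above is reported as a mean over N runs together with a standard error (SE). As a minimal sketch of that arithmetic, assuming the SE is the sample standard deviation divided by the square root of N (the run times below are hypothetical, not taken from this result file):

```python
import math
import statistics


def mean_and_se(samples):
    """Return (mean, standard error) for a list of benchmark run times.

    Standard error here is the sample standard deviation / sqrt(N),
    which is the usual way a "mean +/- SE, N = ..." figure is formed.
    """
    mean = statistics.mean(samples)
    se = statistics.stdev(samples) / math.sqrt(len(samples))
    return mean, se


# Hypothetical per-run times (seconds) for one three-run test:
runs = [59.05, 57.10, 58.20]
mean, se = mean_and_se(runs)
print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(runs)})")
```

This is only an illustration of how the "SE +/-" columns can be read; the Phoronix Test Suite computes and formats these values itself.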
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset (Queries Per Minute, Geo Mean, More Is Better)
  Third Run: A: 115.74 (MIN 8.6 / MAX 20000) / B: 121.84 (MIN 8.5 / MAX 20000) / C: 115.00 (MIN 7.14 / MAX 12000) (SE +/- 1.03, N = 5)
  Second Run: A: 113.48 (MIN 7.71 / MAX 12000) / B: 111.19 (MIN 8.34 / MAX 12000) / C: 113.79 (MIN 7.08 / MAX 20000) (SE +/- 1.47, N = 5)
  First Run / Cold Cache: A: 102.81 (MIN 7.39 / MAX 15000) / B: 100.99 (MIN 7.71 / MAX 8571.43) / C: 100.03 (MIN 6.85 / MAX 12000) (SE +/- 1.12, N = 5)
  1. ClickHouse server version 22.5.4.19 (official build).
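The ClickHouse figures are labeled "Geo Mean": an aggregate of per-query rates rather than a simple average, which keeps a few very fast or very slow queries from dominating the score. A minimal sketch of a geometric mean (the rate values below are hypothetical):

```python
import math


def geo_mean(values):
    """Geometric mean: the nth root of the product of n positive values,
    computed via logs for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))


# Hypothetical queries-per-minute rates for individual queries in a run:
rates = [8.6, 120.0, 450.0, 20000.0]
print(f"{geo_mean(rates):.2f}")
```

How exactly the benchmark harness combines per-query results is an assumption here; this only illustrates why the MIN/MAX spread (e.g. 8.6 to 20000) can coexist with a score near 115.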
Mobile Neural Network 2.1 (ms, Fewer Is Better)
  Model: inception-v3: A: 33.98 (MIN 33.82 / MAX 46.32) / B: 33.95 (MIN 33.83 / MAX 45.2) / C: 33.91 (MIN 33.51 / MAX 45.92) (SE +/- 0.05, N = 3)
  Model: mobilenet-v1-1.0: A: 4.259 (MIN 4.21 / MAX 5.18) / B: 4.271 (MIN 4.24 / MAX 5.01) / C: 4.275 (MIN 4.18 / MAX 5.25) (SE +/- 0.003, N = 3)
  Model: MobileNetV2_224: A: 3.066 (MIN 2.98 / MAX 4.12) / B: 3.123 (MIN 3.01 / MAX 4.22) / C: 3.080 (MIN 3 / MAX 4.62) (SE +/- 0.003, N = 3)
  Model: SqueezeNetV1.0: A: 4.867 (MIN 4.75 / MAX 6.13) / B: 4.851 (MIN 4.76 / MAX 7.9) / C: 4.877 (MIN 4.74 / MAX 6.33) (SE +/- 0.010, N = 3)
  Model: resnet-v2-50: A: 35.39 (MIN 35.23 / MAX 46.59) / B: 35.30 (MIN 35.1 / MAX 46.47) / C: 35.28 (MIN 35.07 / MAX 47.22) (SE +/- 0.08, N = 3)
  Model: squeezenetv1.1: A: 3.379 (MIN 3.3 / MAX 15.39) / B: 3.399 (MIN 3.32 / MAX 4.38) / C: 3.408 (MIN 3.3 / MAX 4.7) (SE +/- 0.007, N = 3)
  Model: mobilenetV3: A: 1.635 (MIN 1.58 / MAX 2.02) / B: 1.650 (MIN 1.59 / MAX 2.58) / C: 1.636 (MIN 1.5 / MAX 2.6) (SE +/- 0.014, N = 3)
  Model: nasnet: A: 13.32 (MIN 12.9 / MAX 14.46) / B: 13.65 (MIN 13.28 / MAX 25.73) / C: 13.10 (MIN 12.57 / MAX 25.82) (SE +/- 0.15, N = 3)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Timed CPython Compilation 3.10.6 - Build Configuration: Released Build, PGO + LTO Optimized (Seconds, Fewer Is Better): A: 288.40 / B: 288.25 / C: 289.77
Timed Erlang/OTP Compilation 25.0 - Time To Compile (Seconds, Fewer Is Better): A: 141.86 / B: 142.04 / C: 142.32 (SE +/- 0.19, N = 3)
NCNN 20220729 (ms, Fewer Is Better)
  Target: CPU - Model: FastestDet: A: 3.97 (MIN 3.85 / MAX 4.15) / B: 3.86 (MIN 3.78 / MAX 4.09) / C: 3.97 (MIN 3.82 / MAX 11.19) (SE +/- 0.03, N = 3)
  Target: CPU - Model: vision_transformer: A: 230.15 (MIN 229.93 / MAX 237.36) / B: 228.89 (MIN 228.7 / MAX 235.46) / C: 228.59 (MIN 228.35 / MAX 235.87) (SE +/- 0.02, N = 3)
  Target: CPU - Model: regnety_400m: A: 10.19 (MIN 10.11 / MAX 11.07) / B: 10.00 (MIN 9.9 / MAX 10.76) / C: 10.23 (MIN 10.13 / MAX 11.3) (SE +/- 0.02, N = 3)
  Target: CPU - Model: squeezenet_ssd: A: 16.35 (MIN 16.25 / MAX 16.66) / B: 16.34 (MIN 16.26 / MAX 16.69) / C: 16.33 (MIN 16.2 / MAX 17.41) (SE +/- 0.01, N = 3)
  Target: CPU - Model: yolov4-tiny: A: 24.34 (MIN 24.19 / MAX 26.13) / B: 24.35 (MIN 24.26 / MAX 24.91) / C: 24.31 (MIN 24.15 / MAX 24.93) (SE +/- 0.01, N = 3)
  Target: CPU - Model: resnet50: A: 21.18 (MIN 20.9 / MAX 22.52) / B: 21.05 (MIN 20.93 / MAX 22.12) / C: 21.03 (MIN 20.8 / MAX 22.42) (SE +/- 0.02, N = 3)
  Target: CPU - Model: alexnet: A: 8.23 (MIN 8.16 / MAX 9.38) / B: 8.24 (MIN 8.17 / MAX 9.15) / C: 8.23 (MIN 8.14 / MAX 8.98) (SE +/- 0.00, N = 3)
  Target: CPU - Model: resnet18: A: 10.26 (MIN 10.14 / MAX 11.15) / B: 10.27 (MIN 10.11 / MAX 17.12) / C: 10.26 (MIN 10.11 / MAX 11.51) (SE +/- 0.03, N = 3)
  Target: CPU - Model: vgg16: A: 55.23 (MIN 54.95 / MAX 61.87) / B: 55.21 (MIN 54.99 / MAX 56.79) / C: 55.11 (MIN 54.86 / MAX 61.92) (SE +/- 0.02, N = 3)
  Target: CPU - Model: googlenet: A: 12.35 (MIN 12.26 / MAX 13.23) / B: 12.27 (MIN 12.17 / MAX 13.31) / C: 12.38 (MIN 12.25 / MAX 19.74) (SE +/- 0.04, N = 3)
  Target: CPU - Model: blazeface: A: 1.10 (MIN 1.08 / MAX 1.72) / B: 1.10 (MIN 1.07 / MAX 1.75) / C: 1.10 (MIN 1.07 / MAX 1.74) (SE +/- 0.00, N = 3)
  Target: CPU - Model: efficientnet-b0: A: 6.82 (MIN 6.74 / MAX 7.75) / B: 6.78 (MIN 6.72 / MAX 7.62) / C: 6.84 (MIN 6.72 / MAX 7.9) (SE +/- 0.01, N = 3)
  Target: CPU - Model: mnasnet: A: 3.47 (MIN 3.43 / MAX 4.15) / B: 3.41 (MIN 3.37 / MAX 4.1) / C: 3.48 (MIN 3.38 / MAX 4.51) (SE +/- 0.01, N = 3)
  Target: CPU - Model: shufflenet-v2: A: 3.31 (MIN 3.26 / MAX 4.01) / B: 3.18 (MIN 3.15 / MAX 3.87) / C: 3.32 (MIN 3.11 / MAX 4.44) (SE +/- 0.06, N = 3)
  Target: CPU-v3-v3 - Model: mobilenet-v3: A: 3.59 (MIN 3.51 / MAX 4.29) / B: 3.57 (MIN 3.51 / MAX 4.6) / C: 3.61 (MIN 3.49 / MAX 4.6) (SE +/- 0.02, N = 3)
  Target: CPU-v2-v2 - Model: mobilenet-v2: A: 4.64 (MIN 4.51 / MAX 5.33) / B: 4.60 (MIN 4.49 / MAX 5.66) / C: 4.62 (MIN 4.46 / MAX 5.65) (SE +/- 0.03, N = 3)
  Target: CPU - Model: mobilenet: A: 15.52 (MIN 15.45 / MAX 15.79) / B: 15.59 (MIN 15.39 / MAX 16.29) / C: 15.61 (MIN 15.4 / MAX 16.25) (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 1.195 / B: 1.203 / C: 1.201 (SE +/- 0.004, N = 3). 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Redis 7.0.4 - Test: LPUSH - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 1848508.38 / B: 1897336.00 / C: 1945768.33 (SE +/- 33163.21, N = 15). 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 (same build flags for all Redis 7.0.4 results below)
Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, More Is Better): A: 57.0 / B: 56.6 / C: 57.0 (SE +/- 0.03, N = 3)
Redis 7.0.4 - Test: LPUSH - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 1906321.12 / B: 1786065.12 / C: 1878914.92 (SE +/- 18846.80, N = 15)
Redis 7.0.4 - Test: LPUSH - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 2046449.25 / B: 1778839.88 / C: 1933436.95 (SE +/- 32181.23, N = 15)
Redis 7.0.4 - Test: LPOP - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 2090338.50 / B: 2043605.62 / C: 2028878.43 (SE +/- 38452.70, N = 15)
Redis 7.0.4 - Test: SET - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 2074037.50 / B: 2222163.25 / C: 2226848.23 (SE +/- 36124.68, N = 15)
ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, More Is Better): A: 0.4897 / B: 0.4895 / C: 0.4859 (SE +/- 0.0010, N = 3). 1. (CXX) g++ options: -O3 -flto -pthread
Redis 7.0.4 - Test: SADD - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 2629399.25 / B: 2605961.25 / C: 2552054.12 (SE +/- 49552.86, N = 15)
Redis 7.0.4 - Test: GET - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 2912617.00 / B: 2347019.00 / C: 2732949.30 (SE +/- 60647.19, N = 15)
Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, Fewer Is Better): A: 86.84 / B: 87.02 / C: 86.25 (SE +/- 0.42, N = 3)
Timed Wasmer Compilation 2.3 - Time To Compile (Seconds, Fewer Is Better): A: 87.61 / B: 86.67 / C: 85.90 (SE +/- 0.49, N = 3). 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
Redis 7.0.4 - Test: GET - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 2732588.00 / B: 2861609.25 / C: 2830668.70 (SE +/- 36134.78, N = 15)
Redis 7.0.4 - Test: SET - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 2331370.00 / B: 2327680.00 / C: 2201393.84 (SE +/- 55052.29, N = 12)
Redis 7.0.4 - Test: SADD - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 2627615.75 / B: 2492194.50 / C: 2568888.29 (SE +/- 38463.64, N = 12)
Redis 7.0.4 - Test: LPOP - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 3444536.75 / B: 2141549.50 / C: 3105232.87 (SE +/- 120387.99, N = 15)
Redis 7.0.4 - Test: LPOP - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 3301260.75 / B: 2035452.62 / C: 3208768.77 (SE +/- 69012.03, N = 15)
Node.js V8 Web Tooling Benchmark (runs/s, More Is Better): A: 12.16 / B: 12.43 / C: 12.14 (SE +/- 0.06, N = 3)
Dragonflydb Clients: 200 - Set To Get Ratio: 5:1 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 5:1 A B C 400K 800K 1200K 1600K 2000K SE +/- 6872.33, N = 3 1992128.41 2005750.94 2032385.14 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
memtier_benchmark Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1 OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1 A B C 300K 600K 900K 1200K 1500K SE +/- 7898.00, N = 3 1512323.24 1533754.23 1545189.59 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Dragonflydb Clients: 200 - Set To Get Ratio: 1:1 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 1:1 A B C 400K 800K 1200K 1600K 2000K SE +/- 8670.43, N = 3 2087416.28 2051704.42 2074466.95 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Dragonflydb Clients: 200 - Set To Get Ratio: 1:5 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 1:5 A B C 500K 1000K 1500K 2000K 2500K SE +/- 1318.89, N = 3 2211895.73 2194182.24 2217552.26 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
memtier_benchmark Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1 OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1 A B C 400K 800K 1200K 1600K 2000K SE +/- 8267.67, N = 3 1806170.52 1761644.31 1813065.59 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
memtier_benchmark Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 A B C 400K 800K 1200K 1600K 2000K SE +/- 10080.98, N = 3 1992499.12 2022525.58 1996437.79 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
memtier_benchmark Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1 OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1 A B C 400K 800K 1200K 1600K 2000K SE +/- 8849.68, N = 3 1716481.41 1737144.54 1711961.58 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
memtier_benchmark Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1 OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1 A B C 300K 600K 900K 1200K 1500K SE +/- 13712.27, N = 3 1623068.24 1593958.74 1620962.12 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
memtier_benchmark Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 A B C 400K 800K 1200K 1600K 2000K SE +/- 8909.71, N = 3 1761822.77 1772298.56 1720148.88 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
memtier_benchmark Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 OpenBenchmarking.org Ops/sec, More Is Better memtier_benchmark 1.4 Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10 A B C 400K 800K 1200K 1600K 2000K SE +/- 8041.06, N = 3 1790527.72 1789992.05 1748466.51 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Dragonflydb Clients: 50 - Set To Get Ratio: 1:5 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 1:5 A B C 500K 1000K 1500K 2000K 2500K SE +/- 4648.71, N = 3 2144617.32 2154148.50 2159425.65 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Dragonflydb Clients: 50 - Set To Get Ratio: 5:1 OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 5:1 A B C 400K 800K 1200K 1600K 2000K SE +/- 5831.26, N = 3 1927867.31 1931122.18 1934424.72 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better): A: 3194.29, B: 3299.94, C: 3338.27 (SE +/- 8.61, N = 3; MIN/MAX - A: 1761.36/3455.63, B: 3117.38/3443.22, C: 2906.46/3515.46) [OpenVINO compiled: (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie]
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better): A: 1.24, B: 1.20, C: 1.17 (SE +/- 0.00, N = 3)
memtier_benchmark 1.4 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): A: 1970123.24, B: 1919288.21, C: 1943695.24 (SE +/- 18068.84, N = 3) [Dragonflydb and memtier_benchmark compiled: (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre]
Dragonflydb 0.6 - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): A: 1988043.90, B: 1994368.21, C: 1996857.54 (SE +/- 6994.81, N = 3)
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): A: 1989301.28, B: 1942966.66, C: 1929613.47 (SE +/- 12369.07, N = 3)
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better): A: 1726134.01, B: 1713885.81, C: 1695938.52 (SE +/- 4351.00, N = 3)
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better): A: 3200.37, B: 3196.88, C: 3310.58 (SE +/- 4.37, N = 3; MIN/MAX - A: 1758.1/3458.98, B: 1702.1/3449.98, C: 1782.74/3536.08)
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better): A: 1.23, B: 1.23, C: 1.18 (SE +/- 0.00, N = 3)
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better): A: 1950043.18, B: 1899683.69, C: 1944401.05 (SE +/- 27485.35, N = 3)
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): A: 1945581.31, B: 1711625.32, C: 1819804.24 (SE +/- 11256.90, N = 3)
OpenVINO 2022.2.dev - Device: CPU (compiled: (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie):
  Model: Face Detection FP16 (ms, Fewer Is Better): A: 2022.73, B: 2019.27, C: 2056.36 (SE +/- 14.58, N = 3; MIN/MAX - A: 1942.04/2048.56, B: 1952.63/2066.71, C: 1731.28/2158.61)
  Model: Face Detection FP16 (FPS, More Is Better): A: 1.97, B: 1.98, C: 1.93 (SE +/- 0.01, N = 3)
  Model: Face Detection FP16-INT8 (ms, Fewer Is Better): A: 1051.99, B: 1049.73, C: 1100.09 (SE +/- 4.29, N = 3; MIN/MAX - A: 1040.91/1059.93, B: 1041.12/1057.94, C: 1042/1134.78)
  Model: Face Detection FP16-INT8 (FPS, More Is Better): A: 3.80, B: 3.81, C: 3.63 (SE +/- 0.01, N = 3)
  Model: Machine Translation EN To DE FP16 (ms, Fewer Is Better): A: 182.00, B: 181.13, C: 190.39 (SE +/- 0.66, N = 3; MIN/MAX - A: 97.01/200.14, B: 93.18/198.15, C: 137.7/208.9)
  Model: Machine Translation EN To DE FP16 (FPS, More Is Better): A: 21.96, B: 22.06, C: 21.00 (SE +/- 0.07, N = 3)
  Model: Person Vehicle Bike Detection FP16 (ms, Fewer Is Better): A: 16.13, B: 16.18, C: 16.49 (SE +/- 0.05, N = 3; MIN/MAX - A: 14.36/19.06, B: 9/21.75, C: 9.1/33.49)
  Model: Person Vehicle Bike Detection FP16 (FPS, More Is Better): A: 247.67, B: 246.98, C: 242.22 (SE +/- 0.67, N = 3)
  Model: Vehicle Detection FP16-INT8 (ms, Fewer Is Better): A: 18.22, B: 18.10, C: 18.68 (SE +/- 0.03, N = 3; MIN/MAX - A: 16.77/28.3, B: 10.18/27.41, C: 10.54/28.1)
  Model: Vehicle Detection FP16-INT8 (FPS, More Is Better): A: 219.33, B: 220.79, C: 213.97 (SE +/- 0.38, N = 3)
  Model: Vehicle Detection FP16 (ms, Fewer Is Better): A: 37.49, B: 32.50, C: 33.55 (SE +/- 0.33, N = 3; MIN/MAX - A: 29.62/50.43, B: 15.55/40.11, C: 15.9/44.39)
  Model: Vehicle Detection FP16 (FPS, More Is Better): A: 106.65, B: 122.98, C: 119.13 (SE +/- 1.14, N = 3)
  Model: Weld Porosity Detection FP16 (ms, Fewer Is Better): A: 20.61, B: 20.43, C: 21.76 (SE +/- 0.08, N = 3; MIN/MAX - A: 15.16/28.91, B: 18.15/29.83, C: 14.5/32.25)
  Model: Weld Porosity Detection FP16 (FPS, More Is Better): A: 193.90, B: 195.66, C: 183.65 (SE +/- 0.66, N = 3)
GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, More Is Better): A: 143, B: 143, C: 143 (SE +/- 0.00, N = 3) [GraphicsMagick compiled: (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread]
GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better): A: 172, B: 172, C: 172 (SE +/- 0.33, N = 3)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): A: 15.76, B: 15.76, C: 16.46 (SE +/- 0.06, N = 3; MIN/MAX - A: 9.85/25.38, B: 11.99/24.27, C: 12.44/25.62) [OpenVINO compiled: (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie]
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better): A: 380.40, B: 380.42, C: 364.28 (SE +/- 1.30, N = 3)
GraphicsMagick 1.3.38 - Operation: Swirl (Iterations Per Minute, More Is Better): A: 264, B: 264, C: 265 (SE +/- 0.00, N = 3)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better): A: 0.60, B: 0.60, C: 0.63 (SE +/- 0.00, N = 3; MIN/MAX - A: 0.39/8.97, B: 0.36/9.05, C: 0.38/10.06)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better): A: 9929.88, B: 9884.04, C: 9454.83 (SE +/- 41.17, N = 3)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better): A: 1.11, B: 1.12, C: 1.16 (SE +/- 0.01, N = 3; MIN/MAX - A: 0.67/12.18, B: 0.66/2.71, C: 0.67/13.85)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better): A: 5321.40, B: 5297.96, C: 5106.22 (SE +/- 18.95, N = 3)
GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute, More Is Better): A: 640, B: 644, C: 643 (SE +/- 0.58, N = 3)
GraphicsMagick 1.3.38 - Operation: Sharpen (Iterations Per Minute, More Is Better): A: 91, B: 91, C: 91 (SE +/- 0.00, N = 3)
GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, More Is Better): A: 655, B: 625, C: 656 (SE +/- 3.71, N = 3)
GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, More Is Better): A: 628, B: 638, C: 639 (SE +/- 0.33, N = 3)
Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, More Is Better): A: 128.4, B: 132.7, C: 134.9 (SE +/- 1.15, N = 3)
ASTC Encoder 4.0 - Preset: Thorough (MT/s, More Is Better): A: 4.7866, B: 4.7876, C: 4.7847 (SE +/- 0.0015, N = 3) [compiled: (CXX) g++ options: -O3 -flto -pthread]
Unvanquished 0.53 - Resolution: 1920 x 1080 - Effects Quality: Medium (Frames Per Second, More Is Better): A: 188.3, B: 182.0, C: 179.2 (SE +/- 2.18, N = 4)
Natron 2.4.3 - Input: Spaceship (FPS, More Is Better): A: 2.1, B: 2.1, C: 2.1 (SE +/- 0.00, N = 3)
Redis 7.0.4 - Test: SET - Parallel Connections: 50 (Requests Per Second, More Is Better): A: 2361094.50, B: 1924962.25, C: 2306357.42 (SE +/- 21826.28, N = 6) [compiled: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3]
SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 4.079, B: 4.068, C: 4.072 (SE +/- 0.007, N = 3) [compiled: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]
ASTC Encoder 4.0 - Preset: Fast (MT/s, More Is Better): A: 100.64, B: 100.34, C: 100.22 (SE +/- 0.07, N = 3)
7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better): A: 38738, B: 39049, C: 38962 (SE +/- 65.37, N = 3) [compiled: (CXX) g++ options: -lpthread -ldl -O2 -fPIC]
7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better): A: 48196, B: 48345, C: 47249 (SE +/- 84.10, N = 3)
Primesieve 8.0 - Length: 1e12 (Seconds, Fewer Is Better): A: 31.15, B: 30.72, C: 30.99 (SE +/- 0.03, N = 3) [compiled: (CXX) g++ options: -O3]
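The Primesieve workload is counting the primes up to a bound (10^12 here) with a heavily optimized, segmented sieve. A toy sieve of Eratosthenes, at a much smaller scale, illustrates the computation being timed; this is a sketch, not Primesieve's actual algorithm.

```python
def count_primes(limit):
    """Count primes <= limit with a basic sieve of Eratosthenes.
    Primesieve performs the same task, segmented and vectorized."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            # Cross off multiples of p, starting at p*p
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return sum(sieve)

print(count_primes(100))  # -> 25
```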
Aircrack-ng 1.7 (k/s, More Is Better): A: 23361.85, B: 23363.46, C: 23366.60 (SE +/- 2.01, N = 3) [compiled: (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread]
Redis 7.0.4 - Test: SADD - Parallel Connections: 1000 (Requests Per Second, More Is Better): A: 2609105.50, B: 2158587.25, C: 2594956.17 (SE +/- 35265.51, N = 3) [compiled: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3]
Inkscape - Operation: SVG Files To PNG (Seconds, Fewer Is Better): A: 27.85, B: 27.97, C: 27.67 (SE +/- 0.13, N = 3) [Inkscape 1.1.2 (0a00cf5339, 2022-02-04)]
SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 24.94, B: 25.46, C: 25.65 (SE +/- 0.09, N = 3) [SVT-AV1 compiled: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]
Redis 7.0.4 - Test: GET - Parallel Connections: 500 (Requests Per Second, More Is Better): A: 2815841.50, B: 2823473.75, C: 2799995.58 (SE +/- 16101.19, N = 3)
ASTC Encoder 4.0 - Preset: Medium (MT/s, More Is Better): A: 37.18, B: 37.23, C: 37.19 (SE +/- 0.01, N = 3) [compiled: (CXX) g++ options: -O3 -flto -pthread]
Timed CPython Compilation 3.10.6 - Build Configuration: Default (Seconds, Fewer Is Better): A: 24.53, B: 24.50, C: 24.30
SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 51.15, B: 53.32, C: 53.41 (SE +/- 0.09, N = 3)
C-Blosc 2.3 - Test: blosclz bitshuffle (MB/s, More Is Better): A: 5989.0, B: 5946.4, C: 5977.6 (SE +/- 28.37, N = 3) [compiled: (CC) gcc options: -std=gnu99 -O3 -lrt -lm]
SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 68.86, B: 68.34, C: 69.10 (SE +/- 0.23, N = 3)
SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better): A: 76.44, B: 78.25, C: 78.11 (SE +/- 0.25, N = 3)
Unpacking The Linux Kernel 5.19 - linux-5.19.tar.xz (Seconds, Fewer Is Better): A: 7.241, B: 7.108, C: 7.062 (SE +/- 0.031, N = 4)
C-Blosc 2.3 - Test: blosclz shuffle (MB/s, More Is Better): A: 10148.2, B: 10174.6, C: 10229.1 (SE +/- 23.89, N = 3)
SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 158.08, B: 158.06, C: 158.63 (SE +/- 0.48, N = 3)
LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, More Is Better): A: 4.942, B: 4.941, C: 4.912 (SE +/- 0.012, N = 3) [compiled: (CXX) g++ options: -O3 -lm -ldl]
SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): A: 285.72, B: 285.84, C: 286.00 (SE +/- 1.67, N = 3)
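Most tests agree closely across configurations A, B, and C, but a few (notably the Redis LPOP runs) show large spreads between configurations. A small, hypothetical helper for flagging such outliers when scanning a result set like this one:

```python
def pct_spread(values):
    """Spread between the best and worst result, as a percentage of the best."""
    hi, lo = max(values), min(values)
    return (hi - lo) / hi * 100.0

# Redis LPOP, 50 parallel connections: requests/sec for A, B, C (from above)
lpop_50 = [3444536.75, 2141549.50, 3105232.87]
print(f"{pct_spread(lpop_50):.1f}% spread between configurations")
```

A spread near 38% between otherwise identical configurations usually signals run-to-run noise or background activity rather than a real difference.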
Phoronix Test Suite v10.8.4