"2023 new": AMD Ryzen Threadripper 3970X 32-Core testing with an ASUS ROG ZENITH II EXTREME (1603 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2302069-NE-2023NEW4563&grs
Runs a, b, and c share an identical configuration:

Processor: AMD Ryzen Threadripper 3970X 32-Core @ 3.70GHz (32 Cores / 64 Threads)
Motherboard: ASUS ROG ZENITH II EXTREME (1603 BIOS)
Chipset: AMD Starship/Matisse
Memory: 4 x 16 GB DDR4-3600MT/s Corsair CMT64GX4M4Z3600C16
Disk: Samsung SSD 980 PRO 500GB
Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: ASUS VP28U
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc7-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47)
Vulkan: 1.2.204
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x830104d
Java Details: OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Mitigation of untrained return thunk, SMT enabled with STIBP protection; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: always-on, RSB filling; srbds: Not affected; tsx_async_abort: Not affected
Result Overview: the exported summary table lists the raw values for runs a, b, and c across every test in the comparison (RocksDB, Apache Spark, CloudSuite, Neural Magic DeepSparse, ClickHouse, Memcached, OpenEMS, VVenC); the same numbers are broken out per test below.
RocksDB 7.9.2, Test: Random Fill Sync (Op/s, More Is Better): a: 5559, b: 3954, c: 3996
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
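Random Fill Sync is the one result above where the three runs disagree sharply (run a lands roughly 40% above b and c). A minimal sketch of a run-to-run variance check, using the Op/s values from this test; the 5% threshold is an arbitrary illustrative choice, not anything the Phoronix Test Suite itself applies:

```python
# Run-to-run variance check for RocksDB Random Fill Sync (Op/s, higher is better).
# Values taken from runs a, b, c above; the 5% threshold is illustrative only.
results = {"a": 5559, "b": 3954, "c": 3996}

lo, hi = min(results.values()), max(results.values())
spread = (hi - lo) / lo               # relative spread between worst and best run
mean = sum(results.values()) / len(results)

print(f"mean = {mean:.0f} Op/s, spread = {spread:.1%}")
if spread > 0.05:
    print("high run-to-run variance; consider re-running this test")
```

A spread this large on a sync-write test often reflects storage state (e.g. SSD garbage-collection pressure) rather than CPU behavior, though that is a general observation rather than something this result proves.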
Apache Spark 3.3, Row Count: 10000000 - Partitions: 1000 - Inner Join Test Time (Seconds, Fewer Is Better): a: 5.58, b: 5.66, c: 6.64
Apache Spark 3.3, Row Count: 1000000 - Partitions: 500 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 1.31, b: 1.39, c: 1.18
Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 2.16, b: 1.92, c: 1.86
Apache Spark 3.3, Row Count: 1000000 - Partitions: 500 - Inner Join Test Time (Seconds, Fewer Is Better): a: 1.79, b: 1.80, c: 1.57
Apache Spark 3.3, Row Count: 1000000 - Partitions: 1000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 1.50, b: 1.62, c: 1.43
Apache Spark 3.3, Row Count: 20000000 - Partitions: 2000 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 15.97, b: 14.29, c: 14.71
Apache Spark 3.3, Row Count: 20000000 - Partitions: 1000 - Repartition Test Time (Seconds, Fewer Is Better): a: 8.33, b: 8.71, c: 9.27
Apache Spark 3.3, Row Count: 1000000 - Partitions: 100 - Repartition Test Time (Seconds, Fewer Is Better): a: 1.61, b: 1.79, c: 1.63
Apache Spark 3.3, Row Count: 1000000 - Partitions: 1000 - Group By Test Time (Seconds, Fewer Is Better): a: 4.91, b: 4.77, c: 4.45
Apache Spark 3.3, Row Count: 1000000 - Partitions: 1000 - Repartition Test Time (Seconds, Fewer Is Better): a: 1.91, b: 1.80, c: 1.98
Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Inner Join Test Time (Seconds, Fewer Is Better): a: 1.94, b: 2.08, c: 2.13
Apache Spark 3.3, Row Count: 10000000 - Partitions: 1000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 4.95, b: 5.43, c: 5.40
Apache Spark 3.3, Row Count: 1000000 - Partitions: 100 - Inner Join Test Time (Seconds, Fewer Is Better): a: 1.46, b: 1.38, c: 1.51
Apache Spark 3.3, Row Count: 20000000 - Partitions: 1000 - Inner Join Test Time (Seconds, Fewer Is Better): a: 9.93, b: 10.70, c: 9.79
Apache Spark 3.3, Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.39, b: 3.70, c: 3.44
Apache Spark 3.3, Row Count: 1000000 - Partitions: 1000 - Inner Join Test Time (Seconds, Fewer Is Better): a: 1.89, b: 1.74, c: 1.88
Apache Spark 3.3, Row Count: 1000000 - Partitions: 100 - Group By Test Time (Seconds, Fewer Is Better): a: 4.55, b: 4.27, c: 4.63
CloudSuite Graph Analytics 4.0, GraphX Algorithm: Connected Components (ms, Fewer Is Better): a: 14338, b: 15387, c: 14198
Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Repartition Test Time (Seconds, Fewer Is Better): a: 2.19, b: 2.20, c: 2.03
Apache Spark 3.3, Row Count: 20000000 - Partitions: 1000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 10.09, b: 9.48, c: 9.32
Apache Spark 3.3, Row Count: 10000000 - Partitions: 2000 - Repartition Test Time (Seconds, Fewer Is Better): a: 5.61, b: 5.28, c: 5.20
CloudSuite Web Serving, Load Scale: 100 (ops/sec, More Is Better): a: 19.35, b: 18.20, c: 19.60
Apache Spark 3.3, Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 24.98, b: 25.35, c: 26.81
Apache Spark 3.3, Row Count: 10000000 - Partitions: 100 - Repartition Test Time (Seconds, Fewer Is Better): a: 4.87, b: 5.19, c: 5.17
Apache Spark 3.3, Row Count: 1000000 - Partitions: 500 - Group By Test Time (Seconds, Fewer Is Better): a: 4.55, b: 4.84, c: 4.55
Neural Magic DeepSparse 1.3.2, NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 23.29, b: 22.07, c: 21.91
Neural Magic DeepSparse 1.3.2, NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream (items/sec, More Is Better): a: 42.93, b: 45.29, c: 45.64
Apache Spark 3.3, Row Count: 40000000 - Partitions: 1000 - Repartition Test Time (Seconds, Fewer Is Better): a: 15.62, b: 16.59, c: 15.97
ClickHouse 22.12.3.5, 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better): a: 240.84 (min 24.26 / max 3750), b: 227.29 (min 24.14 / max 3333.33), c: 229.25 (min 24.51 / max 3157.89)
Apache Spark 3.3, Row Count: 20000000 - Partitions: 100 - Inner Join Test Time (Seconds, Fewer Is Better): a: 9.86, b: 9.92, c: 10.44
Apache Spark 3.3, Row Count: 20000000 - Partitions: 100 - Group By Test Time (Seconds, Fewer Is Better): a: 8.24, b: 7.87, c: 8.31
Apache Spark 3.3, Row Count: 20000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 14.16, b: 14.56, c: 14.94
Apache Spark 3.3, Row Count: 10000000 - Partitions: 500 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 4.82, b: 5.04, c: 5.08
Apache Spark 3.3, Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 1.01, b: 1.05, c: 1.06
ClickHouse 22.12.3.5, 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better): a: 216.80 (min 14.82 / max 3529.41), b: 217.01 (min 15.02 / max 2608.7), c: 206.93 (min 15.36 / max 3000)
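The ClickHouse figures are reported as a geometric mean of queries-per-minute across the benchmark's individual queries. A short sketch of why that aggregation is used rather than an arithmetic mean; the per-query rates below are made-up illustrative values, not taken from this result:

```python
import math

# Hypothetical per-query rates in queries/min. A real ClickHouse run
# aggregates dozens of queries whose rates span orders of magnitude
# (note the min/max spread reported for each run above).
qpm = [3000.0, 250.0, 30.0]

arith = sum(qpm) / len(qpm)
# Geometric mean: exp of the mean of logs, so each query contributes
# by ratio rather than by absolute magnitude.
geo = math.exp(sum(math.log(x) for x in qpm) / len(qpm))

print(f"arithmetic mean = {arith:.1f} QPM, geometric mean = {geo:.1f} QPM")
```

With these sample rates the arithmetic mean (about 1093 QPM) is dominated by the single fastest query, while the geometric mean (about 282 QPM) weights every query equally, which is why rate summaries like this one use the geo mean.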
Apache Spark 3.3, Row Count: 1000000 - Partitions: 500 - Repartition Test Time (Seconds, Fewer Is Better): a: 1.75, b: 1.67, c: 1.71
Neural Magic DeepSparse 1.3.2, NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 33.65, b: 34.85, c: 35.24
Neural Magic DeepSparse 1.3.2, NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream (items/sec, More Is Better): a: 29.71, b: 28.69, c: 28.37
Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Group By Test Time (Seconds, Fewer Is Better): a: 4.93, b: 4.76, c: 4.71
Apache Spark 3.3, Row Count: 10000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 4.95, b: 5.18, c: 5.13
Apache Spark 3.3, Row Count: 10000000 - Partitions: 500 - Inner Join Test Time (Seconds, Fewer Is Better): a: 5.72, b: 5.56, c: 5.81
Apache Spark 3.3, Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 57.36, b: 54.98, c: 55.55
Apache Spark 3.3, Row Count: 40000000 - Partitions: 500 - Repartition Test Time (Seconds, Fewer Is Better): a: 16.30, b: 16.38, c: 15.70
Apache Spark 3.3, Row Count: 10000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 8.64, b: 8.48, c: 8.84
Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 25.49, b: 26.56, c: 25.64
Apache Spark 3.3, Row Count: 20000000 - Partitions: 500 - Repartition Test Time (Seconds, Fewer Is Better): a: 8.85, b: 8.93, c: 9.22
Apache Spark 3.3, Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 3.16, b: 3.09, c: 3.21
Apache Spark 3.3, Row Count: 40000000 - Partitions: 1000 - Inner Join Test Time (Seconds, Fewer Is Better): a: 17.76, b: 17.79, c: 18.45
Apache Spark 3.3, Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.38, b: 3.43, c: 3.51
Apache Spark 3.3, Row Count: 20000000 - Partitions: 500 - Group By Test Time (Seconds, Fewer Is Better): a: 8.30, b: 8.39, c: 8.09
Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 18.23, b: 18.17, c: 18.81
Apache Spark 3.3, Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 17.80, b: 17.32, c: 17.20
Apache Spark 3.3, Row Count: 20000000 - Partitions: 2000 - Repartition Test Time (Seconds, Fewer Is Better): a: 9.28, b: 8.97, c: 9.26
CloudSuite Data Analytics 4.0, Hadoop Slaves: 1 (ms, Fewer Is Better): a: 63396, b: 62263, c: 64351
Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Inner Join Test Time (Seconds, Fewer Is Better): a: 19.04, b: 18.88, c: 19.51
Apache Spark 3.3, Row Count: 10000000 - Partitions: 2000 - Inner Join Test Time (Seconds, Fewer Is Better): a: 6.36, b: 6.57, c: 6.43
CloudSuite Web Serving, Load Scale: 500 (ops/sec, More Is Better): a: 35.08, b: 35.73, c: 34.60
Apache Spark 3.3, Row Count: 10000000 - Partitions: 500 - Group By Test Time (Seconds, Fewer Is Better): a: 6.28, b: 6.48, c: 6.36
Apache Spark 3.3, Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.38, b: 3.49, c: 3.46
RocksDB 7.9.2, Test: Sequential Fill (Op/s, More Is Better): a: 1030362, b: 1003634, c: 1035331
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
ClickHouse 22.12.3.5, 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better): a: 231.92 (min 22.59 / max 2608.7), b: 233.48 (min 23.7 / max 3157.89), c: 226.34 (min 22.8 / max 3333.33)
Apache Spark 3.3, Row Count: 10000000 - Partitions: 500 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 57.47, b: 55.81, c: 55.95
Apache Spark 3.3, Row Count: 20000000 - Partitions: 2000 - Group By Test Time (Seconds, Fewer Is Better): a: 9.01, b: 8.84, c: 8.76
Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Inner Join Test Time (Seconds, Fewer Is Better): a: 18.15, b: 18.62, c: 18.11
Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 54.84, b: 55.43, c: 56.38
Apache Spark 3.3, Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 56.78, b: 55.25, c: 55.30
Apache Spark 3.3, Row Count: 1000000 - Partitions: 500 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 3.34, b: 3.32, c: 3.25
Apache Spark 3.3, Row Count: 10000000 - Partitions: 500 - Repartition Test Time (Seconds, Fewer Is Better): a: 4.73, b: 4.86, c: 4.76
Apache Spark 3.3, Row Count: 10000000 - Partitions: 1000 - Group By Test Time (Seconds, Fewer Is Better): a: 6.40, b: 6.23, c: 6.29
Apache Spark 3.3, Row Count: 10000000 - Partitions: 2000 - Group By Test Time (Seconds, Fewer Is Better): a: 6.72, b: 6.81, c: 6.90
Apache Spark 3.3, Row Count: 40000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.38, b: 3.46, c: 3.47
Apache Spark Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe a b c 0.7898 1.5796 2.3694 3.1592 3.949 3.42 3.51 3.44
Apache Spark Row Count: 20000000 - Partitions: 2000 - Broadcast Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 2000 - Broadcast Inner Join Test Time a b c 3 6 9 12 15 10.28 10.55 10.43
Apache Spark Row Count: 20000000 - Partitions: 1000 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 1000 - SHA-512 Benchmark Time a b c 4 8 12 16 20 14.52 14.51 14.16
Apache Spark Row Count: 20000000 - Partitions: 500 - Inner Join Test Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 20000000 - Partitions: 500 - Inner Join Test Time a b c 3 6 9 12 15 9.86 9.62 9.73
Apache Spark Row Count: 10000000 - Partitions: 2000 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 10000000 - Partitions: 2000 - SHA-512 Benchmark Time a b c 3 6 9 12 15 8.94 9.03 9.16
CloudSuite Graph Analytics GraphX Algorithm: Page Rank OpenBenchmarking.org ms, Fewer Is Better CloudSuite Graph Analytics 4.0 GraphX Algorithm: Page Rank a b c 2K 4K 6K 8K 10K 11438 11307 11584
Apache Spark Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark a b c 13 26 39 52 65 56.75 56.95 55.60
Apache Spark Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 40000000 - Partitions: 2000 - SHA-512 Benchmark Time a b c 6 12 18 24 30 25.76 26.38 25.98
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 500 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 14.51, b: 14.44, c: 14.17
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 1000 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 3.42, b: 3.50, c: 3.49
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 100 - Group By Test Time (Seconds, Fewer Is Better): a: 6.63, b: 6.48, c: 6.51
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 500 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 9.35, b: 9.14, c: 9.24
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 55.60, b: 56.12, c: 56.87
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 56.05, b: 55.16, c: 56.41
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 56.47, b: 55.83, c: 55.22
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 54.87, b: 56.08, c: 54.88
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 9.14, b: 9.04, c: 9.24
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 55.47, b: 56.34, c: 55.12
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 55.16, b: 55.92, c: 56.36
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 1000 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 55.96, b: 54.80, c: 55.10
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 1000 - Repartition Test Time (Seconds, Fewer Is Better): a: 4.87, b: 4.97, c: 4.88
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Repartition Test Time (Seconds, Fewer Is Better): a: 16.17, b: 16.06, c: 16.38
Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 32.19, b: 32.82, c: 32.44
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Group By Test Time (Seconds, Fewer Is Better): a: 16.85, b: 16.53, c: 16.74
Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 7.4128, b: 7.4619, c: 7.5530
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 17.46, b: 17.48, c: 17.79
Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 134.77, b: 133.89, c: 132.28
Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 25.32, b: 24.87, c: 25.31
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.37, b: 3.40, c: 3.43
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.44, b: 3.38, c: 3.38
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.46, b: 3.40, c: 3.40
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 55.89, b: 55.60, c: 56.58
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.43, b: 3.49, c: 3.46
RocksDB 7.9.2 - Test: Random Fill (Op/s, More Is Better): a: 911640, b: 896530, c: 911359 | 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 77.87, b: 76.62, c: 77.12
Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 12.83, b: 13.04, c: 12.96
Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 25.28, b: 24.90, c: 25.26
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.41, b: 3.36, c: 3.41
Memcached 1.6.18 - Set To Get Ratio: 1:1 (Ops/sec, More Is Better): a: 1591288.82, b: 1602278.84, c: 1614717.75 | 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.44, b: 3.40, c: 3.45
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 55.27, b: 56.08, c: 55.32
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 2000 - Inner Join Test Time (Seconds, Fewer Is Better): a: 10.43, b: 10.58, c: 10.44
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 2000 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 3.52, b: 3.48, c: 3.53
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 11.27, b: 11.43, c: 11.33
OpenEMS 0.0.35-86 - Test: pyEMS Coupler (MCells/s, More Is Better): a: 19.86, b: 19.60, c: 19.88 | 1. (CXX) g++ options: -O3 -rdynamic -ltinyxml -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -lexpat
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 88.67, b: 87.42, c: 88.22
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 100 - Inner Join Test Time (Seconds, Fewer Is Better): a: 5.62, b: 5.63, c: 5.70
Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 491.96, b: 486.15, c: 492.94
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 1000 - Group By Test Time (Seconds, Fewer Is Better): a: 8.08, b: 8.11, c: 8.19
VVenC 1.7 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better): a: 3.809, b: 3.803, c: 3.758 | 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 26.04, b: 25.88, c: 26.23
CloudSuite In-Memory Analytics 4.0 - Training Set Size: Large (ms, Fewer Is Better): a: 143356, b: 144863, c: 143016
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Repartition Test Time (Seconds, Fewer Is Better): a: 16.23, b: 16.24, c: 16.05
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 2000 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 6.09, b: 6.10, c: 6.16
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 1000 - Group By Test Time (Seconds, Fewer Is Better): a: 17.85, b: 17.73, c: 17.65
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 100 - Repartition Test Time (Seconds, Fewer Is Better): a: 8.92, b: 8.82, c: 8.85
Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 630.03, b: 636.95, c: 630.95
Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 177.80, b: 178.96, c: 179.64
Neural Magic DeepSparse 1.3.2 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 89.97, b: 89.39, c: 89.04
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 56.16, b: 55.60, c: 56.04
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 100 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better): a: 17.07, b: 17.24, c: 17.07
OpenEMS 0.0.35-86 - Test: openEMS MSL_NotchFilter (MCells/s, More Is Better): a: 14.63, b: 14.64, c: 14.50 | 1. (CXX) g++ options: -O3 -rdynamic -ltinyxml -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -lexpat
CloudSuite In-Memory Analytics 4.0 - Training Set Size: Small (ms, Fewer Is Better): a: 45941, b: 45704, c: 45505
CloudSuite Data Analytics 4.0 - Hadoop Slaves: 32 (ms, Fewer Is Better): a: 1286055, b: 1274682, c: 1277138
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.40, b: 3.39, c: 3.37
Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 53.76, b: 53.47, c: 53.30
VVenC 1.7 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better): a: 8.933, b: 8.883, c: 8.856 | 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
Memcached 1.6.18 - Set To Get Ratio: 1:5 (Ops/sec, More Is Better): a: 3640442.11, b: 3667823.02, c: 3637603.67 | 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Neural Magic DeepSparse 1.3.2 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 297.47, b: 299.12, c: 299.93
Memcached 1.6.18 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better): a: 3226477.91, b: 3238867.13, c: 3212714.28 | 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 62.12, b: 61.63, c: 61.68
Neural Magic DeepSparse 1.3.2 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 16.10, b: 16.23, c: 16.21
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Inner Join Test Time (Seconds, Fewer Is Better): a: 18.08, b: 18.22, c: 18.15
Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 632.27, b: 637.16, c: 632.76
RocksDB 7.9.2 - Test: Read Random Write Random (Op/s, More Is Better): a: 2763110, b: 2755551, c: 2776604 | 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
RocksDB 7.9.2 - Test: Random Read (Op/s, More Is Better): a: 131073357, b: 132034808, c: 131994679 | 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 1000 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 8.70, b: 8.64, c: 8.65
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 500 - SHA-512 Benchmark Time (Seconds, Fewer Is Better): a: 8.72, b: 8.69, c: 8.75
RocksDB 7.9.2 - Test: Read While Writing (Op/s, More Is Better): a: 4696370, b: 4664918, c: 4696807 | 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
CloudSuite Data Analytics 4.0 - Hadoop Slaves: 4 (ms, Fewer Is Better): a: 1282161, b: 1285523, c: 1290818
Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 61.82, b: 61.68, c: 62.10
Neural Magic DeepSparse 1.3.2 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 16.17, b: 16.21, c: 16.10
Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 21.31, b: 21.45, c: 21.40
Neural Magic DeepSparse 1.3.2 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 46.92, b: 46.61, c: 46.73
Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 138.82, b: 137.96, c: 138.27
CloudSuite Data Analytics 4.0 - Hadoop Slaves: 8 (ms, Fewer Is Better): a: 1283837, b: 1276206, c: 1283805
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.39, b: 3.37, c: 3.37
Apache Spark 3.3 - Row Count: 10000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.39, b: 3.40, c: 3.38
Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 115.10, b: 115.76, c: 115.49
Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 267.42, b: 266.86, c: 268.21
Neural Magic DeepSparse 1.3.2 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 59.76, b: 59.90, c: 59.62
Apache Spark 3.3 - Row Count: 1000000 - Partitions: 500 - Calculate Pi Benchmark (Seconds, Fewer Is Better): a: 56.41, b: 56.66, c: 56.54
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 2000 - Group By Test Time (Seconds, Fewer Is Better): a: 17.58, b: 17.60, c: 17.65
VVenC 1.7 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, More Is Better): a: 7.434, b: 7.407, c: 7.415 | 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
CloudSuite Web Serving - Load Scale: 400 (ops/sec, More Is Better): a: 37.15, b: 37.02, c: 37.05
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Group By Test Time (Seconds, Fewer Is Better): a: 17.48, b: 17.42, c: 17.48
Apache Spark 3.3 - Row Count: 20000000 - Partitions: 1000 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.39, b: 3.38, c: 3.39
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe (Seconds, Fewer Is Better): a: 3.43, b: 3.44, c: 3.43
VVenC 1.7 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, More Is Better): a: 17.35, b: 17.31, c: 17.31 | 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
CloudSuite Data Analytics 4.0 - Hadoop Slaves: 64 (ms, Fewer Is Better): a: 1280792, b: 1283914, c: 1280550
RocksDB 7.9.2 - Test: Update Random (Op/s, More Is Better): a: 788635, b: 787714, c: 786611 | 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 80.68, b: 80.50, c: 80.59
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 198.24, b: 198.69, c: 198.47
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 100.48, b: 100.69, c: 100.60
Neural Magic DeepSparse 1.3.2 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 159.21, b: 158.87, c: 159.01
Memcached 1.6.18 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better): a: 3706097.76, b: 3712642.45, c: 3711642.32 | 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 67.09, b: 67.14, c: 67.13
Neural Magic DeepSparse 1.3.2 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 14.89, b: 14.88, c: 14.89
Phoronix Test Suite v10.8.4