Intel Core i7-1280P testing with an MSI MS-14C6 (E14C6IMS.115 BIOS) motherboard and MSI Intel ADL GT2 15GB graphics on Ubuntu 22.10 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the Phoronix Test Suite by running the command:

    phoronix-test-suite benchmark 2302026-NE-ADLFEB23315
HTML result view exported from: https://openbenchmarking.org/result/2302026-NE-ADLFEB23315&rdt&grr .
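To reproduce this comparison locally, a minimal sequence on Ubuntu might look like the following (assuming the phoronix-test-suite package from the Ubuntu archive; on first run the suite will offer to install the referenced test profiles and their dependencies):

    sudo apt-get install phoronix-test-suite
    phoronix-test-suite benchmark 2302026-NE-ADLFEB23315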
adl feb - System Details (identical for runs a, n, and c):

Processor: Intel Core i7-1280P @ 4.80GHz (14 Cores / 20 Threads)
Motherboard: MSI MS-14C6 (E14C6IMS.115 BIOS)
Chipset: Intel Alder Lake PCH
Memory: 16GB
Disk: 1024GB Micron_3400_MTFDKBA1T0TFH
Graphics: MSI Intel ADL GT2 15GB (1450MHz)
Audio: Realtek ALC274
Network: Intel Alder Lake-P PCH CNVi WiFi
OS: Ubuntu 22.10
Kernel: 5.19.0-29-generic (x86_64)
Desktop: Xfce 4.16
Display Server: X Server 1.21.1.4
OpenGL: 4.6 Mesa 22.2.1
OpenCL: OpenCL 3.0
Vulkan: 1.3.224
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details - Transparent Huge Pages: madvise
Compiler Details - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x421 - Thermald 2.5.1
Java Details - OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu2)
Python Details - Python 3.10.7
Security Details - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
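The Processor and Security details above are read from standard Linux interfaces; they can be checked on a comparable system, for example:

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # active scaling governor (powersave under intel_pstate here)
    grep . /sys/devices/system/cpu/vulnerabilities/*            # per-vulnerability mitigation status
    grep -m1 microcode /proc/cpuinfo                            # CPU microcode revision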
adl feb - Results Overview: the combined results table for runs a, n, and c spans 96 test cases - 56 Apache Spark 3.3 results (row counts of 1000000 and 10000000 across 100/500/1000/2000 partitions, seven sub-tests each), 4 Memcached 1.6.18 set-to-get ratios, and 36 Neural Magic DeepSparse 1.3.2 results (18 model/scenario combinations, each reported as ms/batch and items/sec). The per-test results are listed individually below; the original export at the OpenBenchmarking.org link above retains the full-precision figures.
Apache Spark 3.3 - Row Count: 10000000 (Seconds, Fewer Is Better)
  Partitions: 500 - Broadcast Inner Join Test Time: a: 12.70, n: 13.55, c: 14.39
  Partitions: 500 - Inner Join Test Time: a: 14.41, n: 14.60, c: 15.33
  Partitions: 500 - Repartition Test Time: a: 11.95, n: 12.10, c: 12.53
  Partitions: 500 - Group By Test Time: a: 8.93, n: 9.02, c: 8.88
  Partitions: 500 - Calculate Pi Benchmark Using Dataframe: a: 11.87, n: 12.03, c: 11.88
  Partitions: 500 - Calculate Pi Benchmark: a: 207.45, n: 207.76, c: 209.01
  Partitions: 500 - SHA-512 Benchmark Time: a: 16.49, n: 16.51, c: 16.58
  Partitions: 2000 - Broadcast Inner Join Test Time: a: 12.84, n: 13.16, c: 13.29
  Partitions: 2000 - Inner Join Test Time: a: 14.32, n: 14.48, c: 14.15
  Partitions: 2000 - Repartition Test Time: a: 12.04, n: 12.26, c: 12.21
  Partitions: 2000 - Group By Test Time: a: 9.21, n: 9.19, c: 9.34
  Partitions: 2000 - Calculate Pi Benchmark Using Dataframe: a: 11.85, n: 11.74, c: 11.84
  Partitions: 2000 - Calculate Pi Benchmark: a: 207.55, n: 207.76, c: 208.25
  Partitions: 2000 - SHA-512 Benchmark Time: a: 16.77, n: 16.69, c: 16.87
  Partitions: 1000 - Broadcast Inner Join Test Time: a: 13.10, n: 12.98, c: 13.54
  Partitions: 1000 - Inner Join Test Time: a: 13.44, n: 14.46, c: 13.77
  Partitions: 1000 - Repartition Test Time: a: 12.18, n: 11.11, c: 12.07
  Partitions: 1000 - Group By Test Time: a: 9.05, n: 10.43, c: 8.33
  Partitions: 1000 - Calculate Pi Benchmark Using Dataframe: a: 11.90, n: 11.85, c: 11.79
  Partitions: 1000 - Calculate Pi Benchmark: a: 207.52, n: 207.47, c: 207.56
  Partitions: 1000 - SHA-512 Benchmark Time: a: 15.93, n: 16.03, c: 15.95
  Partitions: 100 - Broadcast Inner Join Test Time: a: 14.05, n: 13.18, c: 12.59
  Partitions: 100 - Inner Join Test Time: a: 14.09, n: 13.71, c: 13.41
  Partitions: 100 - Repartition Test Time: a: 13.07, n: 11.79, c: 11.63
  Partitions: 100 - Group By Test Time: a: 8.01, n: 8.39, c: 8.48
  Partitions: 100 - Calculate Pi Benchmark Using Dataframe: a: 11.84, n: 12.02, c: 11.80
  Partitions: 100 - Calculate Pi Benchmark: a: 207.56, n: 208.17, c: 208.62
  Partitions: 100 - SHA-512 Benchmark Time: a: 15.51, n: 15.41, c: 15.16
Apache Spark 3.3 - Row Count: 1000000 (Seconds, Fewer Is Better)
  Partitions: 2000 - Broadcast Inner Join Test Time: a: 2.74, n: 2.81, c: 2.71
  Partitions: 2000 - Inner Join Test Time: a: 3.35, n: 3.50, c: 3.49
  Partitions: 2000 - Repartition Test Time: a: 3.60, n: 3.60, c: 3.65
  Partitions: 2000 - Group By Test Time: a: 5.17, n: 5.29, c: 5.07
  Partitions: 2000 - Calculate Pi Benchmark Using Dataframe: a: 12.00, n: 12.70, c: 11.89
  Partitions: 2000 - Calculate Pi Benchmark: a: 206.58, n: 213.35, c: 207.73
  Partitions: 2000 - SHA-512 Benchmark Time: a: 4.79, n: 4.99, c: 4.96
  Partitions: 1000 - Broadcast Inner Join Test Time: a: 2.37, n: 2.24, c: 2.22
  Partitions: 1000 - Inner Join Test Time: a: 2.82, n: 2.87, c: 2.81
  Partitions: 1000 - Repartition Test Time: a: 3.32, n: 3.39, c: 3.36
  Partitions: 1000 - Group By Test Time: a: 4.61, n: 4.75, c: 4.58
  Partitions: 1000 - Calculate Pi Benchmark Using Dataframe: a: 11.97, n: 12.02, c: 11.87
  Partitions: 1000 - Calculate Pi Benchmark: a: 207.48, n: 208.49, c: 208.24
  Partitions: 1000 - SHA-512 Benchmark Time: a: 4.56, n: 4.39, c: 4.36
  Partitions: 500 - Broadcast Inner Join Test Time: a: 1.98, n: 1.93, c: 1.95
  Partitions: 500 - Inner Join Test Time: a: 2.48, n: 2.32, c: 2.40
  Partitions: 500 - Repartition Test Time: a: 3.03, n: 3.06, c: 3.28
  Partitions: 500 - Group By Test Time: a: 3.95, n: 3.74, c: 3.89
  Partitions: 500 - Calculate Pi Benchmark Using Dataframe: a: 11.94, n: 12.10, c: 12.04
  Partitions: 500 - Calculate Pi Benchmark: a: 209.93, n: 210.21, c: 210.11
  Partitions: 500 - SHA-512 Benchmark Time: a: 4.12, n: 4.16, c: 4.24
  Partitions: 100 - Broadcast Inner Join Test Time: a: 1.35, n: 1.32, c: 1.36
  Partitions: 100 - Inner Join Test Time: a: 1.58, n: 1.57, c: 1.60
  Partitions: 100 - Repartition Test Time: a: 2.21, n: 2.25, c: 2.26
  Partitions: 100 - Group By Test Time: a: 3.60, n: 3.65, c: 3.72
  Partitions: 100 - Calculate Pi Benchmark Using Dataframe: a: 11.91, n: 11.90, c: 11.99
  Partitions: 100 - Calculate Pi Benchmark: a: 207.95, n: 207.11, c: 206.32
  Partitions: 100 - SHA-512 Benchmark Time: a: 2.93, n: 3.03, c: 3.17
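For context on the Partitions parameter swept above: in a stand-alone Spark job, partition counts of this kind are typically set through the shuffle/parallelism configuration or an explicit repartition() call. A hedged sketch of the equivalent knobs (my_spark_benchmark.py is a placeholder driver script, not the test profile's actual code):

    spark-submit --master local[20] \
        --conf spark.sql.shuffle.partitions=500 \
        --conf spark.default.parallelism=500 \
        my_spark_benchmark.py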
Memcached 1.6.18 (Ops/sec, More Is Better)
  Set To Get Ratio: 1:100 - a: 1665301.70, n: 1686349.51, c: 1662246.52
  Set To Get Ratio: 1:10 - a: 1742709.67, n: 1739460.62, c: 1750912.91
  Set To Get Ratio: 1:1 - a: 1767830.19, n: 1767278.13, c: 1767103.07
  Set To Get Ratio: 1:5 - a: 1869070.00, n: 1781728.27, c: 1823583.08
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
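A similar mixed set/get workload can be generated against a local memcached instance with memtier_benchmark; a hedged example for the 1:10 ratio (thread and client counts are illustrative, not those used by the test profile):

    memcached -d -m 1024 -p 11211
    memtier_benchmark --server=127.0.0.1 --port=11211 --protocol=memcache_binary \
        --ratio=1:10 --threads=4 --clients=50 --test-time=60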
Neural Magic DeepSparse 1.3.2
  Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 1102.50, n: 1137.90, c: 1169.90
  Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 6.3049, n: 6.1199, c: 5.7934
  Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 19.78, n: 22.90, c: 23.01
  Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 50.53, n: 43.66, c: 43.45
  Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 351.50, n: 374.06, c: 367.85
  Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 19.78, n: 18.60, c: 18.83
  Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 1534.45, n: 1508.55, c: 1520.54
  Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 4.5104, n: 4.5519, c: 4.5475
  Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 1545.76, n: 1523.22, c: 1529.39
  Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 4.5054, n: 4.4135, c: 4.4061
  Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 365.11, n: 357.48, c: 361.73
  Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 19.02, n: 19.52, c: 19.18
  Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 104.10, n: 104.53, c: 104.02
  Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 67.19, n: 66.91, c: 67.14
  Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 174.84, n: 175.73, c: 174.58
  Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 5.7188, n: 5.6899, c: 5.7275
  Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 237.73, n: 235.82, c: 238.76
  Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 4.2064, n: 4.2403, c: 4.1882
  Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 240.13, n: 239.68, c: 239.17
  Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 4.1644, n: 4.1721, c: 4.1810
  Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 160.84, n: 170.19, c: 171.25
  Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 43.45, n: 41.00, c: 40.82
  Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 61.89, n: 61.59, c: 61.74
  Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 16.16, n: 16.24, c: 16.20
  Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 67.41, n: 67.93, c: 67.25
  Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 14.83, n: 14.72, c: 14.87
  Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 22.67, n: 22.56, c: 22.59
  Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 44.09, n: 44.31, c: 44.24
  Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 103.93, n: 114.78, c: 112.44
  Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 67.19, n: 60.93, c: 62.17
  Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 248.43, n: 244.89, c: 247.38
  Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 28.03, n: 28.42, c: 28.17
  Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 29.98, n: 30.17, c: 30.10
  Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 33.35, n: 33.13, c: 33.22
  Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 42.62, n: 43.11, c: 43.08
  Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 23.45, n: 23.19, c: 23.20
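Each DeepSparse entry above pairs a latency figure (ms/batch, fewer is better) with the corresponding throughput figure (items/sec, more is better) for the same model and scenario. A hedged sketch of taking a comparable measurement with DeepSparse's own benchmarking entry point (./model.onnx is a placeholder; a SparseZoo model stub can be passed instead):

    pip install deepsparse
    deepsparse.benchmark ./model.onnx --scenario sync    # single-stream latency
    deepsparse.benchmark ./model.onnx --scenario async   # multi-stream throughput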
Phoronix Test Suite v10.8.4