sdfa: AMD Ryzen Threadripper 3990X 64-Core testing with a Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 23.10, via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2312122-PTS-SDFA911983&grs&sro
sdfa - Test System Details (identical for runs a, b, c, and d):

Processor: AMD Ryzen Threadripper 3990X 64-Core @ 2.90GHz (64 Cores / 128 Threads)
Motherboard: Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS)
Chipset: AMD Starship/Matisse
Memory: 128GB
Disk: Samsung SSD 970 EVO Plus 500GB
Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: DELL P2415Q
Network: Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 23.10
Kernel: 6.5.0-13-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 23.2.1-1ubuntu3 (LLVM 15.0.7 DRM 3.54)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x830107a
Java Details: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Mitigation of untrained return thunk, SMT enabled with STIBP protection; spec_rstack_overflow: Mitigation of safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
[Result overview table omitted from this view: side-by-side values for runs a, b, c, and d across all Neural Magic DeepSparse and Apache Spark TPC-H tests; the individual results are presented per test below.]
Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 1.5897, b: 1.6656, c: 1.6887, d: 1.6090 [SE +/- 0.0051 / 0.0240 / 0.0037, N = 3]
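For reference, the "SE +/-" figures attached to the results are standard errors of the mean across the N benchmark runs. A minimal sketch of that calculation (my own illustration, not part of the Phoronix Test Suite output):

```python
import math
import statistics


def standard_error(samples: list[float]) -> float:
    # SE of the mean: sample standard deviation / sqrt(N),
    # matching the "SE +/- x, N = y" annotations on each result.
    return statistics.stdev(samples) / math.sqrt(len(samples))


print(round(standard_error([1.0, 2.0, 3.0]), 4))  # -> 0.5774
```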
Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 627.75, b: 599.19, c: 591.26, d: 620.24 [SE +/- 2.01 / 8.33 / 1.41, N = 3]
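For the single-stream scenarios the ms/batch and items/sec views are roughly reciprocal: run a's 1.5897 ms/batch implies about 629 items/sec, close to the reported 627.75 (the small gap is presumably measurement overhead). A sketch of that conversion, assuming one item per batch (my assumption, not stated in the export):

```python
def throughput_from_latency_ms(ms_per_batch: float, items_per_batch: int = 1) -> float:
    # items/sec ~= items_per_batch * 1000 / ms_per_batch
    # (items_per_batch=1 is an assumption for the single-stream case).
    return items_per_batch * 1000.0 / ms_per_batch


print(round(throughput_from_latency_ms(1.5897), 2))  # -> 629.05
```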
Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 58.55, b: 55.56, c: 58.85, d: 58.50 [SE +/- 0.38 / 0.30 / 0.11, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 17.07, b: 17.99, c: 16.99, d: 17.09 [SE +/- 0.11 / 0.09 / 0.03, N = 3]

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 88.47, b: 86.15, c: 84.46, d: 85.16 [SE +/- 0.60 (N = 15) / 0.42 (N = 3) / 0.61 (N = 3)]

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 11.31, b: 11.61, c: 11.84, d: 11.74 [SE +/- 0.07 (N = 15) / 0.06 (N = 3) / 0.08 (N = 3)]
Apache Spark TPC-H 3.5 - Scale Factor: 10 - Geometric Mean Of All Queries (Seconds, Fewer Is Better)
  a: 8.17090667, b: 7.89399455, c: 8.11142575, d: 8.24567273 [SE +/- 0.00516651 / 0.05385571 / 0.01945474, N = 3]; MIN/MAX: a 3.42/29.75, b 3.4/28.8, c 3.43/29.17, d 3.57/29.28
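The TPC-H composite above is the geometric mean of the 22 per-query runtimes, which keeps a single long-running query from dominating the summary. A minimal sketch of the calculation (an illustration, not the test profile's actual code):

```python
import math


def geometric_mean(times: list[float]) -> float:
    # Geometric mean of per-query runtimes, as used for the
    # "Geometric Mean Of All Queries" composite result.
    return math.exp(sum(math.log(t) for t in times) / len(times))


print(geometric_mean([2.0, 8.0]))  # -> 4.0
```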
Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 836.92, b: 806.39, c: 841.34, d: 820.12 [SE +/- 12.88 (N = 3) / 8.13 (N = 6) / 7.99 (N = 3)]

Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 37.93, b: 39.38, c: 37.78, d: 38.72 [SE +/- 0.52 (N = 3) / 0.38 (N = 6) / 0.36 (N = 3)]

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 17.60, b: 16.94, c: 17.48, d: 17.58 [SE +/- 0.05 / 0.06 / 0.06, N = 3]

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 56.79, b: 59.01, c: 57.18, d: 56.86 [SE +/- 0.15 / 0.19 / 0.19, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 201.91, b: 206.96, c: 199.22, d: 201.94 [SE +/- 1.28 / 1.60 / 1.27, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 158.15, b: 154.30, c: 160.28, d: 158.11 [SE +/- 0.94 / 1.36 / 1.02, N = 3]

Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 71.94, b: 69.42, c: 72.03, d: 71.92 [SE +/- 0.30 / 0.25 / 0.18, N = 3]

Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 13.90, b: 14.40, c: 13.88, d: 13.90 [SE +/- 0.06 / 0.05 / 0.03, N = 3]

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 2555.42, b: 2635.57, c: 2548.38, d: 2543.11 [SE +/- 14.52 / 12.01 / 15.60, N = 3]

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 12.49, b: 12.11, c: 12.52, d: 12.55 [SE +/- 0.07 / 0.06 / 0.08, N = 3]

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 129.70, b: 131.17, c: 132.10, d: 134.40 [SE +/- 0.69 (N = 3) / 1.51 (N = 4) / 1.48 (N = 5)]

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 7.7066, b: 7.6192, c: 7.5689, d: 7.4402 [SE +/- 0.0410 (N = 3) / 0.0865 (N = 4) / 0.0842 (N = 5)]

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 9.6462, b: 9.3424, c: 9.4215, d: 9.6084 [SE +/- 0.0493 / 0.0132 / 0.0758, N = 3]

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 103.56, b: 106.93, c: 106.03, d: 103.98 [SE +/- 0.53 / 0.15 / 0.81, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 9.3853, b: 9.3770, c: 9.3277, d: 9.5802 [SE +/- 0.0142 / 0.0294 / 0.0916, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 106.44, b: 106.54, c: 107.10, d: 104.29 [SE +/- 0.16 / 0.34 / 1.00, N = 3]

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 71.04, b: 69.88, c: 70.90, d: 71.42 [SE +/- 0.43 / 0.72 / 0.16, N = 3]

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 14.08, b: 14.31, c: 14.10, d: 14.00 [SE +/- 0.08 / 0.14 / 0.03, N = 3]

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 72.42, b: 70.94, c: 71.81, d: 72.46 [SE +/- 0.35 / 0.24 / 0.19, N = 3]

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 441.44, b: 450.81, c: 445.16, d: 441.39 [SE +/- 2.10 / 1.47 / 1.15, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 86.38, b: 88.17, c: 87.93, d: 87.47 [SE +/- 0.58 / 0.32 / 0.19, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 11.56, b: 11.33, c: 11.36, d: 11.42 [SE +/- 0.08 / 0.04 / 0.03, N = 3]
Apache Spark TPC-H 3.5 - Scale Factor: 1 - Geometric Mean Of All Queries (Seconds, Fewer Is Better)
  a: 2.20328734, b: 2.24531415, c: 2.23705911, d: 2.22356446 [SE +/- 0.02461970 (N = 4) / 0.00573499 (N = 3) / 0.02256519 (N = 3)]; MIN/MAX: a 0.99/7.56, b 0.99/7.7, c 1.05/7.66, d 0.97/7.7
Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 147.96, b: 150.23, c: 147.49, d: 148.28 [SE +/- 0.45 / 0.12 / 0.06, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 43.19, b: 43.84, c: 43.05, d: 43.06 [SE +/- 0.30 / 0.28 / 0.06, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 10.96, b: 11.13, c: 10.96, d: 11.15 [SE +/- 0.04 / 0.02 / 0.04, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 91.14, b: 89.80, c: 91.16, d: 89.59 [SE +/- 0.36 / 0.17 / 0.36, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 215.83, b: 212.76, c: 216.39, d: 215.29 [SE +/- 0.62 / 0.14 / 0.09, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 736.77, b: 726.25, c: 737.98, d: 737.85 [SE +/- 6.44 / 4.42 / 2.01, N = 3]

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 32.60, b: 32.16, c: 32.34, d: 32.31 [SE +/- 0.11 / 0.04 / 0.04, N = 3]

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 980.45, b: 993.60, c: 988.49, d: 989.06 [SE +/- 3.44 / 1.16 / 1.17, N = 3]

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 37.74, b: 37.95, c: 37.94, d: 37.45 [SE +/- 0.10 / 0.25 / 0.10, N = 3]
Apache Spark TPC-H 3.5 - Scale Factor: 50 - Geometric Mean Of All Queries (Seconds, Fewer Is Better)
  a: 27.98, b: 27.79, c: 28.11, d: 28.12 [SE +/- 0.09 / 0.14 / 0.15, N = 3]; MIN/MAX: a 8.84/131.32, b 8.69/131.46, c 8.58/132.82, d 8.73/132.42
Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 63.48, b: 62.81, c: 63.13, d: 63.30 [SE +/- 0.05 / 0.06 / 0.08, N = 3]

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 503.41, b: 508.59, c: 506.13, d: 504.84 [SE +/- 0.37 / 0.45 / 0.62, N = 3]

Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 841.43, b: 837.46, c: 837.14, d: 845.44 [SE +/- 2.31 / 6.31 / 2.60, N = 3]

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 64.71, b: 64.40, c: 64.12, d: 64.08 [SE +/- 0.66 / 0.51 / 0.57, N = 3]

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 15.45, b: 15.52, c: 15.59, d: 15.60 [SE +/- 0.16 / 0.12 / 0.14, N = 3]

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 95.02, b: 94.88, c: 94.68, d: 95.45 [SE +/- 0.08 / 0.12 / 0.28, N = 3]

Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 336.19, b: 337.08, c: 337.78, d: 335.10 [SE +/- 0.53 / 0.44 / 1.00, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 71.17, b: 71.27, c: 70.74, d: 71.05 [SE +/- 0.17 / 0.10 / 0.25, N = 3]

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 449.31, b: 448.76, c: 452.04, d: 450.06 [SE +/- 1.01 / 0.60 / 1.54, N = 3]

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 49.22, b: 49.33, c: 49.08, d: 49.17 [SE +/- 0.03 / 0.26 / 0.10, N = 3]

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 649.93, b: 648.52, c: 649.76, d: 650.69 [SE +/- 0.29 / 1.51 / 1.27, N = 3]
Apache Spark TPC-H 3.5 - Scale Factor: 50 (Seconds, Fewer Is Better; results a/b/c/d, SE values as reported)
  Q22 - a: 10.67  b: 10.53  c: 10.64  d: 10.27  (SE +/- 0.26, 0.11, 0.04; N = 3)
  Q21 - a: 130.87  b: 131.46  c: 131.42  d: 131.01  (SE +/- 0.28, 0.77, 0.96; N = 3)
  Q20 - a: 30.71  b: 30.54  c: 30.79  d: 30.52  (SE +/- 0.07, 0.24, 0.01; N = 3)
  Q19 - a: 24.72  b: 25.08  c: 24.89  d: 25.03  (SE +/- 0.11, 0.11, 0.24; N = 3)
  Q18 - a: 62.51  b: 61.96  c: 62.30  d: 62.73  (SE +/- 0.41, 0.13, 0.54; N = 3)
  Q17 - a: 55.61  b: 55.28  c: 55.33  d: 55.42  (SE +/- 0.08, 0.19, 0.55; N = 3)
  Q16 - a: 10.59  b: 10.53  c: 10.82  d: 10.91  (SE +/- 0.09, 0.13, 0.47; N = 3)
  Q15 - a: 23.05  b: 22.89  c: 22.95  d: 23.00  (SE +/- 0.04, 0.07, 0.03; N = 3)
  Q14 - a: 25.49  b: 25.40  c: 25.46  d: 25.42  (SE +/- 0.11, 0.03, 0.20; N = 3)
  Q13 - a: 12.34  b: 12.13  c: 13.14  d: 13.68  (SE +/- 0.04, 0.87, 0.76; N = 3)
  Q12 - a: 30.43  b: 30.41  c: 30.89  d: 30.04  (SE +/- 0.31, 0.47, 0.21; N = 3)
  Q11 - a: 9.06239065  b: 8.69320774  c: 8.75260639  d: 9.01768939  (SE +/- 0.17117283, 0.09627546, 0.14619367; N = 3)
  Q10 - a: 35.93  b: 36.27  c: 36.82  d: 35.93  (SE +/- 0.66, 0.34, 0.22; N = 3)
  Q09 - a: 46.90  b: 46.32  c: 47.11  d: 47.00  (SE +/- 0.20, 0.54, 0.25; N = 3)
  Q08 - a: 38.36  b: 37.97  c: 37.81  d: 38.38  (SE +/- 0.55, 0.08, 0.24; N = 3)
  Q07 - a: 35.63  b: 35.34  c: 35.46  d: 35.40  (SE +/- 0.46, 0.13, 0.15; N = 3)
  Q06 - a: 20.38  b: 20.30  c: 20.34  d: 20.38  (SE +/- 0.03, 0.06, 0.07; N = 3)
  Q05 - a: 41.17  b: 43.25  c: 39.70  d: 38.99  (SE +/- 0.81, 1.29, 0.23; N = 3)
  Q04 - a: 30.58  b: 30.27  c: 30.22  d: 30.56  (SE +/- 0.26, 0.12, 0.40; N = 3)
  Q03 - a: 36.61  b: 36.03  c: 37.06  d: 35.99  (SE +/- 0.49, 0.13, 0.42; N = 3)
  Q02 - a: 10.48  b: 10.45  c: 10.56  d: 10.44  (SE +/- 0.18, 0.09, 0.12; N = 3)
  Q01 - a: 26.02  b: 25.99  c: 26.00  d: 26.12  (SE +/- 0.08, 0.09, 0.18; N = 3)
Apache Spark TPC-H 3.5 - Scale Factor: 10 (Seconds, Fewer Is Better; results a/b/c/d, SE values as reported)
  Q22 - a: 3.53586213  b: 3.40110779  c: 3.69966158  d: 3.59682607  (SE +/- 0.10541256, 0.18733559, 0.01543789; N = 3)
  Q21 - a: 29.31  b: 28.80  c: 29.03  d: 28.91  (SE +/- 0.25, 0.12, 0.20; N = 3)
  Q20 - a: 8.84160964  b: 8.87663841  c: 8.91217136  d: 8.82452583  (SE +/- 0.06394497, 0.01360476, 0.08793082; N = 3)
  Q19 - a: 7.21627538  b: 6.04250240  c: 6.54328155  d: 8.02795347  (SE +/- 0.61100502, 0.63365910, 0.02955483; N = 3)
  Q18 - a: 14.86  b: 14.88  c: 16.11  d: 14.91  (SE +/- 0.12, 0.79, 0.10; N = 3)
  Q17 - a: 12.45  b: 12.44  c: 12.42  d: 12.40  (SE +/- 0.10, 0.04, 0.13; N = 3)
  Q16 - a: 4.32050626  b: 4.15591764  c: 4.17572975  d: 4.34601260  (SE +/- 0.04048184, 0.08541494, 0.00250658; N = 3)
  Q15 - a: 5.83709431  b: 5.68089533  c: 5.81021436  d: 5.85565599  (SE +/- 0.03384145, 0.03487590, 0.04663640; N = 3)
  Q14 - a: 5.79615275  b: 5.82840872  c: 5.86177413  d: 5.73522472  (SE +/- 0.02725146, 0.02997105, 0.04489420; N = 3)
  Q13 - a: 4.45708307  b: 4.03698015  c: 4.09777006  d: 4.61340586  (SE +/- 0.30907989, 0.05603992, 0.25379241; N = 3)
  Q12 - a: 8.03465001  b: 8.10859299  c: 8.17116038  d: 8.09419775  (SE +/- 0.04371011, 0.01728126, 0.05148904; N = 3)
  Q11 - a: 4.54931625  b: 4.26761341  c: 4.38963429  d: 4.48057460  (SE +/- 0.03938644, 0.02158043, 0.00931035; N = 3)
  Q10 - a: 10.27  b: 10.18  c: 10.42  d: 10.36  (SE +/- 0.12, 0.03, 0.01; N = 3)
  Q09 - a: 14.90  b: 14.18  c: 14.82  d: 14.92  (SE +/- 0.27, 0.07, 0.31; N = 3)
  Q08 - a: 10.99  b: 10.67  c: 10.85  d: 10.82  (SE +/- 0.13, 0.15, 0.08; N = 3)
  Q07 - a: 10.63  b: 10.53  c: 10.62  d: 10.75  (SE +/- 0.11, 0.05, 0.17; N = 3)
  Q06 - a: 4.40131362  b: 4.62162924  c: 4.30036068  d: 4.36710962  (SE +/- 0.04431026, 0.01079337, 0.03080177; N = 3)
  Q05 - a: 12.48  b: 12.92  c: 12.66  d: 12.66  (SE +/- 0.23, 0.33, 0.18; N = 3)
  Q04 - a: 8.63682970  b: 8.70809078  c: 8.50457668  d: 8.33958340  (SE +/- 0.24804165, 0.08517991, 0.07436974; N = 3)
  Q03 - a: 10.24  b: 10.32  c: 10.26  d: 10.15  (SE +/- 0.28, 0.27, 0.00; N = 3)
  Q02 - a: 4.85099522  b: 4.88495016  c: 4.92897081  d: 4.79437430  (SE +/- 0.02777661, 0.06191683, 0.02874460; N = 3)
  Q01 - a: 8.67833742  b: 8.70151806  c: 8.66681448  d: 8.73231697  (SE +/- 0.07219277, 0.10886176, 0.05844458; N = 3)
Apache Spark TPC-H 3.5 - Scale Factor: 1 (Seconds, Fewer Is Better; results a/b/c/d, SE values and run counts as reported)
  Q22 - a: 1.08502284  b: 0.98886323  c: 1.05880324  d: 1.06113847  (SE +/- 0.00972834 N=4, 0.00495537 N=3, 0.01300436 N=3)
  Q21 - a: 7.39505470  b: 7.70369244  c: 7.46197589  d: 7.40773551  (SE +/- 0.08062589 N=4, 0.11898636 N=3, 0.17170189 N=3)
  Q20 - a: 2.75555176  b: 2.58762598  c: 2.75284823  d: 2.74797662  (SE +/- 0.09583966 N=4, 0.13759046 N=3, 0.14206756 N=3)
  Q19 - a: 1.02847038  b: 1.17394590  c: 1.20837911  d: 0.99435820  (SE +/- 0.02636608 N=4, 0.07878807 N=3, 0.01874751 N=3)
  Q18 - a: 4.00372249  b: 4.04589510  c: 3.99324067  d: 3.86921136  (SE +/- 0.10303949 N=4, 0.10019808 N=3, 0.13038086 N=3)
  Q17 - a: 2.66192222  b: 2.72264194  c: 2.58527072  d: 2.57773050  (SE +/- 0.02035298 N=4, 0.01865956 N=3, 0.03392729 N=3)
  Q16 - a: 1.42731291  b: 1.43939853  c: 1.49448760  d: 1.53928820  (SE +/- 0.01144007 N=4, 0.05940937 N=3, 0.04799671 N=3)
  Q15 - a: 1.99491587  b: 2.17460775  c: 2.14363774  d: 2.27715731  (SE +/- 0.04340326 N=4, 0.20579960 N=3, 0.09001964 N=3)
  Q14 - a: 1.83231536  b: 1.78157055  c: 1.83836599  d: 1.83219139  (SE +/- 0.03540160 N=4, 0.01014717 N=3, 0.02463739 N=3)
  Q13 - a: 1.40058664  b: 1.45866323  c: 1.39991681  d: 1.39225551  (SE +/- 0.03676867 N=4, 0.04981792 N=3, 0.02733553 N=3)
  Q12 - a: 2.14987269  b: 2.02941990  c: 2.04039538  d: 2.06954416  (SE +/- 0.15031140 N=4, 0.04470734 N=3, 0.02411619 N=3)
  Q11 - a: 1.30849856  b: 1.36447799  c: 1.28974267  d: 1.29190469  (SE +/- 0.07929998 N=4, 0.15013623 N=3, 0.10958276 N=3)
  Q10 - a: 2.83789939  b: 3.07959628  c: 3.02914357  d: 3.14210590  (SE +/- 0.04789727 N=4, 0.03625533 N=3, 0.04198555 N=3)
  Q09 - a: 4.40090740  b: 4.51510048  c: 4.32967218  d: 4.55872504  (SE +/- 0.13363421 N=4, 0.11520840 N=3, 0.17668063 N=3)
  Q08 - a: 2.29091269  b: 2.32780766  c: 2.35779921  d: 2.22529737  (SE +/- 0.02917526 N=4, 0.06735723 N=3, 0.07315965 N=3)
  Q07 - a: 3.16296500  b: 3.30348134  c: 3.18430217  d: 3.25330043  (SE +/- 0.07379513 N=4, 0.13350733 N=3, 0.03596564 N=3)
  Q06 - a: 0.75870752  b: 0.73223764  c: 0.71082556  d: 0.74262398  (SE +/- 0.03196645 N=4, 0.00867146 N=3, 0.01813313 N=3)
  Q05 - a: 3.46156806  b: 3.37768650  c: 3.52418653  d: 3.30995369  (SE +/- 0.06297205 N=4, 0.08593638 N=3, 0.06648494 N=3)
  Q04 - a: 2.96026391  b: 3.09257221  c: 2.94428595  d: 2.85856040  (SE +/- 0.02567628 N=4, 0.05472939 N=3, 0.09994253 N=3)
  Q03 - a: 3.15890259  b: 3.23912430  c: 3.34556897  d: 3.15447322  (SE +/- 0.03420168 N=4, 0.05826206 N=3, 0.04118145 N=3)
  Q02 - a: 2.19916511  b: 2.20726132  c: 2.27846766  d: 2.27512868  (SE +/- 0.05229177 N=4, 0.08231744 N=3, 0.07468701 N=3)
  Q01 - a: 4.69511843  b: 4.73092079  c: 4.84034109  d: 4.84622987  (SE +/- 0.06902799 N=4, 0.05428360 N=3, 0.08250324 N=3)
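The result header also lists a "Geometric Mean Of All Queries" summary for the TPC-H runs. As an illustration only (not part of the original export), the per-query times above can be combined the same way; the sketch below defines a plain geometric-mean helper and applies it to the Scale Factor 1 values for configuration "a" re-listed from the table above. The function name `geometric_mean` is our own; this does not claim to reproduce the suite's exact summary code.

```python
import math

def geometric_mean(values):
    # Geometric mean: exp of the arithmetic mean of the logs.
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Scale Factor 1 per-query times (seconds) for configuration "a",
# copied from the Q22..Q01 results above (listed here Q01..Q22).
config_a_sf1 = [
    4.69511843, 2.19916511, 3.15890259, 2.96026391, 3.46156806,
    0.75870752, 3.16296500, 2.29091269, 4.40090740, 2.83789939,
    1.30849856, 2.14987269, 1.40058664, 1.83231536, 1.99491587,
    1.42731291, 2.66192222, 4.00372249, 1.02847038, 2.75555176,
    7.39505470, 1.08502284,
]

print(geometric_mean(config_a_sf1))
```

Because the per-query times span roughly 0.7 s to 7.4 s, the geometric mean damps the influence of the slowest queries compared with an arithmetic mean, which is why TPC-H-style summaries use it.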
Phoronix Test Suite v10.8.5