sdfa: AMD Ryzen Threadripper 3990X 64-Core testing with a Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS) and an AMD Radeon RX 5700 8GB on Ubuntu 23.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2312122-PTS-SDFA911983&sor&gru .
System details (identical across runs a, b, c, d):
Processor: AMD Ryzen Threadripper 3990X 64-Core @ 2.90GHz (64 Cores / 128 Threads)
Motherboard: Gigabyte TRX40 AORUS PRO WIFI (F6 BIOS)
Chipset: AMD Starship/Matisse
Memory: 128GB
Disk: Samsung SSD 970 EVO Plus 500GB
Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: DELL P2415Q
Network: Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 23.10
Kernel: 6.5.0-13-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 23.2.1-1ubuntu3 (LLVM 15.0.7 DRM 3.54)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160
Kernel Details: Transparent Huge Pages: madvise
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x830107a
Java Details: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Mitigation of untrained return thunk, SMT enabled with STIBP protection; spec_rstack_overflow: Mitigation of safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
[Flattened side-by-side results table for runs a, b, c, d. It covered the 24 Neural Magic DeepSparse 1.6 model/scenario combinations detailed in the per-test results that follow, plus Apache Spark TPC-H geometric means and per-query times (Q01 through Q22) at scale factors 1, 10, 50, and 100.]
Neural Magic DeepSparse 1.6, throughput (items/sec; higher is better). For each test, runs are listed best to worst with their mean result; the original charts also reported a standard error (SE ±) per run over N samples (typically N = 3).
NLP Document Classification, oBERT base uncased on IMDB, Asynchronous Multi-Stream: b 39.38, d 38.72, a 37.93, c 37.78
NLP Document Classification, oBERT base uncased on IMDB, Synchronous Single-Stream: b 14.40, d 13.90, a 13.90, c 13.88
NLP Text Classification, BERT base uncased SST2, Sparse INT8, Asynchronous Multi-Stream: b 993.60, d 989.06, c 988.49, a 980.45
NLP Text Classification, BERT base uncased SST2, Sparse INT8, Synchronous Single-Stream: d 134.40, c 132.10, b 131.17, a 129.70
ResNet-50, Baseline, Asynchronous Multi-Stream: b 450.81, c 445.16, a 441.44, d 441.39
ResNet-50, Baseline, Synchronous Single-Stream: b 106.93, c 106.03, d 103.98, a 103.56
ResNet-50, Sparse INT8, Asynchronous Multi-Stream: b 2635.57, a 2555.42, c 2548.38, d 2543.11
ResNet-50, Sparse INT8, Synchronous Single-Stream: a 627.75, d 620.24, b 599.19, c 591.26
CV Detection, YOLOv5s COCO, Asynchronous Multi-Stream: b 206.96, d 201.94, a 201.91, c 199.22
CV Detection, YOLOv5s COCO, Synchronous Single-Stream: b 88.17, c 87.93, d 87.47, a 86.38
BERT-Large, NLP Question Answering, Asynchronous Multi-Stream: b 49.33, a 49.22, d 49.17, c 49.08
BERT-Large, NLP Question Answering, Synchronous Single-Stream: c 11.84, d 11.74, b 11.61, a 11.31
CV Classification, ResNet-50 ImageNet, Asynchronous Multi-Stream: c 452.04, d 450.06, a 449.31, b 448.76
CV Classification, ResNet-50 ImageNet, Synchronous Single-Stream: c 107.10, b 106.54, a 106.44, d 104.29
CV Detection, YOLOv5s COCO, Sparse INT8, Asynchronous Multi-Stream: c 216.39, a 215.83, d 215.29, b 212.76
CV Detection, YOLOv5s COCO, Sparse INT8, Synchronous Single-Stream: c 91.16, a 91.14, b 89.80, d 89.59
NLP Text Classification, DistilBERT mnli, Asynchronous Multi-Stream: c 337.78, b 337.08, a 336.19, d 335.10
NLP Text Classification, DistilBERT mnli, Synchronous Single-Stream: b 59.01, c 57.18, d 56.86, a 56.79
CV Segmentation, 90% Pruned YOLACT Pruned, Asynchronous Multi-Stream: b 43.84, a 43.19, d 43.06, c 43.05
CV Segmentation, 90% Pruned YOLACT Pruned, Synchronous Single-Stream: b 17.99, d 17.09, a 17.07, c 16.99
BERT-Large, NLP Question Answering, Sparse INT8, Asynchronous Multi-Stream: b 508.59, c 506.13, d 504.84, a 503.41
BERT-Large, NLP Question Answering, Sparse INT8, Synchronous Single-Stream: a 64.71, b 64.40, c 64.12, d 64.08
NLP Token Classification, BERT base uncased conll2003, Asynchronous Multi-Stream: b 37.95, c 37.94, a 37.74, d 37.45
NLP Token Classification, BERT base uncased conll2003, Synchronous Single-Stream: b 14.31, c 14.10, a 14.08, d 14.00
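Each chart in this export reports a standard error of the mean (SE ±) alongside the averaged result. As an illustration of how such a figure is derived, here is a minimal sketch; the sample values are invented for the example, not taken from this result file:

```python
import math

def standard_error(samples):
    """Standard error of the mean: s / sqrt(n), where s is the
    Bessel-corrected sample standard deviation."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance / n)

# Hypothetical per-run throughput samples (items/sec), N = 3
runs = [39.02, 39.38, 39.74]
se = standard_error(runs)  # approximately 0.208
```

A small SE relative to the mean (as in most results here) indicates the runs were stable and the differences between configurations are likely real rather than noise.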
Neural Magic DeepSparse 1.6, latency (ms/batch; lower is better), same model/scenario combinations; runs listed best to worst.
NLP Document Classification, oBERT base uncased on IMDB, Asynchronous Multi-Stream: b 806.39, d 820.12, a 836.92, c 841.34
NLP Document Classification, oBERT base uncased on IMDB, Synchronous Single-Stream: b 69.42, d 71.92, a 71.94, c 72.03
NLP Text Classification, BERT base uncased SST2, Sparse INT8, Asynchronous Multi-Stream: b 32.16, d 32.31, c 32.34, a 32.60
NLP Text Classification, BERT base uncased SST2, Sparse INT8, Synchronous Single-Stream: d 7.4402, c 7.5689, b 7.6192, a 7.7066
ResNet-50, Baseline, Asynchronous Multi-Stream: b 70.94, c 71.81, a 72.42, d 72.46
ResNet-50, Baseline, Synchronous Single-Stream: b 9.3424, c 9.4215, d 9.6084, a 9.6462
ResNet-50, Sparse INT8, Asynchronous Multi-Stream: b 12.11, a 12.49, c 12.52, d 12.55
ResNet-50, Sparse INT8, Synchronous Single-Stream: a 1.5897, d 1.6090, b 1.6656, c 1.6887
CV Detection, YOLOv5s COCO, Asynchronous Multi-Stream: b 154.30, d 158.11, a 158.15, c 160.28
CV Detection, YOLOv5s COCO, Synchronous Single-Stream: b 11.33, c 11.36, d 11.42, a 11.56
BERT-Large, NLP Question Answering, Asynchronous Multi-Stream: b 648.52, c 649.76, a 649.93, d 650.69
BERT-Large, NLP Question Answering, Synchronous Single-Stream: c 84.46, d 85.16, b 86.15, a 88.47
CV Classification, ResNet-50 ImageNet, Asynchronous Multi-Stream: c 70.74, d 71.05, a 71.17, b 71.27
CV Classification, ResNet-50 ImageNet, Synchronous Single-Stream: c 9.3277, b 9.3770, a 9.3853, d 9.5802
CV Detection, YOLOv5s COCO, Sparse INT8, Asynchronous Multi-Stream: c 147.49, a 147.96, d 148.28, b 150.23
CV Detection, YOLOv5s COCO, Sparse INT8, Synchronous Single-Stream: c 10.96, a 10.96, b 11.13, d 11.15
NLP Text Classification, DistilBERT mnli, Asynchronous Multi-Stream: c 94.68, b 94.88, a 95.02, d 95.45
NLP Text Classification, DistilBERT mnli, Synchronous Single-Stream: b 16.94, c 17.48, d 17.58, a 17.60
CV Segmentation, 90% Pruned YOLACT Pruned, Asynchronous Multi-Stream: b 726.25, a 736.77, d 737.85, c 737.98
CV Segmentation, 90% Pruned YOLACT Pruned, Synchronous Single-Stream: b 55.56, d 58.50, a 58.55, c 58.85
BERT-Large, NLP Question Answering, Sparse INT8, Asynchronous Multi-Stream: b 62.81, c 63.13, d 63.30, a 63.48
BERT-Large, NLP Question Answering, Sparse INT8, Synchronous Single-Stream: a 15.45, b 15.52, c 15.59, d 15.60
NLP Token Classification, BERT base uncased conll2003, Asynchronous Multi-Stream: c 837.14, b 837.46, a 841.43, d 845.44
NLP Token Classification, BERT base uncased conll2003, Synchronous Single-Stream: b 69.88, c 70.90, a 71.04, d 71.42
Apache Spark TPC-H 3.5, query time in seconds (lower is better); runs listed fastest to slowest. Per-run standard errors (SE ±) were also reported.
Scale Factor: 1 - Geometric Mean Of All Queries: a 2.20328734, d 2.22356446, c 2.23705911, b 2.24531415 (per-run MIN/MAX also reported: 0.99/7.56, 0.97/7.7, 1.05/7.66, 0.99/7.7)
Scale Factor: 1 - Q01: a 4.69511843, b 4.73092079, c 4.84034109, d 4.84622987
Scale Factor: 1 - Q02: a 2.19916511, b 2.20726132, d 2.27512868, c 2.27846766
Scale Factor: 1 - Q03: d 3.15447322, a 3.15890259, b 3.23912430, c 3.34556897
Scale Factor: 1 - Q04: d 2.85856040, c 2.94428595, a 2.96026391, b 3.09257221
Scale Factor: 1 - Q05: d 3.30995369, b 3.37768650, a 3.46156806, c 3.52418653
Scale Factor: 1 - Q06: c 0.71082556, b 0.73223764, d 0.74262398, a 0.75870752
Scale Factor: 1 - Q07: a 3.16296500, c 3.18430217, d 3.25330043, b 3.30348134
Scale Factor: 1 - Q08: d 2.22529737, a 2.29091269, b 2.32780766, c 2.35779921
Scale Factor: 1 - Q09: c 4.32967218, a 4.40090740, b 4.51510048, d 4.55872504
Scale Factor: 1 - Q10: a 2.83789939, c 3.02914357, b 3.07959628, d 3.14210590
Scale Factor: 1 - Q11: c 1.28974267, d 1.29190469, a 1.30849856, b 1.36447799
Scale Factor: 1 - Q12: b 2.02941990, c 2.04039538, d 2.06954416, a 2.14987269
Scale Factor: 1 - Q13: d 1.39225551, c 1.39991681, a 1.40058664, b 1.45866323
Apache Spark TPC-H Scale Factor: 1 - Q14 OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 1 - Q14 b d a c 0.4136 0.8272 1.2408 1.6544 2.068 SE +/- 0.02463739, N = 3 SE +/- 0.03540160, N = 4 SE +/- 0.01014717, N = 3 1.78157055 1.83219139 1.83231536 1.83836599
Apache Spark TPC-H Scale Factor: 1 - Q15 OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 1 - Q15 a c b d 0.5124 1.0248 1.5372 2.0496 2.562 SE +/- 0.04340326, N = 4 SE +/- 0.20579960, N = 3 SE +/- 0.09001964, N = 3 1.99491587 2.14363774 2.17460775 2.27715731
Apache Spark TPC-H Scale Factor: 1 - Q16 OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 1 - Q16 a b c d 0.3463 0.6926 1.0389 1.3852 1.7315 SE +/- 0.01144007, N = 4 SE +/- 0.05940937, N = 3 SE +/- 0.04799671, N = 3 1.42731291 1.43939853 1.49448760 1.53928820
Apache Spark TPC-H Scale Factor: 1 - Q17 OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 1 - Q17 d c a b 0.6126 1.2252 1.8378 2.4504 3.063 SE +/- 0.03392729, N = 3 SE +/- 0.01865956, N = 3 SE +/- 0.02035298, N = 4 2.57773050 2.58527072 2.66192222 2.72264194
Apache Spark TPC-H Scale Factor: 1 - Q18 OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 1 - Q18 d c a b 0.9103 1.8206 2.7309 3.6412 4.5515 SE +/- 0.13038086, N = 3 SE +/- 0.10019808, N = 3 SE +/- 0.10303949, N = 4 3.86921136 3.99324067 4.00372249 4.04589510
Apache Spark TPC-H Scale Factor: 1 - Q19 OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 1 - Q19 d a b c 0.2719 0.5438 0.8157 1.0876 1.3595 SE +/- 0.01874751, N = 3 SE +/- 0.02636608, N = 4 SE +/- 0.07878807, N = 3 0.99435820 1.02847038 1.17394590 1.20837911
Apache Spark TPC-H Scale Factor: 1 - Q20 OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 1 - Q20 b d c a 0.62 1.24 1.86 2.48 3.1 SE +/- 0.14206756, N = 3 SE +/- 0.13759046, N = 3 SE +/- 0.09583966, N = 4 2.58762598 2.74797662 2.75284823 2.75555176
Apache Spark TPC-H Scale Factor: 1 - Q21 OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 1 - Q21 a d c b 2 4 6 8 10 SE +/- 0.08062589, N = 4 SE +/- 0.17170189, N = 3 SE +/- 0.11898636, N = 3 7.39505470 7.40773551 7.46197589 7.70369244
Apache Spark TPC-H Scale Factor: 1 - Q22 OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark TPC-H 3.5 Scale Factor: 1 - Q22 b c d a 0.2441 0.4882 0.7323 0.9764 1.2205 SE +/- 0.00495537, N = 3 SE +/- 0.01300436, N = 3 SE +/- 0.00972834, N = 4 0.98886323 1.05880324 1.06113847 1.08502284
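The "Geometric Mean Of All Queries" composite above is, as the label says, a geometric mean of the per-query timings, which keeps one unusually slow query (e.g. Q21) from dominating the composite the way an arithmetic mean would. A minimal sketch of that calculation follows; the sample timings are hypothetical, and note that the Phoronix Test Suite derives its composite from the individual trial runs, so recomputing from the per-query averages shown in the tables need not reproduce the reported figure exactly.

```python
import math

def geometric_mean(times):
    """Geometric mean of positive timings: exp(mean(ln t)).
    Equivalent to the n-th root of the product of n values."""
    return math.exp(sum(math.log(t) for t in times) / len(times))

# Hypothetical per-query timings in seconds (not taken from the tables above)
times = [2.0, 8.0]
print(geometric_mean(times))  # 4.0, vs. an arithmetic mean of 5.0
```

Working in log space, as above, avoids the overflow/underflow that multiplying many values together can cause.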
Apache Spark TPC-H 3.5 - Scale Factor: 10 (Seconds, Fewer Is Better) - OpenBenchmarking.org

  Test                             a            b            c            d
  Geometric Mean Of All Queries    8.17090667   7.89399455   8.11142575   8.24567273
  Q01                              8.67833742   8.70151806   8.66681448   8.73231697
  Q02                              4.85099522   4.88495016   4.92897081   4.79437430
  Q03                              10.24        10.32        10.26        10.15
  Q04                              8.63682970   8.70809078   8.50457668   8.33958340
  Q05                              12.48        12.92        12.66        12.66
  Q06                              4.40131362   4.62162924   4.30036068   4.36710962
  Q07                              10.63        10.53        10.62        10.75
  Q08                              10.99        10.67        10.85        10.82
  Q09                              14.90        14.18        14.82        14.92
  Q10                              10.27        10.18        10.42        10.36
  Q11                              4.54931625   4.26761341   4.38963429   4.48057460
  Q12                              8.03465001   8.10859299   8.17116038   8.09419775
  Q13                              4.45708307   4.03698015   4.09777006   4.61340586
  Q14                              5.79615275   5.82840872   5.86177413   5.73522472
  Q15                              5.83709431   5.68089533   5.81021436   5.85565599
  Q16                              4.32050626   4.15591764   4.17572975   4.34601260
  Q17                              12.45        12.44        12.42        12.40
  Q18                              14.86        14.88        16.11        14.91
  Q19                              7.21627538   6.04250240   6.54328155   8.02795347
  Q20                              8.84160964   8.87663841   8.91217136   8.82452583
  Q21                              29.31        28.80        29.03        28.91
  Q22                              3.53586213   3.40110779   3.69966158   3.59682607

  Geometric mean chart range (MIN-MAX across runs): a 3.42-29.75, b 3.40-28.80, c 3.43-29.17, d 3.57-29.28.
Apache Spark TPC-H 3.5 - Scale Factor: 50 (Seconds, Fewer Is Better) - OpenBenchmarking.org

  Test                             a            b            c            d
  Geometric Mean Of All Queries    27.98        27.79        28.11        28.12
  Q01                              26.02        25.99        26.00        26.12
  Q02                              10.48        10.45        10.56        10.44
  Q03                              36.61        36.03        37.06        35.99
  Q04                              30.58        30.27        30.22        30.56
  Q05                              41.17        43.25        39.70        38.99
  Q06                              20.38        20.30        20.34        20.38
  Q07                              35.63        35.34        35.46        35.40
  Q08                              38.36        37.97        37.81        38.38
  Q09                              46.90        46.32        47.11        47.00
  Q10                              35.93        36.27        36.82        35.93
  Q11                              9.06239065   8.69320774   8.75260639   9.01768939
  Q12                              30.43        30.41        30.89        30.04
  Q13                              12.34        12.13        13.14        13.68
  Q14                              25.49        25.40        25.46        25.42
  Q15                              23.05        22.89        22.95        23.00
  Q16                              10.59        10.53        10.82        10.91
  Q17                              55.61        55.28        55.33        55.42
  Q18                              62.51        61.96        62.30        62.73
  Q19                              24.72        25.08        24.89        25.03
  Q20                              30.71        30.54        30.79        30.52
  Q21                              130.87       131.46       131.42       131.01
  Q22                              10.67        10.53        10.64        10.27

  Geometric mean chart range (MIN-MAX across runs): a 8.84-131.32, b 8.69-131.46, c 8.58-132.82, d 8.73-132.42.
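The "SE +/-" figures scattered through the raw export are the standard error of the mean over the N = 3 or N = 4 benchmark runs, i.e. the Bessel-corrected sample standard deviation divided by sqrt(N). A minimal sketch of that calculation, using hypothetical run times rather than data from this result file:

```python
import math

def standard_error(samples):
    """Standard error of the mean: sample standard deviation
    (with Bessel's n-1 correction) divided by sqrt(n)."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance / n)

# Three hypothetical run times for one query, in seconds
runs = [2.18, 2.21, 2.20]
print(standard_error(runs))
```

A small SE relative to the mean (as in most charts here) indicates the run-to-run spread is far smaller than the differences being compared.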
Phoronix Test Suite v10.8.5