deepspaarse 17: AMD Ryzen 9 7950X 16-Core testing with an ASUS ROG STRIX X670E-E GAMING WIFI (1905 BIOS) and NVIDIA GeForce RTX 3080 10GB on Ubuntu 23.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2403151-PTS-DEEPSPAA58&sor&grs
deepspaarse 17 - Test System Configuration (identical for runs a, b, c, d, e):
  Processor: AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores / 32 Threads)
  Motherboard: ASUS ROG STRIX X670E-E GAMING WIFI (1905 BIOS)
  Chipset: AMD Device 14d8
  Memory: 2 x 16GB DRAM-6000MT/s G Skill F5-6000J3038F16G
  Disk: 2000GB Samsung SSD 980 PRO 2TB + Western Digital WD_BLACK SN850X 2000GB
  Graphics: NVIDIA GeForce RTX 3080 10GB
  Audio: NVIDIA GA102 HD Audio
  Monitor: DELL U2723QE
  Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS: Ubuntu 23.10
  Kernel: 6.7.0-060700-generic (x86_64)
  Desktop: GNOME Shell 45.2
  Display Server: X Server 1.21.1.7
  Display Driver: NVIDIA 550.54.14
  OpenGL: 4.6.0
  OpenCL: OpenCL 3.0 CUDA 12.4.89
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 3840x2160

OpenBenchmarking.org notes:
  Kernel Details: Transparent Huge Pages: madvise
  Processor Details: Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance); CPU Microcode: 0xa601206
  Python Details: Python 3.11.6
  Security Details: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of Safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
deepspaarse 17 - Neural Magic DeepSparse 1.7 detailed results for runs a, b, c, d and e (items/sec: more is better; ms/batch: fewer is better):

Model - Scenario (unit) | a | b | c | d | e
NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream (items/sec) | 117.6643 | 116.3492 | 116.5639 | 115.2566 | 117.2079
NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream (ms/batch) | 8.4942 | 8.5902 | 8.5743 | 8.6715 | 8.5269
NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (ms/batch) | 369.7269 | 376.3639 | 375.8659 | 376.5224 | 374.9653
ResNet-50, Sparse INT8 - Synchronous Single-Stream (ms/batch) | 0.6937 | 0.6932 | 0.6899 | 0.7019 | 0.6957
ResNet-50, Sparse INT8 - Synchronous Single-Stream (items/sec) | 1437.5523 | 1438.271 | 1444.8333 | 1420.7445 | 1433.2426
NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream (items/sec) | 21.5986 | 21.2505 | 21.2799 | 21.2445 | 21.3119
ResNet-50, Sparse INT8 - Asynchronous Multi-Stream (ms/batch) | 3.3518 | 3.3559 | 3.3425 | 3.3488 | 3.3800
ResNet-50, Sparse INT8 - Asynchronous Multi-Stream (items/sec) | 2379.9513 | 2376.5719 | 2385.769 | 2381.2716 | 2359.6605
CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream (ms/batch) | 28.6159 | 28.5076 | 28.4559 | 28.5083 | 28.3070
CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream (items/sec) | 279.4430 | 280.4549 | 280.98 | 280.5104 | 282.4848
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream (items/sec) | 929.8526 | 928.1657 | 927.609 | 920.2774 | 926.3913
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream (ms/batch) | 8.5925 | 8.6079 | 8.6128 | 8.6817 | 8.6244
CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream (items/sec) | 194.4294 | 195.4038 | 196.3357 | 196.3727 | 195.9355
CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream (ms/batch) | 5.1378 | 5.112 | 5.088 | 5.0871 | 5.0981
NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream (items/sec) | 191.7023 | 192.506 | 192.6076 | 192.3054 | 190.7916
NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream (ms/batch) | 41.7072 | 41.53 | 41.5244 | 41.5819 | 41.9183
NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream (items/sec) | 21.5097 | 21.4734 | 21.3153 | 21.4523 | 21.4247
NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream (ms/batch) | 371.5702 | 370.8275 | 374.0775 | 371.948 | 372.4923
NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream (items/sec) | 18.8124 | 18.6562 | 18.6913 | 18.7156 | 18.6648
NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream (ms/batch) | 53.1496 | 53.5945 | 53.494 | 53.4247 | 53.5694
Llama2 Chat 7b Quantized - Asynchronous Multi-Stream (ms/batch) | 1900.4795 | 1894.2203 | 1905.4156 | 1908.6078 | 1900.5641
Llama2 Chat 7b Quantized - Asynchronous Multi-Stream (items/sec) | 4.0959 | 4.1098 | 4.0853 | 4.0791 | 4.0962
BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream (ms/batch) | 18.5738 | 18.5138 | 18.5539 | 18.5638 | 18.6294
BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream (items/sec) | 430.4254 | 431.8114 | 430.8886 | 430.6637 | 429.1529
NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream (ms/batch) | 53.7610 | 53.9298 | 53.7145 | 53.6024 | 53.7068
NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream (items/sec) | 18.5981 | 18.5403 | 18.6145 | 18.6535 | 18.6168
BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream (items/sec) | 112.8655 | 112.649 | 112.2523 | 112.5691 | 112.5344
BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream (ms/batch) | 8.8521 | 8.8696 | 8.9004 | 8.8748 | 8.8780
CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream (items/sec) | 126.3399 | 126.6763 | 127.0251 | 126.4112 | 126.7258
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream (items/sec) | 300.8741 | 301.2353 | 300.1138 | 301.7005 | 301.1689
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream (ms/batch) | 3.3206 | 3.3168 | 3.3287 | 3.3113 | 3.3176
CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream (ms/batch) | 63.2681 | 63.1227 | 62.9466 | 63.2608 | 63.0966
CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream (items/sec) | 99.0032 | 99.497 | 99.4073 | 99.4984 | 99.1040
CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream (ms/batch) | 10.0959 | 10.046 | 10.0546 | 10.046 | 10.0857
CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream (ms/batch) | 31.6588 | 31.5706 | 31.5743 | 31.5308 | 31.6738
CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream (ms/batch) | 212.4479 | 211.7512 | 211.819 | 212.0904 | 212.7103
CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream (items/sec) | 37.6522 | 37.7759 | 37.7639 | 37.7156 | 37.6057
CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream (items/sec) | 31.5733 | 31.6614 | 31.6577 | 31.7017 | 31.5591
ResNet-50, Baseline - Synchronous Single-Stream (items/sec) | 195.2905 | 195.9601 | 195.5005 | 196.0237 | 195.7473
ResNet-50, Baseline - Synchronous Single-Stream (ms/batch) | 5.1149 | 5.0976 | 5.1095 | 5.0963 | 5.1030
ResNet-50, Baseline - Asynchronous Multi-Stream (items/sec) | 280.6164 | 280.4277 | 281.0267 | 280.4486 | 280.5353
ResNet-50, Baseline - Asynchronous Multi-Stream (ms/batch) | 28.4928 | 28.5051 | 28.458 | 28.5145 | 28.5061
Llama2 Chat 7b Quantized - Synchronous Single-Stream (ms/batch) | 131.3760 | 131.409 | 131.4587 | 131.5957 | 131.5494
Llama2 Chat 7b Quantized - Synchronous Single-Stream (items/sec) | 7.6108 | 7.6089 | 7.606 | 7.5982 | 7.6006
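For the synchronous single-stream scenarios in the table above, the two reported metrics are two views of the same measurement: requests are issued one at a time, so throughput in items/sec is approximately the reciprocal of the per-batch latency, items/sec ≈ 1000 / ms-per-batch. A minimal Python sketch illustrates the check; the values are copied from the "a" column of the table, and the 2% tolerance is an assumption for illustration, not part of the original report.

```python
# Sanity check: single-stream throughput (items/sec) should roughly equal the
# reciprocal of single-stream latency (ms/batch). Values are from run "a" above.
single_stream_a = {
    # model: (ms_per_batch, items_per_sec)
    "NLP Text Classification, DistilBERT mnli": (8.4942, 117.6643),
    "ResNet-50, Sparse INT8": (0.6937, 1437.5523),
    "BERT-Large, NLP Question Answering, Sparse INT8": (8.8521, 112.8655),
    "Llama2 Chat 7b Quantized": (131.3760, 7.6108),
}

for name, (ms_per_batch, items_per_sec) in single_stream_a.items():
    implied = 1000.0 / ms_per_batch  # throughput implied by the latency
    rel_err = abs(implied - items_per_sec) / items_per_sec
    print(f"{name}: reported {items_per_sec:.2f} items/sec, "
          f"implied {implied:.2f} items/sec ({rel_err:.2%} difference)")
    assert rel_err < 0.02, "latency and throughput disagree by more than 2%"
```

The asynchronous multi-stream results do not follow this reciprocal relationship, since several requests are in flight concurrently and throughput scales with the number of streams rather than with single-request latency.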
Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 117.66, e: 117.21, c: 116.56, b: 116.35, d: 115.26. SE +/- 0.36 and +/- 0.10, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 8.4942, e: 8.5269, c: 8.5743, b: 8.5902, d: 8.6715. SE +/- 0.0262 and +/- 0.0075, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 369.73, e: 374.97, c: 375.87, b: 376.36, d: 376.52. SE +/- 1.41 and +/- 0.28, N = 3.
Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): c: 0.6899, b: 0.6932, a: 0.6937, e: 0.6957, d: 0.7019. SE +/- 0.0010 and +/- 0.0030, N = 3.
Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): c: 1444.83, b: 1438.27, a: 1437.55, e: 1433.24, d: 1420.74. SE +/- 2.00 and +/- 6.10, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 21.60, e: 21.31, c: 21.28, b: 21.25, d: 21.24. SE +/- 0.10 and +/- 0.03, N = 3.
Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): c: 3.3425, d: 3.3488, a: 3.3518, b: 3.3559, e: 3.3800. SE +/- 0.0036 and +/- 0.0197, N = 3.
Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): c: 2385.77, d: 2381.27, a: 2379.95, b: 2376.57, e: 2359.66. SE +/- 2.33 and +/- 13.77, N = 3.
Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): e: 28.31, c: 28.46, b: 28.51, d: 28.51, a: 28.62. SE +/- 0.08 and +/- 0.04, N = 3.
Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): e: 282.48, c: 280.98, d: 280.51, b: 280.45, a: 279.44. SE +/- 0.79 and +/- 0.35, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 929.85, b: 928.17, c: 927.61, e: 926.39, d: 920.28. SE +/- 1.67 and +/- 1.13, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a: 8.5925, b: 8.6079, c: 8.6128, e: 8.6244, d: 8.6817. SE +/- 0.0153 and +/- 0.0104, N = 3.
Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better): d: 196.37, c: 196.34, e: 195.94, b: 195.40, a: 194.43. SE +/- 0.48 and +/- 0.40, N = 3.
Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): d: 5.0871, c: 5.0880, e: 5.0981, b: 5.1120, a: 5.1378. SE +/- 0.0126 and +/- 0.0107, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): c: 192.61, b: 192.51, d: 192.31, a: 191.70, e: 190.79. SE +/- 0.07 and +/- 0.41, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): c: 41.52, b: 41.53, d: 41.58, a: 41.71, e: 41.92. SE +/- 0.02 and +/- 0.09, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a: 21.51, b: 21.47, d: 21.45, e: 21.42, c: 21.32. SE +/- 0.08 and +/- 0.08, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): b: 370.83, a: 371.57, d: 371.95, e: 372.49, c: 374.08. SE +/- 1.38 and +/- 1.05, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 18.81, d: 18.72, c: 18.69, e: 18.66, b: 18.66. SE +/- 0.03 and +/- 0.02, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 53.15, d: 53.42, c: 53.49, e: 53.57, b: 53.59. SE +/- 0.08 and +/- 0.06, N = 3.
Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): b: 1894.22, a: 1900.48, e: 1900.56, c: 1905.42, d: 1908.61. SE +/- 0.71 and +/- 1.74, N = 3.
Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): b: 4.1098, e: 4.0962, a: 4.0959, c: 4.0853, d: 4.0791. SE +/- 0.0043 and +/- 0.0014, N = 3.
Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): b: 18.51, c: 18.55, d: 18.56, a: 18.57, e: 18.63. SE +/- 0.01 and +/- 0.03, N = 3.
Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): b: 431.81, c: 430.89, d: 430.66, a: 430.43, e: 429.15. SE +/- 0.23 and +/- 0.70, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): d: 53.60, e: 53.71, c: 53.71, a: 53.76, b: 53.93. SE +/- 0.08 and +/- 0.02, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): d: 18.65, e: 18.62, c: 18.61, a: 18.60, b: 18.54. SE +/- 0.03 and +/- 0.01, N = 3.
Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 112.87, b: 112.65, d: 112.57, e: 112.53, c: 112.25. SE +/- 0.54 and +/- 0.18, N = 3.
Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 8.8521, b: 8.8696, d: 8.8748, e: 8.8780, c: 8.9004. SE +/- 0.0421 and +/- 0.0141, N = 3.
Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): c: 127.03, e: 126.73, b: 126.68, d: 126.41, a: 126.34. SE +/- 0.88 and +/- 0.41, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): d: 301.70, b: 301.24, e: 301.17, a: 300.87, c: 300.11. SE +/- 1.34 and +/- 0.77, N = 3.
Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): d: 3.3113, b: 3.3168, e: 3.3176, a: 3.3206, c: 3.3287. SE +/- 0.0148 and +/- 0.0086, N = 3.
Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): c: 62.95, e: 63.10, b: 63.12, d: 63.26, a: 63.27. SE +/- 0.43 and +/- 0.21, N = 3.
Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): d: 99.50, b: 99.50, c: 99.41, e: 99.10, a: 99.00. SE +/- 0.14 and +/- 0.04, N = 3.
Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): b: 10.05, d: 10.05, c: 10.05, e: 10.09, a: 10.10. SE +/- 0.01 and +/- 0.00, N = 3.
Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): d: 31.53, b: 31.57, c: 31.57, a: 31.66, e: 31.67. SE +/- 0.01 and +/- 0.02, N = 3.
Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): b: 211.75, c: 211.82, d: 212.09, a: 212.45, e: 212.71. SE +/- 0.21 and +/- 0.13, N = 3.
Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): b: 37.78, c: 37.76, d: 37.72, a: 37.65, e: 37.61. SE +/- 0.04 and +/- 0.02, N = 3.
Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, More Is Better): d: 31.70, b: 31.66, c: 31.66, a: 31.57, e: 31.56. SE +/- 0.01 and +/- 0.02, N = 3.
Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (items/sec, More Is Better): d: 196.02, b: 195.96, e: 195.75, c: 195.50, a: 195.29. SE +/- 0.46 and +/- 0.13, N = 3.
Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): d: 5.0963, b: 5.0976, e: 5.1030, c: 5.1095, a: 5.1149. SE +/- 0.0121 and +/- 0.0035, N = 3.
Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): c: 281.03, a: 280.62, e: 280.54, d: 280.45, b: 280.43. SE +/- 0.32 and +/- 0.54, N = 3.
Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): c: 28.46, a: 28.49, b: 28.51, e: 28.51, d: 28.51. SE +/- 0.03 and +/- 0.05, N = 3.
Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a: 131.38, b: 131.41, c: 131.46, e: 131.55, d: 131.60. SE +/- 0.09 and +/- 0.05, N = 3.
Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a: 7.6108, b: 7.6089, c: 7.6060, e: 7.6006, d: 7.5982. SE +/- 0.0050 and +/- 0.0028, N = 3.
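The "SE +/- x, N = 3" annotations on the result lines above are standard errors of the mean over three benchmark runs. A minimal Python sketch shows how such a figure is derived, assuming the conventional definition (sample standard deviation divided by the square root of the run count); the three per-run throughputs used here are invented for illustration and are not taken from this report.

```python
import statistics

# Hypothetical per-run throughputs for one configuration of one test
# (the report averages N = 3 runs per result; these numbers are made up).
runs = [117.3, 117.7, 118.0]  # items/sec

mean = statistics.mean(runs)
# Standard error of the mean: sample standard deviation / sqrt(N).
se = statistics.stdev(runs) / len(runs) ** 0.5

print(f"mean = {mean:.2f} items/sec, SE +/- {se:.2f}, N = {len(runs)}")
```

statistics.stdev computes the sample (N-1) standard deviation, which is the appropriate choice for the small run counts (N = 3) used in these results.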
Phoronix Test Suite v10.8.5