deepspaarse 17: AMD Ryzen 9 7950X 16-Core testing with an ASUS ROG STRIX X670E-E GAMING WIFI (1905 BIOS) and NVIDIA GeForce RTX 3080 10GB on Ubuntu 23.10 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2403151-PTS-DEEPSPAA58&sor&gru.
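For readers who want to rerun this comparison locally, the Phoronix Test Suite can fetch a public OpenBenchmarking.org result by its ID (taken from the URL above) and benchmark the same test selection side by side. The snippet below is a minimal sketch, assuming phoronix-test-suite is already installed and on the PATH; the interactive prompts and batch-mode options depend on the local PTS configuration.

```python
import subprocess

# Result ID from the OpenBenchmarking.org URL above; passing it to
# `phoronix-test-suite benchmark` asks PTS to download the result file
# and run the same tests for a side-by-side comparison.
RESULT_ID = "2403151-PTS-DEEPSPAA58"

# Sketch only: this simply shells out to the PTS CLI. Interactive prompts
# (save name, test options, etc.) still appear unless pre-configured via
# `phoronix-test-suite batch-setup`.
subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)
```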
deepspaarse 17 - Test System Configuration (identical for runs a, b, c, d, e):

  Processor:         AMD Ryzen 9 7950X 16-Core @ 5.88GHz (16 Cores / 32 Threads)
  Motherboard:       ASUS ROG STRIX X670E-E GAMING WIFI (1905 BIOS)
  Chipset:           AMD Device 14d8
  Memory:            2 x 16GB DRAM-6000MT/s G Skill F5-6000J3038F16G
  Disk:              2000GB Samsung SSD 980 PRO 2TB + Western Digital WD_BLACK SN850X 2000GB
  Graphics:          NVIDIA GeForce RTX 3080 10GB
  Audio:             NVIDIA GA102 HD Audio
  Monitor:           DELL U2723QE
  Network:           Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS:                Ubuntu 23.10
  Kernel:            6.7.0-060700-generic (x86_64)
  Desktop:           GNOME Shell 45.2
  Display Server:    X Server 1.21.1.7
  Display Driver:    NVIDIA 550.54.14
  OpenGL:            4.6.0
  OpenCL:            OpenCL 3.0 CUDA 12.4.89
  Compiler:          GCC 13.2.0
  File-System:       ext4
  Screen Resolution: 3840x2160

OpenBenchmarking.org notes:
  Kernel Details    - Transparent Huge Pages: madvise
  Processor Details - Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - CPU Microcode: 0xa601206
  Python Details    - Python 3.11.6
  Security Details:
    - gather_data_sampling: Not affected
    - itlb_multihit: Not affected
    - l1tf: Not affected
    - mds: Not affected
    - meltdown: Not affected
    - mmio_stale_data: Not affected
    - retbleed: Not affected
    - spec_rstack_overflow: Mitigation of Safe RET
    - spec_store_bypass: Mitigation of SSB disabled via prctl
    - spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
    - spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected
    - srbds: Not affected
    - tsx_async_abort: Not affected
deepspaarse 17 - Result Summary (Neural Magic DeepSparse 1.7; runs a-e)

Throughput, OpenBenchmarking.org items/sec (more is better):

Model - Scenario | a | b | c | d | e
NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream | 21.5986 | 21.2505 | 21.2799 | 21.2445 | 21.3119
NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream | 18.8124 | 18.6562 | 18.6913 | 18.7156 | 18.6648
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream | 929.8526 | 928.1657 | 927.609 | 920.2774 | 926.3913
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream | 300.8741 | 301.2353 | 300.1138 | 301.7005 | 301.1689
ResNet-50, Baseline - Asynchronous Multi-Stream | 280.6164 | 280.4277 | 281.0267 | 280.4486 | 280.5353
ResNet-50, Baseline - Synchronous Single-Stream | 195.2905 | 195.9601 | 195.5005 | 196.0237 | 195.7473
ResNet-50, Sparse INT8 - Asynchronous Multi-Stream | 2379.9513 | 2376.5719 | 2385.769 | 2381.2716 | 2359.6605
ResNet-50, Sparse INT8 - Synchronous Single-Stream | 1437.5523 | 1438.271 | 1444.8333 | 1420.7445 | 1433.2426
Llama2 Chat 7b Quantized - Asynchronous Multi-Stream | 4.0959 | 4.1098 | 4.0853 | 4.0791 | 4.0962
Llama2 Chat 7b Quantized - Synchronous Single-Stream | 7.6108 | 7.6089 | 7.606 | 7.5982 | 7.6006
CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream | 279.4430 | 280.4549 | 280.98 | 280.5104 | 282.4848
CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream | 194.4294 | 195.4038 | 196.3357 | 196.3727 | 195.9355
CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream | 126.3399 | 126.6763 | 127.0251 | 126.4112 | 126.7258
CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream | 99.0032 | 99.497 | 99.4073 | 99.4984 | 99.1040
NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream | 191.7023 | 192.506 | 192.6076 | 192.3054 | 190.7916
NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream | 117.6643 | 116.3492 | 116.5639 | 115.2566 | 117.2079
CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream | 37.6522 | 37.7759 | 37.7639 | 37.7156 | 37.6057
CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream | 31.5733 | 31.6614 | 31.6577 | 31.7017 | 31.5591
BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream | 430.4254 | 431.8114 | 430.8886 | 430.6637 | 429.1529
BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream | 112.8655 | 112.649 | 112.2523 | 112.5691 | 112.5344
NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream | 21.5097 | 21.4734 | 21.3153 | 21.4523 | 21.4247
NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream | 18.5981 | 18.5403 | 18.6145 | 18.6535 | 18.6168

Latency, OpenBenchmarking.org ms/batch (fewer is better):

Model - Scenario | a | b | c | d | e
NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream | 369.7269 | 376.3639 | 375.8659 | 376.5224 | 374.9653
NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream | 53.1496 | 53.5945 | 53.494 | 53.4247 | 53.5694
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream | 8.5925 | 8.6079 | 8.6128 | 8.6817 | 8.6244
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream | 3.3206 | 3.3168 | 3.3287 | 3.3113 | 3.3176
ResNet-50, Baseline - Asynchronous Multi-Stream | 28.4928 | 28.5051 | 28.458 | 28.5145 | 28.5061
ResNet-50, Baseline - Synchronous Single-Stream | 5.1149 | 5.0976 | 5.1095 | 5.0963 | 5.1030
ResNet-50, Sparse INT8 - Asynchronous Multi-Stream | 3.3518 | 3.3559 | 3.3425 | 3.3488 | 3.3800
ResNet-50, Sparse INT8 - Synchronous Single-Stream | 0.6937 | 0.6932 | 0.6899 | 0.7019 | 0.6957
Llama2 Chat 7b Quantized - Asynchronous Multi-Stream | 1900.4795 | 1894.2203 | 1905.4156 | 1908.6078 | 1900.5641
Llama2 Chat 7b Quantized - Synchronous Single-Stream | 131.3760 | 131.409 | 131.4587 | 131.5957 | 131.5494
CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream | 28.6159 | 28.5076 | 28.4559 | 28.5083 | 28.3070
CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream | 5.1378 | 5.112 | 5.088 | 5.0871 | 5.0981
CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream | 63.2681 | 63.1227 | 62.9466 | 63.2608 | 63.0966
CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream | 10.0959 | 10.046 | 10.0546 | 10.046 | 10.0857
NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream | 41.7072 | 41.53 | 41.5244 | 41.5819 | 41.9183
NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream | 8.4942 | 8.5902 | 8.5743 | 8.6715 | 8.5269
CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream | 212.4479 | 211.7512 | 211.819 | 212.0904 | 212.7103
CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream | 31.6588 | 31.5706 | 31.5743 | 31.5308 | 31.6738
BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream | 18.5738 | 18.5138 | 18.5539 | 18.5638 | 18.6294
BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream | 8.8521 | 8.8696 | 8.9004 | 8.8748 | 8.8780
NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream | 371.5702 | 370.8275 | 374.0775 | 371.948 | 372.4923
NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream | 53.7610 | 53.9298 | 53.7145 | 53.6024 | 53.7068
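The two metrics above are two views of the same runs: items/sec is throughput and ms/batch is the time spent per processed batch. As an illustration of what a synchronous single-stream measurement looks like, here is a minimal timing sketch against the DeepSparse Python engine. This is not the harness PTS actually drives (that is the deepsparse.benchmark tool shipped with DeepSparse); the model path, input shape, and iteration counts below are placeholders.

```python
import time
import numpy as np
from deepsparse import compile_model  # DeepSparse 1.x Python API

# Placeholder model: any ONNX file or SparseZoo stub could go here; the exact
# stubs used by the PTS deepsparse profile are not listed in this report.
MODEL_PATH = "model.onnx"   # assumption / placeholder
BATCH_SIZE = 1              # synchronous single-stream processes one batch at a time

engine = compile_model(MODEL_PATH, batch_size=BATCH_SIZE)

# Random input shaped for an image model (placeholder shape).
inputs = [np.random.rand(BATCH_SIZE, 3, 224, 224).astype(np.float32)]

# Warm up, then time a fixed number of iterations.
for _ in range(10):
    engine.run(inputs)

iterations = 100
start = time.perf_counter()
for _ in range(iterations):
    engine.run(inputs)
elapsed = time.perf_counter() - start

print(f"items/sec: {iterations * BATCH_SIZE / elapsed:.2f}")
print(f"ms/batch:  {1000 * elapsed / iterations:.4f}")
```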
Neural Magic DeepSparse 1.7 - Throughput results (OpenBenchmarking.org items/sec, more is better; runs listed best to worst; two SE figures were reported per chart, N = 3):

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: a 21.60, e 21.31, c 21.28, b 21.25, d 21.24 (SE +/- 0.10 / 0.03)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream: a 18.81, d 18.72, c 18.69, e 18.66, b 18.66 (SE +/- 0.03 / 0.02)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream: a 929.85, b 928.17, c 927.61, e 926.39, d 920.28 (SE +/- 1.67 / 1.13)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream: d 301.70, b 301.24, e 301.17, a 300.87, c 300.11 (SE +/- 1.34 / 0.77)
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream: c 281.03, a 280.62, e 280.54, d 280.45, b 280.43 (SE +/- 0.32 / 0.54)
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream: d 196.02, b 195.96, e 195.75, c 195.50, a 195.29 (SE +/- 0.46 / 0.13)
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream: c 2385.77, d 2381.27, a 2379.95, b 2376.57, e 2359.66 (SE +/- 2.33 / 13.77)
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream: c 1444.83, b 1438.27, a 1437.55, e 1433.24, d 1420.74 (SE +/- 2.00 / 6.10)
Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream: b 4.1098, e 4.0962, a 4.0959, c 4.0853, d 4.0791 (SE +/- 0.0043 / 0.0014)
Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream: a 7.6108, b 7.6089, c 7.6060, e 7.6006, d 7.5982 (SE +/- 0.0050 / 0.0028)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream: e 282.48, c 280.98, d 280.51, b 280.45, a 279.44 (SE +/- 0.79 / 0.35)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream: d 196.37, c 196.34, e 195.94, b 195.40, a 194.43 (SE +/- 0.48 / 0.40)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream: c 127.03, e 126.73, b 126.68, d 126.41, a 126.34 (SE +/- 0.88 / 0.41)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream: d 99.50, b 99.50, c 99.41, e 99.10, a 99.00 (SE +/- 0.14 / 0.04)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream: c 192.61, b 192.51, d 192.31, a 191.70, e 190.79 (SE +/- 0.07 / 0.41)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream: a 117.66, e 117.21, c 116.56, b 116.35, d 115.26 (SE +/- 0.36 / 0.10)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream: b 37.78, c 37.76, d 37.72, a 37.65, e 37.61 (SE +/- 0.04 / 0.02)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream: d 31.70, b 31.66, c 31.66, a 31.57, e 31.56 (SE +/- 0.01 / 0.02)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream: b 431.81, c 430.89, d 430.66, a 430.43, e 429.15 (SE +/- 0.23 / 0.70)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream: a 112.87, b 112.65, d 112.57, e 112.53, c 112.25 (SE +/- 0.54 / 0.18)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: a 21.51, b 21.47, d 21.45, e 21.42, c 21.32 (SE +/- 0.08 / 0.08)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream: d 18.65, e 18.62, c 18.61, a 18.60, b 18.54 (SE +/- 0.03 / 0.01)
Neural Magic DeepSparse 1.7 - Latency results (OpenBenchmarking.org ms/batch, fewer is better; runs listed best to worst; two SE figures were reported per chart, N = 3):

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream: a 369.73, e 374.97, c 375.87, b 376.36, d 376.52 (SE +/- 1.41 / 0.28)
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream: a 53.15, d 53.42, c 53.49, e 53.57, b 53.59 (SE +/- 0.08 / 0.06)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream: a 8.5925, b 8.6079, c 8.6128, e 8.6244, d 8.6817 (SE +/- 0.0153 / 0.0104)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream: d 3.3113, b 3.3168, e 3.3176, a 3.3206, c 3.3287 (SE +/- 0.0148 / 0.0086)
Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream: c 28.46, a 28.49, b 28.51, e 28.51, d 28.51 (SE +/- 0.03 / 0.05)
Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream: d 5.0963, b 5.0976, e 5.1030, c 5.1095, a 5.1149 (SE +/- 0.0121 / 0.0035)
Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream: c 3.3425, d 3.3488, a 3.3518, b 3.3559, e 3.3800 (SE +/- 0.0036 / 0.0197)
Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream: c 0.6899, b 0.6932, a 0.6937, e 0.6957, d 0.7019 (SE +/- 0.0010 / 0.0030)
Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream: b 1894.22, a 1900.48, e 1900.56, c 1905.42, d 1908.61 (SE +/- 0.71 / 1.74)
Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream: a 131.38, b 131.41, c 131.46, e 131.55, d 131.60 (SE +/- 0.09 / 0.05)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream: e 28.31, c 28.46, b 28.51, d 28.51, a 28.62 (SE +/- 0.08 / 0.04)
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream: d 5.0871, c 5.0880, e 5.0981, b 5.1120, a 5.1378 (SE +/- 0.0126 / 0.0107)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream: c 62.95, e 63.10, b 63.12, d 63.26, a 63.27 (SE +/- 0.43 / 0.21)
Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream: b 10.05, d 10.05, c 10.05, e 10.09, a 10.10 (SE +/- 0.01 / 0.00)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream: c 41.52, b 41.53, d 41.58, a 41.71, e 41.92 (SE +/- 0.02 / 0.09)
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream: a 8.4942, e 8.5269, c 8.5743, b 8.5902, d 8.6715 (SE +/- 0.0262 / 0.0075)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream: b 211.75, c 211.82, d 212.09, a 212.45, e 212.71 (SE +/- 0.21 / 0.13)
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream: d 31.53, b 31.57, c 31.57, a 31.66, e 31.67 (SE +/- 0.01 / 0.02)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream: b 18.51, c 18.55, d 18.56, a 18.57, e 18.63 (SE +/- 0.01 / 0.03)
Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream: a 8.8521, b 8.8696, d 8.8748, e 8.8780, c 8.9004 (SE +/- 0.0421 / 0.0141)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream: b 370.83, a 371.57, d 371.95, e 372.49, c 374.08 (SE +/- 1.38 / 1.05)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream: d 53.60, e 53.71, c 53.71, a 53.76, b 53.93 (SE +/- 0.08 / 0.02)
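A quick consistency check on the numbers above: for the synchronous single-stream scenario the two metrics are near reciprocals (ms/batch ≈ 1000 / items/sec), while for the asynchronous multi-stream scenario their product gives the number of batches in flight, which for every workload here works out to roughly eight concurrent streams. The snippet below reproduces this arithmetic for run a of the oBERT document-classification test; the values are copied from the tables above, and the "eight streams" reading is an inference from the data, not something stated in the report.

```python
# Run "a" values for NLP Document Classification, oBERT base uncased on IMDB,
# taken from the result tables above.
sync_items_per_sec = 18.8124
sync_ms_per_batch = 53.1496
async_items_per_sec = 21.5986
async_ms_per_batch = 369.7269

# Single-stream: one batch in flight, so latency and throughput are reciprocals.
print(1000 / sync_items_per_sec)                        # ~53.16 ms, close to the reported 53.1496

# Multi-stream: throughput * latency = average batches in flight (stream count).
print(async_items_per_sec * async_ms_per_batch / 1000)  # ~7.99, i.e. roughly 8 streams
```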
Phoronix Test Suite v10.8.5