Neural Magic DeepSparse
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/).
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark deepsparse.
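For reference, the underlying deepsparse.benchmark utility can also be driven by hand once the deepsparse pip package is installed. The sketch below is a minimal, hypothetical reproduction of that style of invocation; the scenario/time flag spellings and the placeholder model path are assumptions and are not taken from this test profile.

    # Minimal sketch of invoking deepsparse.benchmark directly (assumptions noted above).
    import subprocess

    # Placeholder: substitute a SparseZoo stub or a local ONNX model path.
    model = "path/to/model.onnx"

    for scenario in ("sync", "async"):
        # "-s" selects the benchmark scenario and "-t" the run time in seconds,
        # loosely mirroring the Synchronous Single-Stream and Asynchronous
        # Multi-Stream scenarios reported by this test profile (flag names assumed).
        subprocess.run(
            ["deepsparse.benchmark", model, "-s", scenario, "-t", "30"],
            check=True,
        )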
Test Created 13 October 2022
Last Updated 15 March 2024
Test Type System
Average Install Time 14 Minutes, 29 Seconds
Average Run Time 3 Minutes, 24 Seconds
Test Dependencies Python
Accolades: 40k+ Downloads
[Chart: Neural Magic DeepSparse (pts/deepsparse) Popularity Statistics on OpenBenchmarking.org - Public Result Uploads *, Reported Installs **, Reported Test Completions **, and Test Profile Page Views per month, 2022.10 through 2024.12]
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly. ** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform. Data updated weekly as of 19 December 2024.
Model Option Popularity (OpenBenchmarking.org):
NLP Document Classification, oBERT base uncased on IMDB: 9.1%
CV Classification, ResNet-50 ImageNet: 9.1%
NLP Text Classification, DistilBERT mnli: 9.1%
Llama2 Chat 7b Quantized: 8.7%
BERT-Large, NLP Question Answering, Sparse INT8: 9.1%
NLP Text Classification, BERT base uncased SST2, Sparse INT8: 9.1%
CV Detection, YOLOv5s COCO, Sparse INT8: 9.1%
CV Segmentation, 90% Pruned YOLACT Pruned: 9.1%
ResNet-50, Sparse INT8: 9.1%
NLP Token Classification, BERT base uncased conll2003: 9.1%
ResNet-50, Baseline: 9.1%
Revision History
pts/deepsparse-1.7.0 [View Source] Fri, 15 Mar 2024 12:35:17 GMT - Update against DeepSparse 1.7 upstream, add Llama 2 chat test.
pts/deepsparse-1.6.0 [View Source] Mon, 11 Dec 2023 16:59:10 GMT - Update against DeepSparse 1.6 upstream.
pts/deepsparse-1.5.2 [View Source] Wed, 26 Jul 2023 15:52:28 GMT - Update against DeepSparse 1.5.2 point release, add more models.
pts/deepsparse-1.5.0 [View Source] Wed, 07 Jun 2023 07:51:58 GMT - Update against DeepSparse 1.5 upstream.
pts/deepsparse-1.3.2 [View Source] Sun, 22 Jan 2023 19:05:03 GMT - Update against DeepSparse 1.3.2 upstream.
pts/deepsparse-1.0.1 [View Source] Thu, 13 Oct 2022 13:47:39 GMT - Initial commit of DeepSparse benchmark.
Performance Metrics
Analyze Test Configuration: results are available for every combination of model and scenario across pts/deepsparse 1.0.x through 1.7.x. The models span the options listed above (plus, in older test profile revisions, NLP Sentiment Analysis 80% Pruned Quantized BERT Base Uncased; NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90; NLP Text Classification, BERT base uncased SST2; CV Detection, YOLOv5s COCO; and BERT-Large, NLP Question Answering), each measured in the Synchronous Single-Stream and Asynchronous Multi-Stream scenarios and reported in both items/sec and ms/batch.

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
OpenBenchmarking.org metrics for this test profile configuration based on 72 public results since 15 March 2024, with the latest data as of 20 August 2024.
Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely, so this overview is intended only as general guidance on performance expectations.
Component | Percentile Rank | # Compatible Public Results | items/sec (Average)
Detailed Performance Overview
[Chart: OpenBenchmarking.org Distribution Of Public Results - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream; 72 results ranging from 85 to 329 items/sec]
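To make the Percentile Rank column above concrete, the sketch below shows one plausible way a single result could be placed within a distribution of public results like the one summarized above. The sample values and the exact ranking method are assumptions for illustration, not OpenBenchmarking.org's implementation.

    # Illustrative sketch only: place one result within a set of public results.
    # The public_results values are made up to fall inside the 85-329 items/sec
    # range noted above; OpenBenchmarking.org may compute its ranks differently.
    def percentile_rank(result: float, public_results: list[float]) -> float:
        below_or_equal = sum(1 for r in public_results if r <= result)
        return 100.0 * below_or_equal / len(public_results)

    public_results = [92.0, 118.0, 154.0, 201.0, 246.0, 290.0, 325.0]
    print(f"percentile rank: {percentile_rank(246.0, public_results):.0f}")  # -> 71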
Based on OpenBenchmarking.org data, the selected test / test configuration (Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream) has an average run-time of 3 minutes. By default this test profile is set to run at least 3 times, but the run count may increase if the standard deviation exceeds pre-defined thresholds or other calculations deem additional runs necessary for greater statistical accuracy of the result.
[Chart: OpenBenchmarking.org - Time Required To Complete Benchmark (Minutes) - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream; Min: 2 / Avg: 2.33 / Max: 3]
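The dynamic run-count behavior described above (at least three runs, with extra runs when result variance is high) can be illustrated with a rough sketch. The deviation threshold, run cap, and run_benchmark stub below are arbitrary assumptions, not the Phoronix Test Suite's actual defaults or implementation.

    # Rough illustration of an adaptive run count: keep benchmarking until the
    # relative standard deviation is small enough, starting from three runs.
    import random
    import statistics

    def run_benchmark() -> float:
        # Stand-in for one timed benchmark run returning items/sec.
        return random.gauss(230.0, 5.0)

    MIN_RUNS, MAX_RUNS, MAX_REL_STDDEV = 3, 15, 0.035  # assumed values

    results = [run_benchmark() for _ in range(MIN_RUNS)]
    while len(results) < MAX_RUNS:
        rel_stddev = statistics.stdev(results) / statistics.mean(results)
        if rel_stddev <= MAX_REL_STDDEV:
            break
        results.append(run_benchmark())

    print(f"{len(results)} runs, average {statistics.mean(results):.1f} items/sec")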
Tested CPU Architectures
This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.
CPU Architecture          Kernel Identifier   Verified On
Intel / AMD x86 64-bit    x86_64              (Many Processors)
ARMv8 64-bit              aarch64             ARMv8 Neoverse-N1 128-Core, ARMv8 Neoverse-V1