Neural Magic DeepSparse
This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from the Neural Magic SparseZoo (https://sparsezoo.neuralmagic.com/).
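For context, the test profile wraps the deepsparse.benchmark command-line utility noted above. A minimal sketch of invoking it directly, outside of the Phoronix Test Suite, might look like the following; the model argument is a placeholder for a SparseZoo stub or local ONNX file, and the flag names should be double-checked against deepsparse.benchmark --help for the DeepSparse release you have installed:
    # Run DeepSparse's own benchmarking tool against one model (placeholder model argument)
    deepsparse.benchmark <sparsezoo-stub-or-model.onnx> --scenario sync --time 60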
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark deepsparse.
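In practice a run typically looks like the following; both are standard Phoronix Test Suite commands, and the benchmark step prompts interactively for the model and scenario options unless they have been preset:
    phoronix-test-suite install deepsparse    # set up DeepSparse and the test files
    phoronix-test-suite benchmark deepsparse  # run the test, selecting model(s)/scenario(s) at the prompts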
Test Created 13 October 2022
Last Updated 15 March 2024
Test Type System
Average Install Time 15 Minutes, 13 Seconds
Average Run Time 3 Minutes, 24 Seconds
Test Dependencies Python
Accolades: 40k+ Downloads
Popularity Statistics chart (OpenBenchmarking.org, pts/deepsparse): monthly Public Result Uploads *, Reported Installs **, Reported Test Completions **, Test Profile Page Views, and OpenBenchmarking.org Events from 2022.10 through 2024.12.
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly. ** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform. Data updated weekly as of 4 December 2024.
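For those who opt in, a result saved locally during a run can also be shared afterwards from the same system; a small sketch using the Phoronix Test Suite's upload-result sub-command, where the argument is a placeholder for whatever name the result was saved under:
    phoronix-test-suite upload-result <saved-result-name>   # push a previously saved result to OpenBenchmarking.org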
Model Option Popularity (OpenBenchmarking.org):
NLP Document Classification, oBERT base uncased on IMDB - 9.1%
CV Classification, ResNet-50 ImageNet - 9.1%
NLP Text Classification, DistilBERT mnli - 9.1%
NLP Text Classification, BERT base uncased SST2, Sparse INT8 - 9.1%
NLP Token Classification, BERT base uncased conll2003 - 9.1%
BERT-Large, NLP Question Answering, Sparse INT8 - 9.1%
CV Detection, YOLOv5s COCO, Sparse INT8 - 9.1%
CV Segmentation, 90% Pruned YOLACT Pruned - 9.1%
ResNet-50, Sparse INT8 - 9.1%
ResNet-50, Baseline - 9.1%
Llama2 Chat 7b Quantized - 8.7%
Revision History
pts/deepsparse-1.7.0 [View Source] Fri, 15 Mar 2024 12:35:17 GMT Update against DeepSparse 1.7 upstream, add Llama 2 chat test.
pts/deepsparse-1.6.0 [View Source] Mon, 11 Dec 2023 16:59:10 GMT Update against DeepSparse 1.6 upstream.
pts/deepsparse-1.5.2 [View Source] Wed, 26 Jul 2023 15:52:28 GMT Update against the 1.5.2 point release, add more models.
pts/deepsparse-1.5.0 [View Source] Wed, 07 Jun 2023 07:51:58 GMT Update against DeepSparse 1.5 upstream.
pts/deepsparse-1.3.2 [View Source] Sun, 22 Jan 2023 19:05:03 GMT Update against DeepSparse 1.3.2 upstream.
pts/deepsparse-1.0.1 [View Source] Thu, 13 Oct 2022 13:47:39 GMT Initial commit of DeepSparse benchmark.
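To reproduce numbers against a particular revision listed above rather than the latest, the Phoronix Test Suite also accepts a versioned test profile name; for example (standard PTS usage, with 1.7.0 shown simply because it is the most recent revision):
    phoronix-test-suite info pts/deepsparse              # list the available model and scenario options
    phoronix-test-suite benchmark pts/deepsparse-1.7.0   # run a specific revision of the test profile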
Performance Metrics
Analyze Test Configuration: public results can be analyzed per test configuration, i.e. each combination of Model, Scenario (Synchronous Single-Stream or Asynchronous Multi-Stream), and reported metric (items/sec or ms/batch), for each revision of the test profile. The models exposed by each revision are:
pts/deepsparse-1.7.x: NLP Document Classification, oBERT base uncased on IMDB; NLP Token Classification, BERT base uncased conll2003; NLP Text Classification, DistilBERT mnli; NLP Text Classification, BERT base uncased SST2, Sparse INT8; BERT-Large, NLP Question Answering, Sparse INT8; CV Classification, ResNet-50 ImageNet; CV Detection, YOLOv5s COCO, Sparse INT8; CV Segmentation, 90% Pruned YOLACT Pruned; ResNet-50, Baseline; ResNet-50, Sparse INT8; Llama2 Chat 7b Quantized.
pts/deepsparse-1.6.x: the same set minus Llama2 Chat 7b Quantized, plus BERT-Large, NLP Question Answering and CV Detection, YOLOv5s COCO (non-sparse variants).
pts/deepsparse-1.5.x: NLP Document Classification, oBERT base uncased on IMDB; NLP Token Classification, BERT base uncased conll2003; NLP Text Classification, DistilBERT mnli; NLP Text Classification, BERT base uncased SST2 and BERT base uncased SST2, Sparse INT8; NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased; NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90; BERT-Large, NLP Question Answering and BERT-Large, NLP Question Answering, Sparse INT8; CV Classification, ResNet-50 ImageNet; CV Detection, YOLOv5s COCO and YOLOv5s COCO, Sparse INT8; CV Segmentation, 90% Pruned YOLACT Pruned; ResNet-50, Baseline; ResNet-50, Sparse INT8.
pts/deepsparse-1.3.x: NLP Document Classification, oBERT base uncased on IMDB; NLP Token Classification, BERT base uncased conll2003; NLP Text Classification, DistilBERT mnli; NLP Text Classification, BERT base uncased SST2; NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased; NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90; CV Classification, ResNet-50 ImageNet; CV Detection, YOLOv5s COCO; CV Segmentation, 90% Pruned YOLACT Pruned.
pts/deepsparse-1.0.x: NLP Document Classification, oBERT base uncased on IMDB; NLP Token Classification, BERT base uncased conll2003; NLP Text Classification, DistilBERT mnli; NLP Text Classification, BERT base uncased SST2; NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90; CV Classification, ResNet-50 ImageNet; CV Detection, YOLOv5s COCO.
Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
OpenBenchmarking.org metrics for this test profile configuration based on 72 public results since 15 March 2024 with the latest data as of 20 August 2024.
Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely, so this overview is intended to offer only general guidance as to performance expectations.
Component | Percentile Rank | # Compatible Public Results | ms/batch (Average)
OpenBenchmarking.org Distribution Of Public Results - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream: 72 results, ranging from 20 to 253 ms/batch.
Based on OpenBenchmarking.org data, the selected test / test configuration (Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream) has an average run-time of 3 minutes. By default this test profile runs at least 3 times, but the run count may increase if the standard deviation exceeds pre-defined thresholds or other calculations deem additional runs necessary for greater statistical accuracy of the result.
OpenBenchmarking.org Time Required To Complete Benchmark (Minutes) - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream: Min: 2 / Avg: 2.05 / Max: 3
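If tighter statistics are wanted than the default of at least 3 runs, the run count can be forced higher via a standard Phoronix Test Suite environment variable; a minimal sketch, with the value 5 chosen arbitrarily:
    FORCE_TIMES_TO_RUN=5 phoronix-test-suite benchmark deepsparse   # force 5 runs per test configuration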
Tested CPU Architectures
This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.
CPU Architecture | Kernel Identifier | Verified On
Intel / AMD x86 64-bit | x86_64 | (Many Processors)
ARMv8 64-bit | aarch64 | ARMv8 Neoverse-N1 128-Core, ARMv8 Neoverse-V1
Recent Test Results
Featured Processor Comparison
1 System - 323 Benchmark Results
AMD Ryzen Threadripper 7960X 24-Cores - Gigabyte TRX50 AERO D - AMD Device 14a4
Ubuntu 24.04 - 6.8.0-48-generic - GNOME Shell 46.0
Featured Processor Comparison
1 System - 342 Benchmark Results
Intel Core i9-12900K - ASUS PRIME Z790-V AX - Intel Raptor Lake-S PCH
Ubuntu 24.04 - 6.8.0-47-generic - GNOME Shell 46.0
2 Systems - 195 Benchmark Results
Most Popular Test Results
4 Systems - 158 Benchmark Results
AMD Ryzen Threadripper PRO 5965WX 24-Cores - ASUS Pro WS WRX80E-SAGE SE WIFI - AMD Starship
Ubuntu 23.10 - 6.5.0-15-generic - GNOME Shell 45.0
4 Systems - 62 Benchmark Results
2 x INTEL XEON PLATINUM 8592+ - Quanta Cloud QuantaGrid D54Q-2U S6Q-MB-MPS - Intel Device 1bce
Ubuntu 23.10 - 6.6.0-rc5-phx-patched - GNOME Shell 45.0
3 Systems - 119 Benchmark Results
AMD EPYC 8534P 64-Core - AMD Cinnabar - AMD Device 14a4
Ubuntu 23.10 - 6.5.0-15-generic - GNOME Shell
2 Systems - 57 Benchmark Results
AMD Ryzen 7 3800XT 8-Core - MSI X370 XPOWER GAMING TITANIUM - AMD Starship
Debian 12 - 6.1.0-18-amd64 - X Server 1.20.11
4 Systems - 70 Benchmark Results
Intel Core Ultra 7 155H - MTL Swift SFG14-72T Coral_MTH - Intel Device 7e7f
Ubuntu 23.10 - 6.8.0-060800rc1daily20240126-generic - GNOME Shell 45.2
4 Systems - 52 Benchmark Results
AMD Ryzen Threadripper 7980X 64-Cores - System76 Thelio Major - AMD Device 14a4
Ubuntu 23.10 - 6.5.0-25-generic - GNOME Shell 45.2
2 Systems - 57 Benchmark Results
AMD Ryzen 7 3800XT 8-Core - MSI X370 XPOWER GAMING TITANIUM - AMD Starship
Debian 12 - 6.1.0-18-amd64 - X Server 1.20.11
4 Systems - 158 Benchmark Results
Intel Xeon E E-2488 - Supermicro Super Server X13SCL-F v0123456789 - Intel Device 7a27
Ubuntu 22.04 - 6.2.0-26-generic - GNOME Shell 42.9
3 Systems - 114 Benchmark Results
AMD Ryzen 5 5500U - NB01 TUXEDO Aura 15 Gen2 NL5xNU - AMD Renoir
Tuxedo 22.04 - 6.0.0-1010-oem - KDE Plasma 5.26.5
5 Systems - 44 Benchmark Results
AMD Ryzen 9 7950X 16-Core - ASUS ROG STRIX X670E-E GAMING WIFI - AMD Device 14d8
Ubuntu 23.10 - 6.7.0-060700-generic - GNOME Shell 45.2
3 Systems - 177 Benchmark Results
Intel Xeon E E-2488 - Supermicro Super Server X13SCL-F v0123456789 - Intel Device 7a27
Ubuntu 22.04 - 6.2.0-26-generic - GNOME Shell 42.9
5 Systems - 44 Benchmark Results
Intel Core i9-14900K - ASUS PRIME Z790-P WIFI - Intel Device 7a27
Ubuntu 23.10 - 6.8.0-phx - GNOME Shell 45.1
2 Systems - 63 Benchmark Results
Intel Core i9-10980XE - ASRock X299 Steel Legend - Intel Sky Lake-E DMI3 Registers
Ubuntu 22.04 - 6.5.0-18-generic - GNOME Shell 42.2
3 Systems - 116 Benchmark Results
AMD Ryzen 7 7840HS - NB05 TUXEDO Pulse 14 Gen3 R14FA1 - AMD Device 14e8
Tuxedo 22.04 - 6.5.0-10022-tuxedo - KDE Plasma 5.27.10
4 Systems - 120 Benchmark Results
ARMv8 Neoverse-N1 - GIGABYTE G242-P36-00 MP32-AR2-00 v01000100 - Ampere Computing LLC Altra PCI Root Complex A
Ubuntu 23.10 - 6.5.0-15-generic - GCC 13.2.0