Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/).
To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark deepsparse.
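For context on what the underlying deepsparse.benchmark utility is measuring, below is a minimal Python sketch that times inference of one SparseZoo model directly with DeepSparse's compile_model API. The zoo: stub, the 3x224x224 input shape, the batch size, and the iteration counts are illustrative assumptions rather than values taken from this test profile; if the stub does not resolve on your DeepSparse version, a local ONNX file path can be used instead.

```python
# Minimal sketch: time DeepSparse inference the way a throughput benchmark would.
# Assumptions (not from the test profile): the "zoo:" stub below, a 3x224x224
# float32 input, batch size 64, and 100 timed iterations.
import time

import numpy as np
from deepsparse import compile_model

# Hypothetical SparseZoo stub for a sparse-quantized ResNet-50; any local ONNX
# path also works here.
MODEL_STUB = "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_quant-none"
BATCH_SIZE = 64

engine = compile_model(MODEL_STUB, batch_size=BATCH_SIZE)

# Random data with the assumed ImageNet-style input shape.
inputs = [np.random.rand(BATCH_SIZE, 3, 224, 224).astype(np.float32)]

# Warm up, then time repeated runs and report items/sec and ms/batch,
# mirroring the two metrics this test profile records.
for _ in range(10):
    engine.run(inputs)

iterations = 100
start = time.perf_counter()
for _ in range(iterations):
    engine.run(inputs)
elapsed = time.perf_counter() - start

print(f"items/sec: {iterations * BATCH_SIZE / elapsed:.2f}")
print(f"ms/batch:  {elapsed / iterations * 1000:.2f}")
```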
Test Created 13 October 2022
Last Updated 26 July 2023
Test Type System
Average Install Time 14 Minutes, 51 Seconds
Average Run Time 2 Minutes, 19 Seconds
Test Dependencies Python
Accolades: 20k+ Downloads
[Chart: Neural Magic DeepSparse (pts/deepsparse) popularity statistics on OpenBenchmarking.org, October 2022 through November 2023, tracking Public Result Uploads *, Reported Installs **, Reported Test Completions **, and Test Profile Page Views.]
* Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite for users wishing to share their results publicly.
** Data based on those opting to upload their test results to OpenBenchmarking.org and users enabling the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform. Data updated weekly as of 28 November 2023.
Model Option Popularity (OpenBenchmarking.org):
BERT-Large, NLP Question Answering, Sparse INT8: 5.5%
NLP Text Classification, DistilBERT mnli: 7.3%
CV Detection, YOLOv5s COCO, Sparse INT8: 5.4%
ResNet-50, Baseline: 5.4%
ResNet-50, Sparse INT8: 5.5%
NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased: 7.6%
NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90: 7.5%
NLP Text Classification, BERT base uncased SST2: 7.4%
BERT-Large, NLP Question Answering: 5.5%
CV Detection, YOLOv5s COCO: 7.5%
NLP Document Classification, oBERT base uncased on IMDB: 7.6%
NLP Token Classification, BERT base uncased conll2003: 7.3%
CV Classification, ResNet-50 ImageNet: 7.4%
NLP Text Classification, BERT base uncased SST2, Sparse INT8: 5.4%
CV Segmentation, 90% Pruned YOLACT Pruned: 7.7%
Scenario Option Popularity (OpenBenchmarking.org):
Asynchronous Multi-Stream: 66.1%
Synchronous Single-Stream: 33.9%
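The two scenarios above correspond to different invocations of the underlying deepsparse.benchmark utility: Synchronous Single-Stream issues one request at a time (latency-oriented), while Asynchronous Multi-Stream overlaps requests (throughput-oriented). As a rough sketch, the Python snippet below drives both scenarios through subprocess; the -s (scenario), -t (run time in seconds), and -b (batch size) flags and the model stub are assumptions to verify against deepsparse.benchmark --help for your DeepSparse version.

```python
# Sketch: run the same model in both scenarios this test profile exposes.
# Flag names (-s, -t, -b) and the model stub are assumptions, not values
# taken from this test profile.
import subprocess

MODEL = "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none"

scenarios = {
    "Synchronous Single-Stream": ["-s", "sync"],   # one request at a time, latency-oriented
    "Asynchronous Multi-Stream": ["-s", "async"],  # overlapping requests, throughput-oriented
}

for name, flags in scenarios.items():
    print(f"=== {name} ===")
    subprocess.run(
        ["deepsparse.benchmark", MODEL, "-b", "64", "-t", "30", *flags],
        check=True,
    )
```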
Revision History

pts/deepsparse-1.5.2 [View Source] - Wed, 26 Jul 2023 15:52:28 GMT
Update against 1.5.2 point release, add more models.

pts/deepsparse-1.5.0 [View Source] - Wed, 07 Jun 2023 07:51:58 GMT
Update against DeepSparse 1.5 upstream.

pts/deepsparse-1.3.2 [View Source] - Sun, 22 Jan 2023 19:05:03 GMT
Update against DeepSparse 1.3.2 upstream.

pts/deepsparse-1.0.1 [View Source] - Thu, 13 Oct 2022 13:47:39 GMT
Initial commit of DeepSparse benchmark.
Performance Metrics

Analyze Test Configuration: result graphs are available for each model and scenario combination offered by the pts/deepsparse-1.0.x, 1.3.x, and 1.5.x revisions, with both items/sec and ms/batch reported for the Asynchronous Multi-Stream and Synchronous Single-Stream scenarios. The configuration analyzed below is:

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org metrics for this test profile configuration are based on 153 public results since 7 June 2023, with the latest data as of 13 November 2023.
Below is an overview of generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results. Keep in mind that, particularly in the Linux/open-source space, OS configurations can vary widely, so this overview is intended only as general guidance on performance expectations.
Component | Percentile Rank | # Compatible Public Results | ms/batch (Average)
[Chart: OpenBenchmarking.org Distribution Of Public Results - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream. 153 results ranging from 183 to 2115 ms/batch.]
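The "Percentile Rank" column above ranks a component against this public result distribution; for a fewer-is-better metric such as ms/batch, a natural reading is the share of public results that are slower than a given result. The exact formula OpenBenchmarking.org uses is not stated here, so the sketch below is only an assumed interpretation with hypothetical sample data.

```python
# Assumed interpretation of "Percentile Rank" for a fewer-is-better metric
# (ms/batch): the percentage of public results that are slower than a given
# result. This is an illustration, not OpenBenchmarking.org's exact formula.

def percentile_rank(public_results_ms: list[float], my_result_ms: float) -> float:
    slower = sum(1 for r in public_results_ms if r > my_result_ms)
    return 100.0 * slower / len(public_results_ms)


# Hypothetical sample spanning the 183-2115 ms/batch range reported above
# (the real distribution has 153 entries).
sample = [183.0, 240.0, 310.0, 450.0, 610.0, 890.0, 1200.0, 1600.0, 2115.0]
print(f"{percentile_rank(sample, 300.0):.1f}th percentile")  # -> 77.8th percentile
```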
Based on OpenBenchmarking.org data, the selected test / test configuration (Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream) has an average run-time of 3 minutes. By default this test profile runs at least 3 times, but the run count may increase if the standard deviation exceeds pre-defined thresholds or other calculations deem additional runs necessary for greater statistical accuracy of the result.
[Chart: Time Required To Complete Benchmark, Minutes - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream. Run-Time Min: 2 / Avg: 2.67 / Max: 5.]
Based on public OpenBenchmarking.org results, the selected test / test configuration has an average standard deviation of 0.1%.
[Chart: Average Deviation Between Runs, Percent (Fewer Is Better) - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream. Deviation Min: 0 / Avg: 0.05 / Max: 2.]
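As noted above, the Phoronix Test Suite repeats the benchmark at least three times and keeps adding runs while the run-to-run deviation is too high. Below is a minimal sketch of that kind of policy; the 3.5% cutoff, the cap of 10 runs, and the fake benchmark function are illustrative assumptions, not the suite's actual configured defaults or implementation.

```python
# Sketch of a dynamic run-count policy like the one described above:
# run the benchmark at least 3 times, then keep running while the
# relative standard deviation between runs stays above a cutoff.
# The 3.5% cutoff and the cap of 10 runs are illustrative assumptions.
import random
import statistics


def relative_stdev_percent(samples: list[float]) -> float:
    return 100.0 * statistics.stdev(samples) / statistics.mean(samples)


def run_until_stable(run_once, min_runs: int = 3, max_runs: int = 10,
                     cutoff_percent: float = 3.5) -> list[float]:
    results = [run_once() for _ in range(min_runs)]
    while relative_stdev_percent(results) > cutoff_percent and len(results) < max_runs:
        results.append(run_once())
    return results


# Example with a fake benchmark returning ms/batch values around 500.
runs = run_until_stable(lambda: 500.0 + random.uniform(-5, 5))
print(len(runs), "runs,", f"{relative_stdev_percent(runs):.2f}% deviation")
```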
Tested CPU Architectures

This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads have occurred, mainly to help determine whether a given test is compatible with various alternative CPU architectures.
CPU Architecture: Intel / AMD x86 64-bit
Kernel Identifier: x86_64
Verified On: (Many Processors)
Recent Test Results
Featured Kernel Comparison
1 System - 264 Benchmark Results
Intel Pentium Gold G6405 - ASRock H510M-HDV/M.2 SE - Intel Comet Lake PCH
Ubuntu 20.04 - 5.15.0-88-generic - GNOME Shell 3.36.9
1 System - 264 Benchmark Results
Intel Pentium Gold G6400 - ASRock H510M-HDV/M.2 SE - Intel Comet Lake PCH
Ubuntu 20.04 - 5.15.0-86-generic - GNOME Shell 3.36.9
2 Systems - 358 Benchmark Results
AMD Ryzen 9 3900XT 12-Core - MSI MEG X570 GODLIKE - AMD Starship
Ubuntu 22.04 - 6.2.0-35-generic - GNOME Shell 42.2
2 Systems - 246 Benchmark Results
Intel Core i5-12600K - ASUS PRIME Z690-P WIFI D4 - Intel Device 7aa7
Ubuntu 22.04 - 5.19.0-051900rc6daily20220716-generic - GNOME Shell 42.1
1 System - 279 Benchmark Results
Intel Core i3-10105 - ASRock H510M-HDV/M.2 SE - Intel Comet Lake PCH
Ubuntu 20.04 - 5.15.0-83-generic - GNOME Shell 3.36.9
1 System - 29 Benchmark Results
AMD Ryzen 7 3700X 8-Core - ASUS PRIME B450M-A - AMD Starship
Linuxmint 21.2 - 6.2.0-34-generic - MATE 1.26.0
1 System - 279 Benchmark Results
Intel Core i3-10105 - ASRock H510M-HDV/M.2 SE - Intel Comet Lake PCH
Ubuntu 20.04 - 5.15.0-83-generic - GNOME Shell 3.36.9
1 System - 279 Benchmark Results
Intel Core i3-10105 - ASRock H510M-HDV/M.2 SE - Intel Comet Lake PCH
Ubuntu 20.04 - 5.15.0-83-generic - GNOME Shell 3.36.9
1 System - 543 Benchmark Results
AMD Ryzen Threadripper PRO 5995WX 64-Cores - ASRock WRX80 Creator - AMD Starship
Ubuntu 22.04 - 6.2.0-33-generic - GNOME Shell 42.9
1 System - 332 Benchmark Results
AMD Ryzen Threadripper 3970X 32-Core - ASUS ROG ZENITH II EXTREME - AMD Starship
Ubuntu 22.04 - 5.19.0-051900rc7-generic - GNOME Shell 42.2
3 Systems - 190 Benchmark Results
Intel Core i9-13900K - ASUS PRIME Z790-P WIFI - Intel Device 7a27
Ubuntu 22.04 - 6.6.0-060600rc1-generic - GNOME Shell 42.9
2 Systems - 191 Benchmark Results
AMD Ryzen 7 PRO 5850U - HP 8A78 - AMD Renoir
Pop 22.04 - 6.2.6-76060206-generic - GNOME Shell 42.5
1 System - 60 Benchmark Results
2 x Intel Xeon Platinum 8280 - Intel PURLEY - Intel Sky Lake-E DMI3 Registers
Ubuntu 22.04 - 6.2.0-26-generic - GNOME Shell 42.9
Most Popular Test Results
Featured Graphics Comparison
AMD Ryzen 9 5900HX - ASUS G513QY v1.0 - AMD Renoir
Ubuntu 22.10 - 5.19.0-43-generic - GNOME Shell 43.0
2 Systems - 234 Benchmark Results
AMD Ryzen Threadripper 3970X 32-Core - ASUS ROG ZENITH II EXTREME - AMD Starship
Ubuntu 22.04 - 5.19.0-051900rc7-generic - GNOME Shell 42.2
2 Systems - 234 Benchmark Results
AMD Ryzen Threadripper 3970X 32-Core - ASUS ROG ZENITH II EXTREME - AMD Starship
Ubuntu 22.04 - 5.19.0-051900rc7-generic - GNOME Shell 42.2
Featured Graphics Comparison
AMD Ryzen 5 4500U - LENOVO LNVNB161216 - AMD Renoir
Pop 22.04 - 5.17.5-76051705-generic - GNOME Shell 42.1
2 Systems - 166 Benchmark Results
3 Systems - 153 Benchmark Results
AMD Ryzen 5 5500U - NB01 NL5xNU - AMD Renoir
Tuxedo 22.04 - 6.0.0-1010-oem - KDE Plasma 5.26.5
3 Systems - 122 Benchmark Results
AMD EPYC 7763 64-Core - AMD DAYTONA_X - AMD Starship
Ubuntu 22.04 - 6.2.0-phx - GNOME Shell 42.5
2 Systems - 127 Benchmark Results
AMD EPYC 9684X 96-Core - AMD Titanite_4G - AMD Device 14a4
Ubuntu 22.04 - 5.19.0-41-generic - GNOME Shell 42.5
5 Systems - 85 Benchmark Results
Intel Xeon Platinum 8490H - Quanta Cloud S6Q-MB-MPS - Intel Device 1bce
Ubuntu 22.04 - 5.15.0-47-generic - GNOME Shell 42.4
Featured Compiler Comparison
Intel Core i7-8565U - Dell 0KTW76 - Intel Cannon Point-LP
Ubuntu 22.04 - 5.19.0-rc6-phx-retbleed - GNOME Shell 42.2
3 Systems - 60 Benchmark Results
AMD Ryzen 9 5900HX - ASUS G513QY v1.0 - AMD Renoir
Ubuntu 22.10 - 5.19.0-46-generic - GNOME Shell 43.0
3 Systems - 143 Benchmark Results
Intel Core i9-10980XE - ASRock X299 Steel Legend - Intel Sky Lake-E DMI3 Registers
Ubuntu 22.04 - 5.19.0-051900rc7-generic - GNOME Shell 42.2
3 Systems - 55 Benchmark Results
Intel Core i7-1065G7 - Dell 06CDVY - Intel Ice Lake-LP DRAM
Ubuntu 22.04 - 5.19.0-41-generic - GNOME Shell 42.2