deepsparse 1.7 raptor lake

Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1402 BIOS) motherboard and ASUS Intel RPL-S 31GB graphics on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2403152-PTS-DEEPSPAR00&grs.
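For reference, an OpenBenchmarking.org result such as this one can typically be re-run or compared against locally with the Phoronix Test Suite by passing the result ID from the URL above to the phoronix-test-suite benchmark command (assuming the deepsparse test profile and its dependencies install cleanly on the target system).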

System details (identical for configurations a, b, c, d, and e):

Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (1402 BIOS)
Chipset: Intel Device 7a27
Memory: 2 x 16GB DRAM-6000MT/s Corsair CMK32GX5M2B6000C36
Disk: Western Digital WD_BLACK SN850X 1000GB
Graphics: ASUS Intel RPL-S 31GB (1650MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
OS: Ubuntu 23.10
Kernel: 6.8.0-phx (x86_64)
Desktop: GNOME Shell 45.1
Display Server: X Server 1.21.1.7
OpenGL: 4.6 Mesa 24.0~git2312240600.c05261~oibaf~m (git-c05261a 2023-12-24 mantic-oibaf-ppa)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Processor Details: Scaling Governor: intel_pstate powersave (EPP: performance) - CPU Microcode: 0x122 - Thermald 2.5.4
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Mitigation of Clear Register File + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result overview: Neural Magic DeepSparse 1.7 throughput (items/sec) and latency (ms/batch) for each model and scenario combination across configurations a through e. The individual results are listed per test below.
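The throughput (items/sec) and latency (ms/batch) figures below come from DeepSparse's benchmarking harness driven through the Phoronix Test Suite. As a rough illustration of what the synchronous single-stream numbers measure, here is a minimal sketch using the DeepSparse Python API; it is not the PTS test profile itself, and the model path, iteration count, and the generate_random_inputs helper are assumptions used only for illustration.

import time

from deepsparse import compile_model
from deepsparse.utils import generate_random_inputs

MODEL_PATH = "model.onnx"   # hypothetical placeholder, not one of the models benchmarked here
BATCH_SIZE = 1              # single-stream scenario runs one batch at a time
ITERATIONS = 200

engine = compile_model(MODEL_PATH, batch_size=BATCH_SIZE)
sample = generate_random_inputs(MODEL_PATH, BATCH_SIZE)

engine.run(sample)          # warm-up so compilation/caching does not skew the timing

start = time.perf_counter()
for _ in range(ITERATIONS):
    engine.run(sample)
elapsed = time.perf_counter() - start

print(f"items/sec: {ITERATIONS * BATCH_SIZE / elapsed:.2f}")
print(f"ms/batch:  {elapsed * 1000 / ITERATIONS:.4f}")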

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

items/sec (more is better): a: 975.81, b: 941.79, c: 927.77, d: 1011.71, e: 994.77 (SE +/- 10.53, N = 5; SE +/- 1.14, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

ms/batch (fewer is better): a: 1.0225, b: 1.0591, c: 1.0746, d: 0.9856, e: 1.0024 (SE +/- 0.0112, N = 5; SE +/- 0.0010, N = 3)
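As a sanity check, the two single-stream metrics are near-reciprocal at batch size 1: for configuration a, 1000 / 975.81 items/sec is roughly 1.02 ms, in line with the reported 1.0225 ms/batch (small differences are expected since each metric is averaged over its own samples).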

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch (fewer is better): a: 3.4607, b: 3.4488, c: 3.5300, d: 3.4601, e: 3.4336 (SE +/- 0.0108, N = 3; SE +/- 0.0320, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec (more is better): a: 1152.25, b: 1156.18, c: 1129.73, d: 1152.75, e: 1161.30 (SE +/- 3.50, N = 3; SE +/- 10.56, N = 3)
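For the asynchronous multi-stream scenario, throughput multiplied by per-batch latency approximates the number of batches in flight: for configuration a, 1152.25 items/sec x 3.4607 ms is roughly 4, suggesting about four concurrent streams (assuming batch size 1).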

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream

items/sec (more is better): a: 7.6693, b: 7.7518, c: 7.5635, d: 7.6435, e: 7.6128 (SE +/- 0.0698, N = 3; SE +/- 0.0734, N = 3)

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream

ms/batch (fewer is better): a: 518.25, b: 512.69, c: 525.39, d: 520.01, e: 522.00 (SE +/- 4.73, N = 3; SE +/- 4.93, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

ms/batch (fewer is better): a: 20.08, b: 19.66, c: 19.65, d: 19.82, e: 19.96 (SE +/- 0.06, N = 3; SE +/- 0.09, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

items/sec (more is better): a: 49.79, b: 50.85, c: 50.89, d: 50.44, e: 50.09 (SE +/- 0.14, N = 3; SE +/- 0.23, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

ms/batch (fewer is better): a: 72.99, b: 71.43, c: 71.44, d: 72.07, e: 71.94 (SE +/- 0.25, N = 3; SE +/- 0.34, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

items/sec (more is better): a: 13.70, b: 14.00, c: 14.00, d: 13.87, e: 13.90 (SE +/- 0.05, N = 3; SE +/- 0.06, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

ms/batch (fewer is better): a: 374.66, b: 368.59, c: 373.12, d: 375.63, e: 368.23 (SE +/- 3.10, N = 3; SE +/- 3.00, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

items/sec (more is better): a: 10.68, b: 10.85, c: 10.72, d: 10.65, e: 10.86 (SE +/- 0.09, N = 3; SE +/- 0.09, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

ms/batch (fewer is better): a: 8.0391, b: 7.8904, c: 7.9405, d: 7.9734, e: 7.9248 (SE +/- 0.0189, N = 3; SE +/- 0.0250, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

items/sec (more is better): a: 124.33, b: 126.67, c: 125.87, d: 125.35, e: 126.11 (SE +/- 0.29, N = 3; SE +/- 0.39, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

ms/batch (fewer is better): a: 104.12, b: 104.39, c: 105.19, d: 105.54, e: 104.52 (SE +/- 0.22, N = 3; SE +/- 0.16, N = 3)

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

items/sec (more is better): a: 9.6041, b: 9.5789, c: 9.5059, d: 9.4750, e: 9.5673 (SE +/- 0.0203, N = 3; SE +/- 0.0143, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

ms/batch (fewer is better): a: 27.97, b: 27.79, c: 27.90, d: 27.91, e: 27.62 (SE +/- 0.03, N = 3; SE +/- 0.04, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

ms/batch (fewer is better): a: 105.88, b: 106.21, c: 105.44, d: 106.80, e: 106.63 (SE +/- 0.24, N = 3; SE +/- 0.61, N = 3)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

items/sec (more is better): a: 142.92, b: 143.84, c: 143.29, d: 143.24, e: 144.75 (SE +/- 0.17, N = 3; SE +/- 0.22, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

items/sec (more is better): a: 9.4444, b: 9.4148, c: 9.4838, d: 9.3639, e: 9.3782 (SE +/- 0.0212, N = 3; SE +/- 0.0538, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

items/sec (more is better): a: 324.56, b: 327.19, c: 326.89, d: 327.29, e: 328.56 (SE +/- 0.47, N = 3; SE +/- 0.60, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

ms/batch (fewer is better): a: 3.0776, b: 3.0531, c: 3.0560, d: 3.0521, e: 3.0404 (SE +/- 0.0044, N = 3; SE +/- 0.0056, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

ms/batch (fewer is better): a: 7.9967, b: 7.9282, c: 7.9467, d: 7.9961, e: 7.9452 (SE +/- 0.0195, N = 3; SE +/- 0.0415, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

items/sec (more is better): a: 124.99, b: 126.07, c: 125.77, d: 125.00, e: 125.80 (SE +/- 0.30, N = 3; SE +/- 0.65, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

ms/batch (fewer is better): a: 269.97, b: 268.08, c: 268.00, d: 269.62, e: 268.55 (SE +/- 1.11, N = 3; SE +/- 0.87, N = 3)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

items/sec (more is better): a: 14.82, b: 14.92, c: 14.92, d: 14.83, e: 14.89 (SE +/- 0.06, N = 3; SE +/- 0.05, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

items/sec (more is better): a: 81.49, b: 81.50, c: 81.51, d: 81.68, e: 81.17 (SE +/- 0.08, N = 3; SE +/- 0.22, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

ms/batch (fewer is better): a: 49.07, b: 49.07, c: 49.06, d: 48.96, e: 49.26 (SE +/- 0.05, N = 3; SE +/- 0.13, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

ms/batch (fewer is better): a: 379.71, b: 379.14, c: 378.26, d: 378.17, e: 377.36 (SE +/- 1.12, N = 3; SE +/- 1.11, N = 3)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

items/sec (more is better): a: 10.53, b: 10.55, c: 10.57, d: 10.58, e: 10.60 (SE +/- 0.03, N = 3; SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch (fewer is better): a: 9.1560, b: 9.1990, c: 9.1596, d: 9.1642, e: 9.2062 (SE +/- 0.0346, N = 3; SE +/- 0.0105, N = 3)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec (more is better): a: 436.25, b: 434.22, c: 436.12, d: 435.85, e: 433.88 (SE +/- 1.64, N = 3; SE +/- 0.49, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

items/sec (more is better): a: 67.26, b: 67.62, c: 67.38, d: 67.41, e: 67.32 (SE +/- 0.03, N = 3; SE +/- 0.14, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

ms/batch (fewer is better): a: 14.86, b: 14.78, c: 14.84, d: 14.83, e: 14.85 (SE +/- 0.01, N = 3; SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch (fewer is better): a: 20.33, b: 20.24, c: 20.26, d: 20.27, e: 20.23 (SE +/- 0.01, N = 3; SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec (more is better): a: 196.62, b: 197.48, c: 197.28, d: 197.20, e: 197.57 (SE +/- 0.14, N = 3; SE +/- 0.19, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

ms/batch (fewer is better): a: 7.9348, b: 7.9643, c: 7.9518, d: 7.9571, e: 7.9673 (SE +/- 0.0106, N = 3; SE +/- 0.0196, N = 3)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

items/sec (more is better): a: 125.92, b: 125.45, c: 125.63, d: 125.57, e: 125.41 (SE +/- 0.17, N = 3; SE +/- 0.31, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

ms/batch (fewer is better): a: 27.95, b: 27.92, c: 27.88, d: 27.96, e: 27.85 (SE +/- 0.05, N = 3; SE +/- 0.01, N = 3)

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

items/sec (more is better): a: 143.04, b: 143.19, c: 143.39, d: 143.00, e: 143.57 (SE +/- 0.27, N = 3; SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream

ms/batch (fewer is better): a: 88.87, b: 88.87, c: 88.94, d: 88.93, e: 88.98 (SE +/- 0.03, N = 3; SE +/- 0.02, N = 3)

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream

items/sec (more is better): a: 11.25, b: 11.25, c: 11.24, d: 11.24, e: 11.24 (SE +/- 0.00, N = 3; SE +/- 0.00, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec (more is better): a: 71.70, b: 71.75, c: 71.76, d: 71.76, e: 71.78 (SE +/- 0.04, N = 3; SE +/- 0.03, N = 3)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch (fewer is better): a: 55.77, b: 55.74, c: 55.72, d: 55.73, e: 55.71 (SE +/- 0.03, N = 3; SE +/- 0.02, N = 3)


Phoronix Test Suite v10.8.4