deepsparse 1.7 raptor lake

Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1402 BIOS) and ASUS Intel RPL-S 31GB on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403152-PTS-DEEPSPAR00
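For scripted runs, a minimal Python sketch that simply shells out to the command quoted above is shown below. It assumes phoronix-test-suite is already installed and on the PATH; the wrapper itself is illustrative and not part of the Phoronix Test Suite.

    import shutil
    import subprocess
    import sys

    RESULT_ID = "2403152-PTS-DEEPSPAR00"  # the public result file referenced above

    def compare_against_result(result_id: str) -> int:
        """Run the same comparison as: phoronix-test-suite benchmark <result_id>."""
        if shutil.which("phoronix-test-suite") is None:
            print("phoronix-test-suite not found on PATH", file=sys.stderr)
            return 1
        return subprocess.call(["phoronix-test-suite", "benchmark", result_id])

    if __name__ == "__main__":
        raise SystemExit(compare_against_result(RESULT_ID))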

Run Management

Result Identifier / Date Run / Test Duration:
  a - March 15 - 1 Hour, 30 Minutes
  b - March 15 - 30 Minutes
  c - March 15 - 30 Minutes
  d - March 15 - 1 Hour, 28 Minutes
  e - March 15 - 30 Minutes
  Average: 54 Minutes



Deepsparse 1.7 Raptor Lake Benchmarks - System Details

Processor: Intel Core i9-14900K @ 5.70GHz (24 Cores / 32 Threads)
Motherboard: ASUS PRIME Z790-P WIFI (1402 BIOS)
Chipset: Intel Device 7a27
Memory: 2 x 16GB DRAM-6000MT/s Corsair CMK32GX5M2B6000C36
Disk: Western Digital WD_BLACK SN850X 1000GB
Graphics: ASUS Intel RPL-S 31GB (1650MHz)
Audio: Realtek ALC897
Monitor: ASUS VP28U
OS: Ubuntu 23.10
Kernel: 6.8.0-phx (x86_64)
Desktop: GNOME Shell 45.1
Display Server: X Server 1.21.1.7
OpenGL: 4.6 Mesa 24.0~git2312240600.c05261~oibaf~m (git-c05261a 2023-12-24 mantic-oibaf-ppa)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs
- Transparent Huge Pages: madvise
- Scaling Governor: intel_pstate powersave (EPP: performance)
- CPU Microcode: 0x122
- Thermald 2.5.4
- Python 3.11.6
- Security mitigations: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Mitigation of Clear Register File + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
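The host tuning noted in the system logs (intel_pstate powersave with EPP set to performance, Transparent Huge Pages in madvise mode) has a direct bearing on these numbers. As a rough sketch, assuming a standard Linux sysfs layout, the same knobs can be read back before re-running the comparison; the snippet only reads files and prints their values.

    from pathlib import Path

    # Standard sysfs locations for the settings listed in the system logs above.
    CHECKS = {
        "scaling governor (cpu0)": "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor",
        "energy/perf preference (cpu0)": "/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference",
        "transparent hugepages": "/sys/kernel/mm/transparent_hugepage/enabled",
    }

    for label, path in CHECKS.items():
        p = Path(path)
        value = p.read_text().strip() if p.exists() else "not available"
        print(f"{label}: {value}")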

Neural Magic DeepSparse 1.7
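Each model below is measured under two scenarios: Asynchronous Multi-Stream (many concurrent inference streams, throughput-oriented) and Synchronous Single-Stream (one request at a time, latency-oriented). The test profile drives these through DeepSparse's own benchmarking harness; the sketch below is only a rough approximation of the single-stream measurement, with predict and make_input as hypothetical placeholders for a compiled DeepSparse model and its input generator, and a batch size of 1 assumed.

    import statistics
    import time

    def single_stream_benchmark(predict, make_input, warmup=10, iterations=100):
        """Time one request at a time and report ms/batch and items/sec,
        mirroring the Synchronous Single-Stream results below."""
        for _ in range(warmup):
            predict(make_input())
        latencies_ms = []
        for _ in range(iterations):
            x = make_input()
            start = time.perf_counter()
            predict(x)
            latencies_ms.append((time.perf_counter() - start) * 1000.0)
        mean_ms = statistics.mean(latencies_ms)
        return {"ms/batch": mean_ms, "items/sec": 1000.0 / mean_ms}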

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): e 10.86, b 10.85, c 10.72, a 10.68, d 10.65 (SE +/- 0.09, N = 3; SE +/- 0.09, N = 3)

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a 9.6041, b 9.5789, e 9.5673, c 9.5059, d 9.4750 (SE +/- 0.0203, N = 3; SE +/- 0.0143, N = 3)

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): a 436.25, c 436.12, d 435.85, b 434.22, e 433.88 (SE +/- 1.64, N = 3; SE +/- 0.49, N = 3)

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): e 328.56, d 327.29, b 327.19, c 326.89, a 324.56 (SE +/- 0.60, N = 3; SE +/- 0.47, N = 3)

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): e 143.57, c 143.39, b 143.19, a 143.04, d 143.00 (SE +/- 0.27, N = 3; SE +/- 0.03, N = 3)

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (items/sec, More Is Better): b 126.07, e 125.80, c 125.77, d 125.00, a 124.99 (SE +/- 0.65, N = 3; SE +/- 0.30, N = 3)

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): e 1161.30, b 1156.18, d 1152.75, a 1152.25, c 1129.73 (SE +/- 10.56, N = 3; SE +/- 3.50, N = 3)

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): d 1011.71, e 994.77, a 975.81, b 941.79, c 927.77 (SE +/- 1.14, N = 3; SE +/- 10.53, N = 5)

Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): b 7.7518, a 7.6693, d 7.6435, e 7.6128, c 7.5635 (SE +/- 0.0698, N = 3; SE +/- 0.0734, N = 3)

Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a 11.25, b 11.25, d 11.24, c 11.24, e 11.24 (SE +/- 0.00, N = 3; SE +/- 0.00, N = 3)

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): e 144.75, b 143.84, c 143.29, d 143.24, a 142.92 (SE +/- 0.22, N = 3; SE +/- 0.17, N = 3)

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better): b 126.67, e 126.11, c 125.87, d 125.35, a 124.33 (SE +/- 0.39, N = 3; SE +/- 0.29, N = 3)

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): e 71.78, c 71.76, d 71.76, b 71.75, a 71.70 (SE +/- 0.03, N = 3; SE +/- 0.04, N = 3)

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): b 67.62, d 67.41, c 67.38, e 67.32, a 67.26 (SE +/- 0.14, N = 3; SE +/- 0.03, N = 3)

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): d 81.68, c 81.51, b 81.50, a 81.49, e 81.17 (SE +/- 0.22, N = 3; SE +/- 0.08, N = 3)

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better): c 50.89, b 50.85, d 50.44, e 50.09, a 49.79 (SE +/- 0.23, N = 3; SE +/- 0.14, N = 3)

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): c 14.92, b 14.92, e 14.89, d 14.83, a 14.82 (SE +/- 0.05, N = 3; SE +/- 0.06, N = 3)

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, More Is Better): b 14.00, c 14.00, e 13.90, d 13.87, a 13.70 (SE +/- 0.06, N = 3; SE +/- 0.05, N = 3)

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): e 197.57, b 197.48, c 197.28, d 197.20, a 196.62 (SE +/- 0.19, N = 3; SE +/- 0.14, N = 3)

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): a 125.92, c 125.63, d 125.57, b 125.45, e 125.41 (SE +/- 0.17, N = 3; SE +/- 0.31, N = 3)

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better): e 10.60, d 10.58, c 10.57, b 10.55, a 10.53 (SE +/- 0.03, N = 3; SE +/- 0.03, N = 3)

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better): c 9.4838, a 9.4444, b 9.4148, e 9.3782, d 9.3639 (SE +/- 0.0212, N = 3; SE +/- 0.0538, N = 3)

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): e 368.23, b 368.59, c 373.12, a 374.66, d 375.63 (SE +/- 3.10, N = 3; SE +/- 3.00, N = 3)

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a 104.12, b 104.39, e 104.52, c 105.19, d 105.54 (SE +/- 0.22, N = 3; SE +/- 0.16, N = 3)

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): a 9.1560, c 9.1596, d 9.1642, b 9.1990, e 9.2062 (SE +/- 0.0346, N = 3; SE +/- 0.0105, N = 3)

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): e 3.0404, d 3.0521, b 3.0531, c 3.0560, a 3.0776 (SE +/- 0.0056, N = 3; SE +/- 0.0044, N = 3)

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): e 27.85, c 27.88, b 27.92, a 27.95, d 27.96 (SE +/- 0.05, N = 3; SE +/- 0.01, N = 3)

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): b 7.9282, e 7.9452, c 7.9467, d 7.9961, a 7.9967 (SE +/- 0.0415, N = 3; SE +/- 0.0195, N = 3)

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): e 3.4336, b 3.4488, d 3.4601, a 3.4607, c 3.5300 (SE +/- 0.0320, N = 3; SE +/- 0.0108, N = 3)

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): d 0.9856, e 1.0024, a 1.0225, b 1.0591, c 1.0746 (SE +/- 0.0010, N = 3; SE +/- 0.0112, N = 5)

Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): b 512.69, a 518.25, d 520.01, e 522.00, c 525.39 (SE +/- 4.73, N = 3; SE +/- 4.93, N = 3)

Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a 88.87, b 88.87, d 88.93, c 88.94, e 88.98 (SE +/- 0.03, N = 3; SE +/- 0.02, N = 3)

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): e 27.62, b 27.79, c 27.90, d 27.91, a 27.97 (SE +/- 0.04, N = 3; SE +/- 0.03, N = 3)

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): b 7.8904, e 7.9248, c 7.9405, d 7.9734, a 8.0391 (SE +/- 0.0250, N = 3; SE +/- 0.0189, N = 3)

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): e 55.71, c 55.72, d 55.73, b 55.74, a 55.77 (SE +/- 0.02, N = 3; SE +/- 0.03, N = 3)

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): b 14.78, d 14.83, c 14.84, e 14.85, a 14.86 (SE +/- 0.03, N = 3; SE +/- 0.01, N = 3)

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): d 48.96, c 49.06, b 49.07, a 49.07, e 49.26 (SE +/- 0.13, N = 3; SE +/- 0.05, N = 3)

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): c 19.65, b 19.66, d 19.82, e 19.96, a 20.08 (SE +/- 0.09, N = 3; SE +/- 0.06, N = 3)

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): c 268.00, b 268.08, e 268.55, d 269.62, a 269.97 (SE +/- 0.87, N = 3; SE +/- 1.11, N = 3)

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): b 71.43, c 71.44, e 71.94, d 72.07, a 72.99 (SE +/- 0.34, N = 3; SE +/- 0.25, N = 3)

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): e 20.23, b 20.24, c 20.26, d 20.27, a 20.33 (SE +/- 0.02, N = 3; SE +/- 0.01, N = 3)

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): a 7.9348, c 7.9518, d 7.9571, b 7.9643, e 7.9673 (SE +/- 0.0106, N = 3; SE +/- 0.0196, N = 3)

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better): e 377.36, d 378.17, c 378.26, b 379.14, a 379.71 (SE +/- 1.11, N = 3; SE +/- 1.12, N = 3)

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better): c 105.44, a 105.88, b 106.21, e 106.63, d 106.80 (SE +/- 0.24, N = 3; SE +/- 0.61, N = 3)
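The ms/batch results above restate the items/sec results for the same configurations. For the Synchronous Single-Stream cases the two views are close to reciprocals of each other (assuming an effective batch size of 1, which the numbers here are consistent with); for example, ResNet-50 Baseline single-stream on run b reports 126.07 items/sec and 7.9282 ms/batch, and 1000 / 7.9282 is roughly 126.1. A quick check with values copied from the results above:

    # Reciprocal sanity check between the throughput and latency views
    # of one Synchronous Single-Stream result (batch size of 1 assumed).
    ms_per_batch = 7.9282            # ResNet-50, Baseline, run "b" (ms/batch)
    reported_items_per_sec = 126.07  # same configuration (items/sec)

    derived = 1000.0 / ms_per_batch
    print(f"derived items/sec: {derived:.2f} vs reported: {reported_items_per_sec}")
    # The small gap reflects averaging across separate trials, not a unit mismatch.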

44 Results Shown

Neural Magic DeepSparse (the first 22 entries are items/sec results, the remaining 22 are the corresponding ms/batch results):
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream
  ResNet-50, Baseline - Asynchronous Multi-Stream
  ResNet-50, Baseline - Synchronous Single-Stream
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream
  ResNet-50, Sparse INT8 - Synchronous Single-Stream
  Llama2 Chat 7b Quantized - Asynchronous Multi-Stream
  Llama2 Chat 7b Quantized - Synchronous Single-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream
  ResNet-50, Baseline - Asynchronous Multi-Stream
  ResNet-50, Baseline - Synchronous Single-Stream
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream
  ResNet-50, Sparse INT8 - Synchronous Single-Stream
  Llama2 Chat 7b Quantized - Asynchronous Multi-Stream
  Llama2 Chat 7b Quantized - Synchronous Single-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream