8490h 1s

Intel Xeon Platinum 8490H testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2307296-NE-8490H1S1663
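
A minimal sketch of that workflow on an Ubuntu 22.04 host, assuming the phoronix-test-suite package from the distribution repositories (a .deb or Git checkout from phoronix-test-suite.com works the same way):

  # Install the Phoronix Test Suite (package name assumed from Ubuntu's universe repository)
  sudo apt-get install phoronix-test-suite

  # Fetch result file 2307296-NE-8490H1S1663, run the same tests locally,
  # and add this machine as a new identifier alongside runs a through e
  phoronix-test-suite benchmark 2307296-NE-8490H1S1663

The benchmark subcommand should prompt for a result identifier for the local run and offer to save or upload the combined comparison once testing finishes.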

The tests in this result file fall within the following categories: CPU Massive (2 tests), Creator Workloads (2 tests), Database Test Suite (3 tests), Multi-Core (2 tests), Server (3 tests), and Common Workstation Benchmarks (2 tests).

Run Management

Result Identifier    Date Run         Test Duration
a                    July 28 2023     1 Hour, 52 Minutes
b                    July 28 2023     2 Hours, 53 Minutes
c                    July 28 2023     1 Hour, 26 Minutes
d                    July 28 2023     1 Hour, 25 Minutes
e                    July 29 2023     1 Hour, 25 Minutes
Average                               1 Hour, 48 Minutes
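
For reference, the 1 Hour, 48 Minutes figure matches the mean of the five run durations: (112 + 173 + 86 + 85 + 85) / 5 = 108.2 minutes, or roughly 1 hour and 48 minutes.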

8490h 1s, "BRL-CAD 7.36 - VGR Performance Metric", Higher Results Are Better "a", "b", "c", "d", "e", "Crypto++ 8.8 - Test: All Algorithms", Higher Results Are Better "a", "Crypto++ 8.8 - Test: Keyed Algorithms", Higher Results Are Better "a", "Blender 3.6 - Blend File: Barbershop - Compute: CPU-Only", Lower Results Are Better "a", "b", "c", "d", "e", "Crypto++ 8.8 - Test: Unkeyed Algorithms", Higher Results Are Better "a", "Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b",491.0668,485.666,481.8226 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b",60.7709,61.761,62.2548 "c", "d", "e", "Apache Cassandra 4.1.3 - Test: Writes", Higher Results Are Better "a", "b", "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b",46.9835,47.2374,47.5302 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b",21.2809,21.1664,21.0361 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b", "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b", "c", "d", "e", "Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:5", Higher Results Are Better "a", "b",14334901.06,14553637.61,14541963.64 "c", "d", "e", "Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:100", Higher Results Are Better "a", "b",14179386.08,14682820.88,14433895.67 "c", "d", "e", "Dragonflydb 1.6.2 - Clients Per Thread: 10 - Set To Get Ratio: 1:10", Higher Results Are Better "a", "b",14236827.92,14347678.31,14292281.32 "c", "d", "e", "Blender 3.6 - Blend File: Pabellon Barcelona - Compute: CPU-Only", Lower Results Are Better "a", "b", "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b",34.4295,34.4588,34.5018 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b",870.4772,869.2706,868.6628 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b",4.7791,4.8087,4.7527 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b",209.1462,207.8506,210.3094 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b",396.8915,395.8967,392.8414 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b",75.5738,75.7505,76.3212 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous 
Multi-Stream", Lower Results Are Better "a", "b",15.8121,15.8445,15.8383 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b",1893.4355,1889.2889,1890.445 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b",519.2032,517.3496,514.9375 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b",57.5868,57.6286,58.1521 "c", "d", "e", "Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10", Higher Results Are Better "a", "b", "c", "d", "e", "Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5", Higher Results Are Better "a", "b", "c", "d", "e", "Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:1", Higher Results Are Better "a", "b", "c", "d", "e", "Blender 3.6 - Blend File: Classroom - Compute: CPU-Only", Lower Results Are Better "a", "b", "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b",156.2628,156.3135,156.0564 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b",191.9325,191.8605,192.1875 "c", "d", "e", "Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5", Higher Results Are Better "a", "b", "c", "d", "e", "Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10", Higher Results Are Better "a", "b", "c", "d", "e", "Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:1", Higher Results Are Better "a", "b", "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b",28.2761,28.366,28.1783 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b",35.3577,35.2457,35.4804 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b",47.3603,46.1885,46.0833 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b",632.3595,649.1548,650.6343 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b",24.9993,24.9846,24.9913 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b",39.9479,39.9658,39.9613 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream", Lower Results Are 
Better "a", "b",11.4768,11.4433,11.4759 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b",87.0636,87.3302,87.0808 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b",5.5667,5.5645,5.5465 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b",179.4229,179.5182,180.1038 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b",61.6163,61.6205,61.2939 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b",486.6855,486.5461,489.1622 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b",8.1256,8.0293,8.0862 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b",122.9849,124.4637,123.5857 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b",5.3909,5.4052,5.3398 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b",185.3252,184.8247,187.0859 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b",1.3349,1.3378,1.3387 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b",747.9747,746.3364,745.8107 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b",38.3577,38.2615,38.392 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b",781.6289,783.5762,780.9149 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b",3.7104,3.6798,3.6982 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b",269.1984,271.4047,270.0854 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b",86.3993,86.1227,86.2303 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b",347.0506,348.1693,347.7458 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b",86.0126,86.1214,85.6657 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream", Higher Results 
Are Better "a", "b",348.1034,348.1907,349.8678 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b",5.4159,5.4457,5.4405 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b",184.5483,183.5328,183.6992 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b",38.4084,38.3726,38.337 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b",780.5745,781.2978,782.0348 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b",3.6807,3.6991,3.6815 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b",271.3413,269.9591,271.2603 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b",5.2564,5.265,5.2668 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b",5685.0857,5675.7993,5674.6607 "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b", "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b", "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream", Lower Results Are Better "a", "b", "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream", Higher Results Are Better "a", "b", "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b", "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b", "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream", Lower Results Are Better "a", "b", "c", "d", "e", "Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream", Higher Results Are Better "a", "b", "c", "d", "e", "Blender 3.6 - Blend File: Fishy Cat - Compute: CPU-Only", Lower Results Are Better "a", "b", "c", "d", "e", "Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only", Lower Results Are Better "a", "b", "c", "d", "e", "Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5", "a", "b", "c", "d", "e", "Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:1", "a", "b", "c", "d", "e", "Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10", "a", 
"b", "c", "d", "e", "Dragonflydb 1.6.2 - Clients Per Thread: 20 - Set To Get Ratio: 1:5", "a", "b", "c", "d", "e", "Dragonflydb 1.6.2 - Clients Per Thread: 20 - Set To Get Ratio: 1:10", "a", "b", "c", "d", "e", "Dragonflydb 1.6.2 - Clients Per Thread: 20 - Set To Get Ratio: 1:100", "a", "b", "c", "d", "e",