xeon jan

Intel Xeon Silver 4216 testing with a TYAN S7100AG2NR (V4.02 BIOS) and ASPEED on Debian 12 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2401144-NE-XEONJAN1706
Result Identifier    Date Run      Test Duration
a                    January 14    1 Hour, 57 Minutes
b                    January 14    1 Hour, 56 Minutes
c                    January 15    1 Hour, 57 Minutes
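Most tests in this result file show a small run-to-run spread across a, b, and c, but a few (notably Speedb's Random Fill Sync) vary considerably between runs. A minimal sketch of how that noise can be quantified with the coefficient of variation; the two sample tests and their values are taken from this result file, and the comparison itself is illustrative, not part of the Phoronix Test Suite:

```python
import statistics

# Per-run values (a, b, c) for two tests from this result file:
# one noisy (Speedb Random Fill Sync, Op/s) and one very stable
# (CacheBench Write, MB/s).
samples = {
    "Speedb 2.7 Random Fill Sync": [8962, 13397, 10150],
    "CacheBench Write":            [23161.61, 23134.97, 23165.59],
}

def cv_percent(values):
    """Coefficient of variation: sample stddev as a percentage of the mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

for name, vals in samples.items():
    print(f"{name}: CV = {cv_percent(vals):.2f}%")
```

A high coefficient of variation is the kind of signal a result viewer can use to flag or hide "noisy" results where a single run tells you little.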


System Configuration (identical for runs a, b, and c):

  Processor:         Intel Xeon Silver 4216 @ 3.20GHz (16 Cores / 32 Threads)
  Motherboard:       TYAN S7100AG2NR (V4.02 BIOS)
  Chipset:           Intel Sky Lake-E DMI3 Registers
  Memory:            6 x 8 GB DDR4-2400MT/s
  Disk:              240GB Corsair Force MP500
  Graphics:          ASPEED
  Audio:             Realtek ALC892
  Network:           2 x Intel I350
  OS:                Debian 12
  Kernel:            6.1.0-11-amd64 (x86_64)
  Display Server:    X Server
  Compiler:          GCC 12.2.0
  File-System:       ext4
  Screen Resolution: 1024x768

Results (values listed as runs a / b / c):

Quicksilver 20230818 - Figure Of Merit (higher is better)
  Input: CTS2                     8446000 / 8497000 / 8607000
  Input: CORAL2 P1                10170000 / 10110000 / 10150000
  Input: CORAL2 P2                9287000 / 9354000 / 9308000

PyTorch 2.1 - Device: CPU - batches/sec (higher is better)
  Batch Size 1 - ResNet-50        29.51 / 29.64 / 28.89
  Batch Size 16 - ResNet-50       21.57 / 21.27 / 21.73
  Batch Size 32 - ResNet-50       21.56 / 21.64 / 21.74
  Batch Size 1 - ResNet-152       11.11 / 11.17 / 11.07
  Batch Size 16 - ResNet-152      8.14 / 8.13 / 8.22
  Batch Size 32 - ResNet-152      8.18 / 8.09 / 8.08
  Batch Size 1 - Efficientnet_v2_l   6.91 / 6.90 / 6.95
  Batch Size 16 - Efficientnet_v2_l  4.72 / 4.76 / 4.70
  Batch Size 32 - Efficientnet_v2_l  4.79 / 4.75 / 4.75

TensorFlow 2.12 - Device: CPU - images/sec (higher is better)
  Batch Size 1 - AlexNet          18.21 / 18.25 / 18.35
  Batch Size 1 - GoogLeNet        17.26 / 15.86 / 16.19
  Batch Size 1 - ResNet-50        4.81 / 4.87 / 4.88
  Batch Size 1 - VGG-16           3.27 / 3.24 / 3.26
  Batch Size 16 - AlexNet         83.17 / 82.83 / 83.33
  Batch Size 16 - GoogLeNet       47.63 / 47.51 / 47.36
  Batch Size 16 - ResNet-50       16.22 / 16.25 / 16.21
  Batch Size 16 - VGG-16          5.96 / 5.96 / 5.96

LeelaChessZero 0.30 - Nodes Per Second (higher is better)
  Backend: BLAS                   37 / 38 / 37
  Backend: Eigen                  33 / 33 / 32

Llama.cpp b1808 - Tokens Per Second (higher is better)
  Model: llama-2-7b.Q4_0.gguf        16.95 / 15.89 / 16.55
  Model: llama-2-13b.Q4_0.gguf       8.70 / 8.73 / 8.62
  Model: llama-2-70b-chat.Q5_0.gguf  1.50 / 1.51 / 1.50

CacheBench - MB/s (higher is better)
  Test: Read                      6062.37 / 6058.74 / 6057.99
  Test: Write                     23161.61 / 23134.97 / 23165.59
  Test: Read / Modify / Write     61680.56 / 59877.34 / 60843.70

Neural Magic DeepSparse 1.6 - Scenario: Asynchronous Multi-Stream
(ms/batch: lower is better; items/sec: higher is better)
  Model                                                          ms/batch (a / b / c)           items/sec (a / b / c)
  BERT-Large, NLP Question Answering                             845.95 / 846.00 / 845.20       9.4169 / 9.4557 / 9.4646
  BERT-Large, NLP Question Answering, Sparse INT8                63.48 / 63.39 / 63.53          125.96 / 126.10 / 125.77
  NLP Document Classification, oBERT base uncased on IMDB        1061.89 / 1073.13 / 1060.60    7.5161 / 7.2935 / 7.5416
  NLP Token Classification, BERT base uncased conll2003          1071.41 / 1063.84 / 1072.98    7.3201 / 7.5195 / 7.2847
  NLP Text Classification, BERT base uncased SST2, Sparse INT8   28.14 / 27.85 / 27.95          283.96 / 286.91 / 285.94
  NLP Text Classification, DistilBERT mnli                       123.22 / 123.89 / 123.78       64.91 / 64.54 / 64.62
  CV Detection, YOLOv5s COCO                                     165.97 / 165.69 / 165.88       48.19 / 48.27 / 48.22
  CV Detection, YOLOv5s COCO, Sparse INT8                        164.63 / 164.59 / 164.08       48.57 / 48.60 / 48.65
  CV Segmentation, 90% Pruned YOLACT Pruned                      552.53 / 549.40 / 553.47       14.38 / 14.50 / 14.30
  CV Classification, ResNet-50 ImageNet                          68.76 / 68.95 / 68.68          116.18 / 115.88 / 116.40
  ResNet-50, Baseline                                            68.77 / 68.91 / 68.89          116.28 / 116.04 / 116.08
  ResNet-50, Sparse INT8                                         10.90 / 10.84 / 10.87          732.36 / 736.87 / 734.42

Speedb 2.7 - Op/s (higher is better)
  Test: Random Fill               379730 / 298026 / 377206
  Test: Random Fill Sync          8962 / 13397 / 10150
  Test: Sequential Fill           565169 / 558662 / 549382
  Test: Update Random             172891 / 163726 / 151137
  Test: Random Read               53271554 / 52915603 / 52443533
  Test: Read While Writing        3897119 / 3867484 / 4014397
  Test: Read Random Write Random  1640953 / 1658156 / 1656172

SVT-AV1 1.8 - Frames Per Second (higher is better)
  Preset 4 - Bosphorus 4K         2.462 / 2.423 / 2.421
  Preset 8 - Bosphorus 4K         24.38 / 23.99 / 24.07
  Preset 12 - Bosphorus 4K        82.81 / 78.54 / 82.22
  Preset 13 - Bosphorus 4K        82.39 / 82.62 / 83.27
  Preset 4 - Bosphorus 1080p      7.326 / 7.379 / 7.251
  Preset 8 - Bosphorus 1080p      45.63 / 46.69 / 45.94
  Preset 12 - Bosphorus 1080p     165.27 / 170.98 / 168.01
  Preset 13 - Bosphorus 1080p     188.99 / 184.72 / 187.02

Y-Cruncher 0.8.3 - Seconds (lower is better)
  Pi Digits To Calculate: 500M    20.62 / 20.68 / 20.58
  Pi Digits To Calculate: 1B      46.09 / 45.45 / 45.93
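Result files like this one can be condensed into a single summary figure via an overall geometric mean across tests. A minimal sketch of that calculation, normalizing each test against run a so that units cancel; the three sample tests and their values come from this result file, while the choice of baseline is an illustrative assumption:

```python
import math

# Per-run scores for three higher-is-better tests from this result file:
# Quicksilver CTS2, PyTorch ResNet-50 (Batch Size 16), Speedb Random Read.
runs = {
    "a": [8446000, 21.57, 53271554],
    "b": [8497000, 21.27, 52915603],
    "c": [8607000, 21.73, 52443533],
}

def geomean(values):
    """nth root of the product, computed in log space for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Normalize each test to run "a" (so mixed units cancel), then summarize.
baseline = runs["a"]
for name, vals in runs.items():
    relative = [v / b for v, b in zip(vals, baseline)]
    print(f"{name}: overall geomean vs. a = {geomean(relative):.4f}")
```

The geometric mean is preferred over the arithmetic mean here because it weights each test's relative change equally regardless of that test's raw magnitude.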