Intel Xeon Silver 4216 testing with a TYAN S7100AG2NR (V4.02 BIOS) and ASPEED on Debian 12 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2401144-NE-XEONJAN1706
xeon jan,
"Speedb 2.7 - Test: Random Fill Sync",
Higher Results Are Better
"a",
"b",
"c",
"Speedb 2.7 - Test: Random Fill",
Higher Results Are Better
"a",
"b",
"c",
"Speedb 2.7 - Test: Update Random",
Higher Results Are Better
"a",
"b",
"c",
"TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: GoogLeNet",
Higher Results Are Better
"a",
"b",
"c",
"Llama.cpp b1808 - Model: llama-2-7b.Q4_0.gguf",
Higher Results Are Better
"a",
"b",
"c",
"SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 4K",
Higher Results Are Better
"a",
"b",
"c",
"Speedb 2.7 - Test: Read While Writing",
Higher Results Are Better
"a",
"b",
"c",
"SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream",
Higher Results Are Better
"a",
"b",
"c",
"LeelaChessZero 0.30 - Backend: Eigen",
Higher Results Are Better
"a",
"b",
"c",
"CacheBench - Test: Read / Modify / Write",
Higher Results Are Better
"a",
"b",
"c",
"Speedb 2.7 - Test: Sequential Fill",
Higher Results Are Better
"a",
"b",
"c",
"LeelaChessZero 0.30 - Backend: BLAS",
Higher Results Are Better
"a",
"b",
"c",
"PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-50",
Higher Results Are Better
"a",29.509110874568
"b",29.640059216909
"c",28.886546484101
"SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p",
Higher Results Are Better
"a",
"b",
"c",
"SVT-AV1 1.8 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p",
Higher Results Are Better
"a",
"b",
"c",
"PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-50",
Higher Results Are Better
"a",21.568822410364
"b",21.267855889606
"c",21.733981332035
"Quicksilver 20230818 - Input: CTS2",
Higher Results Are Better
"a",
"b",
"c",
"SVT-AV1 1.8 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p",
Higher Results Are Better
"a",
"b",
"c",
"SVT-AV1 1.8 - Encoder Mode: Preset 4 - Input: Bosphorus 4K",
Higher Results Are Better
"a",
"b",
"c",
"SVT-AV1 1.8 - Encoder Mode: Preset 8 - Input: Bosphorus 4K",
Higher Results Are Better
"a",
"b",
"c",
"Speedb 2.7 - Test: Random Read",
Higher Results Are Better
"a",
"b",
"c",
"TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: ResNet-50",
Higher Results Are Better
"a",
"b",
"c",
"Y-Cruncher 0.8.3 - Pi Digits To Calculate: 1B",
Lower Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream",
Higher Results Are Better
"a",
"b",
"c",
"PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l",
Higher Results Are Better
"a",4.7246735977075
"b",4.7596244543049
"c",4.7035795124708
"Llama.cpp b1808 - Model: llama-2-13b.Q4_0.gguf",
Higher Results Are Better
"a",
"b",
"c",
"PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-152",
Higher Results Are Better
"a",8.1781407308881
"b",8.0871448450319
"c",8.0791956829767
"Neural Magic DeepSparse 1.6 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream",
Lower Results Are Better
"a",
"b",
"c",
"PyTorch 2.1 - Device: CPU - Batch Size: 16 - Model: ResNet-152",
Higher Results Are Better
"a",8.141681749525
"b",8.1320696610744
"c",8.2168268901175
"SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 4K",
Higher Results Are Better
"a",
"b",
"c",
"Speedb 2.7 - Test: Read Random Write Random",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream",
Lower Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream",
Higher Results Are Better
"a",
"b",
"c",
"TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: VGG-16",
Higher Results Are Better
"a",
"b",
"c",
"PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: ResNet-152",
Higher Results Are Better
"a",11.107127300243
"b",11.174509473602
"c",11.074178715269
"Neural Magic DeepSparse 1.6 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream",
Lower Results Are Better
"a",
"b",
"c",
"PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l",
Higher Results Are Better
"a",4.7900581462672
"b",4.7470649671512
"c",4.7473638739154
"PyTorch 2.1 - Device: CPU - Batch Size: 32 - Model: ResNet-50",
Higher Results Are Better
"a",21.55958672164
"b",21.641689572976
"c",21.740934963526
"TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: AlexNet",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream",
Lower Results Are Better
"a",
"b",
"c",
"PyTorch 2.1 - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l",
Higher Results Are Better
"a",6.9101438075342
"b",6.9002796176463
"c",6.9479563531711
"Quicksilver 20230818 - Input: CORAL2 P2",
Higher Results Are Better
"a",
"b",
"c",
"Llama.cpp b1808 - Model: llama-2-70b-chat.Q5_0.gguf",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream",
Lower Results Are Better
"a",
"b",
"c",
"TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: AlexNet",
Higher Results Are Better
"a",
"b",
"c",
"Quicksilver 20230818 - Input: CORAL2 P1",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream",
Higher Results Are Better
"a",
"b",
"c",
"TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: GoogLeNet",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream",
Lower Results Are Better
"a",
"b",
"c",
"Y-Cruncher 0.8.3 - Pi Digits To Calculate: 500M",
Lower Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream",
Lower Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream",
Lower Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream",
Higher Results Are Better
"a",
"b",
"c",
"TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream",
Lower Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream",
Lower Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream",
Lower Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream",
Higher Results Are Better
"a",
"b",
"c",
"CacheBench - Test: Write",
Higher Results Are Better
"a",
"b",
"c",
"Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream",
Lower Results Are Better
"a",
"b",
"c",
"CacheBench - Test: Read",
Higher Results Are Better
"a",
"b",
"c",
"TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: VGG-16",
Higher Results Are Better
"a",
"b",
"c",