AMD EPYC 8534P 64-Core testing with an AMD Cinnabar (RCB1009C BIOS) motherboard and ASPEED graphics on Ubuntu 23.10 via the Phoronix Test Suite.

System configuration (identical for runs a, b, and c):
  Processor: AMD EPYC 8534P 64-Core @ 2.30GHz (64 Cores / 128 Threads)
  Motherboard: AMD Cinnabar (RCB1009C BIOS)
  Chipset: AMD Device 14a4
  Memory: 6 x 32 GB DRAM-4800MT/s Samsung M321R4GA0BB0-CQKMG
  Disk: 3201GB Micron_7450_MTFDKCB3T2TFS
  Graphics: ASPEED
  Network: 2 x Broadcom NetXtreme BCM5720 PCIe
  OS: Ubuntu 23.10
  Kernel: 6.5.0-5-generic (x86_64)
  Desktop: GNOME Shell
  Display Server: X Server 1.21.1.7
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 640x480
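Runs a, b, and c use the same hardware and software stack, so the figures below mainly show run-to-run variance rather than a meaningful ranking. As a minimal, illustrative sketch (the helper name is made up and the hard-coded values are copied from the Quicksilver CTS2 row below), the spread between the best and worst run for a single result can be computed as:

  # Illustrative only: percent spread between the best and worst of three runs.
  # The sample values are the Quicksilver CTS2 Figure Of Merit results below.
  def spread_pct(values):
      lo, hi = min(values), max(values)
      return (hi - lo) / lo * 100.0

  quicksilver_cts2 = {"a": 16320000, "b": 16360000, "c": 16293333}
  print(f"{spread_pct(quicksilver_cts2.values()):.2f}%")   # prints ~0.41%

Applied across the table, this makes it easy to spot the few results where the runs diverge more noticeably, such as the llama-2-7b.Q4_0 figures near the end.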
Quicksilver 20230818 - Figure Of Merit (higher is better)
  Input: CTS2         a: 16320000    b: 16360000    c: 16293333
  Input: CORAL2 P1    a: 21300000    b: 21330000    c: 21316667
  Input: CORAL2 P2    a: 16250000    b: 16260000    c: 16193333

SVT-AV1 1.8 - Frames Per Second (higher is better)
  Preset 4  - Bosphorus 4K       a: 6.697     b: 6.770     c: 6.721
  Preset 8  - Bosphorus 4K       a: 67.32     b: 68.42     c: 67.78
  Preset 12 - Bosphorus 4K       a: 192.63    b: 193.58    c: 193.43
  Preset 13 - Bosphorus 4K       a: 195.90    b: 191.93    c: 194.47
  Preset 4  - Bosphorus 1080p    a: 17.12     b: 17.63     c: 17.57
  Preset 8  - Bosphorus 1080p    a: 129.02    b: 133.52    c: 130.35
  Preset 12 - Bosphorus 1080p    a: 506.25    b: 505.54    c: 503.51
  Preset 13 - Bosphorus 1080p    a: 599.85    b: 599.42    c: 603.60

Y-Cruncher 0.8.3 - Seconds (lower is better)
  Pi Digits To Calculate: 1B      a: 9.946    b: 9.915    c: 9.929
  Pi Digits To Calculate: 500M    a: 4.897    b: 4.914    c: 4.918

PyTorch 2.1 - Device: CPU - batches/sec (higher is better)
  Batch Size: 1   - ResNet-50            a: 45.12    b: 45.60    c: 45.33
  Batch Size: 1   - ResNet-152           a: 16.86    b: 16.44    c: 16.58
  Batch Size: 16  - ResNet-50            a: 36.63    b: 36.13    c: 36.68
  Batch Size: 16  - ResNet-152           a: 14.67    b: 14.55    c: 14.67
  Batch Size: 512 - ResNet-50            a: 36.44    b: 37.28    c: 36.71
  Batch Size: 512 - ResNet-152           a: 14.38    b: 14.79    c: 14.55
  Batch Size: 1   - Efficientnet_v2_l    a: 9.38     b: 9.59     c: 9.61
  Batch Size: 16  - Efficientnet_v2_l    a: 6.02     b: 6.04     c: 6.02
  Batch Size: 512 - Efficientnet_v2_l    a: 6.04     b: 6.05     c: 6.03
TensorFlow 2.12 - Device: CPU - Model: ResNet-50 - images/sec (higher is better)
  Batch Size: 1      a: 5.92      b: 5.98      c: 5.94
  Batch Size: 16     a: 53.65     b: 53.61     c: 53.62
  Batch Size: 512    a: 103.30    b: 103.20    c: 103.19

Neural Magic DeepSparse 1.6 - NLP Document Classification, oBERT base uncased on IMDB
  Asynchronous Multi-Stream - items/sec (higher is better)    a: 35.65     b: 35.69     c: 35.59
  Asynchronous Multi-Stream - ms/batch (lower is better)      a: 882.43    b: 882.88    c: 883.11
  Synchronous Single-Stream - items/sec (higher is better)    a: 29.64     b: 29.64     c: 29.58
  Synchronous Single-Stream - ms/batch (lower is better)      a: 33.73     b: 33.73     c: 33.80
Neural Magic DeepSparse 1.6 - NLP Text Classification, BERT base uncased SST2, Sparse INT8
  Asynchronous Multi-Stream - items/sec (higher is better)    a: 1439.77    b: 1441.04    c: 1442.40
  Asynchronous Multi-Stream - ms/batch (lower is better)      a: 22.19      b: 22.18      c: 22.16
  Synchronous Single-Stream - items/sec (higher is better)    a: 202.16     b: 202.11     c: 198.37
  Synchronous Single-Stream - ms/batch (lower is better)      a: 4.9412     b: 4.9419     c: 5.0360

Neural Magic DeepSparse 1.6 - ResNet-50, Baseline
  Asynchronous Multi-Stream - items/sec (higher is better)    a: 476.84    b: 476.68    c: 477.06
  Asynchronous Multi-Stream - ms/batch (lower is better)      a: 67.00     b: 67.00     c: 66.97
  Synchronous Single-Stream - items/sec (higher is better)    a: 187.45    b: 187.87    c: 187.46
  Synchronous Single-Stream - ms/batch (lower is better)      a: 5.3282    b: 5.3168    c: 5.3279
Neural Magic DeepSparse 1.6 - ResNet-50, Sparse INT8
  Asynchronous Multi-Stream - items/sec (higher is better)    a: 3766.13    b: 3749.03    c: 3754.34
  Asynchronous Multi-Stream - ms/batch (lower is better)      a: 8.4779     b: 8.5134     c: 8.5026
  Synchronous Single-Stream - items/sec (higher is better)    a: 822.75     b: 823.45     c: 821.13
  Synchronous Single-Stream - ms/batch (lower is better)      a: 1.2115     b: 1.2107     c: 1.2141

Neural Magic DeepSparse 1.6 - CV Detection, YOLOv5s COCO
  Asynchronous Multi-Stream - items/sec (higher is better)    a: 211.56    b: 212.39    c: 211.91
  Asynchronous Multi-Stream - ms/batch (lower is better)      a: 150.77    b: 150.16    c: 150.48
  Synchronous Single-Stream - items/sec (higher is better)    a: 151.49    b: 151.56    c: 152.14
  Synchronous Single-Stream - ms/batch (lower is better)      a: 6.5900    b: 6.5866    c: 6.5609
Neural Magic DeepSparse 1.6 - BERT-Large, NLP Question Answering
  Asynchronous Multi-Stream - items/sec (higher is better)    a: 45.58     b: 45.64     c: 45.53
  Asynchronous Multi-Stream - ms/batch (lower is better)      a: 695.38    b: 694.56    c: 694.93
  Synchronous Single-Stream - items/sec (higher is better)    a: 32.16     b: 32.25     c: 32.40
  Synchronous Single-Stream - ms/batch (lower is better)      a: 31.08     b: 30.99     c: 30.85

Neural Magic DeepSparse 1.6 - CV Classification, ResNet-50 ImageNet
  Asynchronous Multi-Stream - items/sec (higher is better)    a: 477.00    b: 477.28    c: 477.25
  Asynchronous Multi-Stream - ms/batch (lower is better)      a: 66.97     b: 66.95     c: 66.95
  Synchronous Single-Stream - items/sec (higher is better)    a: 187.62    b: 187.48    c: 184.66
  Synchronous Single-Stream - ms/batch (lower is better)      a: 5.3237    b: 5.3280    c: 5.4112

Neural Magic DeepSparse 1.6 - CV Detection, YOLOv5s COCO, Sparse INT8
  Asynchronous Multi-Stream - items/sec (higher is better)    a: 213.60    b: 214.55    c: 213.87
  Asynchronous Multi-Stream - ms/batch (lower is better)      a: 149.39    b: 148.78    c: 149.21
  Synchronous Single-Stream - items/sec (higher is better)    a: 151.25    b: 152.04    c: 153.94
  Synchronous Single-Stream - ms/batch (lower is better)      a: 6.6050    b: 6.5703    c: 6.4896
Neural Magic DeepSparse 1.6 - NLP Text Classification, DistilBERT mnli
  Asynchronous Multi-Stream - items/sec (higher is better)    a: 313.99    b: 313.80    c: 313.75
  Asynchronous Multi-Stream - ms/batch (lower is better)      a: 101.65    b: 101.73    c: 101.79
  Synchronous Single-Stream - items/sec (higher is better)    a: 136.47    b: 136.53    c: 137.02
  Synchronous Single-Stream - ms/batch (lower is better)      a: 7.3198    b: 7.3167    c: 7.2910

Neural Magic DeepSparse 1.6 - CV Segmentation, 90% Pruned YOLACT Pruned
  Asynchronous Multi-Stream - items/sec (higher is better)    a: 62.39     b: 62.16     c: 62.14
  Asynchronous Multi-Stream - ms/batch (lower is better)      a: 508.89    b: 512.17    c: 510.48
  Synchronous Single-Stream - items/sec (higher is better)    a: 44.12     b: 44.00     c: 44.06
  Synchronous Single-Stream - ms/batch (lower is better)      a: 22.64     b: 22.70     c: 22.67
Neural Magic DeepSparse 1.6 - BERT-Large, NLP Question Answering, Sparse INT8
  Asynchronous Multi-Stream - items/sec (higher is better)    a: 688.57    b: 685.11    c: 685.74
  Asynchronous Multi-Stream - ms/batch (lower is better)      a: 46.41     b: 46.64     c: 46.60
  Synchronous Single-Stream - items/sec (higher is better)    a: 63.98     b: 63.70     c: 64.05
  Synchronous Single-Stream - ms/batch (lower is better)      a: 15.61     b: 15.68     c: 15.60

Neural Magic DeepSparse 1.6 - NLP Token Classification, BERT base uncased conll2003
  Asynchronous Multi-Stream - items/sec (higher is better)    a: 35.54     b: 35.61     c: 35.65
  Asynchronous Multi-Stream - ms/batch (lower is better)      a: 885.82    b: 883.46    c: 882.97
  Synchronous Single-Stream - items/sec (higher is better)    a: 29.53     b: 29.58     c: 29.63
  Synchronous Single-Stream - ms/batch (lower is better)      a: 33.86     b: 33.79     c: 33.74
Speedb 2.7 - Op/s (higher is better)
  Test: Random Read                 a: 287702223    b: 298445565    c: 298332203
  Test: Update Random               a: 355264       b: 345515       c: 351394
  Test: Read While Writing          a: 15381020     b: 15284219     c: 15442420
  Test: Read Random Write Random    a: 2506528      b: 2514196      c: 2508082

Llama.cpp b1808 - Tokens Per Second (higher is better)
  Model: llama-2-7b.Q4_0.gguf          a: 27.95    b: 27.94    c: 26.50
  Model: llama-2-13b.Q4_0.gguf         a: 16.65    b: 16.72    c: 16.63
  Model: llama-2-70b-chat.Q5_0.gguf    a: 3.40     b: 3.40     c: 3.41

Llamafile 0.6 - Acceleration: CPU - Tokens Per Second (higher is better)
  Test: llava-v1.5-7b-q4                    a: 21.94    b: 21.89    c: 21.97
  Test: mistral-7b-instruct-v0.2.Q8_0       a: 14.27    b: 14.29    c: 14.27
  Test: wizardcoder-python-34b-v1.0.Q6_K    a: 5.50     b: 5.46     c: 5.50
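With this many independent metrics, in a mix of higher-is-better and lower-is-better units, a single per-run summary figure can help when skimming. One illustrative way to produce it (not anything the report above computed) is to normalize each result to the best of the three runs for that test and take the geometric mean of the ratios; the sketch below hard-codes two of the higher-is-better rows from above purely as sample input.

  import math

  # Illustrative aggregation sketch: ratio of each run to the best run per test,
  # then the geometric mean of those ratios per run. Sample values are copied
  # from the Llama.cpp and Speedb rows above (both higher-is-better); a real
  # summary would also invert lower-is-better metrics before normalizing.
  results = {
      "Llama.cpp llama-2-7b.Q4_0 (tokens/s)": {"a": 27.95, "b": 27.94, "c": 26.50},
      "Speedb Random Read (op/s)": {"a": 287702223, "b": 298445565, "c": 298332203},
  }

  runs = ("a", "b", "c")
  ratios = {run: [] for run in runs}
  for per_run in results.values():
      best = max(per_run.values())
      for run in runs:
          ratios[run].append(per_run[run] / best)

  for run in runs:
      geo_mean = math.exp(sum(map(math.log, ratios[run])) / len(ratios[run]))
      print(f"run {run}: {geo_mean:.3f} of best")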