AMD EPYC 9654 March
2 x AMD EPYC 9654 96-Core testing with an AMD Titanite_4G (RTI1004D BIOS) and ASPEED on Ubuntu 23.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2303292-NE-EPYC9654A80.
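Since this result set is public on OpenBenchmarking.org, a comparable run can be reproduced locally with the Phoronix Test Suite. A minimal sketch, assuming `phoronix-test-suite` is installed and on the PATH; the result ID is taken from the export URL above, and `pts/compress-zstd` is used as an example of running one test profile from the set on its own:

```shell
# Result ID from the OpenBenchmarking.org export URL above.
RESULT_ID=2303292-NE-EPYC9654A80

if command -v phoronix-test-suite >/dev/null 2>&1; then
    # Passing a public result ID runs the same tests locally and merges
    # your numbers into a side-by-side comparison with the uploaded result.
    phoronix-test-suite benchmark "$RESULT_ID"

    # Alternatively, run a single test profile from the set, e.g. Zstd:
    phoronix-test-suite benchmark pts/compress-zstd
else
    echo "phoronix-test-suite not installed; skipping" >&2
fi
```

Note that a full run of this suite takes many hours; running individual test profiles is the practical way to spot-check a single result.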
SPECFEM3D
Model: Mount St. Helens
SPECFEM3D
Model: Layered Halfspace
SPECFEM3D
Model: Tomographic Model
SPECFEM3D
Model: Homogeneous Halfspace
SPECFEM3D
Model: Water-layered Halfspace
Zstd Compression
Compression Level: 8 - Compression Speed
Zstd Compression
Compression Level: 8 - Decompression Speed
Zstd Compression
Compression Level: 12 - Compression Speed
Zstd Compression
Compression Level: 12 - Decompression Speed
Zstd Compression
Compression Level: 19 - Compression Speed
Zstd Compression
Compression Level: 19 - Decompression Speed
Zstd Compression
Compression Level: 8, Long Mode - Compression Speed
Zstd Compression
Compression Level: 8, Long Mode - Decompression Speed
Zstd Compression
Compression Level: 19, Long Mode - Compression Speed
Zstd Compression
Compression Level: 19, Long Mode - Decompression Speed
John The Ripper
Test: bcrypt
John The Ripper
Test: WPA PSK
John The Ripper
Test: Blowfish
John The Ripper
Test: HMAC-SHA512
John The Ripper
Test: MD5
dav1d
Video Input: Chimera 1080p
dav1d
Video Input: Summer Nature 4K
dav1d
Video Input: Summer Nature 1080p
dav1d
Video Input: Chimera 1080p 10-bit
Embree
Binary: Pathtracer - Model: Crown
Embree
Binary: Pathtracer ISPC - Model: Crown
Embree
Binary: Pathtracer - Model: Asian Dragon
Embree
Binary: Pathtracer - Model: Asian Dragon Obj
Embree
Binary: Pathtracer ISPC - Model: Asian Dragon
Embree
Binary: Pathtracer ISPC - Model: Asian Dragon Obj
Timed FFmpeg Compilation
Time To Compile
Timed Godot Game Engine Compilation
Time To Compile
Timed LLVM Compilation
Build System: Ninja
Timed LLVM Compilation
Build System: Unix Makefiles
Timed Node.js Compilation
Time To Compile
Build2
Time To Compile
FFmpeg
Encoder: libx264 - Scenario: Live
FFmpeg
Encoder: libx264 - Scenario: Live
FFmpeg
Encoder: libx265 - Scenario: Live
FFmpeg
Encoder: libx265 - Scenario: Live
FFmpeg
Encoder: libx264 - Scenario: Upload
FFmpeg
Encoder: libx264 - Scenario: Upload
FFmpeg
Encoder: libx265 - Scenario: Upload
FFmpeg
Encoder: libx265 - Scenario: Upload
FFmpeg
Encoder: libx264 - Scenario: Platform
FFmpeg
Encoder: libx264 - Scenario: Platform
FFmpeg
Encoder: libx265 - Scenario: Platform
FFmpeg
Encoder: libx265 - Scenario: Platform
FFmpeg
Encoder: libx264 - Scenario: Video On Demand
FFmpeg
Encoder: libx264 - Scenario: Video On Demand
FFmpeg
Encoder: libx265 - Scenario: Video On Demand
FFmpeg
Encoder: libx265 - Scenario: Video On Demand
OpenSSL
Algorithm: SHA256
OpenSSL
Algorithm: SHA512
OpenSSL
Algorithm: RSA4096
OpenSSL
Algorithm: RSA4096
OpenSSL
Algorithm: ChaCha20
OpenSSL
Algorithm: AES-128-GCM
OpenSSL
Algorithm: AES-256-GCM
OpenSSL
Algorithm: ChaCha20-Poly1305
ClickHouse
100M Rows Hits Dataset, First Run / Cold Cache
ClickHouse
100M Rows Hits Dataset, Second Run
ClickHouse
100M Rows Hits Dataset, Third Run
Memcached
Set To Get Ratio: 1:5
Memcached
Set To Get Ratio: 1:10
Memcached
Set To Get Ratio: 1:100
GROMACS
Implementation: MPI CPU - Input: water_GMX50_bare
Darmstadt Automotive Parallel Heterogeneous Suite
Backend: OpenMP - Kernel: NDT Mapping
Darmstadt Automotive Parallel Heterogeneous Suite
Backend: OpenMP - Kernel: Points2Image
Darmstadt Automotive Parallel Heterogeneous Suite
Backend: OpenMP - Kernel: Euclidean Cluster
MariaDB
Clients: 512
MariaDB
Clients: 1024
MariaDB
Clients: 2048
MariaDB
Clients: 4096
MariaDB
Clients: 8192
PostgreSQL
Scaling Factor: 1 - Clients: 800 - Mode: Read Only
PostgreSQL
Scaling Factor: 1 - Clients: 800 - Mode: Read Only - Average Latency
PostgreSQL
Scaling Factor: 1 - Clients: 1000 - Mode: Read Only
PostgreSQL
Scaling Factor: 1 - Clients: 1000 - Mode: Read Only - Average Latency
PostgreSQL
Scaling Factor: 1 - Clients: 800 - Mode: Read Write
PostgreSQL
Scaling Factor: 1 - Clients: 800 - Mode: Read Write - Average Latency
PostgreSQL
Scaling Factor: 1 - Clients: 1000 - Mode: Read Write
PostgreSQL
Scaling Factor: 1 - Clients: 1000 - Mode: Read Write - Average Latency
PostgreSQL
Scaling Factor: 100 - Clients: 800 - Mode: Read Only
PostgreSQL
Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency
PostgreSQL
Scaling Factor: 100 - Clients: 1000 - Mode: Read Only
PostgreSQL
Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency
PostgreSQL
Scaling Factor: 100 - Clients: 800 - Mode: Read Write
PostgreSQL
Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency
PostgreSQL
Scaling Factor: 100 - Clients: 1000 - Mode: Read Write
PostgreSQL
Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
Neural Magic DeepSparse
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
Google Draco
Model: Lion
Google Draco
Model: Church Facade
RocksDB
Test: Random Fill
RocksDB
Test: Random Read
RocksDB
Test: Update Random
RocksDB
Test: Sequential Fill
RocksDB
Test: Random Fill Sync
RocksDB
Test: Read While Writing
RocksDB
Test: Read Random Write Random
nginx
Connections: 200
nginx
Connections: 500
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Standard
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Standard
ONNX Runtime
Model: yolov4 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: yolov4 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: yolov4 - Device: CPU - Executor: Standard
ONNX Runtime
Model: yolov4 - Device: CPU - Executor: Standard
ONNX Runtime
Model: bertsquad-12 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: bertsquad-12 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: bertsquad-12 - Device: CPU - Executor: Standard
ONNX Runtime
Model: bertsquad-12 - Device: CPU - Executor: Standard
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Standard
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Standard
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
Apache HTTP Server
Concurrent Requests: 200
Apache HTTP Server
Concurrent Requests: 500
OpenCV
Test: Core
OpenCV
Test: Video
OpenCV
Test: Graph API
OpenCV
Test: Stitching
OpenCV
Test: Features 2D
OpenCV
Test: Image Processing
OpenCV
Test: Object Detection
OpenCV
Test: DNN - Deep Neural Network
TensorFlow
Device: CPU - Batch Size: 16 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 32 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 64 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 256 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 512 - Model: AlexNet
TensorFlow
Device: CPU - Batch Size: 16 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 16 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 32 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 32 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 64 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 64 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 256 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 256 - Model: ResNet-50
TensorFlow
Device: CPU - Batch Size: 512 - Model: GoogLeNet
TensorFlow
Device: CPU - Batch Size: 512 - Model: ResNet-50
Phoronix Test Suite v10.8.5