Gigabyte G242-P36 Ampere Altra Max Server
Benchmarks by Michael Larabel for a future article.
HTML result view exported from: https://openbenchmarking.org/result/2401176-NE-GIGABYTEG67&grw&rdt.
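To reproduce or compare against these numbers locally, the Phoronix Test Suite can reference the public result ID that appears in the URL above. A minimal sketch, assuming phoronix-test-suite is installed and the result remains publicly accessible on OpenBenchmarking.org:

    phoronix-test-suite benchmark 2401176-NE-GIGABYTEG67

This should download the result file, prompt for which of the contained tests to run, and present your runs side-by-side with the figures in this result. The tests contained in this result file are listed below.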
Stress-NG - Test: CPU Stress
Stress-NG - Test: Crypto
Stress-NG - Test: Memory Copying
Stress-NG - Test: Glibc Qsort Data Sorting
Stress-NG - Test: Glibc C String Functions
Stress-NG - Test: Vector Math
Stress-NG - Test: Matrix Math
Stress-NG - Test: Forking
Stress-NG - Test: System V Message Passing
Stress-NG - Test: Semaphores
Stress-NG - Test: Socket Activity
Stress-NG - Test: Context Switching
Stress-NG - Test: Atomic
Stress-NG - Test: CPU Cache
Stress-NG - Test: Malloc
Stress-NG - Test: MEMFD
Stress-NG - Test: MMAP
Stress-NG - Test: NUMA
Stress-NG - Test: SENDFILE
Stress-NG - Test: IO_uring
Stress-NG - Test: Futex
Stress-NG - Test: Mutex
Stress-NG - Test: Function Call
Stress-NG - Test: Poll
Stress-NG - Test: Hash
Stress-NG - Test: Pthread
Stress-NG - Test: Zlib
Stress-NG - Test: Floating Point
Stress-NG - Test: Fused Multiply-Add
Stress-NG - Test: Pipe
Stress-NG - Test: Matrix 3D Math
Stress-NG - Test: AVL Tree
Stress-NG - Test: Vector Floating Point
Stress-NG - Test: Vector Shuffle
Quicksilver - Input: CORAL2 P2
Quicksilver - Input: CTS2
Quicksilver - Input: CORAL2 P1
Stress-NG - Test: Wide Vector Math
Stress-NG - Test: Cloning
Stress-NG - Test: AVX-512 VNNI
Stress-NG - Test: Mixed Scheduler
CacheBench - Test: Read
CacheBench - Test: Write
CacheBench - Test: Read / Modify / Write
Xmrig - Variant: Monero - Hash Count: 1M
Xmrig - Variant: Wownero - Hash Count: 1M
LeelaChessZero - Backend: BLAS
LeelaChessZero - Backend: Eigen
Neural Magic DeepSparse - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
Neural Magic DeepSparse - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
PyTorch - Device: CPU - Batch Size: 1 - Model: ResNet-50
PyTorch - Device: CPU - Batch Size: 1 - Model: ResNet-152
PyTorch - Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
PyTorch - Device: CPU - Batch Size: 16 - Model: ResNet-50
PyTorch - Device: CPU - Batch Size: 16 - Model: ResNet-152
Llama.cpp - Model: llama-2-7b.Q4_0.gguf
Llama.cpp - Model: llama-2-13b.Q4_0.gguf
Llama.cpp - Model: llama-2-70b-chat.Q5_0.gguf
GROMACS - Implementation: MPI CPU - Input: water_GMX50_bare
ACES DGEMM - Sustained Floating-Point Rate
Algebraic Multi-Grid Benchmark
miniFE - Problem Size: Small
Stockfish - Total Time
7-Zip Compression - Test: Compression Rating
7-Zip Compression - Test: Decompression Rating
Timed LLVM Compilation - Build System: Ninja
Timed LLVM Compilation - Build System: Unix Makefiles
Timed Linux Kernel Compilation - Build: defconfig
Timed Linux Kernel Compilation - Build: allmodconfig
Speedb - Test: Sequential Fill
Speedb - Test: Random Fill
Speedb - Test: Random Fill Sync
Speedb - Test: Random Read
Speedb - Test: Read While Writing
Speedb - Test: Read Random Write Random
Speedb - Test: Update Random
OpenSSL - Algorithm: RSA4096
OpenSSL - Algorithm: RSA4096
OpenSSL - Algorithm: SHA256
OpenSSL - Algorithm: SHA512
OpenSSL - Algorithm: AES-128-GCM
OpenSSL - Algorithm: AES-256-GCM
OpenSSL - Algorithm: ChaCha20
OpenSSL - Algorithm: ChaCha20-Poly1305
RocksDB - Test: Random Read
RocksDB - Test: Read While Writing
RocksDB - Test: Read Random Write Random
RocksDB - Test: Update Random
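Individual tests from the list above can also be run on their own by test profile name rather than through the full result comparison. A sketch, assuming the standard pts/stress-ng profile name on OpenBenchmarking.org:

    phoronix-test-suite benchmark pts/stress-ng

When run interactively, the suite should prompt for the specific test option (CPU Stress, Crypto, and so on) before executing the benchmark.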
Phoronix Test Suite v10.8.5