eoy2024
AMD EPYC 4564P 16-Core testing with a Supermicro AS-3015A-I H13SAE-MF v1.00 (2.1 BIOS) and ASPEED on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2412061-NE-EOY20243073&gru&sor.
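The interactive comparison lives at the OpenBenchmarking.org link above. As a sketch (command names taken from the Phoronix Test Suite CLI; the result ID is the one in the URL above), a public result like this can be fetched or re-run locally:

```shell
# Download this public result file for local viewing
# (the ID comes from the openbenchmarking.org URL above).
phoronix-test-suite clone 2412061-NE-EOY20243073

# Or run the same test selection on your own hardware,
# merging your numbers into this comparison.
phoronix-test-suite benchmark 2412061-NE-EOY20243073
```

Both commands require a working Phoronix Test Suite installation and network access; treat them as an illustration of the workflow rather than a guaranteed invocation.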
OpenSSL
Algorithm: ChaCha20
OpenSSL
Algorithm: AES-128-GCM
OpenSSL
Algorithm: AES-256-GCM
OpenSSL
Algorithm: ChaCha20-Poly1305
SVT-AV1
Encoder Mode: Preset 3 - Input: Bosphorus 4K
SVT-AV1
Encoder Mode: Preset 5 - Input: Bosphorus 4K
SVT-AV1
Encoder Mode: Preset 8 - Input: Bosphorus 4K
SVT-AV1
Encoder Mode: Preset 13 - Input: Bosphorus 4K
SVT-AV1
Encoder Mode: Preset 3 - Input: Bosphorus 1080p
SVT-AV1
Encoder Mode: Preset 5 - Input: Bosphorus 1080p
SVT-AV1
Encoder Mode: Preset 8 - Input: Bosphorus 1080p
SVT-AV1
Encoder Mode: Preset 13 - Input: Bosphorus 1080p
SVT-AV1
Encoder Mode: Preset 3 - Input: Beauty 4K 10-bit
SVT-AV1
Encoder Mode: Preset 5 - Input: Beauty 4K 10-bit
SVT-AV1
Encoder Mode: Preset 8 - Input: Beauty 4K 10-bit
SVT-AV1
Encoder Mode: Preset 13 - Input: Beauty 4K 10-bit
x265
Video Input: Bosphorus 4K
x265
Video Input: Bosphorus 1080p
simdjson
Throughput Test: Kostya
simdjson
Throughput Test: TopTweet
simdjson
Throughput Test: LargeRandom
simdjson
Throughput Test: PartialTweets
simdjson
Throughput Test: DistinctUserID
ACES DGEMM
Sustained Floating-Point Rate
Rustls
Benchmark: handshake - Suite: TLS13_CHACHA20_POLY1305_SHA256
Rustls
Benchmark: handshake - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
Rustls
Benchmark: handshake-resume - Suite: TLS13_CHACHA20_POLY1305_SHA256
Rustls
Benchmark: handshake-ticket - Suite: TLS13_CHACHA20_POLY1305_SHA256
Rustls
Benchmark: handshake - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
Rustls
Benchmark: handshake-resume - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
Rustls
Benchmark: handshake-ticket - Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
Rustls
Benchmark: handshake-resume - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
Rustls
Benchmark: handshake-ticket - Suite: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Standard
ONNX Runtime
Model: yolov4 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ZFNet-512 - Device: CPU - Executor: Standard
ONNX Runtime
Model: T5 Encoder - Device: CPU - Executor: Standard
ONNX Runtime
Model: bertsquad-12 - Device: CPU - Executor: Standard
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
OSPRay
Benchmark: particle_volume/ao/real_time
OSPRay
Benchmark: particle_volume/scivis/real_time
OSPRay
Benchmark: particle_volume/pathtracer/real_time
OSPRay
Benchmark: gravity_spheres_volume/dim_512/ao/real_time
OSPRay
Benchmark: gravity_spheres_volume/dim_512/scivis/real_time
OSPRay
Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time
BYTE Unix Benchmark
Computational Test: Pipe
BYTE Unix Benchmark
Computational Test: Dhrystone 2
BYTE Unix Benchmark
Computational Test: System Call
7-Zip Compression
Test: Compression Rating
7-Zip Compression
Test: Decompression Rating
Etcpak
Benchmark: Multi-Threaded - Configuration: ETC2
ASTC Encoder
Preset: Fast
ASTC Encoder
Preset: Medium
ASTC Encoder
Preset: Thorough
ASTC Encoder
Preset: Exhaustive
ASTC Encoder
Preset: Very Thorough
BYTE Unix Benchmark
Computational Test: Whetstone Double
Stockfish
Chess Benchmark
Stockfish
Chess Benchmark
GROMACS
Input: water_GMX50_bare
NAMD
Input: ATPase with 327,506 Atoms
NAMD
Input: STMV with 1,066,628 Atoms
Apache Cassandra
Test: Writes
Numpy Benchmark
QuantLib
Size: S
QuantLib
Size: XXS
Llama.cpp
Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Text Generation 128
Llama.cpp
Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 512
Llama.cpp
Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 1024
Llama.cpp
Backend: CPU BLAS - Model: Llama-3.1-Tulu-3-8B-Q8_0 - Test: Prompt Processing 2048
Llama.cpp
Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Text Generation 128
Llama.cpp
Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 512
Llama.cpp
Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 1024
Llama.cpp
Backend: CPU BLAS - Model: Mistral-7B-Instruct-v0.3-Q8_0 - Test: Prompt Processing 2048
Llama.cpp
Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Text Generation 128
Llama.cpp
Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 512
Llama.cpp
Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 1024
Llama.cpp
Backend: CPU BLAS - Model: granite-3.0-3b-a800m-instruct-Q8_0 - Test: Prompt Processing 2048
Llamafile
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 16
Llamafile
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Text Generation 128
Llamafile
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 256
Llamafile
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 512
Llamafile
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 16
Llamafile
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 1024
Llamafile
Model: Llama-3.2-3B-Instruct.Q6_K - Test: Prompt Processing 2048
Llamafile
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Text Generation 128
Llamafile
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 16
Llamafile
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 256
Llamafile
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 512
Llamafile
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Text Generation 128
Llamafile
Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 16
Llamafile
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 1024
Llamafile
Model: TinyLlama-1.1B-Chat-v1.0.BF16 - Test: Prompt Processing 2048
Llamafile
Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Text Generation 128
Llamafile
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 256
Llamafile
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 512
Llamafile
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 1024
Llamafile
Model: mistral-7b-instruct-v0.2.Q5_K_M - Test: Prompt Processing 2048
Llamafile
Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 256
Llamafile
Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 512
Llamafile
Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 1024
Llamafile
Model: wizardcoder-python-34b-v1.0.Q6_K - Test: Prompt Processing 2048
OpenVINO GenAI
Model: Gemma-7b-int4-ov - Device: CPU
OpenVINO GenAI
Model: Falcon-7b-instruct-int4-ov - Device: CPU
OpenVINO GenAI
Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU
ONNX Runtime
Model: GPT-2 - Device: CPU - Executor: Standard
ONNX Runtime
Model: yolov4 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ZFNet-512 - Device: CPU - Executor: Standard
ONNX Runtime
Model: T5 Encoder - Device: CPU - Executor: Standard
ONNX Runtime
Model: bertsquad-12 - Device: CPU - Executor: Standard
ONNX Runtime
Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard
ONNX Runtime
Model: super-resolution-10 - Device: CPU - Executor: Standard
ONNX Runtime
Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard
ONNX Runtime
Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard
LiteRT
Model: DeepLab V3
LiteRT
Model: SqueezeNet
LiteRT
Model: Inception V4
LiteRT
Model: NASNet Mobile
LiteRT
Model: Mobilenet Float
LiteRT
Model: Mobilenet Quant
LiteRT
Model: Inception ResNet V2
LiteRT
Model: Quantized COCO SSD MobileNet v1
PyPerformance
Benchmark: go
PyPerformance
Benchmark: chaos
PyPerformance
Benchmark: float
PyPerformance
Benchmark: nbody
PyPerformance
Benchmark: pathlib
PyPerformance
Benchmark: raytrace
PyPerformance
Benchmark: xml_etree
PyPerformance
Benchmark: gc_collect
PyPerformance
Benchmark: json_loads
PyPerformance
Benchmark: crypto_pyaes
PyPerformance
Benchmark: async_tree_io
PyPerformance
Benchmark: regex_compile
PyPerformance
Benchmark: python_startup
PyPerformance
Benchmark: asyncio_tcp_ssl
PyPerformance
Benchmark: django_template
PyPerformance
Benchmark: asyncio_websockets
PyPerformance
Benchmark: pickle_pure_python
Renaissance
Test: Scala Dotty
Renaissance
Test: Random Forest
Renaissance
Test: ALS Movie Lens
Renaissance
Test: Apache Spark Bayes
Renaissance
Test: Savina Reactors.IO
Renaissance
Test: Apache Spark PageRank
Renaissance
Test: Finagle HTTP Requests
Renaissance
Test: Gaussian Mixture Model
Renaissance
Test: In-Memory Database Shootout
Renaissance
Test: Akka Unbalanced Cobwebbed Tree
Renaissance
Test: Genetic Algorithm Using Jenetics + Futures
oneDNN
Harness: IP Shapes 1D - Engine: CPU
oneDNN
Harness: IP Shapes 3D - Engine: CPU
oneDNN
Harness: Convolution Batch Shapes Auto - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_1d - Engine: CPU
oneDNN
Harness: Deconvolution Batch shapes_3d - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Training - Engine: CPU
oneDNN
Harness: Recurrent Neural Network Inference - Engine: CPU
FinanceBench
Benchmark: Repo OpenMP
FinanceBench
Benchmark: Bonds OpenMP
OpenVINO GenAI
Model: Gemma-7b-int4-ov - Device: CPU - Time To First Token
OpenVINO GenAI
Model: Gemma-7b-int4-ov - Device: CPU - Time Per Output Token
OpenVINO GenAI
Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time To First Token
OpenVINO GenAI
Model: Falcon-7b-instruct-int4-ov - Device: CPU - Time Per Output Token
OpenVINO GenAI
Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time To First Token
OpenVINO GenAI
Model: Phi-3-mini-128k-instruct-int4-ov - Device: CPU - Time Per Output Token
CP2K Molecular Dynamics
Input: H2O-64
CP2K Molecular Dynamics
Input: H2O-256
CP2K Molecular Dynamics
Input: Fayalite-FIST
RELION
Test: Basic - Device: CPU
Build2
Time To Compile
Primesieve
Length: 1e12
Primesieve
Length: 1e13
Y-Cruncher
Pi Digits To Calculate: 1B
Y-Cruncher
Pi Digits To Calculate: 500M
POV-Ray
Trace Time
Timed Eigen Compilation
Time To Compile
Gcrypt Library
Apache CouchDB
Bulk Size: 100 - Inserts: 1000 - Rounds: 30
Apache CouchDB
Bulk Size: 100 - Inserts: 3000 - Rounds: 30
Apache CouchDB
Bulk Size: 300 - Inserts: 1000 - Rounds: 30
Apache CouchDB
Bulk Size: 300 - Inserts: 3000 - Rounds: 30
Apache CouchDB
Bulk Size: 500 - Inserts: 1000 - Rounds: 30
Apache CouchDB
Bulk Size: 500 - Inserts: 3000 - Rounds: 30
Blender
Blend File: BMW27 - Compute: CPU-Only
Blender
Blend File: Junkshop - Compute: CPU-Only
Blender
Blend File: Classroom - Compute: CPU-Only
Blender
Blend File: Fishy Cat - Compute: CPU-Only
Blender
Blend File: Barbershop - Compute: CPU-Only
Blender
Blend File: Pabellon Barcelona - Compute: CPU-Only
Whisper.cpp
Model: ggml-base.en - Input: 2016 State of the Union
Whisper.cpp
Model: ggml-small.en - Input: 2016 State of the Union
Whisper.cpp
Model: ggml-medium.en - Input: 2016 State of the Union
Whisperfile
Model Size: Tiny
Whisperfile
Model Size: Small
Whisperfile
Model Size: Medium
XNNPACK
Model: FP32MobileNetV1
XNNPACK
Model: FP32MobileNetV2
XNNPACK
Model: FP32MobileNetV3Large
XNNPACK
Model: FP32MobileNetV3Small
XNNPACK
Model: FP16MobileNetV1
XNNPACK
Model: FP16MobileNetV2
XNNPACK
Model: FP16MobileNetV3Large
XNNPACK
Model: FP16MobileNetV3Small
XNNPACK
Model: QS8MobileNetV2
Phoronix Test Suite v10.8.5