old epyc ai

Tests for a future article. AMD EPYC 7551 32-Core testing with a GIGABYTE MZ31-AR0-00 v01010101 (F10 BIOS) and ASPEED on Debian 12 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2406025-NE-OLDEPYCAI16
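
The result identifier doubles as the argument to the Phoronix Test Suite, which downloads this result file, installs the same test profiles, and benchmarks the local machine side by side against the runs below. A minimal sketch, assuming phoronix-test-suite is installed and on the PATH (individual profile names such as whisper-cpp can also be run on their own):

    # Fetch this result file and run the full comparison locally
    phoronix-test-suite benchmark 2406025-NE-OLDEPYCAI16

    # Or run a single test profile from the suite, without the comparison
    phoronix-test-suite benchmark whisper-cpp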

Test Categories

HPC - High Performance Computing (3 tests)
Large Language Models (2 tests)
Machine Learning (3 tests)

Test Runs

Result Identifier    Date Run    Test Duration
a                    June 02     1 Hour, 48 Minutes
b                    June 02     32 Minutes
c                    June 02     14 Minutes
Average                          51 Minutes

old epyc ai Suite 1.0.0 - System Test suite extracted from old epyc ai. It comprises the following test profiles and arguments:

pts/whisper-cpp-1.1.0 [-m models/ggml-base.en.bin -f ../2016-state-of-the-union.wav]
    Model: ggml-base.en - Input: 2016 State of the Union
pts/whisper-cpp-1.1.0 [-m models/ggml-small.en.bin -f ../2016-state-of-the-union.wav]
    Model: ggml-small.en - Input: 2016 State of the Union
pts/whisper-cpp-1.1.0 [-m models/ggml-medium.en.bin -f ../2016-state-of-the-union.wav]
    Model: ggml-medium.en - Input: 2016 State of the Union
pts/llama-cpp-1.1.0 [-m ../Meta-Llama-3-8B-Instruct-Q8_0.gguf]
    Model: Meta-Llama-3-8B-Instruct-Q8_0.gguf
pts/llamafile-1.2.0 [run-llava --gpu DISABLE]
    Test: llava-v1.6-mistral-7b.Q8_0 - Acceleration: CPU
pts/llamafile-1.2.0 [run-llama3 --gpu DISABLE]
    Test: Meta-Llama-3-8B-Instruct.F16 - Acceleration: CPU
pts/llamafile-1.2.0 [run-tinyllama --gpu DISABLE]
    Test: TinyLlama-1.1B-Chat-v1.0.BF16 - Acceleration: CPU
pts/llamafile-1.2.0 [run-mistral --gpu DISABLE]
    Test: mistral-7b-instruct-v0.2.Q5_K_M - Acceleration: CPU
pts/llamafile-1.2.0 [run-wizardcoder --gpu DISABLE]
    Test: wizardcoder-python-34b-v1.0.Q6_K - Acceleration: CPU
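
The whisper-cpp entries above map directly onto whisper.cpp's own command line, so those workloads can also be reproduced outside the Phoronix Test Suite. A minimal sketch against an upstream checkout, assuming a Makefile build that produces the classic ./main example binary and a copy of the 2016-state-of-the-union.wav sample one directory up (both paths are illustrative, mirroring the arguments above):

    # Build whisper.cpp and fetch the base English model
    git clone https://github.com/ggerganov/whisper.cpp
    cd whisper.cpp
    make
    ./models/download-ggml-model.sh base.en

    # Transcribe the same speech the ggml-base.en profile uses
    ./main -m models/ggml-base.en.bin -f ../2016-state-of-the-union.wav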