AMD EPYC 9554 / EPYC 9654 "Genoa" Benchmarks

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2211102-PTS-GENOAEXT88
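As a rough sketch of that workflow, assuming the Phoronix Test Suite is already installed on the system being compared (the result identifier is the one from this file; the saved-result name at the end is only an illustrative placeholder):

  # Download this result file, install any missing test profiles, and
  # run the same tests locally for a side-by-side comparison.
  phoronix-test-suite benchmark 2211102-PTS-GENOAEXT88

  # Afterwards, list the locally saved results and open one for review.
  phoronix-test-suite list-saved-results
  phoronix-test-suite show-result <saved-result-name>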
The tests in this result file fall within the following categories:

Timed Code Compilation: 4 Tests
C/C++ Compiler Tests: 4 Tests
CPU Massive: 8 Tests
Creator Workloads: 8 Tests
Cryptocurrency Benchmarks, CPU Mining Tests: 2 Tests
Cryptography: 3 Tests
Database Test Suite: 2 Tests
Encoding: 4 Tests
Game Development: 2 Tests
HPC - High Performance Computing: 5 Tests
Imaging: 3 Tests
Machine Learning: 3 Tests
Multi-Core: 11 Tests
OpenMPI Tests: 2 Tests
Programmer / Developer System Benchmarks: 4 Tests
Python Tests: 7 Tests
Server: 2 Tests
Server CPU Tests: 7 Tests
Video Encoding: 4 Tests
Common Workstation Benchmarks: 2 Tests

Test Runs

Result Identifier        Date Run            Test Duration
EPYC 9654 2P             October 15 2022     3 Hours, 46 Minutes
EPYC 9654 2P Repeat      October 15 2022     2 Hours, 13 Minutes
EPYC 9654 2P - Perf M    October 15 2022     2 Hours, 16 Minutes
EPYC 9654                October 15 2022     2 Hours, 19 Minutes
EPYC 9654 Repeat         October 15 2022     2 Hours, 20 Minutes
EPYC 9654 Power          October 16 2022     2 Hours, 15 Minutes
9654 Power               October 17 2022     2 Hours, 5 Minutes
AMD EPYC 75F3 32-Core    October 29 2022     11 Hours, 39 Minutes
75F32                    October 30 2022     11 Hours, 50 Minutes
9554 TwoP                November 01 2022    4 Hours, 1 Minute
EPYC 9554                November 03 2022    1 Hour, 15 Minutes
8380 2p                  November 05 2022    6 Hours, 45 Minutes


AMD EPYC 9554 EPYC 9654 Benchmarks Genoa Suite, version 1.0.0 (System test suite extracted from AMD EPYC 9554 EPYC 9654 Benchmarks Genoa). The suite consists of the following test profiles and arguments:

pts/cpuminer-opt-1.6.0 -a lbry | Algorithm: LBC, LBRY Credits
pts/tensorflow-2.0.0 --device cpu --batch_size=256 --model=alexnet | Device: CPU - Batch Size: 256 - Model: AlexNet
pts/stress-ng-1.6.0 --sock -1 | Test: Socket Activity
pts/deepsparse-1.0.1 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario async | Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
pts/tensorflow-2.0.0 --device cpu --batch_size=32 --model=vgg16 | Device: CPU - Batch Size: 32 - Model: VGG-16
pts/tensorflow-2.0.0 --device cpu --batch_size=512 --model=googlenet | Device: CPU - Batch Size: 512 - Model: GoogLeNet
pts/deepsparse-1.0.1 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async | Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
pts/stress-ng-1.6.0 --mmap -1 | Test: MMAP
pts/cpuminer-opt-1.6.0 -a minotaur | Algorithm: Ringcoin
pts/tensorflow-2.0.0 --device cpu --batch_size=512 --model=resnet50 | Device: CPU - Batch Size: 512 - Model: ResNet-50
pts/tensorflow-2.0.0 --device cpu --batch_size=256 --model=googlenet | Device: CPU - Batch Size: 256 - Model: GoogLeNet
pts/tensorflow-2.0.0 --device cpu --batch_size=16 --model=vgg16 | Device: CPU - Batch Size: 16 - Model: VGG-16
pts/deepsparse-1.0.1 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario async | Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
pts/cpuminer-opt-1.6.0 -a scrypt | Algorithm: scrypt
pts/tensorflow-2.0.0 --device cpu --batch_size=256 --model=resnet50 | Device: CPU - Batch Size: 256 - Model: ResNet-50
pts/tensorflow-2.0.0 --device cpu --batch_size=64 --model=alexnet | Device: CPU - Batch Size: 64 - Model: AlexNet
pts/deepsparse-1.0.1 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario async | Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.0.1 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario async | Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
pts/openfoam-1.2.0 incompressible/simpleFoam/drivaerFastback/ -m M | Input: drivaerFastback, Medium Mesh Size - Execution Time
pts/stress-ng-1.6.0 --cache -1 | Test: CPU Cache
pts/cpuminer-opt-1.6.0 -a sha256q | Algorithm: Quad SHA-256, Pyrite
pts/blender-3.3.1 -b ../classroom_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU | Blend File: Classroom - Compute: CPU-Only
pts/cpuminer-opt-1.6.0 -a blake2s | Algorithm: Blake-2 S
pts/blender-3.3.1 -b ../pavillon_barcelone_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU | Blend File: Pabellon Barcelona - Compute: CPU-Only
pts/blender-3.3.1 -b ../barbershop_interior_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU | Blend File: Barbershop - Compute: CPU-Only
pts/cpuminer-opt-1.6.0 -a skein | Algorithm: Skeincoin
pts/blender-3.3.1 -b ../bmw27_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU | Blend File: BMW27 - Compute: CPU-Only
pts/compress-7zip-1.10.0 | Test: Decompression Rating
pts/tensorflow-2.0.0 --device cpu --batch_size=32 --model=alexnet | Device: CPU - Batch Size: 32 - Model: AlexNet
pts/stress-ng-1.6.0 --qsort -1 | Test: Glibc Qsort Data Sorting
pts/stress-ng-1.6.0 --str -1 | Test: Glibc C String Functions
pts/blender-3.3.1 -b ../fishy_cat_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU | Blend File: Fishy Cat - Compute: CPU-Only
pts/tensorflow-2.0.0 --device cpu --batch_size=64 --model=resnet50 | Device: CPU - Batch Size: 64 - Model: ResNet-50
pts/tensorflow-2.0.0 --device cpu --batch_size=64 --model=googlenet | Device: CPU - Batch Size: 64 - Model: GoogLeNet
pts/stress-ng-1.6.0 --matrix -1 | Test: Matrix Math
pts/tensorflow-2.0.0 --device cpu --batch_size=16 --model=alexnet | Device: CPU - Batch Size: 16 - Model: AlexNet
pts/cpuminer-opt-1.6.0 -a x25x | Algorithm: x25x
pts/stress-ng-1.6.0 --crypt -1 | Test: Crypto
pts/cpuminer-opt-1.6.0 -a m7m | Algorithm: Magi
pts/stress-ng-1.6.0 --cpu -1 --cpu-method all | Test: CPU Stress
pts/stress-ng-1.6.0 --mutex -1 | Test: Mutex
pts/stress-ng-1.6.0 --malloc -1 | Test: Malloc
pts/stress-ng-1.6.0 --vecmath -1 | Test: Vector Math
pts/stress-ng-1.6.0 --sendfile -1 | Test: SENDFILE
pts/deepsparse-1.0.1 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario async | Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.0.1 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario async | Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
pts/stress-ng-1.6.0 --fork -1 | Test: Forking
pts/cpuminer-opt-1.6.0 -a sha256t | Algorithm: Triple SHA-256, Onecoin
pts/build-linux-kernel-1.14.0 allmodconfig | Build: allmodconfig
pts/tensorflow-2.0.0 --device cpu --batch_size=32 --model=resnet50 | Device: CPU - Batch Size: 32 - Model: ResNet-50
pts/cpuminer-opt-1.6.0 -a deep | Algorithm: Deepcoin
pts/hammerdb-mariadb-1.1.0 64 500 | Virtual Users: 64 - Warehouses: 500
pts/hammerdb-mariadb-1.1.0 64 250 | Virtual Users: 64 - Warehouses: 250
pts/xmrig-1.1.0 -a rx/wow --bench=1M | Variant: Wownero - Hash Count: 1M
pts/stress-ng-1.6.0 --switch -1 | Test: Context Switching
pts/stress-ng-1.6.0 --memfd -1 | Test: MEMFD
pts/tensorflow-2.0.0 --device cpu --batch_size=32 --model=googlenet | Device: CPU - Batch Size: 32 - Model: GoogLeNet
pts/compress-7zip-1.10.0 | Test: Compression Rating
pts/xmrig-1.1.0 --bench=1M | Variant: Monero - Hash Count: 1M
pts/stress-ng-1.6.0 --memcpy -1 | Test: Memory Copying
pts/tensorflow-2.0.0 --device cpu --batch_size=16 --model=resnet50 | Device: CPU - Batch Size: 16 - Model: ResNet-50
pts/stress-ng-1.6.0 --sem -1 | Test: Semaphores
pts/tensorflow-2.0.0 --device cpu --batch_size=16 --model=googlenet | Device: CPU - Batch Size: 16 - Model: GoogLeNet
pts/openfoam-1.2.0 incompressible/simpleFoam/drivaerFastback/ -m S | Input: drivaerFastback, Small Mesh Size - Execution Time
pts/openradioss-1.0.0 RUBBER_SEAL_IMPDISP_GEOM_0000.rad RUBBER_SEAL_IMPDISP_GEOM_0001.rad | Model: Rubber O-Ring Seal Installation
pts/deepsparse-1.0.1 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario sync | Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream
pts/openradioss-1.0.0 fsi_drop_container_0000.rad fsi_drop_container_0001.rad | Model: INIVOL and Fluid Structure Interaction Drop Container
pts/stress-ng-1.6.0 --futex -1 | Test: Futex
pts/build-nodejs-1.2.0 | Time To Compile
pts/deepsparse-1.0.1 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario sync | Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
pts/openradioss-1.0.0 Cell_Phone_Drop_0000.rad Cell_Phone_Drop_0001.rad | Model: Cell Phone Drop Test
pts/deepsparse-1.0.1 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario sync | Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
pts/deepsparse-1.0.1 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario sync | Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
pts/y-cruncher-1.2.0 1b | Pi Digits To Calculate: 1B
pts/y-cruncher-1.2.0 500m | Pi Digits To Calculate: 500M
pts/x264-2.7.0 Bosphorus_3840x2160.y4m | Video Input: Bosphorus 4K
pts/build-linux-kernel-1.14.0 defconfig | Build: defconfig
pts/spacy-1.0.0 | Model: en_core_web_trf
pts/deepsparse-1.0.1 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario sync | Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
pts/cpuminer-opt-1.6.0 -a allium | Algorithm: Garlicoin
pts/tensorflow-2.0.0 --device cpu --batch_size=512 --model=vgg16 | Device: CPU - Batch Size: 512 - Model: VGG-16
pts/tensorflow-2.0.0 --device cpu --batch_size=256 --model=vgg16 | Device: CPU - Batch Size: 256 - Model: VGG-16
pts/cpuminer-opt-1.6.0 -a myr-gr | Algorithm: Myriad-Groestl
pts/tensorflow-2.0.0 --device cpu --batch_size=512 --model=alexnet | Device: CPU - Batch Size: 512 - Model: AlexNet
pts/tensorflow-2.0.0 --device cpu --batch_size=64 --model=vgg16 | Device: CPU - Batch Size: 64 - Model: VGG-16
pts/openradioss-1.0.0 BIRD_WINDSHIELD_v1_0000.rad BIRD_WINDSHIELD_v1_0001.rad | Model: Bird Strike on Windshield
pts/deepsparse-1.0.1 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync | Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
pts/deepsparse-1.0.1 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario sync | Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
pts/hbase-1.1.0 --rows=10000 randomRead 256 | Rows: 10000 - Test: Random Read - Clients: 256
pts/build-godot-1.0.0 | Time To Compile
pts/openfoam-1.2.0 incompressible/simpleFoam/drivaerFastback/ -m M | Input: drivaerFastback, Medium Mesh Size - Mesh Time
pts/smhasher-1.1.0 --test=Speed t1ha0_aes_avx2 | Hash: t1ha0_aes_avx2 x86_64
pts/openfoam-1.2.0 incompressible/simpleFoam/drivaerFastback/ -m S | Input: drivaerFastback, Small Mesh Size - Mesh Time
pts/stress-ng-1.6.0 --atomic -1 | Test: Atomic
pts/hbase-1.1.0 --rows=10000 increment 1 | Rows: 10000 - Test: Increment - Clients: 1
pts/avifenc-1.3.0 -s 6 | Encoder Speed: 6
pts/avifenc-1.3.0 -s 6 -l | Encoder Speed: 6, Lossless
pts/avifenc-1.3.0 -s 10 -l | Encoder Speed: 10, Lossless
pts/jpegxl-1.5.0 sample-4.png out.jxl -q 100 --num_reps 10 | Input: PNG - Quality: 100
pts/hbase-1.1.0 --rows=1000000 randomRead 1 | Rows: 1000000 - Test: Random Read - Clients: 1
pts/openfoam-1.2.0 incompressible/simpleFoam/motorBike/ | Input: motorBike - Execution Time
pts/x265-1.3.0 Bosphorus_3840x2160.y4m | Video Input: Bosphorus 4K
pts/jpegxl-decode-1.5.0 --num_reps=200 | CPU Threads: All
pts/hbase-1.1.0 --rows=10000 randomWrite 256 | Rows: 10000 - Test: Random Write - Clients: 256
pts/stress-ng-1.6.0 --msg -1 | Test: System V Message Passing
pts/jpegxl-1.5.0 --lossless_jpeg=0 sample-photo-6000x4000.JPG out.jxl -q 100 --num_reps 10 | Input: JPEG - Quality: 100
pts/x264-2.7.0 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m | Video Input: Bosphorus 1080p
pts/jpegxl-decode-1.5.0 --num_threads=1 --num_reps=100 | CPU Threads: 1
pts/x265-1.3.0 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m | Video Input: Bosphorus 1080p
pts/stress-ng-1.6.0 --numa -1 | Test: NUMA
pts/build-mesa-1.0.0 | Time To Compile
pts/hbase-1.1.0 --rows=10000 randomRead 1 | Rows: 10000 - Test: Random Read - Clients: 1
pts/avifenc-1.3.0 -s 2 | Encoder Speed: 2
pts/hbase-1.1.0 --rows=10000 increment 256 | Rows: 10000 - Test: Increment - Clients: 256
pts/openradioss-1.0.0 Bumper_Beam_AP_meshed_0000.rad Bumper_Beam_AP_meshed_0001.rad | Model: Bumper Beam
pts/ffmpeg-3.0.0 --encoder=libx265 platform | Encoder: libx265 - Scenario: Platform
pts/jpegxl-1.5.0 --lossless_jpeg=0 sample-photo-6000x4000.JPG out.jxl -q 90 --num_reps 40 | Input: JPEG - Quality: 90
pts/hbase-1.1.0 --rows=1000000 increment 1 | Rows: 1000000 - Test: Increment - Clients: 1
pts/smhasher-1.1.0 --test=Speed t1ha2_atonce | Hash: t1ha2_atonce
pts/avifenc-1.3.0 -s 0 | Encoder Speed: 0
pts/jpegxl-1.5.0 sample-4.png out.jxl -q 80 --num_reps 50 | Input: PNG - Quality: 80
pts/jpegxl-1.5.0 --lossless_jpeg=0 sample-photo-6000x4000.JPG out.jxl -q 80 --num_reps 50 | Input: JPEG - Quality: 80
pts/jpegxl-1.5.0 sample-4.png out.jxl -q 90 --num_reps 40 | Input: PNG - Quality: 90
pts/ffmpeg-3.0.0 --encoder=libx265 vod | Encoder: libx265 - Scenario: Video On Demand
pts/ffmpeg-3.0.0 --encoder=libx265 upload | Encoder: libx265 - Scenario: Upload
pts/smhasher-1.1.0 --test=Speed wyhash | Hash: wyhash
pts/smhasher-1.1.0 --test=Speed fasthash32 | Hash: fasthash32
pts/smhasher-1.1.0 --test=Speed sha3-256 | Hash: SHA3-256
pts/smhasher-1.1.0 --test=Speed MeowHash | Hash: MeowHash x86_64 AES-NI
pts/smhasher-1.1.0 --test=Speed Spooky32 | Hash: Spooky32
pts/smhasher-1.1.0 --test=Speed FarmHash128 | Hash: FarmHash128
pts/smhasher-1.1.0 --test=Speed FarmHash32 | Hash: FarmHash32 x86_64 AVX
pts/hbase-1.1.0 --rows=10000 randomRead 500 | Rows: 10000 - Test: Random Read - Clients: 500
pts/hammerdb-mariadb-1.1.0 64 100 | Virtual Users: 64 - Warehouses: 100
pts/spacy-1.0.0 | Model: en_core_web_lg
pts/ffmpeg-3.0.0 --encoder=libx264 platform | Encoder: libx264 - Scenario: Platform
pts/ffmpeg-3.0.0 --encoder=libx264 vod | Encoder: libx264 - Scenario: Video On Demand
pts/ffmpeg-3.0.0 --encoder=libx264 live | Encoder: libx264 - Scenario: Live
pts/ffmpeg-3.0.0 --encoder=libx264 upload | Encoder: libx264 - Scenario: Upload
pts/hbase-1.1.0 --rows=10000 increment 500 | Rows: 10000 - Test: Increment - Clients: 500
pts/ffmpeg-3.0.0 --encoder=libx265 live | Encoder: libx265 - Scenario: Live
pts/clickhouse-1.1.0 | 100M Rows Web Analytics Dataset, Second Run
pts/hbase-1.1.0 --rows=10000 randomWrite 1 | Rows: 10000 - Test: Random Write - Clients: 1
pts/clickhouse-1.1.0 | 100M Rows Web Analytics Dataset, Third Run
pts/stress-ng-1.6.0 --io-uring -1 | Test: IO_uring
pts/clickhouse-1.1.0 | 100M Rows Web Analytics Dataset, First Run / Cold Cache
pts/hbase-1.1.0 --rows=1000000 increment 256 | Rows: 1000000 - Test: Increment - Clients: 256
pts/hbase-1.1.0 --rows=1000000 increment 500 | Rows: 1000000 - Test: Increment - Clients: 500
pts/hbase-1.1.0 --rows=1000000 scan 256 | Rows: 1000000 - Test: Scan - Clients: 256
pts/hbase-1.1.0 --rows=1000000 scan 1 | Rows: 1000000 - Test: Scan - Clients: 1
pts/hbase-1.1.0 --rows=10000 scan 1 | Rows: 10000 - Test: Scan - Clients: 1
pts/hbase-1.1.0 --rows=1000000 scan 500 | Rows: 1000000 - Test: Scan - Clients: 500
pts/hbase-1.1.0 --rows=10000 scan 500 | Rows: 10000 - Test: Scan - Clients: 500
pts/hbase-1.1.0 --rows=10000 scan 256 | Rows: 10000 - Test: Scan - Clients: 256
pts/clickhouse-1.1.0
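Individual test profiles from the list above can also be installed and run on their own rather than through the full suite. A minimal sketch, assuming a working Phoronix Test Suite installation (the Blender profile is only an example pick from the list; the interactive prompts choose the scene and compute options):

  # Install and run a single profile from the suite.
  phoronix-test-suite install pts/blender-3.3.1
  phoronix-test-suite benchmark pts/blender-3.3.1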