3900xt-november

AMD Ryzen 9 3900XT 12-Core testing with a MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2211180-SYST-3900XTN38
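For unattended runs (for example on a headless test machine), the Phoronix Test Suite's batch mode can be used for the same comparison; a minimal sketch, assuming batch preferences have already been stored with phoronix-test-suite batch-setup:

  phoronix-test-suite batch-benchmark 2211180-SYST-3900XTN38

Exact prompts and defaults depend on the installed Phoronix Test Suite version.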

The tests in this result file fall under the following categories:

AV1: 2 Tests
C/C++ Compiler Tests: 4 Tests
CPU Massive: 6 Tests
Creator Workloads: 8 Tests
Cryptocurrency Benchmarks, CPU Mining Tests: 2 Tests
Cryptography: 3 Tests
Encoding: 4 Tests
HPC - High Performance Computing: 8 Tests
Imaging: 3 Tests
Machine Learning: 4 Tests
Multi-Core: 6 Tests
OpenMPI Tests: 3 Tests
Python Tests: 6 Tests
Server CPU Tests: 3 Tests
Single-Threaded: 2 Tests
Video Encoding: 3 Tests


Run Management

Result Identifier    Date Run              Test Duration
a                    November 17 2022      5 Hours, 24 Minutes
aa                   November 17 2022      5 Hours, 21 Minutes
b                    November 17 2022      13 Hours, 42 Minutes

Average test duration per run: 8 Hours, 9 Minutes


3900xt-november Suite 1.0.0: System Test suite extracted from 3900xt-november. It is composed of the following test profiles and options:

pts/minibude-1.0.0 --deck ../data/bm1 --iterations 500 (Implementation: OpenMP - Input Deck: BM1)
pts/minibude-1.0.0 --deck ../data/bm2 --iterations 10 (Implementation: OpenMP - Input Deck: BM2)
pts/stress-ng-1.6.0 --mmap -1 (Test: MMAP)
pts/stress-ng-1.6.0 --numa -1 (Test: NUMA)
pts/stress-ng-1.6.0 --futex -1 (Test: Futex)
pts/stress-ng-1.6.0 --memfd -1 (Test: MEMFD)
pts/stress-ng-1.6.0 --mutex -1 (Test: Mutex)
pts/stress-ng-1.6.0 --atomic -1 (Test: Atomic)
pts/stress-ng-1.6.0 --crypt -1 (Test: Crypto)
pts/stress-ng-1.6.0 --malloc -1 (Test: Malloc)
pts/stress-ng-1.6.0 --fork -1 (Test: Forking)
pts/stress-ng-1.6.0 --io-uring -1 (Test: IO_uring)
pts/stress-ng-1.6.0 --sendfile -1 (Test: SENDFILE)
pts/stress-ng-1.6.0 --cache -1 (Test: CPU Cache)
pts/stress-ng-1.6.0 --cpu -1 --cpu-method all (Test: CPU Stress)
pts/stress-ng-1.6.0 --sem -1 (Test: Semaphores)
pts/stress-ng-1.6.0 --matrix -1 (Test: Matrix Math)
pts/stress-ng-1.6.0 --vecmath -1 (Test: Vector Math)
pts/stress-ng-1.6.0 --memcpy -1 (Test: Memory Copying)
pts/stress-ng-1.6.0 --sock -1 (Test: Socket Activity)
pts/stress-ng-1.6.0 --switch -1 (Test: Context Switching)
pts/stress-ng-1.6.0 --str -1 (Test: Glibc C String Functions)
pts/stress-ng-1.6.0 --qsort -1 (Test: Glibc Qsort Data Sorting)
pts/stress-ng-1.6.0 --msg -1 (Test: System V Message Passing)
pts/nekrs-1.0.0 turbPipePeriodic turbPipe.par (Input: TurboPipe Periodic)
pts/libplacebo-1.1.0 (Test: deband_heavy)
pts/libplacebo-1.1.0 (Test: polar_nocompute)
pts/libplacebo-1.1.0 (Test: hdr_peakdetect)
pts/libplacebo-1.1.0 (Test: hdr_lut)
pts/libplacebo-1.1.0 (Test: av1_grain_lap)
pts/quadray-1.0.0 -d 1 -x 3840 -y 2160 (Scene: 1 - Resolution: 4K)
pts/quadray-1.0.0 -d 2 -x 3840 -y 2160 (Scene: 2 - Resolution: 4K)
pts/quadray-1.0.0 -d 3 -x 3840 -y 2160 (Scene: 3 - Resolution: 4K)
pts/quadray-1.0.0 -d 5 -x 3840 -y 2160 (Scene: 5 - Resolution: 4K)
pts/quadray-1.0.0 -d 1 -x 1920 -y 1080 (Scene: 1 - Resolution: 1080p)
pts/quadray-1.0.0 -d 2 -x 1920 -y 1080 (Scene: 2 - Resolution: 1080p)
pts/quadray-1.0.0 -d 3 -x 1920 -y 1080 (Scene: 3 - Resolution: 1080p)
pts/quadray-1.0.0 -d 5 -x 1920 -y 1080 (Scene: 5 - Resolution: 1080p)
pts/ffmpeg-3.0.0 --encoder=libx264 live (Encoder: libx264 - Scenario: Live)
pts/ffmpeg-3.0.0 --encoder=libx265 live (Encoder: libx265 - Scenario: Live)
pts/ffmpeg-3.0.0 --encoder=libx264 upload (Encoder: libx264 - Scenario: Upload)
pts/ffmpeg-3.0.0 --encoder=libx265 upload (Encoder: libx265 - Scenario: Upload)
pts/ffmpeg-3.0.0 --encoder=libx264 platform (Encoder: libx264 - Scenario: Platform)
pts/ffmpeg-3.0.0 --encoder=libx265 platform (Encoder: libx265 - Scenario: Platform)
pts/ffmpeg-3.0.0 --encoder=libx264 vod (Encoder: libx264 - Scenario: Video On Demand)
pts/ffmpeg-3.0.0 --encoder=libx265 vod (Encoder: libx265 - Scenario: Video On Demand)
pts/aom-av1-3.5.0 --cpu-used=0 --limit=20 Bosphorus_3840x2160.y4m (Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K)
pts/aom-av1-3.5.0 --cpu-used=4 Bosphorus_3840x2160.y4m (Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K)
pts/aom-av1-3.5.0 --cpu-used=6 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K)
pts/aom-av1-3.5.0 --cpu-used=6 Bosphorus_3840x2160.y4m (Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K)
pts/aom-av1-3.5.0 --cpu-used=8 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K)
pts/aom-av1-3.5.0 --cpu-used=9 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K)
pts/aom-av1-3.5.0 --cpu-used=10 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K)
pts/aom-av1-3.5.0 --cpu-used=0 --limit=20 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p)
pts/aom-av1-3.5.0 --cpu-used=4 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p)
pts/aom-av1-3.5.0 --cpu-used=6 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p)
pts/aom-av1-3.5.0 --cpu-used=6 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p)
pts/aom-av1-3.5.0 --cpu-used=8 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p)
pts/aom-av1-3.5.0 --cpu-used=9 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p)
pts/aom-av1-3.5.0 --cpu-used=10 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p)
pts/xmrig-1.1.0 --bench=1M (Variant: Monero - Hash Count: 1M)
pts/xmrig-1.1.0 -a rx/wow --bench=1M (Variant: Wownero - Hash Count: 1M)
pts/tensorflow-2.0.0 --device cpu --batch_size=16 --model=alexnet (Device: CPU - Batch Size: 16 - Model: AlexNet)
pts/tensorflow-2.0.0 --device cpu --batch_size=32 --model=alexnet (Device: CPU - Batch Size: 32 - Model: AlexNet)
pts/tensorflow-2.0.0 --device cpu --batch_size=64 --model=alexnet (Device: CPU - Batch Size: 64 - Model: AlexNet)
pts/tensorflow-2.0.0 --device cpu --batch_size=256 --model=alexnet (Device: CPU - Batch Size: 256 - Model: AlexNet)
pts/tensorflow-2.0.0 --device cpu --batch_size=512 --model=alexnet (Device: CPU - Batch Size: 512 - Model: AlexNet)
pts/tensorflow-2.0.0 --device cpu --batch_size=16 --model=googlenet (Device: CPU - Batch Size: 16 - Model: GoogLeNet)
pts/tensorflow-2.0.0 --device cpu --batch_size=16 --model=resnet50 (Device: CPU - Batch Size: 16 - Model: ResNet-50)
pts/tensorflow-2.0.0 --device cpu --batch_size=32 --model=googlenet (Device: CPU - Batch Size: 32 - Model: GoogLeNet)
pts/tensorflow-2.0.0 --device cpu --batch_size=32 --model=resnet50 (Device: CPU - Batch Size: 32 - Model: ResNet-50)
pts/tensorflow-2.0.0 --device cpu --batch_size=64 --model=googlenet (Device: CPU - Batch Size: 64 - Model: GoogLeNet)
pts/tensorflow-2.0.0 --device cpu --batch_size=64 --model=resnet50 (Device: CPU - Batch Size: 64 - Model: ResNet-50)
pts/tensorflow-2.0.0 --device cpu --batch_size=256 --model=googlenet (Device: CPU - Batch Size: 256 - Model: GoogLeNet)
pts/tensorflow-2.0.0 --device cpu --batch_size=512 --model=googlenet (Device: CPU - Batch Size: 512 - Model: GoogLeNet)
pts/deepsparse-1.0.1 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario async (Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.0.1 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario sync (Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.0.1 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario async (Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.0.1 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario sync (Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.0.1 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario async (Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.0.1 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario sync (Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.0.1 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async (Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.0.1 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync (Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.0.1 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario async (Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.0.1 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario sync (Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.0.1 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario async (Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.0.1 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario sync (Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.0.1 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario async (Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.0.1 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario sync (Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream)
pts/cpuminer-opt-1.6.0 -a m7m (Algorithm: Magi)
pts/cpuminer-opt-1.6.0 -a x25x (Algorithm: x25x)
pts/cpuminer-opt-1.6.0 -a scrypt (Algorithm: scrypt)
pts/cpuminer-opt-1.6.0 -a deep (Algorithm: Deepcoin)
pts/cpuminer-opt-1.6.0 -a minotaur (Algorithm: Ringcoin)
pts/cpuminer-opt-1.6.0 -a blake2s (Algorithm: Blake-2 S)
pts/cpuminer-opt-1.6.0 -a allium (Algorithm: Garlicoin)
pts/cpuminer-opt-1.6.0 -a skein (Algorithm: Skeincoin)
pts/cpuminer-opt-1.6.0 -a myr-gr (Algorithm: Myriad-Groestl)
pts/cpuminer-opt-1.6.0 -a lbry (Algorithm: LBC, LBRY Credits)
pts/cpuminer-opt-1.6.0 -a sha256q (Algorithm: Quad SHA-256, Pyrite)
pts/cpuminer-opt-1.6.0 -a sha256t (Algorithm: Triple SHA-256, Onecoin)
pts/smhasher-1.1.0 --test=Speed wyhash (Hash: wyhash)
pts/smhasher-1.1.0 --test=Speed sha3-256 (Hash: SHA3-256)
pts/smhasher-1.1.0 --test=Speed Spooky32 (Hash: Spooky32)
pts/smhasher-1.1.0 --test=Speed fasthash32 (Hash: fasthash32)
pts/smhasher-1.1.0 --test=Speed FarmHash128 (Hash: FarmHash128)
pts/smhasher-1.1.0 --test=Speed t1ha2_atonce (Hash: t1ha2_atonce)
pts/smhasher-1.1.0 --test=Speed FarmHash32 (Hash: FarmHash32 x86_64 AVX)
pts/smhasher-1.1.0 --test=Speed t1ha0_aes_avx2 (Hash: t1ha0_aes_avx2 x86_64)
pts/smhasher-1.1.0 --test=Speed MeowHash (Hash: MeowHash x86_64 AES-NI)
pts/jpegxl-1.5.0 sample-4.png out.jxl -q 80 --num_reps 50 (Input: PNG - Quality: 80)
pts/jpegxl-1.5.0 sample-4.png out.jxl -q 90 --num_reps 40 (Input: PNG - Quality: 90)
pts/jpegxl-1.5.0 --lossless_jpeg=0 sample-photo-6000x4000.JPG out.jxl -q 80 --num_reps 50 (Input: JPEG - Quality: 80)
pts/jpegxl-1.5.0 --lossless_jpeg=0 sample-photo-6000x4000.JPG out.jxl -q 90 --num_reps 40 (Input: JPEG - Quality: 90)
pts/jpegxl-1.5.0 sample-4.png out.jxl -q 100 --num_reps 10 (Input: PNG - Quality: 100)
pts/jpegxl-1.5.0 --lossless_jpeg=0 sample-photo-6000x4000.JPG out.jxl -q 100 --num_reps 10 (Input: JPEG - Quality: 100)
pts/jpegxl-decode-1.5.0 --num_threads=1 --num_reps=100 (CPU Threads: 1)
pts/jpegxl-decode-1.5.0 --num_reps=200 (CPU Threads: All)
pts/nginx-3.0.0 -c 100 (Connections: 100)
pts/nginx-3.0.0 -c 200 (Connections: 200)
pts/nginx-3.0.0 -c 500 (Connections: 500)
pts/nginx-3.0.0 -c 1000 (Connections: 1000)
pts/spacy-1.0.0 (Model: en_core_web_lg)
pts/spacy-1.0.0 (Model: en_core_web_trf)
pts/onednn-2.7.0 --ip --batch=inputs/ip/shapes_1d --cfg=f32 --engine=cpu (Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --ip --batch=inputs/ip/shapes_3d --cfg=f32 --engine=cpu (Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --conv --batch=inputs/conv/shapes_auto --cfg=f32 --engine=cpu (Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=f32 --engine=cpu (Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=f32 --engine=cpu (Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=f32 --engine=cpu (Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=f32 --engine=cpu (Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --matmul --batch=inputs/matmul/shapes_transformer --cfg=f32 --engine=cpu (Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU)
pts/openfoam-1.2.0 incompressible/simpleFoam/drivaerFastback/ -m S (Input: drivaerFastback, Small Mesh Size - Mesh Time)
pts/openfoam-1.2.0 incompressible/simpleFoam/drivaerFastback/ -m S (Input: drivaerFastback, Small Mesh Size - Execution Time)
pts/openradioss-1.0.0 Bumper_Beam_AP_meshed_0000.rad Bumper_Beam_AP_meshed_0001.rad (Model: Bumper Beam)
pts/openradioss-1.0.0 Cell_Phone_Drop_0000.rad Cell_Phone_Drop_0001.rad (Model: Cell Phone Drop Test)
pts/openradioss-1.0.0 BIRD_WINDSHIELD_v1_0000.rad BIRD_WINDSHIELD_v1_0001.rad (Model: Bird Strike on Windshield)
pts/openradioss-1.0.0 RUBBER_SEAL_IMPDISP_GEOM_0000.rad RUBBER_SEAL_IMPDISP_GEOM_0001.rad (Model: Rubber O-Ring Seal Installation)
pts/openradioss-1.0.0 fsi_drop_container_0000.rad fsi_drop_container_0001.rad (Model: INIVOL and Fluid Structure Interaction Drop Container)
pts/avifenc-1.3.0 -s 0 (Encoder Speed: 0)
pts/avifenc-1.3.0 -s 2 (Encoder Speed: 2)
pts/avifenc-1.3.0 -s 6 (Encoder Speed: 6)
pts/avifenc-1.3.0 -s 6 -l (Encoder Speed: 6, Lossless)
pts/avifenc-1.3.0 -s 10 -l (Encoder Speed: 10, Lossless)
pts/y-cruncher-1.2.0 1b (Pi Digits To Calculate: 1B)
pts/y-cruncher-1.2.0 500m (Pi Digits To Calculate: 500M)
pts/encode-flac-1.8.1 (WAV To FLAC)
pts/encodec-1.0.1 -b 3 (Target Bandwidth: 3 kbps)
pts/encodec-1.0.1 -b 6 (Target Bandwidth: 6 kbps)
pts/encodec-1.0.1 -b 24 (Target Bandwidth: 24 kbps)
pts/encodec-1.0.1 -b 1.5 (Target Bandwidth: 1.5 kbps)
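
Any of the component test profiles listed above can also be installed and benchmarked on their own rather than through the extracted suite; a minimal sketch using two profiles from the list (standard phoronix-test-suite subcommands, with version-dependent prompts omitted):

  phoronix-test-suite install pts/stress-ng pts/ffmpeg
  phoronix-test-suite benchmark pts/stress-ng pts/ffmpeg

Run interactively like this, the Phoronix Test Suite prompts for the per-test options (stress method, encoder, scenario, and so on) that the suite definition above pins explicitly via the arguments shown.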