epyc-75f3-new

2 x AMD EPYC 75F3 32-Core testing with an ASRockRack ROME2D16-2T motherboard (P3.30 BIOS) and ASPEED graphics on Ubuntu 21.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running:

    phoronix-test-suite benchmark 2204097-NE-EPYC75F3N46
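
If the Phoronix Test Suite is not yet installed, a minimal sketch of the full workflow on a Debian/Ubuntu host might look as follows; the package name and the interactive prompts are assumptions, while the result ID is the one given above:

    # Install the Phoronix Test Suite from the distribution archive (package name assumed)
    sudo apt-get install phoronix-test-suite

    # Download this public result file and run the same tests locally for a side-by-side
    # comparison; the suite will prompt for a new result identifier to sit alongside A, AA, B and C
    phoronix-test-suite benchmark 2204097-NE-EPYC75F3N46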
Tests in this result file fall within the following categories:

Creator Workloads: 2 Tests
HPC - High Performance Computing: 2 Tests
Machine Learning: 2 Tests
Multi-Core: 2 Tests

Run Management

Result Identifier    Date Run         Test Duration
A                    April 09 2022    1 Hour, 46 Minutes
AA                   April 09 2022    39 Minutes
B                    April 09 2022    39 Minutes
C                    April 09 2022    39 Minutes
Average                               56 Minutes
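
Once the result file is saved locally (for example after running the comparison above), the per-run data behind this table can be inspected or exported from the command line. A brief sketch, assuming the result was stored under the name epyc-75f3-new:

    # List result files stored under ~/.phoronix-test-suite/test-results/
    phoronix-test-suite list-saved-results

    # Dump the named result file as plain text or CSV for offline analysis
    phoronix-test-suite result-file-to-text epyc-75f3-new
    phoronix-test-suite result-file-to-csv epyc-75f3-new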

epyc-75f3-new Suite 1.0.0 - System test suite extracted from epyc-75f3-new. It consists of the following test profiles and arguments:

pts/onnx-1.5.0 super_resolution/super_resolution.onnx -e cpu  [Model: super-resolution-10 - Device: CPU - Executor: Standard]
pts/onnx-1.5.0 GPT2/model.onnx -e cpu  [Model: GPT-2 - Device: CPU - Executor: Standard]
pts/onednn-1.8.0 --matmul --batch=inputs/matmul/shapes_transformer --cfg=u8s8f32 --engine=cpu  [Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU]
pts/perf-bench-1.0.4 epoll wait -r 30  [Benchmark: Epoll Wait]
pts/onnx-1.5.0 yolov4/yolov4.onnx -e cpu  [Model: yolov4 - Device: CPU - Executor: Standard]
pts/onnx-1.5.0 bertsquad-12/bertsquad-12.onnx -e cpu  [Model: bertsquad-12 - Device: CPU - Executor: Standard]
pts/onednn-1.8.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=bf16bf16bf16 --engine=cpu  [Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU]
pts/onednn-1.8.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=u8s8f32 --engine=cpu  [Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU]
pts/onnx-1.5.0 fcn-resnet101-11/model.onnx -e cpu  [Model: fcn-resnet101-11 - Device: CPU - Executor: Standard]
pts/onednn-1.8.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=f32 --engine=cpu  [Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU]
pts/avifenc-1.2.0 -s 6  [Encoder Speed: 6]
pts/onnx-1.5.0 resnet100/resnet100.onnx -e cpu  [Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard]
pts/onnx-1.5.0 GPT2/model.onnx -e cpu -P  [Model: GPT-2 - Device: CPU - Executor: Parallel]
pts/avifenc-1.2.0 -s 2  [Encoder Speed: 2]
pts/avifenc-1.2.0 -s 10 -l  [Encoder Speed: 10, Lossless]
pts/avifenc-1.2.0 -s 6 -l  [Encoder Speed: 6, Lossless]
pts/perf-bench-1.0.4 futex lock-pi -r 30 -s  [Benchmark: Futex Lock-Pi]
pts/onednn-1.8.0 --ip --batch=inputs/ip/shapes_3d --cfg=f32 --engine=cpu  [Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU]
pts/perf-bench-1.0.4 sched pipe -l 5000000  [Benchmark: Sched Pipe]
pts/onednn-1.8.0 --conv --batch=inputs/conv/shapes_auto --cfg=f32 --engine=cpu  [Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU]
pts/perf-bench-1.0.4 mem memcpy -l 100000 -s 1MB  [Benchmark: Memcpy 1MB]
pts/perf-bench-1.0.4 mem memset -l 100000 -s 1MB  [Benchmark: Memset 1MB]
pts/onnx-1.5.0 super_resolution/super_resolution.onnx -e cpu -P  [Model: super-resolution-10 - Device: CPU - Executor: Parallel]
pts/avifenc-1.2.0 -s 0  [Encoder Speed: 0]
pts/onednn-1.8.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=bf16bf16bf16 --engine=cpu  [Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU]
pts/perf-bench-1.0.4 futex hash -r 30 -s  [Benchmark: Futex Hash]
pts/onednn-1.8.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=f32 --engine=cpu  [Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU]
pts/onnx-1.5.0 resnet100/resnet100.onnx -e cpu -P  [Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel]
pts/onnx-1.5.0 fcn-resnet101-11/model.onnx -e cpu -P  [Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel]
pts/perf-bench-1.0.4 syscall basic -l 100000000  [Benchmark: Syscall Basic]
pts/onnx-1.5.0 yolov4/yolov4.onnx -e cpu -P  [Model: yolov4 - Device: CPU - Executor: Parallel]
pts/onnx-1.5.0 bertsquad-12/bertsquad-12.onnx -e cpu -P  [Model: bertsquad-12 - Device: CPU - Executor: Parallel]
pts/onednn-1.8.0 --matmul --batch=inputs/matmul/shapes_transformer --cfg=bf16bf16bf16 --engine=cpu  [Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU]
pts/onednn-1.8.0 --matmul --batch=inputs/matmul/shapes_transformer --cfg=f32 --engine=cpu  [Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU]
pts/onednn-1.8.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=u8s8f32 --engine=cpu  [Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU]
pts/onednn-1.8.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=bf16bf16bf16 --engine=cpu  [Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU]
pts/onednn-1.8.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=bf16bf16bf16 --engine=cpu  [Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU]
pts/onednn-1.8.0 --conv --batch=inputs/conv/shapes_auto --cfg=bf16bf16bf16 --engine=cpu  [Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU]
pts/onednn-1.8.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=u8s8f32 --engine=cpu  [Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU]
pts/onednn-1.8.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=f32 --engine=cpu  [Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU]
pts/onednn-1.8.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=f32 --engine=cpu  [Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU]
pts/onednn-1.8.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=u8s8f32 --engine=cpu  [Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU]
pts/onednn-1.8.0 --conv --batch=inputs/conv/shapes_auto --cfg=u8s8f32 --engine=cpu  [Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU]
pts/onednn-1.8.0 --ip --batch=inputs/ip/shapes_3d --cfg=bf16bf16bf16 --engine=cpu  [Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU]
pts/onednn-1.8.0 --ip --batch=inputs/ip/shapes_1d --cfg=bf16bf16bf16 --engine=cpu  [Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU]
pts/onednn-1.8.0 --ip --batch=inputs/ip/shapes_3d --cfg=u8s8f32 --engine=cpu  [Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU]
pts/onednn-1.8.0 --ip --batch=inputs/ip/shapes_1d --cfg=u8s8f32 --engine=cpu  [Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU]
pts/onednn-1.8.0 --ip --batch=inputs/ip/shapes_1d --cfg=f32 --engine=cpu  [Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU]
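
Each entry above pairs a Phoronix Test Suite profile with the arguments it passes to the underlying benchmark. As a rough illustration of how a single entry could be reproduced, either through the test suite or by invoking the underlying tool directly (the local paths are assumptions; the benchmark arguments are taken verbatim from the list above):

    # Run just one of the component test profiles through the Phoronix Test Suite
    phoronix-test-suite benchmark pts/onednn-1.8.0

    # Roughly equivalent direct invocation of oneDNN's benchdnn harness, assuming it has
    # been built in a local oneDNN checkout and run from its tests/benchdnn directory
    ./benchdnn --matmul --batch=inputs/matmul/shapes_transformer --cfg=f32 --engine=cpu

    # The pts/perf-bench entries wrap the Linux kernel's perf tool in the same way, e.g.
    perf bench epoll wait -r 30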