EPYC 7F72 2P Linux 5.11

Benchmarks by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012203-HA-EPYC7F72218
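
A minimal sketch of that workflow, assuming the phoronix-test-suite client is already installed (the result ID is the one given above; pts/pgbench-1.10.1 is simply one of the test profiles from the suite listed further down):

    # Fetch this result file and run the same test selection locally for a side-by-side comparison
    phoronix-test-suite benchmark 2012203-HA-EPYC7F72218

    # Or benchmark a single test profile from the suite on its own
    phoronix-test-suite benchmark pts/pgbench-1.10.1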

Test categories represented in this result file:

Bioinformatics 2 Tests
Timed Code Compilation 2 Tests
C/C++ Compiler Tests 8 Tests
CPU Massive 11 Tests
Creator Workloads 2 Tests
Database Test Suite 6 Tests
Fortran Tests 4 Tests
HPC - High Performance Computing 12 Tests
Imaging 2 Tests
Common Kernel Benchmarks 2 Tests
Machine Learning 3 Tests
Molecular Dynamics 4 Tests
MPI Benchmarks 4 Tests
Multi-Core 9 Tests
NVIDIA GPU Compute 2 Tests
OpenMPI Tests 4 Tests
Programmer / Developer System Benchmarks 3 Tests
Python Tests 3 Tests
Scientific Computing 8 Tests
Server 6 Tests
Server CPU Tests 5 Tests
Single-Threaded 2 Tests

Test Runs

Result Identifier    Date Run              Test Duration
5.10.1               December 19 2020      12 Hours, 11 Minutes
Linux 5.11 Git       December 19 2020      14 Hours, 50 Minutes
Average                                    13 Hours, 30 Minutes

EPYC 7F72 2P Linux 5.11 Suite 1.0.0 (System)
Test suite extracted from EPYC 7F72 2P Linux 5.11. Test profiles included:

pts/fio-1.14.1 randread io_uring 0 1 2m Type: Random Read - Engine: IO_uring - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
pts/fio-1.14.1 randread io_uring 0 1 4k Type: Random Read - Engine: IO_uring - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
pts/fio-1.14.1 randwrite io_uring 0 1 2m Type: Random Write - Engine: IO_uring - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
pts/fio-1.14.1 randwrite io_uring 0 1 4k Type: Random Write - Engine: IO_uring - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
pts/fio-1.14.1 read io_uring 0 1 2m Type: Sequential Read - Engine: IO_uring - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
pts/fio-1.14.1 read io_uring 0 1 4k Type: Sequential Read - Engine: IO_uring - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
pts/fio-1.14.1 write io_uring 0 1 2m Type: Sequential Write - Engine: IO_uring - Buffered: No - Direct: Yes - Block Size: 2MB - Disk Target: Default Test Directory
pts/fio-1.14.1 write io_uring 0 1 4k Type: Sequential Write - Engine: IO_uring - Buffered: No - Direct: Yes - Block Size: 4KB - Disk Target: Default Test Directory
pts/hpcg-1.2.1
pts/namd-1.2.1 ATPase Simulation - 327,506 Atoms
pts/dolfyn-1.0.3 Computational Fluid Dynamics
pts/ffte-1.2.1 N=256, 3D Complex FFT Routine
pts/hmmer-1.2.2 Pfam Database Search
pts/mafft-1.6.2 Multiple Sequence Alignment - LSU RNA
pts/lammps-1.3.0 benchmark_20k_atoms.in Model: 20k Atoms
pts/lammps-1.3.0 in.rhodo Model: Rhodopsin Protein
pts/webp-1.0.0 -q 100 -lossless Encode Settings: Quality 100, Lossless
pts/webp-1.0.0 -q 100 -m 6 Encode Settings: Quality 100, Highest Compression
pts/webp-1.0.0 -q 100 -lossless -m 6 Encode Settings: Quality 100, Lossless, Highest Compression
pts/byte-1.2.2 TEST_DHRY2 Computational Test: Dhrystone 2
pts/libraw-1.0.0 Post-Processing Benchmark
pts/build-linux-kernel-1.10.2 Time To Compile
pts/build-llvm-1.2.1 Time To Compile
pts/numpy-1.2.1
pts/keydb-1.2.0
pts/gromacs-1.4.1 Water Benchmark
pts/tensorflow-lite-1.0.0 --graph=squeezenet.tflite Model: SqueezeNet
pts/tensorflow-lite-1.0.0 --graph=inception_v4.tflite Model: Inception V4
pts/tensorflow-lite-1.0.0 --graph=nasnet_mobile.tflite Model: NASNet Mobile
pts/tensorflow-lite-1.0.0 --graph=mobilenet_v1_1.0_224.tflite Model: Mobilenet Float
pts/tensorflow-lite-1.0.0 --graph=mobilenet_v1_1.0_224_quant.tflite Model: Mobilenet Quant
pts/tensorflow-lite-1.0.0 --graph=inception_resnet_v2.tflite Model: Inception ResNet V2
pts/mysqlslap-1.1.0 --concurrency=128 Clients: 128
pts/mysqlslap-1.1.0 --concurrency=256 Clients: 256
pts/mysqlslap-1.1.0 --concurrency=512 Clients: 512
pts/pgbench-1.10.1 -s 1 -c 50 -S Scaling Factor: 1 - Clients: 50 - Mode: Read Only
pts/pgbench-1.10.1 -s 1 -c 50 -S Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency
pts/pgbench-1.10.1 -s 1 -c 100 -S Scaling Factor: 1 - Clients: 100 - Mode: Read Only
pts/pgbench-1.10.1 -s 1 -c 100 -S Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency
pts/pgbench-1.10.1 -s 1 -c 250 -S Scaling Factor: 1 - Clients: 250 - Mode: Read Only
pts/pgbench-1.10.1 -s 1 -c 250 -S Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average Latency
pts/pgbench-1.10.1 -s 1 -c 50 Scaling Factor: 1 - Clients: 50 - Mode: Read Write
pts/pgbench-1.10.1 -s 1 -c 50 Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency
pts/pgbench-1.10.1 -s 1 -c 100 Scaling Factor: 1 - Clients: 100 - Mode: Read Write
pts/pgbench-1.10.1 -s 1 -c 100 Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency
pts/pgbench-1.10.1 -s 1 -c 250 Scaling Factor: 1 - Clients: 250 - Mode: Read Write
pts/pgbench-1.10.1 -s 1 -c 250 Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average Latency
pts/pgbench-1.10.1 -s 100 -c 50 -S Scaling Factor: 100 - Clients: 50 - Mode: Read Only
pts/pgbench-1.10.1 -s 100 -c 50 -S Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency
pts/pgbench-1.10.1 -s 100 -c 100 -S Scaling Factor: 100 - Clients: 100 - Mode: Read Only
pts/pgbench-1.10.1 -s 100 -c 100 -S Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency
pts/pgbench-1.10.1 -s 100 -c 250 -S Scaling Factor: 100 - Clients: 250 - Mode: Read Only
pts/pgbench-1.10.1 -s 100 -c 250 -S Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency
pts/pgbench-1.10.1 -s 100 -c 50 Scaling Factor: 100 - Clients: 50 - Mode: Read Write
pts/pgbench-1.10.1 -s 100 -c 50 Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency
pts/pgbench-1.10.1 -s 100 -c 100 Scaling Factor: 100 - Clients: 100 - Mode: Read Write
pts/pgbench-1.10.1 -s 100 -c 100 Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency
pts/pgbench-1.10.1 -s 100 -c 250 Scaling Factor: 100 - Clients: 250 - Mode: Read Write
pts/pgbench-1.10.1 -s 100 -c 250 Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency
pts/sqlite-speedtest-1.0.1 Timed Time - Size 1,000
pts/gpaw-1.0.0 carbon-nanotube Input: Carbon Nanotube
pts/ncnn-1.1.0 -1 Target: CPU - Model: mobilenet
pts/ncnn-1.1.0 -1 Target: CPU-v2-v2 - Model: mobilenet-v2
pts/ncnn-1.1.0 -1 Target: CPU-v3-v3 - Model: mobilenet-v3
pts/ncnn-1.1.0 -1 Target: CPU - Model: shufflenet-v2
pts/ncnn-1.1.0 -1 Target: CPU - Model: mnasnet
pts/ncnn-1.1.0 -1 Target: CPU - Model: efficientnet-b0
pts/ncnn-1.1.0 -1 Target: CPU - Model: blazeface
pts/ncnn-1.1.0 -1 Target: CPU - Model: googlenet
pts/ncnn-1.1.0 -1 Target: CPU - Model: resnet18
pts/ncnn-1.1.0 -1 Target: CPU - Model: alexnet
pts/ncnn-1.1.0 -1 Target: CPU - Model: resnet50
pts/ncnn-1.1.0 -1 Target: CPU - Model: yolov4-tiny
pts/ncnn-1.1.0 -1 Target: CPU - Model: squeezenet_ssd
pts/ncnn-1.1.0 -1 Target: CPU - Model: regnety_400m
pts/cassandra-1.0.3 WRITE Test: Writes
pts/influxdb-1.0.0 -c 4 -b 10000 -t 2,5000,1 -p 10000 Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
pts/influxdb-1.0.0 -c 64 -b 10000 -t 2,5000,1 -p 10000 Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
pts/influxdb-1.0.0 -c 1024 -b 10000 -t 2,5000,1 -p 10000 Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000
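
The test profiles above can also be installed and run individually rather than through the extracted suite. A minimal sketch using the standard install/run subcommands, with pts/fio-1.14.1 taken from the list above (its options such as engine, block size, and disk target are chosen interactively at run time):

    # Resolve dependencies, then build and install a single test profile
    phoronix-test-suite install pts/fio-1.14.1

    # Run it; the interactive prompts cover the option variants listed above
    # (random/sequential, read/write, block size, direct vs. buffered I/O)
    phoronix-test-suite run pts/fio-1.14.1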