epyc-7f72-june

AMD EPYC 7F72 24-Core testing with an ASRockRack EPYCD8 (P2.10 BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2006266-NE-EPYC7F72J53
Tests in this result file by category:

AV1: 4 tests
Timed Code Compilation: 2 tests
C/C++ Compiler Tests: 5 tests
CPU Massive: 13 tests
Creator Workloads: 10 tests
Encoding: 5 tests
HPC - High Performance Computing: 5 tests
Common Kernel Benchmarks: 2 tests
Machine Learning: 2 tests
Multi-Core: 13 tests
NVIDIA GPU Compute: 4 tests
Programmer / Developer System Benchmarks: 4 tests
Renderers: 3 tests
Server CPU Tests: 8 tests
Video Encoding: 5 tests
Common Workstation Benchmarks: 3 tests

Statistics

The result viewer can show overall harmonic mean(s), the overall geometric mean, geometric means per-suite/category, and wins/losses counts (pie chart), and can normalize results and remove outliers before calculating averages.
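As a quick illustration of what those aggregate options compute, here is a minimal sketch of the geometric and harmonic means over a set of normalized scores. The sample values are hypothetical, not taken from this result file:

```python
from statistics import geometric_mean, harmonic_mean

# Hypothetical normalized scores (baseline system = 1.0); NOT values
# from this result file.
scores = [1.00, 1.25, 0.80, 1.10]

# Geometric mean: the usual way to aggregate ratio-style benchmark
# results, since it does not depend on which system is the baseline.
geo = geometric_mean(scores)

# Harmonic mean: appropriate when averaging rates (e.g. MB/s for a
# fixed amount of work per test).
har = harmonic_mean(scores)

print(f"geometric mean: {geo:.4f}")
print(f"harmonic mean:  {har:.4f}")
```

Note that the harmonic mean is always at or below the geometric mean, which in turn sits at or below the arithmetic mean, so the choice of aggregate matters when comparing systems.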


Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
EPYC 7F72
June 25 2020
  3 Hours, 32 Minutes
EPYC 7F72 Run 2
June 26 2020
  1 Hour, 48 Minutes
Invert Hiding All Results Option
  2 Hours, 40 Minutes
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


epyc-7f72-june Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: AMD EPYC 7F72 24-Core @ 3.20GHz (24 Cores / 48 Threads)
Motherboard: ASRockRack EPYCD8 (P2.10 BIOS)
Chipset: AMD Starship/Matisse
Memory: 126GB
Disk: 3841GB Micron_9300_MTFDHAL3T8TDP
Graphics: ASPEED
Audio: AMD Starship/Matisse
Monitor: VE228
Network: 2 x Intel I350
OS: Ubuntu 20.04
Kernel: 5.8.0-rc1-phx-fsgsbase (x86_64) 20200620
Desktop: GNOME Shell 3.36.2
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs / Notes:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand; CPU Microcode: 0x830101c
- EPYC 7F72: OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-3ubuntu1); Python 3.8.2
- Security: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling; srbds: Not affected; tsx_async_abort: Not affected

[Condensed results overview table: per-test values for EPYC 7F72 and EPYC 7F72 Run 2 across all 77 results; the same figures appear in the individual result entries below.]

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and Linux networking stack stress test. The test runs on the local host but does require root permissions to run. It works by creating three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices. Those two WireGuard devices send traffic through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time -- a fairly CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better)
EPYC 7F72: 296.89 (SE +/- 0.38, N = 3)
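Throughout this file, "SE +/- x, N = y" denotes the standard error of the mean over y runs of the test. A minimal sketch of how such a value is derived (the three per-run samples below are hypothetical; the raw samples are not included in the result file):

```python
from statistics import mean, stdev

# Hypothetical per-run times in seconds; NOT the actual raw samples.
samples = [296.5, 296.9, 297.3]

n = len(samples)
avg = mean(samples)
# Standard error of the mean: sample standard deviation / sqrt(N).
se = stdev(samples) / n ** 0.5

print(f"SE +/- {se:.2f}, N = {n}, mean = {avg:.2f}")
```

A smaller SE relative to the mean indicates more consistent run-to-run results; the Min/Avg/Max lines shown for some tests report the same underlying samples.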

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.25 (Nodes Per Second, More Is Better); all results for EPYC 7F72
1. (CXX) g++ options: -pthread

Backend: BLAS: 2525 (SE +/- 79.93, N = 8)
Backend: Eigen: 2442 (SE +/- 34.33, N = 3)
Backend: Random: 139994 (SE +/- 208.94, N = 3)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 (Seconds, Fewer Is Better); all results for EPYC 7F72
1. (CXX) g++ options: -O2 -lOpenCL

Test: OpenMP LavaMD: 99.89 (SE +/- 1.73, N = 3)
Test: OpenMP Myocyte: 31.15 (SE +/- 0.27, N = 3)
Test: OpenMP HotSpot3D: 97.94 (SE +/- 0.13, N = 3)
Test: OpenMP Leukocyte: 55.27 (SE +/- 0.21, N = 3)
Test: OpenMP CFD Solver: 10.92 (SE +/- 0.08, N = 3)
Test: OpenMP Streamcluster: 14.53 (SE +/- 0.09, N = 3)

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build - Gradle Build: Reactor (Seconds, Fewer Is Better)
EPYC 7F72: 327.93 (SE +/- 2.89, N = 3)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 (MB/s, More Is Better)
1. (CC) gcc options: -O3 -pthread -lz -llzma

Compression Level: 3
  EPYC 7F72: 8409.7 (SE +/- 19.72, N = 3; Min: 8386.6 / Avg: 8409.67 / Max: 8448.9)
  EPYC 7F72 Run 2: 8401.5 (SE +/- 41.43, N = 3; Min: 8321.1 / Avg: 8401.47 / Max: 8459.1)

Compression Level: 19
  EPYC 7F72: 102.6 (SE +/- 0.30, N = 3; Min: 102 / Avg: 102.57 / Max: 103)
  EPYC 7F72 Run 2: 102.1 (SE +/- 0.22, N = 3; Min: 101.7 / Avg: 102.13 / Max: 102.4)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 (ms, Fewer Is Better); all results for EPYC 7F72
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Harness: IP Batch 1D - Data Type: f32 - Engine: CPU: 1.65092 (SE +/- 0.00608, N = 3; MIN: 1.6)
Harness: IP Batch All - Data Type: f32 - Engine: CPU: 26.75 (SE +/- 0.34, N = 15; MIN: 23.02)
Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU: 1.34214 (SE +/- 0.00493, N = 3; MIN: 1.3)
Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU: 13.52 (SE +/- 0.02, N = 3; MIN: 13.14)
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU: 1.97902 (SE +/- 0.01015, N = 3; MIN: 1.9)
Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU: 2.02447 (SE +/- 0.00374, N = 3; MIN: 1.95)
Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU: 3.07023 (SE +/- 0.00540, N = 3; MIN: 2.99)
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU: 4.04223 (SE +/- 0.00642, N = 3; MIN: 3.98)
Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU: 5.81358 (SE +/- 0.00467, N = 3; MIN: 5.63)
Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU: 2.19918 (SE +/- 0.00467, N = 3; MIN: 2.12)
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU: 229.41 (SE +/- 1.20, N = 3; MIN: 224.96)
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU: 67.55 (SE +/- 0.10, N = 3; MIN: 66.62)
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU: 0.550193 (SE +/- 0.004511, N = 3; MIN: 0.52)
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU: 1.13833 (SE +/- 0.00080, N = 3; MIN: 1.1)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 (FPS, More Is Better)
1. (CC) gcc options: -pthread

Video Input: Chimera 1080p
  EPYC 7F72: 678.67 (SE +/- 1.27, N = 3; Min: 676.65 / Avg: 678.67 / Max: 681; MIN: 494.67 / MAX: 867.67)
  EPYC 7F72 Run 2: 681.55 (SE +/- 1.55, N = 3; Min: 679.69 / Avg: 681.55 / Max: 684.62; MIN: 498.31 / MAX: 867.49)

Video Input: Summer Nature 4K
  EPYC 7F72: 295.00 (SE +/- 0.86, N = 3; Min: 293.92 / Avg: 295 / Max: 296.7; MIN: 169.16 / MAX: 317.81)
  EPYC 7F72 Run 2: 294.22 (SE +/- 0.88, N = 3; Min: 292.98 / Avg: 294.22 / Max: 295.92; MIN: 172.06 / MAX: 315.54)

Video Input: Summer Nature 1080p
  EPYC 7F72: 689.95 (SE +/- 0.73, N = 3; Min: 689.19 / Avg: 689.95 / Max: 691.41; MIN: 391.2 / MAX: 758.76)
  EPYC 7F72 Run 2: 688.13 (SE +/- 0.82, N = 3; Min: 686.5 / Avg: 688.13 / Max: 689.04; MIN: 377.43 / MAX: 755.97)

Video Input: Chimera 1080p 10-bit
  EPYC 7F72: 121.18 (SE +/- 0.05, N = 3; Min: 121.12 / Avg: 121.18 / Max: 121.29; MIN: 80.9 / MAX: 215.78)
  EPYC 7F72 Run 2: 121.16 (SE +/- 0.23, N = 3; Min: 120.86 / Avg: 121.16 / Max: 121.62; MIN: 80.89 / MAX: 221.15)

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 (Frames Per Second, More Is Better)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Encoder Mode: Speed 0 Two-Pass
  EPYC 7F72: 0.30 (SE +/- 0.00, N = 3; Min: 0.29 / Avg: 0.3 / Max: 0.3)
  EPYC 7F72 Run 2: 0.30 (SE +/- 0.00, N = 3; Min: 0.29 / Avg: 0.3 / Max: 0.3)

Encoder Mode: Speed 4 Two-Pass
  EPYC 7F72: 2.29 (SE +/- 0.01, N = 3; Min: 2.28 / Avg: 2.29 / Max: 2.3)
  EPYC 7F72 Run 2: 2.28 (SE +/- 0.00, N = 3; Min: 2.28 / Avg: 2.28 / Max: 2.28)

Encoder Mode: Speed 6 Realtime
  EPYC 7F72: 18.50 (SE +/- 0.02, N = 3; Min: 18.47 / Avg: 18.5 / Max: 18.54)
  EPYC 7F72 Run 2: 18.57 (SE +/- 0.02, N = 3; Min: 18.54 / Avg: 18.57 / Max: 18.61)

Encoder Mode: Speed 6 Two-Pass
  EPYC 7F72: 3.53 (SE +/- 0.01, N = 3; Min: 3.52 / Avg: 3.53 / Max: 3.54)
  EPYC 7F72 Run 2: 3.53 (SE +/- 0.00, N = 3; Min: 3.52 / Avg: 3.53 / Max: 3.53)

Encoder Mode: Speed 8 Realtime
  EPYC 7F72: 32.39 (SE +/- 0.02, N = 3; Min: 32.37 / Avg: 32.39 / Max: 32.42)
  EPYC 7F72 Run 2: 32.58 (SE +/- 0.02, N = 3; Min: 32.54 / Avg: 32.58 / Max: 32.61)

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 (Frames Per Second, More Is Better); all results for EPYC 7F72
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Encoder Mode: Enc Mode 0 - Input: 1080p: 0.112 (SE +/- 0.000, N = 3)
Encoder Mode: Enc Mode 4 - Input: 1080p: 6.675 (SE +/- 0.006, N = 3)
Encoder Mode: Enc Mode 8 - Input: 1080p: 56.35 (SE +/- 0.19, N = 3)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 (Frames Per Second, More Is Better); all results for EPYC 7F72
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Tuning: VMAF Optimized - Input: Bosphorus 1080p: 361.37 (SE +/- 3.54, N = 3)
Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p: 364.68 (SE +/- 1.21, N = 3)
Tuning: Visual Quality Optimized - Input: Bosphorus 1080p: 283.75 (SE +/- 2.99, N = 3)

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3 (M samples/sec, More Is Better); all results for EPYC 7F72

Scene: DLSC: 3.97 (SE +/- 0.04, N = 3; MIN: 3.77 / MAX: 4.26)
Scene: Rainbow Colors and Prism: 4.41 (SE +/- 0.01, N = 3; MIN: 4.32 / MAX: 4.45)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 (Seconds, Fewer Is Better)
1. (CXX) g++ options: -O3 -fPIC

Encoder Speed: 0
  EPYC 7F72: 62.72 (SE +/- 0.21, N = 3; Min: 62.48 / Avg: 62.72 / Max: 63.13)
  EPYC 7F72 Run 2: 62.46 (SE +/- 0.18, N = 3; Min: 62.25 / Avg: 62.46 / Max: 62.81)

Encoder Speed: 2
  EPYC 7F72: 38.04 (SE +/- 0.18, N = 3; Min: 37.86 / Avg: 38.04 / Max: 38.4)
  EPYC 7F72 Run 2: 37.85 (SE +/- 0.03, N = 3; Min: 37.81 / Avg: 37.85 / Max: 37.91)

Encoder Speed: 8
  EPYC 7F72: 5.644 (SE +/- 0.022, N = 3; Min: 5.62 / Avg: 5.64 / Max: 5.69)
  EPYC 7F72 Run 2: 5.760 (SE +/- 0.027, N = 3; Min: 5.71 / Avg: 5.76 / Max: 5.8)

Encoder Speed: 10
  EPYC 7F72: 5.504 (SE +/- 0.015, N = 3; Min: 5.48 / Avg: 5.5 / Max: 5.53)
  EPYC 7F72 Run 2: 5.613 (SE +/- 0.006, N = 3; Min: 5.6 / Avg: 5.61 / Max: 5.62)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
EPYC 7F72: 38.89 (SE +/- 0.62, N = 3; Min: 38.18 / Avg: 38.89 / Max: 40.13)
EPYC 7F72 Run 2: 38.97 (SE +/- 0.54, N = 3; Min: 38.42 / Avg: 38.97 / Max: 40.05)

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.12 - Time To Compile (Seconds, Fewer Is Better)
EPYC 7F72: 68.41 (SE +/- 0.22, N = 3; Min: 68.01 / Avg: 68.41 / Max: 68.78)
EPYC 7F72 Run 2: 68.56 (SE +/- 0.18, N = 3; Min: 68.37 / Avg: 68.56 / Max: 68.93)

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1 - Total Time For Sample Scene (Seconds, Fewer Is Better)
EPYC 7F72: 89.83 (SE +/- 0.34, N = 3)
1. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks, intended for evaluating programming models in the context of vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite (Test Cases Per Minute, More Is Better)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Backend: OpenMP - Kernel: NDT Mapping
  EPYC 7F72: 903.09 (SE +/- 0.89, N = 3; Min: 901.63 / Avg: 903.09 / Max: 904.71)
  EPYC 7F72 Run 2: 900.24 (SE +/- 2.30, N = 3; Min: 896.91 / Avg: 900.24 / Max: 904.67)

Backend: OpenMP - Kernel: Points2Image
  EPYC 7F72 Run 2: 19784.89 (SE +/- 108.70, N = 3)

Backend: OpenMP - Kernel: Euclidean Cluster
  EPYC 7F72 Run 2: 1018.62 (SE +/- 1.61, N = 3)

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0 (Seconds, Fewer Is Better)
EPYC 7F72 Run 2: 8.835 (SE +/- 0.049, N = 5)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 (Bogo Ops/s, More Is Better); all results for EPYC 7F72 Run 2
1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Test: MMAP: 358.17 (SE +/- 0.53, N = 3)
Test: NUMA: 575.24 (SE +/- 3.66, N = 3)
Test: MEMFD: 849.73 (SE +/- 0.96, N = 3)
Test: Atomic: 359926.03 (SE +/- 1217.33, N = 3)
Test: Crypto: 6170.75 (SE +/- 23.43, N = 3)
Test: Malloc: 490855907.52 (SE +/- 6056135.38, N = 3)
Test: Forking: 63224.09 (SE +/- 1114.69, N = 12)
Test: SENDFILE: 395262.32 (SE +/- 1896.82, N = 3)
Test: CPU Cache: 50.95 (SE +/- 0.82, N = 15)
Test: CPU Stress: 8680.21 (SE +/- 9.82, N = 3)
Test: Semaphores: 3438713.73 (SE +/- 14318.02, N = 3)
Test: Matrix Math: 101144.44 (SE +/- 494.88, N = 3)
Test: Vector Math: 193600.89 (SE +/- 430.21, N = 3)
Test: Memory Copying: 12234.21 (SE +/- 81.56, N = 3)
Test: Socket Activity: 12964.41 (SE +/- 24.21, N = 3)
Test: Context Switching: 8992728.62 (SE +/- 34750.11, N = 3)
Test: Glibc C String Functions: 3332245.82 (SE +/- 6547.03, N = 3)
Test: Glibc Qsort Data Sorting: 360.19 (SE +/- 1.67, N = 3)
Test: System V Message Passing: 13772877.08 (SE +/- 192031.18, N = 3)

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 4.10.07 - Mode: CPU (Ksamples, More Is Better)
EPYC 7F72 Run 2: 34002 (SE +/- 106.71, N = 3)

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

Git - Time To Complete Common Git Commands (Seconds, Fewer Is Better)
EPYC 7F72 Run 2: 54.74 (SE +/- 0.02, N = 3)
1. git version 2.25.1

BRL-CAD

BRL-CAD 7.30.8 is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric (VGR Performance Metric, More Is Better)
EPYC 7F72 Run 2: 314458
1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -lm

77 Results Shown

WireGuard + Linux Networking Stack Stress Test
LeelaChessZero:
  BLAS
  Eigen
  Rand
Rodinia:
  OpenMP LavaMD
  OpenMP Myocyte
  OpenMP HotSpot3D
  OpenMP Leukocyte
  OpenMP CFD Solver
  OpenMP Streamcluster
Java Gradle Build
Zstd Compression:
  3
  19
oneDNN:
  IP Batch 1D - f32 - CPU
  IP Batch All - f32 - CPU
  IP Batch 1D - u8s8f32 - CPU
  IP Batch All - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch deconv_1d - f32 - CPU
  Deconvolution Batch deconv_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch deconv_1d - u8s8f32 - CPU
  Deconvolution Batch deconv_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
dav1d:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
AOM AV1:
  Speed 0 Two-Pass
  Speed 4 Two-Pass
  Speed 6 Realtime
  Speed 6 Two-Pass
  Speed 8 Realtime
SVT-AV1:
  Enc Mode 0 - 1080p
  Enc Mode 4 - 1080p
  Enc Mode 8 - 1080p
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
  Visual Quality Optimized - Bosphorus 1080p
LuxCoreRender:
  DLSC
  Rainbow Colors and Prism
libavif avifenc:
  0
  2
  8
  10
Timed Linux Kernel Compilation
Build2
YafaRay
Darmstadt Automotive Parallel Heterogeneous Suite:
  OpenMP - NDT Mapping
  OpenMP - Points2Image
  OpenMP - Euclidean Cluster
GNU Octave Benchmark
Stress-NG:
  MMAP
  NUMA
  MEMFD
  Atomic
  Crypto
  Malloc
  Forking
  SENDFILE
  CPU Cache
  CPU Stress
  Semaphores
  Matrix Math
  Vector Math
  Memory Copying
  Socket Activity
  Context Switching
  Glibc C String Functions
  Glibc Qsort Data Sorting
  System V Message Passing
Chaos Group V-RAY
Git
BRL-CAD