AMD EPYC 7742 2P June

2 x AMD EPYC 7742 64-Core testing with an AMD DAYTONA_X (RDY1006G BIOS) motherboard, 504GB of RAM, and llvmpipe graphics on Ubuntu 20.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2006292-NE-AMDEPYC7709&grr.

System Details - EPYC 7742 2P

  Processor: 2 x AMD EPYC 7742 64-Core @ 2.25GHz (128 Cores / 256 Threads)
  Motherboard: AMD DAYTONA_X (RDY1006G BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 504GB
  Disk: 3841GB Micron_9300_MTFDHAL3T8TDP
  Graphics: llvmpipe 504GB
  Monitor: VE228
  Network: 2 x Mellanox MT27710
  OS: Ubuntu 20.04
  Kernel: 5.4.0-31-generic (x86_64)
  Desktop: GNOME Shell 3.36.1
  Display Server: X Server 1.20.8
  Display Driver: modesetting 1.20.8
  OpenGL: 3.3 Mesa 20.0.4 (LLVM 9.0.1 128 bits)
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 1920x1080

OpenBenchmarking.org notes:
  Compiler: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x8301034
  Python: Python 2.7.18rc1 + Python 3.8.2
  Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + tsx_async_abort: Not affected
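For anyone who wants to compare their own hardware against these numbers, the Phoronix Test Suite can replay a public OpenBenchmarking.org result by its ID. The snippet below is only a minimal sketch that shells out to the phoronix-test-suite client (assumed to be installed and on PATH); the result ID is taken from the URL above, and the exact prompts and behavior may vary by PTS version.

  # Minimal sketch: re-run this result's test selection locally for comparison.
  # Assumes the phoronix-test-suite client is installed and on PATH; its
  # "benchmark <result ID>" sub-command fetches the test selection behind a
  # public OpenBenchmarking.org result and runs it on the local machine.
  import subprocess

  RESULT_ID = "2006292-NE-AMDEPYC7709"  # from the openbenchmarking.org URL above
  subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)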

Result Overview - EPYC 7742 2P

  Darmstadt Automotive Parallel Heterogeneous Suite (OpenMP - Points2Image): 9943.53 Test Cases Per Minute
  WireGuard + Linux Networking Stack Stress Test: 399.130 Seconds
  Rodinia (OpenMP Myocyte): 215.032 Seconds
  oneDNN (Recurrent Neural Network Training - f32 - CPU): 903.599 ms
  Rodinia (OpenMP HotSpot3D): 109.697 Seconds
  PyPerformance (raytrace): 547 Milliseconds
  Timed Linux Kernel Compilation (Time To Compile): 20.738 Seconds
  PyPerformance (python_startup): 15.9 Milliseconds
  PyPerformance (2to3): 372 Milliseconds
  PyPerformance (go): 291 Milliseconds
  oneDNN (IP Batch All - f32 - CPU): 19.0871 ms
  oneDNN (IP Batch All - u8s8f32 - CPU): 9.92480 ms
  Rodinia (OpenMP Streamcluster): 9.985 Seconds
  Rodinia (OpenMP Leukocyte): 47.448 Seconds
  PyPerformance (regex_compile): 200 Milliseconds
  oneDNN (Recurrent Neural Network Inference - f32 - CPU): 356.030 ms
  PyPerformance (pathlib): 20.1 Milliseconds
  Darmstadt Automotive Parallel Heterogeneous Suite (OpenMP - NDT Mapping): 696.47 Test Cases Per Minute
  Darmstadt Automotive Parallel Heterogeneous Suite (OpenMP - Euclidean Cluster): 860.50 Test Cases Per Minute
  PyPerformance (pickle_pure_python): 552 Milliseconds
  PyPerformance (json_loads): 32.6 Milliseconds
  PyPerformance (django_template): 59.1 Milliseconds
  Rodinia (OpenMP LavaMD): 29.012 Seconds
  PyPerformance (chaos): 130 Milliseconds
  PyPerformance (float): 135 Milliseconds
  PyPerformance (nbody): 130 Milliseconds
  PyPerformance (crypto_pyaes): 125 Milliseconds
  oneDNN (Deconvolution Batch deconv_1d - u8s8f32 - CPU): 2.12822 ms
  oneDNN (Deconvolution Batch deconv_1d - f32 - CPU): 2.80983 ms
  oneDNN (IP Batch 1D - u8s8f32 - CPU): 3.08972 ms
  oneDNN (IP Batch 1D - f32 - CPU): 1.96660 ms
  oneDNN (Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU): 0.803470 ms
  oneDNN (Matrix Multiply Batch Shapes Transformer - f32 - CPU): 0.723990 ms
  Rodinia (OpenMP CFD Solver): 8.910 Seconds
  oneDNN (Convolution Batch Shapes Auto - u8s8f32 - CPU): 2.56001 ms
  oneDNN (Convolution Batch Shapes Auto - f32 - CPU): 0.715804 ms
  oneDNN (Deconvolution Batch deconv_3d - u8s8f32 - CPU): 1.13591 ms
  oneDNN (Deconvolution Batch deconv_3d - f32 - CPU): 2.67201 ms

Darmstadt Automotive Parallel Heterogeneous Suite

Backend: OpenMP - Kernel: Points2Image

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Points2Image - Test Cases Per Minute, More Is Better
EPYC 7742 2P: 9943.53 (SE +/- 107.69, N = 12)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp
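Two conventions used throughout these tables are worth spelling out: the SE +/- figure is the standard error of the mean over the N recorded runs, and DAPHNE's test-cases-per-minute throughput can be inverted to an average time per test case. The Python sketch below illustrates both calculations; the per-run samples are hypothetical placeholders, not the actual trial data behind this result.

  # Illustrative only: the sample scores below are hypothetical, not the real runs.
  import math
  import statistics

  samples = [9890.0, 9712.0, 10120.0, 9981.0]               # hypothetical per-run scores
  mean = statistics.mean(samples)
  se = statistics.stdev(samples) / math.sqrt(len(samples))  # standard error of the mean
  print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(samples)})")

  # Inverting the reported throughput: 9943.53 test cases per minute
  # corresponds to roughly 60,000 / 9943.53 ~= 6.03 ms per test case.
  ms_per_case = 60_000 / 9943.53
  print(f"~{ms_per_case:.2f} ms per test case")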

WireGuard + Linux Networking Stack Stress Test

WireGuard + Linux Networking Stack Stress Test - Seconds, Fewer Is Better
EPYC 7742 2P: 399.13 (SE +/- 4.06, N = 3)

Rodinia

Test: OpenMP Myocyte

Rodinia 3.1 - Test: OpenMP Myocyte - Seconds, Fewer Is Better
EPYC 7742 2P: 215.03 (SE +/- 3.24, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 1.5 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 903.60 (SE +/- 9.87, N = 15; MIN: 810.09)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Rodinia

Test: OpenMP HotSpot3D

Rodinia 3.1 - Test: OpenMP HotSpot3D - Seconds, Fewer Is Better
EPYC 7742 2P: 109.70 (SE +/- 1.80, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

PyPerformance

Benchmark: raytrace

PyPerformance 1.0.0 - Benchmark: raytrace - Milliseconds, Fewer Is Better
EPYC 7742 2P: 547 (SE +/- 0.58, N = 3)

Timed Linux Kernel Compilation

Time To Compile

Timed Linux Kernel Compilation 5.4 - Time To Compile - Seconds, Fewer Is Better
EPYC 7742 2P: 20.74 (SE +/- 0.23, N = 13)

PyPerformance

Benchmark: python_startup

PyPerformance 1.0.0 - Benchmark: python_startup - Milliseconds, Fewer Is Better
EPYC 7742 2P: 15.9 (SE +/- 0.00, N = 3)

PyPerformance

Benchmark: 2to3

PyPerformance 1.0.0 - Benchmark: 2to3 - Milliseconds, Fewer Is Better
EPYC 7742 2P: 372

PyPerformance

Benchmark: go

PyPerformance 1.0.0 - Benchmark: go - Milliseconds, Fewer Is Better
EPYC 7742 2P: 291 (SE +/- 0.33, N = 3)

oneDNN

Harness: IP Batch All - Data Type: f32 - Engine: CPU

oneDNN 1.5 - Harness: IP Batch All - Data Type: f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 19.09 (SE +/- 0.14, N = 3; MIN: 15.61)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU

oneDNN 1.5 - Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 9.92480 (SE +/- 0.02795, N = 3; MIN: 9.24)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Rodinia

Test: OpenMP Streamcluster

Rodinia 3.1 - Test: OpenMP Streamcluster - Seconds, Fewer Is Better
EPYC 7742 2P: 9.985 (SE +/- 0.125, N = 15)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia

Test: OpenMP Leukocyte

Rodinia 3.1 - Test: OpenMP Leukocyte - Seconds, Fewer Is Better
EPYC 7742 2P: 47.45 (SE +/- 0.23, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

PyPerformance

Benchmark: regex_compile

PyPerformance 1.0.0 - Benchmark: regex_compile - Milliseconds, Fewer Is Better
EPYC 7742 2P: 200

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 1.5 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 356.03 (SE +/- 2.39, N = 3; MIN: 329.51)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

PyPerformance

Benchmark: pathlib

PyPerformance 1.0.0 - Benchmark: pathlib - Milliseconds, Fewer Is Better
EPYC 7742 2P: 20.1 (SE +/- 0.03, N = 3)

Darmstadt Automotive Parallel Heterogeneous Suite

Backend: OpenMP - Kernel: NDT Mapping

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: NDT Mapping - Test Cases Per Minute, More Is Better
EPYC 7742 2P: 696.47 (SE +/- 5.57, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Darmstadt Automotive Parallel Heterogeneous Suite

Backend: OpenMP - Kernel: Euclidean Cluster

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Euclidean Cluster - Test Cases Per Minute, More Is Better
EPYC 7742 2P: 860.50 (SE +/- 10.61, N = 4)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

PyPerformance

Benchmark: pickle_pure_python

PyPerformance 1.0.0 - Benchmark: pickle_pure_python - Milliseconds, Fewer Is Better
EPYC 7742 2P: 552 (SE +/- 0.88, N = 3)

PyPerformance

Benchmark: json_loads

PyPerformance 1.0.0 - Benchmark: json_loads - Milliseconds, Fewer Is Better
EPYC 7742 2P: 32.6 (SE +/- 0.03, N = 3)

PyPerformance

Benchmark: django_template

PyPerformance 1.0.0 - Benchmark: django_template - Milliseconds, Fewer Is Better
EPYC 7742 2P: 59.1 (SE +/- 0.22, N = 3)

Rodinia

Test: OpenMP LavaMD

Rodinia 3.1 - Test: OpenMP LavaMD - Seconds, Fewer Is Better
EPYC 7742 2P: 29.01 (SE +/- 0.08, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

PyPerformance

Benchmark: chaos

PyPerformance 1.0.0 - Benchmark: chaos - Milliseconds, Fewer Is Better
EPYC 7742 2P: 130

PyPerformance

Benchmark: float

PyPerformance 1.0.0 - Benchmark: float - Milliseconds, Fewer Is Better
EPYC 7742 2P: 135

PyPerformance

Benchmark: nbody

PyPerformance 1.0.0 - Benchmark: nbody - Milliseconds, Fewer Is Better
EPYC 7742 2P: 130

PyPerformance

Benchmark: crypto_pyaes

PyPerformance 1.0.0 - Benchmark: crypto_pyaes - Milliseconds, Fewer Is Better
EPYC 7742 2P: 125 (SE +/- 0.33, N = 3)

oneDNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 2.12822 (SE +/- 0.00525, N = 3; MIN: 1.95)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU

oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 2.80983 (SE +/- 0.01838, N = 3; MIN: 2.56)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 1.5 - Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 3.08972 (SE +/- 0.02410, N = 3; MIN: 2.74)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: IP Batch 1D - Data Type: f32 - Engine: CPU

oneDNN 1.5 - Harness: IP Batch 1D - Data Type: f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 1.96660 (SE +/- 0.00258, N = 3; MIN: 1.77)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 0.803470 (SE +/- 0.004225, N = 3; MIN: 0.72)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 0.723990 (SE +/- 0.003819, N = 3; MIN: 0.64)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Rodinia

Test: OpenMP CFD Solver

Rodinia 3.1 - Test: OpenMP CFD Solver - Seconds, Fewer Is Better
EPYC 7742 2P: 8.910 (SE +/- 0.064, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 2.56001 (SE +/- 0.04145, N = 3; MIN: 1.93)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 0.715804 (SE +/- 0.002931, N = 3; MIN: 0.66)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 1.13591 (SE +/- 0.00811, N = 3; MIN: 0.98)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN

Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU

oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU - ms, Fewer Is Better
EPYC 7742 2P: 2.67201 (SE +/- 0.01599, N = 3; MIN: 2.39)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl


Phoronix Test Suite v10.8.5