EPYC 7262

AMD EPYC 7262 8-Core testing with an ASRockRack EPYCD8 (P2.10 BIOS) and llvmpipe 126GB on Ubuntu 18.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2003167-NI-EPYC7262764.

EPYC 7262:
Processor: AMD EPYC 7262 8-Core @ 3.20GHz (8 Cores / 16 Threads)
Motherboard: ASRockRack EPYCD8 (P2.10 BIOS)
Chipset: AMD Device 1480
Memory: 126GB
Disk: 280GB INTEL SSDPED1D280GA
Graphics: llvmpipe 126GB
Audio: AMD Device 1487
Monitor: VE228
Network: 2 x Intel I350
OS: Ubuntu 18.04
Kernel: 5.3.0-40-generic (x86_64)
Desktop: GNOME Shell 3.28.4
Display Server: X Server 1.20.5
Display Driver: modesetting 1.20.5
OpenGL: 3.3 Mesa 19.2.8 (LLVM 9.0 128 bits)
Compiler: GCC 7.5.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x830101c
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + tsx_async_abort: Not affected

EPYC 7262 Result Overview

NAS Parallel Benchmarks (Total Mop/s):
  BT.C: 20351.51
  CG.C: 10310.05
  EP.C: 320.71
  EP.D: 321.01
  FT.C: 24866.75
  IS.D: 1206.76
  LU.C: 37649.61
  MG.C: 41750.52
  SP.B: 13410.70

LuxCoreRender (M samples/sec):
  DLSC: 1.39
  Rainbow Colors and Prism: 1.50

Intel MPI Benchmarks:
  IMB-P2P PingPong: 3568395.21 Average Msg/sec
  IMB-MPI1 Exchange: 6279.40 Average Mbytes/sec
  IMB-MPI1 Exchange: 84.17 Average usec
  IMB-MPI1 PingPong: 4039.93 Average Mbytes/sec
  IMB-MPI1 Sendrecv: 4884.38 Average Mbytes/sec
  IMB-MPI1 Sendrecv: 63.97 Average usec

Memcached mcperf (Operations Per Second):
  Add - 16 Connections: 35264.3
  Get - 16 Connections: 55440.8

NAS Parallel Benchmarks

Test / Class: BT.C

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
EPYC 7262: 20351.51 (SE +/- 33.99, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1
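
For scale, the standard error on these charts can be read as a fraction of the mean: for BT.C, 33.99 / 20351.51 ≈ 0.17%, so the three runs agree to well under one percent. The same reading applies to the SE figures on the remaining charts.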

NAS Parallel Benchmarks

Test / Class: CG.C

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
EPYC 7262: 10310.05 (SE +/- 57.80, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: EP.C

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
EPYC 7262: 320.71 (SE +/- 0.22, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: EP.D

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
EPYC 7262: 321.01 (SE +/- 0.03, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: FT.C

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
EPYC 7262: 24866.75 (SE +/- 248.28, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: IS.D

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
EPYC 7262: 1206.76 (SE +/- 0.64, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: LU.C

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
EPYC 7262: 37649.61 (SE +/- 44.07, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: MG.C

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
EPYC 7262: 41750.52 (SE +/- 37.19, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

NAS Parallel Benchmarks

Test / Class: SP.B

Total Mop/s, More Is Better (NAS Parallel Benchmarks 3.4)
EPYC 7262: 13410.70 (SE +/- 231.40, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 2.1.1

LuxCoreRender

Scene: DLSC

M samples/sec, More Is Better (LuxCoreRender 2.3)
EPYC 7262: 1.39 (SE +/- 0.01, N = 3) MIN: 1.32 / MAX: 1.42

LuxCoreRender

Scene: Rainbow Colors and Prism

M samples/sec, More Is Better (LuxCoreRender 2.3)
EPYC 7262: 1.50 (SE +/- 0.02, N = 3) MIN: 1.45 / MAX: 1.56

Intel MPI Benchmarks

Test: IMB-P2P PingPong

Average Msg/sec, More Is Better (Intel MPI Benchmarks 2019.3)
EPYC 7262: 3568395.21 (SE +/- 49321.07, N = 4) MIN: 3660 / MAX: 8461461
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi
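
IMB-P2P PingPong bounces messages between pairs of ranks and reports the aggregate message rate. The sketch below shows the bare communication pattern behind that number using plain MPI_Send/MPI_Recv; the message size, iteration count, and two-rank setup are illustrative assumptions, not the parameters IMB actually sweeps.

/* Minimal two-rank ping-pong, compiled against any MPI implementation.
 * The message size and iteration count are illustrative assumptions;
 * the Intel MPI Benchmarks sweep many sizes and aggregate the results. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int iters = 10000;             /* assumed iteration count */
    const int msg_bytes = 1024;          /* assumed message size */
    char *buf = calloc(msg_bytes, 1);

    if (size >= 2 && rank < 2) {
        int peer = 1 - rank;             /* rank 0 pairs with rank 1 */
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {             /* send, then wait for the echo */
                MPI_Send(buf, msg_bytes, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, msg_bytes, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {                     /* echo each message back */
                MPI_Recv(buf, msg_bytes, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, msg_bytes, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            }
        }
        double elapsed = MPI_Wtime() - t0;
        if (rank == 0)
            printf("%d-byte round trips per second: %.1f\n",
                   msg_bytes, iters / elapsed);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Because IMB averages over many message sizes (the MIN/MAX spread above runs from 3660 to 8461461 Msg/sec), a single fixed-size loop like this will not reproduce the averaged figure, but it is the pattern that figure summarizes.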

Intel MPI Benchmarks

Test: IMB-MPI1 Exchange

Average Mbytes/sec, More Is Better (Intel MPI Benchmarks 2019.3)
EPYC 7262: 6279.40 (SE +/- 96.46, N = 3) MAX: 22079.21
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Exchange

Average usec, Fewer Is Better (Intel MPI Benchmarks 2019.3)
EPYC 7262: 84.17 (SE +/- 1.50, N = 3) MIN: 0.83 / MAX: 1023.38
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 PingPong

Average Mbytes/sec, More Is Better (Intel MPI Benchmarks 2019.3)
EPYC 7262: 4039.93 (SE +/- 35.34, N = 11) MIN: 4.02 / MAX: 13117
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Sendrecv

Average Mbytes/sec, More Is Better (Intel MPI Benchmarks 2019.3)
EPYC 7262: 4884.38 (SE +/- 29.97, N = 3) MAX: 23740.32
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Intel MPI Benchmarks

Test: IMB-MPI1 Sendrecv

Average usec, Fewer Is Better (Intel MPI Benchmarks 2019.3)
EPYC 7262: 63.97 (SE +/- 0.87, N = 3) MIN: 0.5 / MAX: 928.8
1. (CXX) g++ options: -O0 -pedantic -fopenmp -pthread -lmpi_cxx -lmpi

Memcached mcperf

Method: Add - Connections: 16

Operations Per Second, More Is Better (Memcached mcperf 1.6.0)
EPYC 7262: 35264.3 (SE +/- 263.07, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic

Memcached mcperf

Method: Get - Connections: 16

Operations Per Second, More Is Better (Memcached mcperf 1.6.0)
EPYC 7262: 55440.8 (SE +/- 290.01, N = 3)
1. (CC) gcc options: -O2 -lm -rdynamic
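
The two mcperf results above drive memcached's add and get commands over 16 client connections. As a sketch of what a single one of those operations looks like from a client, assuming libmemcached is available and a memcached server is listening on localhost:11211 (the key and value below are hypothetical):

/* One add followed by one get against a local memcached server.
 * Assumes libmemcached and a server on localhost:11211; this illustrates
 * the operations mcperf drives, it is not the mcperf load generator. */
#include <libmemcached/memcached.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    memcached_st *memc = memcached_create(NULL);
    memcached_server_add(memc, "localhost", 11211);

    const char *key = "bench:key";       /* hypothetical key/value */
    const char *value = "hello";

    /* "Add" stores the key only if it does not already exist. */
    memcached_return_t rc = memcached_add(memc, key, strlen(key),
                                          value, strlen(value),
                                          (time_t)0, (uint32_t)0);
    printf("add: %s\n", memcached_strerror(memc, rc));

    /* "Get" fetches it back. */
    size_t len = 0;
    uint32_t flags = 0;
    char *fetched = memcached_get(memc, key, strlen(key), &len, &flags, &rc);
    if (fetched) {
        printf("get: %.*s\n", (int)len, fetched);
        free(fetched);
    }

    memcached_free(memc);
    return 0;
}

mcperf issues many such operations concurrently across its 16 connections; the Operations Per Second figures are the aggregate throughput of that load, not a single client loop.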


Phoronix Test Suite v10.8.4