2 x AMD EPYC 7642 48-Core testing with an AMD DAYTONA_X (RDY1001C BIOS) and llvmpipe 504GB on Ubuntu 19.10 via the Phoronix Test Suite.
EPYC 7642 2P Processor: 2 x AMD EPYC 7642 48-Core @ 2.30GHz (96 Cores / 192 Threads), Motherboard: AMD DAYTONA_X (RDY1001C BIOS), Chipset: AMD Starship/Matisse, Memory: 516096MB, Disk: 280GB INTEL SSDPED1D280GA + 256GB Micron_1100_MTFD, Graphics: llvmpipe 504GB, Network: 2 x Mellanox MT27710
OS: Ubuntu 19.10, Kernel: 5.3.0-18-generic (x86_64), Desktop: GNOME Shell 3.34.1, Display Server: X Server 1.20.5, Display Driver: modesetting 1.20.5, OpenGL: 3.3 Mesa 19.2.1 (LLVM 9.0 128 bits), Compiler: GCC 9.2.1 20191008, File-System: ext4, Screen Resolution: 1920x1080
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Notes: NONE / errors=remount-ro,relatime,rw
Processor Notes: Scaling Governor: acpi-cpufreq ondemand
Java Notes: OpenJDK Runtime Environment (build 11.0.5-ea+9-post-Ubuntu-1ubuntu1)
Python Notes: Python 2.7.17rc1 + Python 3.7.5rc1
Security Notes: l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling
IOR 3.2.1, Read Test (MB/s, more is better). EPYC 7642 2P: 1853.75 (SE +/- 5.47, N = 3; MIN: 947.1 / MAX: 2038.08). 1. (CC) gcc options: -O2 -lm -pthread -lmpi
NAS Parallel Benchmarks NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
NAS Parallel Benchmarks 3.4 (Total Mop/s, more is better), EPYC 7642 2P:
Test / Class: BT.C: 201645.45 (SE +/- 114.28, N = 3)
Test / Class: EP.C: 6390.63 (SE +/- 33.28, N = 3)
Test / Class: EP.D: 6617.27 (SE +/- 4.17, N = 3)
Test / Class: FT.C: 79464.67 (SE +/- 119.77, N = 3)
Test / Class: LU.C: 230894.32 (SE +/- 448.47, N = 3)
Test / Class: MG.C: 99614.54 (SE +/- 908.49, N = 10)
Test / Class: SP.B: 123514.78 (SE +/- 1182.49, N = 9)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi 2. Open MPI 3.1.3
NAMD NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
NAMD 2.13b1, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better). EPYC 7642 2P: 0.30544 (SE +/- 0.00151, N = 3)
Pennant 1.0.1, Test: leblancbig (Hydro Cycle Time - Seconds, fewer is better). EPYC 7642 2P: 177.25 (SE +/- 0.20, N = 3). 1. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi
Timed MrBayes Analysis This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
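MrBayes carries out its Bayesian estimation with Markov chain Monte Carlo sampling. As a toy illustration of the Metropolis sampling idea only (on a hypothetical one-parameter coin-bias model with made-up numbers, nothing like MrBayes' actual tree-space sampler), a minimal sketch:

```python
import math
import random

def metropolis_posterior(heads, flips, steps=20000, seed=42):
    """Toy Metropolis sampler: posterior mean of a coin's bias p
    given `heads` successes in `flips` trials, with a uniform prior."""
    rng = random.Random(seed)

    def log_like(p):
        return heads * math.log(p) + (flips - heads) * math.log(1.0 - p)

    p, samples = 0.5, []
    for _ in range(steps):
        # Gaussian random-walk proposal, clamped inside (0, 1).
        q = min(max(p + rng.gauss(0.0, 0.1), 1e-6), 1.0 - 1e-6)
        # Accept with probability min(1, likelihood ratio); prior is flat.
        if rng.random() < math.exp(min(0.0, log_like(q) - log_like(p))):
            p = q
        samples.append(p)
    burn = steps // 4  # discard burn-in before averaging
    return sum(samples[burn:]) / len(samples[burn:])

est = metropolis_posterior(heads=70, flips=100)
print(f"posterior mean of p: {est:.3f}")  # analytic answer is 71/102, about 0.696
```

The averaged post-burn-in samples approximate the posterior mean; MrBayes applies the same accept/reject machinery to phylogenetic trees and model parameters rather than a single scalar.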
Timed MrBayes Analysis 3.2.7, Primate Phylogeny Analysis (Seconds, fewer is better). EPYC 7642 2P: 107.51 (SE +/- 1.58, N = 4). 1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm
MKL-DNN DNNL This is a test of the Intel MKL-DNN (DNNL / Deep Neural Network Library), an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.
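The "IP" harnesses below time inner-product (fully connected) layers. As a rough illustration of what that primitive computes, here is a minimal pure-Python sketch; the real benchdnn harness is optimized C++, and the `inner_product` function, shapes, and values here are made up for illustration:

```python
import time

def inner_product(x, w, b):
    """Naive f32-style 'IP' (fully connected) layer:
    y[i][j] = sum_k x[i][k] * w[k][j] + b[j]."""
    n, k = len(x), len(x[0])
    m = len(w[0])
    return [[sum(x[i][t] * w[t][j] for t in range(k)) + b[j] for j in range(m)]
            for i in range(n)]

# Tiny batch: 2 inputs of 3 features, mapped to 2 outputs each.
x = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
w = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [0.5, -0.5]

t0 = time.perf_counter()
y = inner_product(x, w, b)
elapsed_ms = (time.perf_counter() - t0) * 1000.0  # benchdnn-style: time in ms

print(y)  # [[4.5, 4.5], [10.5, 10.5]]
```

benchdnn times such primitives over many iterations with vendor-optimized kernels, which is how the millisecond results below are produced.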
MKL-DNN DNNL 1.1 (ms, fewer is better), EPYC 7642 2P:
Harness: IP Batch 1D - Data Type: f32: 1.89 (SE +/- 0.01, N = 3; MIN: 1.62)
Harness: IP Batch All - Data Type: f32: 10.33 (SE +/- 0.10, N = 3; MIN: 9.3)
Harness: Convolution Batch conv_3d - Data Type: f32: 4.15 (SE +/- 0.02, N = 3; MIN: 3.55)
Harness: Convolution Batch conv_all - Data Type: f32: 472.70 (SE +/- 7.97, N = 3; MIN: 440.83)
Harness: Deconvolution Batch deconv_1d - Data Type: f32: 2.82 (SE +/- 0.04, N = 4; MIN: 2.52)
Harness: Deconvolution Batch deconv_3d - Data Type: f32: 3.01 (SE +/- 0.09, N = 12; MIN: 2.28)
Harness: Convolution Batch conv_alexnet - Data Type: f32: 59.54 (SE +/- 0.55, N = 15; MIN: 53.81)
Harness: Deconvolution Batch deconv_all - Data Type: f32: 2217.91 (SE +/- 26.20, N = 3; MIN: 2049.87)
Harness: Recurrent Neural Network Training - Data Type: f32: 666.62 (SE +/- 6.71, N = 3; MIN: 603.35)
Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32: 32.06 (SE +/- 0.54, N = 3; MIN: 28.52)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl
dav1d 0.5.0 (FPS, more is better), EPYC 7642 2P:
Video Input: Summer Nature 4K: 325.32 (SE +/- 1.29, N = 3; MIN: 73.11 / MAX: 396.1)
Video Input: Summer Nature 1080p: 753.59 (SE +/- 3.89, N = 3; MIN: 173.57 / MAX: 911.57)
Video Input: Chimera 1080p 10-bit: 98.99 (SE +/- 0.09, N = 3; MIN: 66.44 / MAX: 163.99)
1. (CC) gcc options: -pthread
OSPray Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds on Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OSPray 1.8.5 (FPS, more is better), EPYC 7642 2P:
Demo: San Miguel - Renderer: SciVis: 76.92 (SE +/- 0.00, N = 12; MIN: 23.26 / MAX: 90.91)
Demo: NASA Streamlines - Renderer: SciVis: 101.48 (SE +/- 1.01, N = 15; MIN: 14.71 / MAX: 111.11)
Demo: NASA Streamlines - Renderer: Path Tracer: 24.39 (SE +/- 0.00, N = 12; MIN: 10.1 / MAX: 25)
Demo: Magnetic Reconnection - Renderer: Path Tracer: 333.33 (SE +/- 0.00, N = 12; MIN: 62.5 / MAX: 500)
Embree 3.6.1 (Frames Per Second, more is better), EPYC 7642 2P:
Binary: Pathtracer ISPC - Model: Crown: 64.84 (SE +/- 0.21, N = 3; MIN: 61.46 / MAX: 69.24)
Binary: Pathtracer - Model: Asian Dragon: 57.18 (SE +/- 0.05, N = 3; MIN: 55.25 / MAX: 60.16)
Binary: Pathtracer - Model: Asian Dragon Obj: 50.25 (SE +/- 0.07, N = 3; MIN: 48.61 / MAX: 53.26)
Binary: Pathtracer ISPC - Model: Asian Dragon: 55.07 (SE +/- 0.15, N = 3; MIN: 53.07 / MAX: 58.31)
Binary: Pathtracer ISPC - Model: Asian Dragon Obj: 47.54 (SE +/- 0.06, N = 3; MIN: 46.02 / MAX: 50.79)
SVT-AV1 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-AV1 0.7 (Frames Per Second, more is better), EPYC 7642 2P:
Encoder Mode: Enc Mode 0 - Input: 1080p: 0.05 (SE +/- 0.00, N = 3)
Encoder Mode: Enc Mode 4 - Input: 1080p: 9.64 (SE +/- 0.07, N = 3)
Encoder Mode: Enc Mode 8 - Input: 1080p: 103.72 (SE +/- 0.50, N = 3)
1. (CXX) g++ options: -fPIE -fPIC -pie
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-VP9 0.1 (Frames Per Second, more is better), EPYC 7642 2P:
Tuning: VMAF Optimized - Input: Bosphorus 1080p: 299.85 (SE +/- 2.66, N = 3)
Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p: 308.06 (SE +/- 3.89, N = 3)
Tuning: Visual Quality Optimized - Input: Bosphorus 1080p: 249.96 (SE +/- 3.42, N = 3)
1. (CC) gcc options: -fPIE -fPIC -fvisibility=hidden -O3 -pie -rdynamic -lpthread -lrt -lm
C-Ray This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. This test is multi-threaded (16 threads per core), shoots 8 rays per pixel for anti-aliasing, and generates a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.
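The rays-per-pixel setting is a supersampling anti-aliasing knob: several jittered rays are shot through each pixel and their results averaged, which smooths object edges. A minimal sketch of that idea (the `shade` function and the disc scene are hypothetical, not C-Ray's actual code):

```python
import random

def shade(x, y):
    """Hypothetical per-ray shading: 1.0 inside a disc, 0.0 outside."""
    return 1.0 if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.16 else 0.0

def render(width, height, rays_per_pixel=8, seed=1):
    """Average several jittered samples per pixel, as rays-per-pixel
    anti-aliasing does in a raytracer."""
    rng = random.Random(seed)
    img = []
    for j in range(height):
        row = []
        for i in range(width):
            acc = 0.0
            for _ in range(rays_per_pixel):
                u = (i + rng.random()) / width   # jitter inside the pixel
                v = (j + rng.random()) / height
                acc += shade(u, v)
            row.append(acc / rays_per_pixel)     # averaged -> smooth edges
        img.append(row)
    return img

img = render(16, 12)
```

Pixels straddling the disc's edge end up with fractional values between 0.0 and 1.0, which is exactly the anti-aliasing effect; more rays per pixel means smoother edges and proportionally more work, which is what this benchmark stresses.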
C-Ray 1.1, Total Time - 4K, 16 Rays Per Pixel (Seconds, fewer is better). EPYC 7642 2P: 7.93 (SE +/- 0.03, N = 3). 1. (CC) gcc options: -lm -lpthread -O3
Tungsten Renderer Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.
Tungsten Renderer 0.2.2 (Seconds, fewer is better), EPYC 7642 2P:
Scene: Hair: 5.86 (SE +/- 0.08, N = 3)
Scene: Water Caustic: 23.58 (SE +/- 0.36, N = 3)
Scene: Non-Exponential: 2.44 (SE +/- 0.01, N = 3)
Scene: Volumetric Caustic: 4.60 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -ljpeg -lpthread -ldl
glibc bench The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.
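The result below is a per-call latency in nanoseconds for glibc's cos function. A minimal sketch of measuring the same shape of number, here timing Python's math.cos rather than glibc's own benchmark harness (call count and argument are arbitrary):

```python
import timeit

# Time many calls and report the mean nanoseconds per call, the same
# unit glibc bench reports for its math microbenchmarks such as cos.
calls = 200_000
total_s = timeit.timeit("math.cos(1.2345)", setup="import math", number=calls)
ns_per_call = total_s / calls * 1e9

print(f"cos: {ns_per_call:.2f} ns per call")
```

Averaging over a large call count amortizes timer overhead, which is why such microbenchmarks loop many times rather than timing a single call.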
glibc bench 1.0, Benchmark: cos (nanoseconds, fewer is better). EPYC 7642 2P: 55.45 (SE +/- 0.02, N = 3)
PostgreSQL pgbench 12.0 (TPS, more is better), EPYC 7642 2P:
Scaling: Buffer Test - Test: Normal Load - Mode: Read Write: 17610.43 (SE +/- 701.38, N = 9)
Scaling: Buffer Test - Test: Single Thread - Mode: Read Only: 20533.90 (SE +/- 126.56, N = 3)
Scaling: Buffer Test - Test: Single Thread - Mode: Read Write: 2288.46 (SE +/- 46.47, N = 9)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
Redis 5.0.5 (Requests Per Second, more is better), EPYC 7642 2P:
Test: SADD: 1678729.14 (SE +/- 22741.29, N = 15)
Test: LPUSH: 1259232.88 (SE +/- 14528.15, N = 15)
Test: GET: 1978766.83 (SE +/- 34040.00, N = 3)
Test: SET: 1434092.45 (SE +/- 21744.22, N = 12)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Stress-NG 0.07.26 (Bogo Ops/s, more is better), EPYC 7642 2P:
Test: Bsearch: 46032.72 (SE +/- 313.35, N = 3)
Test: Forking: 16624.84 (SE +/- 314.91, N = 15)
Test: Hsearch: 529090.47 (SE +/- 1266.53, N = 3)
Test: Lsearch: 957.83 (SE +/- 11.01, N = 3)
Test: Tsearch: 2800.13 (SE +/- 16.20, N = 3)
Test: CPU Stress: 28442.23 (SE +/- 30.73, N = 3)
Test: Semaphores: 1371739.85 (SE +/- 8569.56, N = 3)
Test: Matrix Math: 465311.04 (SE +/- 7826.09, N = 3)
Test: Vector Math: 195708.79 (SE +/- 378.92, N = 3)
Test: Memory Copying: 12383.11 (SE +/- 80.12, N = 3)
Test: Socket Activity: 30423.84 (SE +/- 365.28, N = 15)
Test: Context Switching: 80545931.25 (SE +/- 1932864.46, N = 15)
Test: Glibc C String Functions: 6777365.71 (SE +/- 85811.53, N = 15)
Test: Glibc Qsort Data Sorting: 1041.20 (SE +/- 5.73, N = 3)
Test: System V Message Passing: 21122449.00 (SE +/- 270855.79, N = 15)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lz -lcrypt -lrt -lpthread -laio -lc
Facebook RocksDB 6.3.6 (Op/s, more is better), EPYC 7642 2P:
Test: Random Read: 355524652 (SE +/- 2904879.33, N = 3)
Test: Sequential Fill: 267279 (SE +/- 983.91, N = 3)
Test: Random Fill Sync: 168644 (SE +/- 1172.48, N = 3)
Test: Read While Writing: 9821726 (SE +/- 172564.92, N = 15)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread
Testing initiated at 17 October 2019 08:50 by user phoronix.