AMD EPYC 7302 / 7402 / 7502 / 7742 2P Linux Performance Benchmarks

EPYC test mirroring a run Michael did on 9/19/19

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1910120-SP-1909136AS26
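
For reference, this is the only command needed to reproduce the comparison locally, assuming the phoronix-test-suite client is already installed; the result identifier is the one given above:

  # Fetch this result file and run the same tests for a side-by-side comparison
  phoronix-test-suite benchmark 1910120-SP-1909136AS26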
Tests in this result file fall within the following suites/categories:

AV1 2 Tests
Chess Test Suite 2 Tests
Timed Code Compilation 3 Tests
C/C++ Compiler Tests 13 Tests
CPU Massive 26 Tests
Creator Workloads 14 Tests
Encoding 7 Tests
HPC - High Performance Computing 4 Tests
Molecular Dynamics 3 Tests
MPI Benchmarks 2 Tests
Multi-Core 26 Tests
NVIDIA GPU Compute 2 Tests
Programmer / Developer System Benchmarks 4 Tests
Python Tests 2 Tests
Raytracing 3 Tests
Renderers 6 Tests
Scientific Computing 3 Tests
Server 3 Tests
Server CPU Tests 27 Tests
Single-Threaded 2 Tests
Video Encoding 7 Tests
Common Workstation Benchmarks 2 Tests

Run Management

Result identifiers, run dates, and test durations:

  EPYC 7302         | September 10 2019 | 15 Hours, 25 Minutes
  EPYC 7302 2P      | September 11 2019 | 5 Hours, 43 Minutes
  EPYC 7402         | September 11 2019 | 19 Hours, 18 Minutes
  EPYC 7402 2P      | September 12 2019 | 5 Hours, 32 Minutes
  EPYC 7502         | September 09 2019 | 5 Hours, 31 Minutes
  EPYC 7502 2P      | September 10 2019 | 5 Hours, 50 Minutes
  EPYC 7742         | September 09 2019 | 4 Hours, 48 Minutes
  EPYC 7742 2P      | September 08 2019 | 7 Hours, 11 Minutes
  EPYC 7302p Docker | October 12 2019   | 8 Hours, 53 Minutes
  Average test duration: 8 Hours, 41 Minutes

AMD EPYC 7302 / 7402 / 7502 / 7742 2P Linux Performance Benchmarks - System Details

EPYC 7302: AMD EPYC 7302 16-Core @ 3.00GHz (16 Cores / 32 Threads), AMD DAYTONA_X (RDY1001C BIOS), AMD Device 1480 chipset, 258048MB memory, 280GB INTEL SSDPE21D280GA + 256GB Micron_1100_MTFD, llvmpipe 252GB graphics, VE228 monitor, 2 x Mellanox MT27710 network, Ubuntu 19.04, 5.3.0-999-generic (x86_64) 20190907 kernel, GNOME Shell 3.32.2, X Server 1.20.4, modesetting 1.20.4 display driver, OpenGL 3.3 Mesa 19.0.8 (LLVM 8.0 128 bits), GCC 8.3.0, ext4 file-system, 1920x1080 screen resolution.

EPYC 7302 2P: 2 x AMD EPYC 7302 16-Core @ 3.00GHz (32 Cores / 64 Threads), 516096MB memory, 280GB INTEL SSDPED1D280GA + 280GB INTEL SSDPE21D280GA + 256GB Micron_1100_MTFD, llvmpipe 504GB graphics; otherwise as EPYC 7302.

EPYC 7402: AMD EPYC 7402 24-Core @ 2.80GHz (24 Cores / 48 Threads), 258048MB memory, 280GB INTEL SSDPE21D280GA + 256GB Micron_1100_MTFD, llvmpipe 252GB graphics; otherwise as EPYC 7302.

EPYC 7402 2P: 2 x AMD EPYC 7402 24-Core @ 2.80GHz (48 Cores / 96 Threads), 516096MB memory, 280GB INTEL SSDPED1D280GA + 280GB INTEL SSDPE21D280GA + 256GB Micron_1100_MTFD, llvmpipe 504GB graphics; otherwise as EPYC 7302.

EPYC 7502: AMD EPYC 7502 32-Core @ 2.50GHz (32 Cores / 64 Threads), 258048MB memory, 280GB INTEL SSDPE21D280GA + 256GB Micron_1100_MTFD, llvmpipe 252GB graphics; otherwise as EPYC 7302.

EPYC 7502 2P: 2 x AMD EPYC 7502 32-Core @ 2.50GHz (64 Cores / 128 Threads), 516096MB memory, 280GB INTEL SSDPE21D280GA + 280GB INTEL SSDPED1D280GA + 256GB Micron_1100_MTFD, llvmpipe 504GB graphics; otherwise as EPYC 7302.

EPYC 7742: AMD EPYC 7742 64-Core @ 2.25GHz (64 Cores / 128 Threads), 258048MB memory, 280GB INTEL SSDPE21D280GA + 256GB Micron_1100_MTFD, llvmpipe 252GB graphics; otherwise as EPYC 7302.

EPYC 7742 2P: 2 x AMD EPYC 7742 64-Core @ 2.25GHz (128 Cores / 256 Threads), 516096MB memory, 280GB INTEL SSDPED1D280GA + 256GB Micron_1100_MTFD + 64GB Flash Drive, llvmpipe 504GB graphics; otherwise as EPYC 7302.

EPYC 7302p Docker: AMD EPYC 7302P 16-Core @ 3.00GHz (16 Cores / 32 Threads), Supermicro H11SSL-i v2.00 (2.0a BIOS), 96256MB memory, Samsung SSD 970 EVO Plus 500GB, astdrmfb graphics, DELL E177FP monitor, Clear Linux OS 29390, 4.18.0-80.11.2.el8_0.x86_64 (x86_64) kernel, GCC 9.1.1 20190512 gcc-9-branch@271104 + Clang 8.0.0 + LLVM 8.0.0, fuseblk file-system, 1280x1024 screen resolution, container-other system layer.

Compiler Details
- EPYC 7302 / 7302 2P / 7402 / 7402 2P / 7502 / 7502 2P / 7742 / 7742 2P: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- EPYC 7302p Docker: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-languages=c,c++,fortran,go --enable-ld=default --enable-libstdcxx-pch --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=westmere --with-gcc-major-version-only --with-glibc-version=2.19 --with-gnu-ld --with-isl --with-ppl=yes --with-tune=haswell

Processor Details
- EPYC 7302 / 7302 2P / 7402 / 7402 2P / 7502 / 7502 2P / 7742 / 7742 2P: Scaling Governor: acpi-cpufreq ondemand
- EPYC 7302p Docker: Scaling Governor: acpi-cpufreq performance

Java Details
- EPYC 7302 / 7302 2P / 7402 / 7402 2P / 7502 / 7502 2P / 7742 / 7742 2P: OpenJDK Runtime Environment (build 11.0.4+11-post-Ubuntu-1ubuntu219.04)

Python Details
- EPYC 7302 / 7302 2P / 7402 / 7402 2P / 7502 / 7502 2P / 7742 / 7742 2P: Python 2.7.16 + Python 3.7.3
- EPYC 7302p Docker: Python 3.7.3

Security Details
- EPYC 7302 / 7302 2P / 7402 / 7402 2P / 7502 / 7502 2P / 7742 / 7742 2P: l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling
- EPYC 7302p Docker: usercopy/swapgs barriers and __user pointer sanitization + Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + SSB disabled via prctl and seccomp

Environment Details
- EPYC 7302p Docker:
  CFFLAGS=-g-O3-feliminate-unused-debug-types-pipe-Wall-Wp-D_FORTIFY_SOURCE=2-fexceptions-fstack-protector--param=ssp-buffer-size=32-m64-fasynchronous-unwind-tables-Wp-D_REENTRANT-ftree-loop-distribute-patterns-Wl-z-Wl now-Wl-z-Wl relro-malign-data=abi-fno-semantic-interposition-ftree-vectorize-ftree-loop-vectorize-Wl-sort-common-Wl--enable-new-dtags
  FFLAGS=-g-O3-feliminate-unused-debug-types-pipe-Wall-Wp-D_FORTIFY_SOURCE=2-fexceptions-fstack-protector--param=ssp-buffer-size=32-m64-fasynchronous-unwind-tables-Wp-D_REENTRANT-ftree-loop-distribute-patterns-Wl-z-Wl relro-malign-data=abi-fno-semantic-interposition-ftree-vectorize-ftree-loop-vectorize-Wl--enable-new-dtags
  CXXFLAGS=-g-O3-feliminate-unused-debug-types-pipe-Wall-Wp-D_FORTIFY_SOURCE=2-fexceptions-fstack-protector--param=ssp-buffer-size=32-Wformat-Wformat-security-m64-fasynchronous-unwind-tables-Wp-D_REENTRANT-ftree-loop-distribute-patterns-Wl-z-Wl relro-fno-semantic-interposition-ffat-lto-objects-fno-signed-zeros-fno-trapping-math-fassociative-math-Wl-sort-common-Wl--enable-new-dtags-mtune=skylake-fvisibility-inlines-hidden-Wl--enable-new-dtags
  CFLAGS=-g-O3-feliminate-unused-debug-types-pipe-Wall-Wp-D_FORTIFY_SOURCE=2-fexceptions-fstack-protector--param=ssp-buffer-size=32-Wformat-Wformat-security-m64-fasynchronous-unwind-tables-Wp-D_REENTRANT-ftree-loop-distribute-patterns-Wl-z-Wl relro-fno-semantic-interposition-ffat-lto-objects-fno-signed-zeros-fno-trapping-math-fassociative-math-Wl-sort-common-Wl--enable-new-dtags-mtune=skylake
  THEANO_FLAGS=floatX=float32 openmp=true gcc.cxxflags="-ftree-vectorize-mavx"

Logarithmic result overview (Phoronix Test Suite) across all nine configurations, covering: Tungsten Renderer, Apache Siege, John The Ripper, Coremark, C-Ray, Stockfish, asmFish, NAMD, Blender, 7-Zip Compression, POV-Ray, SVT-AV1, SVT-VP9, Timed GCC Compilation, MKL-DNN, Timed Linux Kernel Compilation, dav1d, Appleseed, RAMspeed SMP, VP9 libvpx Encoding, PyBench, SVT-HEVC, CP2K Molecular Dynamics, x264, PHPBench, and x265.

AMD EPYC 7302 / 7402 / 7502 / 7742 2P Linux Performance Benchmarks - per-test results table covering pennant, build-gcc, mkl-dnn, mysqlslap, tungsten, blender, dacapobench, cp2k, apache-siege, appleseed, asmfish, geekbench, ramspeed, build-linux-kernel, namd, vpxenc, npb, stockfish, compress-7zip, coremark, build-llvm, john-the-ripper, pybench, povray, c-ray, phpbench, neatbench, dav1d, x264, x265, svt-av1, svt-vp9, svt-hevc, and tachyon across the nine configurations; the individual results are broken out below.

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: sedovbig (Hydro Cycle Time - Seconds, Fewer Is Better)
  EPYC 7302:    5283.96 (SE +/- 0.28, N = 3, Min 5283.61 / Max 5284.51)
  EPYC 7402:    8001.81 (SE +/- 0.46, N = 3, Min 8001.26 / Max 8002.74)
  EPYC 7742 2P:  229.27 (SE +/- 0.15, N = 3, Min 229 / Max 229.52)
  1. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi

Pennant 1.0.1 - Test: leblancbig (Hydro Cycle Time - Seconds, Fewer Is Better)
  EPYC 7302:    5088.02 (SE +/- 0.22, N = 3, Min 5087.6 / Max 5088.36)
  EPYC 7402:    7762.54 (SE +/- 1.21, N = 3, Min 7760.25 / Max 7764.35)
  EPYC 7742 2P:  221.49 (SE +/- 0.50, N = 3, Min 220.53 / Max 222.23)
  1. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC). Learn more via the OpenBenchmarking.org test page.
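
As an illustration of what is being measured here (not the exact commands the test profile runs), a timed parallel GCC build looks roughly like this, assuming the GCC 8.2 sources and their prerequisites are already in place:

  # Configure an out-of-tree build and time a parallel compile of GCC
  mkdir build && cd build
  ../gcc-8.2.0/configure --disable-multilib --enable-languages=c,c++
  time make -j$(nproc)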

Timed GCC Compilation 8.2 - Time To Compile (Seconds, Fewer Is Better)
  EPYC 7302:          833.27 (SE +/- 0.02, N = 3, Min 833.23 / Max 833.29)
  EPYC 7302 2P:       773.81 (SE +/- 1.51, N = 3, Min 771.86 / Max 776.77)
  EPYC 7402:          788.38 (SE +/- 0.25, N = 3, Min 787.88 / Max 788.65)
  EPYC 7402 2P:       755.46 (SE +/- 0.62, N = 3, Min 754.24 / Max 756.29)
  EPYC 7502:          773.00 (SE +/- 0.09, N = 3, Min 772.81 / Max 773.11)
  EPYC 7502 2P:       742.54 (SE +/- 0.28, N = 3, Min 742.03 / Max 742.98)
  EPYC 7742:          739.05 (SE +/- 0.34, N = 3, Min 738.37 / Max 739.5)
  EPYC 7742 2P:       729.59 (SE +/- 0.37, N = 3, Min 728.99 / Max 730.28)
  EPYC 7302p Docker: 1507.93 (SE +/- 4.87, N = 3, Min 1498.23 / Max 1513.62)

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: f32 (ms, Fewer Is Better)
  EPYC 7302:          1804.96 (SE +/- 2.17, N = 3, Min 1801.6 / Max 1809.02)
  EPYC 7302 2P:        973.93 (SE +/- 10.43, N = 3, Min 959.76 / Max 994.28)
  EPYC 7402:          1284.71 (SE +/- 5.74, N = 3, Min 1273.99 / Max 1293.61)
  EPYC 7402 2P:        687.40 (SE +/- 7.17, N = 3, Min 679.22 / Max 701.69)
  EPYC 7502:          1086.90 (SE +/- 5.22, N = 3, Min 1076.52 / Max 1093.11)
  EPYC 7502 2P:        582.66 (SE +/- 7.44, N = 3, Min 567.83 / Max 591.06)
  EPYC 7742:           687.92 (SE +/- 7.18, N = 3, Min 675.36 / Max 700.22)
  EPYC 7742 2P:        412.04 (SE +/- 4.36, N = 9, Min 387.22 / Max 432.52)
  EPYC 7302p Docker: 25019.67 (SE +/- 44.48, N = 3, Min 24966.3 / Max 25108; -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-signed-zeros -fno-trapping-math -fassociative-math -mtune=skylake -liomp5 - MIN: 24895.9)
  1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -ldl

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.
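
A rough sketch of the kind of mysqlslap load this profile generates; the 256-client concurrency matches the chart below, while the iteration count and auto-generated query mix are illustrative assumptions rather than the profile's exact settings:

  # Drive a local MariaDB server with 256 concurrent clients using an auto-generated workload
  mysqlslap --concurrency=256 --iterations=10 --auto-generate-sql --auto-generate-sql-load-type=mixed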

MariaDB 10.3.8 - Clients: 256 (Queries Per Second, More Is Better)
  EPYC 7302:    225.41 (SE +/- 0.08, N = 3, Min 225.33 / Max 225.58)
  EPYC 7302 2P: 244.44 (SE +/- 0.25, N = 3, Min 244.08 / Max 244.92)
  EPYC 7402:    274.60 (SE +/- 0.11, N = 3, Min 274.42 / Max 274.8)
  EPYC 7402 2P: 277.11 (SE +/- 0.34, N = 3, Min 276.63 / Max 277.78)
  EPYC 7502:    290.78 (SE +/- 0.05, N = 3, Min 290.7 / Max 290.87)
  EPYC 7502 2P: 279.23 (SE +/- 0.25, N = 3, Min 278.78 / Max 279.64)
  EPYC 7742:    316.92 (SE +/- 0.35, N = 3, Min 316.26 / Max 317.46)
  EPYC 7742 2P: 275.40 (SE +/- 4.45, N = 3, Min 268.6 / Max 283.77)
  1. (CXX) g++ options: -pie -fPIC -fstack-protector -fno-rtti -O2 -lpthread -llzma -lbz2 -lnuma -lz -lm -lpcre -lcrypt -lssl -lcrypto -ldl (-laio additionally reported for seven of the eight runs)

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Hair (Seconds, Fewer Is Better)
  EPYC 7302:           16.26 (SE +/- 0.08, N = 3, Min 16.14 / Max 16.42)
  EPYC 7302 2P:         9.67 (SE +/- 0.02, N = 3, Min 9.63 / Max 9.7)
  EPYC 7402:           12.00 (SE +/- 0.02, N = 3, Min 11.97 / Max 12.05)
  EPYC 7402 2P:         7.55 (SE +/- 0.06, N = 3, Min 7.44 / Max 7.61)
  EPYC 7502:           10.34 (SE +/- 0.03, N = 3, Min 10.28 / Max 10.37)
  EPYC 7502 2P:         6.57 (SE +/- 0.02, N = 3, Min 6.52 / Max 6.59)
  EPYC 7742:            6.97 (SE +/- 0.01, N = 3, Min 6.96 / Max 6.98)
  EPYC 7742 2P:         5.32 (SE +/- 0.04, N = 3, Min 5.27 / Max 5.39)
  EPYC 7302p Docker: 4291.49 (SE +/- 38.11, N = 3, Min 4217.85 / Max 4345.37; -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-signed-zeros -fno-trapping-math -fassociative-math -mtune=skylake)
  1. (CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -ljpeg -lpthread -ldl

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_all - Data Type: f32 (ms, Fewer Is Better)
  EPYC 7302:          2837.03 (SE +/- 0.83, N = 3, Min 2835.76 / Max 2838.58; -lmklml_intel, MIN: 2805.29)
  EPYC 7302 2P:       4464.07 (SE +/- 11.59, N = 3, Min 4440.9 / Max 4476.18; -lmklml_intel, MIN: 3952.66)
  EPYC 7402:          3829.73 (SE +/- 21.13, N = 3, Min 3792.02 / Max 3865.11; -lmklml_intel, MIN: 3522.13)
  EPYC 7402 2P:       3454.22 (SE +/- 31.64, N = 3, Min 3406.33 / Max 3513.98; -lmklml_intel, MIN: 2389.86)
  EPYC 7502:          4264.43 (SE +/- 8.32, N = 3, Min 4247.94 / Max 4274.63; -lmklml_intel, MIN: 3799.97)
  EPYC 7502 2P:       4150.77 (SE +/- 57.08, N = 4, Min 4025.2 / Max 4298.53; -lmklml_intel, MIN: 3399.96)
  EPYC 7742:          4056.30 (SE +/- 15.00, N = 3, Min 4034.48 / Max 4085.05; -lmklml_intel, MIN: 3583.31)
  EPYC 7742 2P:       2464.67 (SE +/- 24.40, N = 3, Min 2415.88 / Max 2489.12; -lmklml_intel, MIN: 2014.81)
  EPYC 7302p Docker: 24146.07 (SE +/- 12.87, N = 3, Min 24130.5 / Max 24171.6; -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-signed-zeros -fno-trapping-math -fassociative-math -mtune=skylake -liomp5, MIN: 23990.8)
  1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -ldl

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.
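
For context, a CPU-only Cycles render can be timed from Blender's command line as sketched below; the .blend file name is a placeholder and the flags shown are standard Blender CLI options rather than the test profile's exact invocation:

  # Render frame 1 of a scene in background (no GUI) mode and time it
  time blender -b barbershop_interior.blend -f 1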

Blender 2.80 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
  EPYC 7302:         400.29 (SE +/- 0.27, N = 3, Min 399.76 / Max 400.59)
  EPYC 7302 2P:      238.10 (SE +/- 0.29, N = 3, Min 237.77 / Max 238.67)
  EPYC 7402:         284.20 (SE +/- 0.14, N = 3, Min 284 / Max 284.47)
  EPYC 7402 2P:      179.77 (SE +/- 0.33, N = 3, Min 179.42 / Max 180.43)
  EPYC 7502:         242.81 (SE +/- 0.42, N = 3, Min 242.23 / Max 243.63)
  EPYC 7502 2P:      164.45 (SE +/- 0.39, N = 3, Min 163.66 / Max 164.9)
  EPYC 7742:         168.95 (SE +/- 0.23, N = 3, Min 168.57 / Max 169.36)
  EPYC 7742 2P:      141.73 (SE +/- 0.21, N = 3, Min 141.36 / Max 142.09)
  EPYC 7302p Docker: 413.47 (SE +/- 0.80, N = 3, Min 412.55 / Max 415.06)

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.
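
A minimal sketch of launching this workload by hand; the jar name assumes the DaCapo 9.12-MR1 "bach" release:

  # Run the Tradebeans workload from the DaCapo suite
  java -jar dacapo-9.12-MR1-bach.jar tradebeans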

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, Fewer Is Better)
  EPYC 7302:    6522
  EPYC 7302 2P: 8395
  EPYC 7402:    6701    (SE +/- 91.00, N = 2, Min 6610 / Max 6792)
  EPYC 7402 2P: 7573.33 (SE +/- 142.00, N = 3, Min 7304 / Max 7786)
  EPYC 7502:    6729.33 (SE +/- 72.00, N = 3, Min 6613 / Max 6861)
  EPYC 7502 2P: 7563.5  (SE +/- 11.50, N = 2, Min 7552 / Max 7575)
  EPYC 7742:    7422.33 (SE +/- 71.42, N = 3, Min 7285 / Max 7525)
  EPYC 7742 2P: 8009    (SE +/- 90.84, N = 2, Min 7580 / Max 8833)

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32 (ms, Fewer Is Better)
  EPYC 7302:          101.56 (SE +/- 0.45, N = 3, Min 100.82 / Max 102.37)
  EPYC 7302 2P:        60.16 (SE +/- 0.86, N = 4, Min 58.37 / Max 62.44)
  EPYC 7402:           70.94 (SE +/- 0.84, N = 3, Min 69.94 / Max 72.61)
  EPYC 7402 2P:        41.01 (SE +/- 0.64, N = 12, Min 38.47 / Max 47.1)
  EPYC 7502:           59.54 (SE +/- 0.06, N = 3, Min 59.47 / Max 59.66)
  EPYC 7502 2P:        34.31 (SE +/- 0.53, N = 12, Min 32.11 / Max 37.85)
  EPYC 7742:           37.98 (SE +/- 0.31, N = 3, Min 37.52 / Max 38.58)
  EPYC 7742 2P:        23.75 (SE +/- 0.39, N = 3, Min 23.13 / Max 24.48)
  EPYC 7302p Docker: 1385.73 (SE +/- 0.47, N = 3, Min 1384.81 / Max 1386.38; -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-signed-zeros -fno-trapping-math -fassociative-math -mtune=skylake -liomp5, MIN: 1375.66)
  1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -ldl

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. This test profile currently makes use of the OpenMP implementation, runs the Fayalite-FIST molecular dynamics workload, and measures the total time to complete. Learn more via the OpenBenchmarking.org test page.
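
A hedged sketch of running such a job with CP2K's OpenMP (ssmp) binary; the input file name is a placeholder and the thread-count handling is an assumption for illustration, not the profile's exact invocation:

  # Run a Fayalite-FIST style input on all cores with the OpenMP build of CP2K
  export OMP_NUM_THREADS=$(nproc)
  cp2k.ssmp -i fayalite.inp -o fayalite.out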

CP2K Molecular Dynamics 6.1 - Fayalite-FIST Data (Seconds, Fewer Is Better)
  EPYC 7302:         583
  EPYC 7302 2P:      726
  EPYC 7402:         540
  EPYC 7402 2P:      607
  EPYC 7502:         518
  EPYC 7502 2P:      575
  EPYC 7742:         536
  EPYC 7742 2P:      595
  EPYC 7302p Docker: 610

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.80 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
  EPYC 7302:         341.22 (SE +/- 0.34, N = 3, Min 340.58 / Max 341.72)
  EPYC 7302 2P:      180.38 (SE +/- 0.95, N = 3, Min 178.94 / Max 182.18)
  EPYC 7402:         234.19 (SE +/- 0.87, N = 3, Min 232.62 / Max 235.64)
  EPYC 7402 2P:      128.52 (SE +/- 0.67, N = 3, Min 127.35 / Max 129.66)
  EPYC 7502:         199.29 (SE +/- 0.85, N = 3, Min 198.04 / Max 200.9)
  EPYC 7502 2P:      109.24 (SE +/- 0.36, N = 3, Min 108.85 / Max 109.96)
  EPYC 7742:         116.92 (SE +/- 0.51, N = 3, Min 116 / Max 117.77)
  EPYC 7742 2P:       74.19 (SE +/- 0.23, N = 3, Min 73.78 / Max 74.58)
  EPYC 7302p Docker: 358.74 (SE +/- 0.32, N = 3, Min 358.13 / Max 359.24)

Apache Siege

This is a test of the Apache web server's performance as facilitated by the Siege web server benchmark program. Learn more via the OpenBenchmarking.org test page.
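
As a hedged illustration of the Siege side of this test, the concurrency below matches the chart that follows, while the target URL, duration, and benchmark mode are placeholders rather than the profile's exact settings:

  # Hammer a local Apache instance with 250 concurrent users for one minute, no think time
  siege -c 250 -t 1M -b http://localhost/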

Apache Siege 2.4.29 - Concurrent Users: 250 (Transactions Per Second, More Is Better)
  EPYC 7302:         82203.03 (SE +/- 2390.80, N = 15, Min 61470.38 / Max 92695.59)
  EPYC 7302 2P:      37055.62 (SE +/- 1193.02, N = 12, Min 29157.92 / Max 40309.58)
  EPYC 7402:         56963.01 (SE +/- 1860.01, N = 15, Min 38133.01 / Max 61259.49)
  EPYC 7402 2P:      29810.60 (SE +/- 331.61, N = 3, Min 29449.88 / Max 30472.94)
  EPYC 7502:         43435.78 (SE +/- 761.44, N = 15, Min 33606.67 / Max 45322.7)
  EPYC 7502 2P:      31407.57 (SE +/- 91.71, N = 3, Min 31296.95 / Max 31589.59)
  EPYC 7742:         37201.43 (SE +/- 128.76, N = 3, Min 36943.99 / Max 37335.72)
  EPYC 7742 2P:      34172.69 (SE +/- 209.09, N = 3, Min 33774.66 / Max 34482.76)
  EPYC 7302p Docker: 10359.70 (SE +/- 28.17, N = 3, Min 10306.73 / Max 10402.8; -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-signed-zeros -fno-trapping-math -fassociative-math -mtune=skylake)
  1. (CC) gcc options: -lpthread -ldl -lssl -lcrypto; EPYC 7302 through EPYC 7742 2P additionally report -O2

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, designed primarily for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better)
  EPYC 7302:         191
  EPYC 7302 2P:      294
  EPYC 7402:         172
  EPYC 7402 2P:      289
  EPYC 7502:         172
  EPYC 7502 2P:      289
  EPYC 7742:         184
  EPYC 7742 2P:      288
  EPYC 7302p Docker: 276

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better)
  EPYC 7302:          42909829 (SE +/- 236368.34, N = 3, Min 42446678 / Max 43223431)
  EPYC 7302 2P:       75783369 (SE +/- 765274.16, N = 3, Min 75013003 / Max 77313906)
  EPYC 7402:          62962855 (SE +/- 110587.02, N = 3, Min 62758354 / Max 63138064)
  EPYC 7402 2P:      112672179 (SE +/- 1935815.84, N = 3, Min 110533940 / Max 116536491)
  EPYC 7502:          78467488 (SE +/- 455316.65, N = 3, Min 77556887 / Max 78929465)
  EPYC 7502 2P:      135474285 (SE +/- 221270.37, N = 3, Min 135106014 / Max 135870939)
  EPYC 7742:         131710538 (SE +/- 618106.66, N = 3, Min 131036699 / Max 132945023)
  EPYC 7742 2P:      239606281 (SE +/- 915377.87, N = 3, Min 237848794 / Max 240929073)
  EPYC 7302p Docker:  42116857 (SE +/- 144345.33, N = 3, Min 41894594 / Max 42387538)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.80 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
  EPYC 7302:         264.66 (SE +/- 0.22, N = 3, Min 264.23 / Max 264.96)
  EPYC 7302 2P:      136.31 (SE +/- 0.71, N = 3, Min 135.1 / Max 137.56)
  EPYC 7402:         181.58 (SE +/- 0.03, N = 3, Min 181.54 / Max 181.64)
  EPYC 7402 2P:       94.41 (SE +/- 0.26, N = 3, Min 94.03 / Max 94.9)
  EPYC 7502:         152.52 (SE +/- 0.38, N = 3, Min 151.83 / Max 153.15)
  EPYC 7502 2P:       78.17 (SE +/- 0.20, N = 3, Min 77.97 / Max 78.57)
  EPYC 7742:          86.25 (SE +/- 0.21, N = 3, Min 85.91 / Max 86.64)
  EPYC 7742 2P:       46.33 (SE +/- 0.19, N = 3, Min 45.96 / Max 46.53)
  EPYC 7302p Docker: 291.96 (SE +/- 1.44, N = 3, Min 289.61 / Max 294.58)

Geekbench

This is a benchmark of Geekbench 5 Pro. The test profile automates the execution of Geekbench 5 under the Phoronix Test Suite, assuming you have a valid license key for Geekbench 5 Pro. This test will not work without a valid license key for Geekbench Pro. Learn more via the OpenBenchmarking.org test page.

Geekbench 5.0 - Test: CPU Multi Core - Horizon Detection (Gpixels/sec, More Is Better)
  EPYC 7302:    446.17 (SE +/- 0.44, N = 3, Min 445.3 / Max 446.7)
  EPYC 7302 2P: 813.17 (SE +/- 8.03, N = 3, Min 799.3 / Max 827.1)
  EPYC 7402:    628.73 (SE +/- 0.20, N = 3, Min 628.4 / Max 629.1)
  EPYC 7402 2P:   1.12 (SE +/- 0.02, N = 3, Min 1.08 / Max 1.15)
  EPYC 7502:    751.03 (SE +/- 0.43, N = 3, Min 750.6 / Max 751.9)
  EPYC 7502 2P:   1.30 (SE +/- 0.05, N = 3, Min 1.22 / Max 1.4)
  EPYC 7742:      1.15 (SE +/- 0.00, N = 3, Min 1.15 / Max 1.15)
  EPYC 7742 2P:   1.81 (SE +/- 0.04, N = 3, Min 1.74 / Max 1.88)

Geekbench 5.0 - Test: CPU Multi Core - Face Detection (images/sec, More Is Better)
  EPYC 7302:    148.30 (SE +/- 0.72, N = 3, Min 147.1 / Max 149.6)
  EPYC 7302 2P: 290.47 (SE +/- 2.97, N = 3, Min 287.4 / Max 296.4)
  EPYC 7402:    219.67 (SE +/- 0.95, N = 3, Min 218.3 / Max 221.5)
  EPYC 7402 2P: 415.53 (SE +/- 3.76, N = 3, Min 409.3 / Max 422.3)
  EPYC 7502:    266.73 (SE +/- 1.88, N = 3, Min 263.4 / Max 269.9)
  EPYC 7502 2P: 501.20 (SE +/- 18.45, N = 3, Min 464.3 / Max 519.7)
  EPYC 7742:    465.80 (SE +/- 8.04, N = 3, Min 452.3 / Max 480.1)
  EPYC 7742 2P: 792.37 (SE +/- 20.44, N = 3, Min 761.3 / Max 830.9)

Geekbench 5.0 - Test: CPU Multi Core - Gaussian Blur (Mpixels/sec, More Is Better)
  EPYC 7302:    820.13 (SE +/- 1.66, N = 3, Min 818 / Max 823.4)
  EPYC 7302 2P:   1.31 (SE +/- 0.03, N = 3, Min 1.27 / Max 1.36)
  EPYC 7402:      1.19 (SE +/- 0.00, N = 3, Min 1.19 / Max 1.19)
  EPYC 7402 2P:   1.53 (SE +/- 0.07, N = 3, Min 1.46 / Max 1.66)
  EPYC 7502:      1.42 (SE +/- 0.01, N = 3, Min 1.41 / Max 1.43)
  EPYC 7502 2P:   1.66 (SE +/- 0.01, N = 3, Min 1.65 / Max 1.67)
  EPYC 7742:      2.23 (SE +/- 0.03, N = 3, Min 2.18 / Max 2.26)
  EPYC 7742 2P: 325.15 (SE +/- 323.52, N = 3, Min 1.59 / Max 972.2)

Geekbench 5.0 - Test: CPU Multi Core (Score, More Is Better)
  EPYC 7302:    17312 (SE +/- 48.12, N = 3, Min 17236 / Max 17401)
  EPYC 7302 2P: 28823 (SE +/- 121.59, N = 3, Min 28669 / Max 29063)
  EPYC 7402:    23514 (SE +/- 27.67, N = 3, Min 23485 / Max 23569)
  EPYC 7402 2P: 37477 (SE +/- 292.23, N = 3, Min 37137 / Max 38059)
  EPYC 7502:    27437 (SE +/- 4.36, N = 3, Min 27430 / Max 27445)
  EPYC 7502 2P: 43436 (SE +/- 351.73, N = 3, Min 42918 / Max 44107)
  EPYC 7742:    42067 (SE +/- 217.83, N = 3, Min 41681 / Max 42435)
  EPYC 7742 2P: 59066 (SE +/- 578.86, N = 3, Min 58055 / Max 60060)

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, designed primarily for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds, Fewer Is Better)
  EPYC 7302:         266
  EPYC 7302 2P:      186
  EPYC 7402:         201
  EPYC 7402 2P:      168
  EPYC 7502:         178
  EPYC 7502 2P:      171
  EPYC 7742:         155
  EPYC 7742 2P:      174
  EPYC 7302p Docker: 261

Apache Siege

This is a test of the Apache web server's performance as facilitated by the Siege web server benchmark program. Learn more via the OpenBenchmarking.org test page.

Apache Siege 2.4.29 - Concurrent Users: 200 (Transactions Per Second, More Is Better)
  EPYC 7302:    88422.52 (SE +/- 1768.41, N = 14, Min 66203.24 / Max 93109.88)
  EPYC 7302 2P: 31601.30 (SE +/- 343.37, N = 3, Min 31196.38 / Max 32284.1)
  EPYC 7402:    53380.72 (SE +/- 804.16, N = 15, Min 45218.18 / Max 58360.08)
  EPYC 7402 2P: 32588.85 (SE +/- 500.75, N = 3, Min 31620.55 / Max 33294.49)
  EPYC 7502:    39737.20 (SE +/- 1323.20, N = 12, Min 32824.55 / Max 45228.4)
  EPYC 7502 2P: 33487.90 (SE +/- 297.58, N = 15, Min 30436.77 / Max 34423.41)
  EPYC 7742:    36292.30 (SE +/- 150.37, N = 3, Min 35997.12 / Max 36489.69)
  EPYC 7742 2P: 31578.10 (SE +/- 345.21, N = 3, Min 31225.6 / Max 32268.47)
  1. (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto

RAMspeed SMP

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

RAMspeed SMP 3.5.0 - Type: Add - Benchmark: Integer (MB/s, More Is Better)
  EPYC 7302:         31059
  EPYC 7302 2P:      48551
  EPYC 7402:         47188
  EPYC 7402 2P:      47397
  EPYC 7502:         46972
  EPYC 7502 2P:      26566
  EPYC 7742:         46852
  EPYC 7742 2P:      37575
  EPYC 7302p Docker: 26447 (-pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-signed-zeros -fno-trapping-math -fassociative-math -mtune=skylake)
  1. (CC) gcc options: -O3 -march=native

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
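
Outside the harness, the equivalent measurement is simply a timed parallel kernel build; a minimal sketch against an unpacked 4.18 tree (the defconfig target is an assumption, not necessarily the profile's configuration):

  # Configure the kernel with defaults and time the parallel compile
  cd linux-4.18
  make defconfig
  time make -j$(nproc)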

Timed Linux Kernel Compilation 4.18 - Time To Compile (Seconds, Fewer Is Better)
  EPYC 7302:          41.67 (SE +/- 0.42, N = 9, Min 40.48 / Max 44.81)
  EPYC 7302 2P:       27.40 (SE +/- 0.26, N = 13, Min 26.94 / Max 30.54)
  EPYC 7402:          32.73 (SE +/- 0.25, N = 13, Min 32.27 / Max 35.7)
  EPYC 7402 2P:       22.62 (SE +/- 0.22, N = 15, Min 22.06 / Max 25.64)
  EPYC 7502:          29.01 (SE +/- 0.26, N = 13, Min 28.5 / Max 32.15)
  EPYC 7502 2P:       20.83 (SE +/- 0.25, N = 14, Min 20.3 / Max 24.01)
  EPYC 7742:          22.36 (SE +/- 0.26, N = 13, Min 22.01 / Max 25.44)
  EPYC 7742 2P:       16.07 (SE +/- 0.26, N = 14, Min 15.65 / Max 19.49)
  EPYC 7302p Docker: 186.37 (SE +/- 1.99, N = 3, Min 184.17 / Max 190.34)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
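
For orientation, NAMD's multicore binary takes its worker-thread count via +p; the input file name below is a placeholder standing in for the ATPase configuration rather than the profile's actual file:

  # Run the ATPase simulation on all hardware threads with the multicore NAMD build
  namd2 +p$(nproc) f1atpase.namd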

NAMD 2.13b1 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
  EPYC 7302:         1.26580 (SE +/- 0.00187, N = 3, Min 1.26 / Max 1.27)
  EPYC 7302 2P:      0.66156 (SE +/- 0.00045, N = 10, Min 0.66 / Max 0.66)
  EPYC 7402:         0.89148 (SE +/- 0.00027, N = 15, Min 0.89 / Max 0.89)
  EPYC 7402 2P:      0.47061 (SE +/- 0.00020, N = 15, Min 0.47 / Max 0.47)
  EPYC 7502:         0.73872 (SE +/- 0.00033, N = 15, Min 0.74 / Max 0.74)
  EPYC 7502 2P:      0.39054 (SE +/- 0.00023, N = 12, Min 0.39 / Max 0.39)
  EPYC 7742:         0.42879 (SE +/- 0.00043, N = 13, Min 0.43 / Max 0.43)
  EPYC 7742 2P:      0.26393 (SE +/- 0.00086, N = 4, Min 0.26 / Max 0.27)
  EPYC 7302p Docker: 1.37249 (SE +/- 0.00429, N = 3, Min 1.37 / Max 1.38)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.
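
A sketch of a comparable vpxenc invocation; the input clip, target bitrate, and thread count are illustrative assumptions, not the test profile's exact arguments:

  # Encode a 1080p Y4M clip to VP9 using all available threads
  vpxenc --codec=vp9 --threads=$(nproc) --target-bitrate=3000 -o output.webm input_1080p.y4m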

VP9 libvpx Encoding 1.8.0 - vpxenc VP9 1080p Video Encode (Frames Per Second, More Is Better)
  EPYC 7302:         167.57 (SE +/- 1.13, N = 3, Min 165.32 / Max 168.92)
  EPYC 7302 2P:      122.30 (SE +/- 0.61, N = 3, Min 121.61 / Max 123.51)
  EPYC 7402:         156.03 (SE +/- 2.53, N = 3, Min 151.09 / Max 159.45)
  EPYC 7402 2P:      135.73 (SE +/- 0.54, N = 3, Min 134.82 / Max 136.7)
  EPYC 7502:         158.45 (SE +/- 1.78, N = 3, Min 154.93 / Max 160.63)
  EPYC 7502 2P:      144.41 (SE +/- 0.55, N = 3, Min 143.75 / Max 145.5)
  EPYC 7742:         160.59 (SE +/- 1.84, N = 3, Min 156.92 / Max 162.7)
  EPYC 7742 2P:      152.18 (SE +/- 1.08, N = 3, Min 150.01 / Max 153.31)
  EPYC 7302p Docker: 215.99 (SE +/- 3.34, N = 3, Min 209.74 / Max 221.14; -pipe -fexceptions -fstack-protector -ffat-lto-objects -fno-signed-zeros -fno-trapping-math -fassociative-math -mtune=skylake)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -std=c++11; EPYC 7302 through EPYC 7742 2P additionally report -U_FORTIFY_SOURCE

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: f32 (ms, Fewer Is Better)
  EPYC 7302:           9.62 (SE +/- 0.10, N = 3, Min 9.44 / Max 9.79)
  EPYC 7302 2P:        5.48 (SE +/- 0.08, N = 15, Min 4.97 / Max 6.1)
  EPYC 7402:           6.91 (SE +/- 0.10, N = 4, Min 6.79 / Max 7.21)
  EPYC 7402 2P:        3.86 (SE +/- 0.10, N = 12, Min 3.52 / Max 4.83)
  EPYC 7502:           5.80 (SE +/- 0.02, N = 3, Min 5.77 / Max 5.83)
  EPYC 7502 2P:        3.42 (SE +/- 0.09, N = 15, Min 3.13 / Max 4.2)
  EPYC 7742:           3.62 (SE +/- 0.05, N = 4, Min 3.56 / Max 3.77)
  EPYC 7742 2P:        2.72 (SE +/- 0.05, N = 12, Min 2.24 / Max 2.88)
  EPYC 7302p Docker: 139.21 (SE +/- 0.32, N = 3, Min 138.72 / Max 139.8; -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-signed-zeros -fno-trapping-math -fassociative-math -mtune=skylake -liomp5, MIN: 136.2)
  1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -ldl

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
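
For reference, the MPI build of NPB produces one binary per test/class pair; a minimal sketch for the EP class D result shown below, with a rank count chosen for a 64-core system as an illustrative assumption:

  # Build and run the Embarrassingly Parallel kernel, class D, with 64 MPI ranks
  make ep CLASS=D
  mpirun -np 64 ./bin/ep.D.x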

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, More Is Better)
  EPYC 7302:     905.39 (SE +/- 0.76, N = 3, Min 903.87 / Max 906.29)
  EPYC 7402:    1335.44 (SE +/- 4.05, N = 3, Min 1327.61 / Max 1341.16)
  EPYC 7742 2P: 6095.90 (SE +/- 4.05, N = 3, Min 6090.06 / Max 6103.67)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 3.1.3

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.80 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
  EPYC 7302:         143.59 (SE +/- 0.12, N = 3, Min 143.43 / Max 143.82)
  EPYC 7302 2P:       82.88 (SE +/- 0.20, N = 3, Min 82.59 / Max 83.26)
  EPYC 7402:         101.69 (SE +/- 0.19, N = 3, Min 101.36 / Max 102.02)
  EPYC 7402 2P:       64.75 (SE +/- 0.23, N = 3, Min 64.31 / Max 65.07)
  EPYC 7502:          88.34 (SE +/- 0.07, N = 3, Min 88.21 / Max 88.44)
  EPYC 7502 2P:       57.60 (SE +/- 0.05, N = 3, Min 57.5 / Max 57.68)
  EPYC 7742:          59.38 (SE +/- 0.12, N = 3, Min 59.15 / Max 59.55)
  EPYC 7742 2P:       43.71 (SE +/- 0.05, N = 3, Min 43.62 / Max 43.8)
  EPYC 7302p Docker: 148.98 (SE +/- 0.55, N = 3, Min 148.38 / Max 150.07)

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, More Is Better)
  EPYC 7302:     67014.48 (SE +/- 38.96, N = 3, Min 66937.63 / Max 67064.09)
  EPYC 7402:     61593.15 (SE +/- 493.34, N = 13, Min 58146.48 / Max 62943.42)
  EPYC 7742 2P: 246543.13 (SE +/- 532.50, N = 3, Min 245823.32 / Max 247582.8)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 3.1.3

Stockfish

This is a test of Stockfish, an advanced C++11 chess engine whose built-in benchmark can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.
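The figure measured here comes from Stockfish's built-in bench command; run by hand it looks roughly like the following (hash size, thread count, and depth are illustrative arguments, not necessarily those used by the test profile):
  stockfish bench 1024 128 13   # <hash MB> <threads> <search depth>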

Stockfish 9 - Total Time (Nodes Per Second, more is better):
EPYC 7302: 42554164; EPYC 7302 2P: 76602512; EPYC 7402: 62020868; EPYC 7402 2P: 113901411; EPYC 7502: 75903528; EPYC 7502 2P: 139070880; EPYC 7742: 133120860; EPYC 7742 2P: 241185105; EPYC 7302p Docker: 40957382

Appleseed

Appleseed is an open-source, physically-based production rendering engine for global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, fewer is better):
EPYC 7302: 148.04; EPYC 7302 2P: 81.28; EPYC 7402: 109.30; EPYC 7402 2P: 63.45; EPYC 7502: 86.34; EPYC 7502 2P: 59.83; EPYC 7742: 63.50; EPYC 7742 2P: 57.99; EPYC 7302p Docker: 150.20

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.80 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better):
EPYC 7302: 101.34; EPYC 7302 2P: 54.63; EPYC 7402: 70.54; EPYC 7402 2P: 40.52; EPYC 7502: 60.95; EPYC 7502 2P: 35.41; EPYC 7742: 37.39; EPYC 7742 2P: 26.84; EPYC 7302p Docker: 104.48

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, more is better):
EPYC 7302: 60960.79; EPYC 7402: 75893.42; EPYC 7742 2P: 227460.18

Geekbench

This is a benchmark of Geekbench 5 Pro. The test profile automates the execution of Geekbench 5 under the Phoronix Test Suite and requires a valid Geekbench 5 Pro license key; the test will not run without one. Learn more via the OpenBenchmarking.org test page.
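Outside the Phoronix Test Suite, the same CPU run can be started from the Geekbench 5 command-line tool once the Pro key is registered (e-mail address and key below are placeholders):
  geekbench5 --unlock you@example.com YOUR-LICENSE-KEY
  geekbench5 --cpu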

Geekbench 5.0 - Test: CPU Single Core - Horizon Detection (Gpixels/sec, more is better):
EPYC 7302: 23.50; EPYC 7302 2P: 22.83; EPYC 7402: 23.80; EPYC 7402 2P: 23.10; EPYC 7502: 23.83; EPYC 7502 2P: 23.30; EPYC 7742: 24.10; EPYC 7742 2P: 24.00

Geekbench 5.0 - Test: CPU Single Core - Face Detection (images/sec, more is better):
EPYC 7302: 8.18; EPYC 7302 2P: 8.19; EPYC 7402: 8.31; EPYC 7402 2P: 8.28; EPYC 7502: 8.31; EPYC 7502 2P: 8.31; EPYC 7742: 8.45; EPYC 7742 2P: 8.28

Geekbench 5.0 - Test: CPU Single Core - Gaussian Blur (Mpixels/sec, more is better):
EPYC 7302: 63.17; EPYC 7302 2P: 62.60; EPYC 7402: 64.33; EPYC 7402 2P: 63.47; EPYC 7502: 64.40; EPYC 7502 2P: 63.73; EPYC 7742: 65.67; EPYC 7742 2P: 64.43

Geekbench 5.0 - Test: CPU Single Core (Score, more is better):
EPYC 7302: 1005; EPYC 7302 2P: 991; EPYC 7402: 1026; EPYC 7402 2P: 1006; EPYC 7502: 1026; EPYC 7502 2P: 999; EPYC 7742: 1041; EPYC 7742 2P: 1033

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.
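The integrated benchmark can also be invoked directly from a p7zip install, for example:
  7z b   # runs the built-in compression/decompression benchmark across all detected threads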

7-Zip Compression 16.02 - Compress Speed Test (MIPS, more is better):
EPYC 7302: 96539; EPYC 7302 2P: 150914; EPYC 7402: 140234; EPYC 7402 2P: 218425; EPYC 7502: 171119; EPYC 7502 2P: 255034; EPYC 7742: 280575; EPYC 7742 2P: 350308; EPYC 7302p Docker: 95307

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.
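The harness names used below map onto benchdnn batch files shipped with MKL-DNN; a manual run is roughly of this form, where the batch-file path is illustrative and varies between MKL-DNN releases:
  ./benchdnn --conv --cfg=f32 --mode=P --batch=inputs/conv_alexnet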

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: f32 (ms, fewer is better):
EPYC 7302: 200.24; EPYC 7302 2P: 123.29; EPYC 7402: 148.29; EPYC 7402 2P: 99.98; EPYC 7502: 124.81; EPYC 7502 2P: 81.21; EPYC 7742: 89.75; EPYC 7742 2P: 89.65; EPYC 7302p Docker: 363.51

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.
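CoreMark itself is driven from its Makefile; a multi-threaded build and run is commonly done along these lines (thread count illustrative, not necessarily what the test profile used):
  make XCFLAGS="-DMULTITHREAD=64 -DUSE_PTHREAD" run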

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better):
EPYC 7302: 588635; EPYC 7302 2P: 1134097; EPYC 7402: 880327; EPYC 7402 2P: 1655894; EPYC 7502: 1109475; EPYC 7502 2P: 2118764; EPYC 7742: 1966211; EPYC 7742 2P: 3719997; EPYC 7302p Docker: 614817

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.
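The timed build is essentially a stock LLVM release build; reproduced by hand it is roughly:
  cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ../llvm
  time ninja -j $(nproc)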

Timed LLVM Compilation 6.0.1 - Time To Compile (Seconds, fewer is better):
EPYC 7302: 203.24; EPYC 7302 2P: 134.26; EPYC 7402: 158.54; EPYC 7402 2P: 105.84; EPYC 7502: 143.43; EPYC 7502 2P: 99.96; EPYC 7742: 102.14; EPYC 7742 2P: 78.81

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.
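The Blowfish figure corresponds to John's built-in --test mode against the bcrypt format; for example:
  OMP_NUM_THREADS=$(nproc) john --test --format=bcrypt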

John The Ripper 1.9.0-jumbo-1 - Test: Blowfish (Real C/S, more is better):
EPYC 7302: 31305; EPYC 7302 2P: 62398; EPYC 7402: 46756; EPYC 7402 2P: 92216; EPYC 7502: 57523; EPYC 7502 2P: 110992; EPYC 7742: 97345; EPYC 7742 2P: 183465; EPYC 7302p Docker: 28544

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
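Run standalone, PyBench is a single Python script; something like the following reproduces a 20-round run (the round-count flag is assumed from PyBench's usage help):
  python pybench.py -n 20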

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, fewer is better):
EPYC 7302: 1221; EPYC 7302 2P: 1225; EPYC 7402: 1207; EPYC 7402 2P: 1211; EPYC 7502: 1212; EPYC 7502 2P: 1211; EPYC 7742: 1202; EPYC 7742 2P: 1206; EPYC 7302p Docker: 1834

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, more is better):
EPYC 7302: 35850.79; EPYC 7402: 32144.38; EPYC 7742 2P: 109565.86

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Water Caustic (Seconds, fewer is better):
EPYC 7302: 26.00; EPYC 7302 2P: 25.07; EPYC 7402: 23.97; EPYC 7402 2P: 23.97; EPYC 7502: 23.33; EPYC 7502 2P: 24.13; EPYC 7742: 21.80; EPYC 7742 2P: 22.30; EPYC 7302p Docker: 25.44

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.
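POV-Ray 3.7 ships a standard benchmark scene that can be run directly, e.g.:
  povray -benchmark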

POV-Ray 3.7.0.7 - Trace Time (Seconds, fewer is better):
EPYC 7302: 28.88; EPYC 7302 2P: 16.36; EPYC 7402: 20.65; EPYC 7402 2P: 12.15; EPYC 7502: 17.34; EPYC 7502 2P: 10.32; EPYC 7742: 10.86; EPYC 7742 2P: 8.41; EPYC 7302p Docker: 30.00

DaCapo Benchmark

This test runs the DaCapo Benchmarks, a suite of Java workloads intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.
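The H2 workload can be launched directly from the DaCapo jar, e.g.:
  java -jar dacapo-9.12-MR1-bach.jar h2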

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, fewer is better):
EPYC 7302: 5016; EPYC 7302 2P: 6160; EPYC 7402: 5456; EPYC 7402 2P: 6481; EPYC 7502: 5759; EPYC 7502 2P: 6914; EPYC 7742: 6672; EPYC 7742 2P: 8050

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s, more is better):
EPYC 7302: 37885.16; EPYC 7402: 34934.02; EPYC 7742 2P: 122309.53

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: f32 (ms, fewer is better):
EPYC 7302: 16.66; EPYC 7302 2P: 10.46; EPYC 7402: 12.51; EPYC 7402 2P: 9.56; EPYC 7502: 10.48; EPYC 7502 2P: 9.61; EPYC 7742: 9.56; EPYC 7742 2P: 14.65; EPYC 7302p Docker: 26.97

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D (Total Mop/s, more is better):
EPYC 7302: 1683.60; EPYC 7742 2P: 4238.56

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 (ms, fewer is better):
EPYC 7302: 18.35; EPYC 7302 2P: 11.08; EPYC 7402: 14.55; EPYC 7402 2P: 8.90; EPYC 7502: 11.33; EPYC 7502 2P: 7.84; EPYC 7742: 8.07; EPYC 7742 2P: 7.48; EPYC 7302p Docker: 100.46

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. The test is multi-threaded (16 threads per core); the stock profile shoots 8 rays per pixel for anti-aliasing and generates a 1600 x 1200 image, while this result uses the 4K, 16 rays per pixel configuration. Learn more via the OpenBenchmarking.org test page.
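A manual run equivalent in spirit to this configuration looks roughly like the following; the option names follow the c-ray-mt build used by the test profile and should be treated as illustrative:
  ./c-ray-mt -t 512 -s 3840x2160 -r 16 -i sphfract -o out.ppm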

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, fewer is better):
EPYC 7302: 38.23; EPYC 7302 2P: 19.35; EPYC 7402: 25.62; EPYC 7402 2P: 13.02; EPYC 7502: 20.80; EPYC 7502 2P: 10.86; EPYC 7742: 11.85; EPYC 7742 2P: 6.29; EPYC 7302p Docker: 38.76

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. The number of iterations used is 1,000,000. Learn more via the OpenBenchmarking.org test page.
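PHPBench is a single PHP script and can be run directly against the installed interpreter, leaving the iteration count to the script's defaults:
  php phpbench.php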

PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better):
EPYC 7302: 489637; EPYC 7302 2P: 489367; EPYC 7402: 495293; EPYC 7402 2P: 496430; EPYC 7502: 495466; EPYC 7502 2P: 495916; EPYC 7742: 502052; EPYC 7742 2P: 501770; EPYC 7302p Docker: 579240

RAMspeed SMP

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

RAMspeed SMP 3.5.0 - Type: Copy - Benchmark: Integer (MB/s, more is better):
EPYC 7302: 29204; EPYC 7302 2P: 43050; EPYC 7402: 30644; EPYC 7402 2P: 33385; EPYC 7502: 30723; EPYC 7502 2P: 36670; EPYC 7742: 30456; EPYC 7742 2P: 41052; EPYC 7302p Docker: 21203

RAMspeed SMP 3.5.0 - Type: Add - Benchmark: Floating Point (MB/s, more is better):
EPYC 7302: 31058; EPYC 7302 2P: 43497; EPYC 7402: 32629; EPYC 7402 2P: 49164; EPYC 7502: 32734; EPYC 7502 2P: 41144; EPYC 7742: 32412; EPYC 7742 2P: 32582; EPYC 7302p Docker: 26378

RAMspeed SMP 3.5.0 - Type: Triad - Benchmark: Floating Point (MB/s, more is better):
EPYC 7302: 33965; EPYC 7302 2P: 45058; EPYC 7402: 32513; EPYC 7402 2P: 47842; EPYC 7502: 32722; EPYC 7502 2P: 37295; EPYC 7742: 32327; EPYC 7742 2P: 41251; EPYC 7302p Docker: 24268

RAMspeed SMP 3.5.0 - Type: Average - Benchmark: Integer (MB/s, more is better):
EPYC 7302: 41743; EPYC 7302 2P: 43368; EPYC 7402: 31477; EPYC 7402 2P: 42662; EPYC 7502: 31513; EPYC 7502 2P: 36754; EPYC 7742: 31250; EPYC 7742 2P: 31120; EPYC 7302p Docker: 24396

RAMspeed SMP 3.5.0 - Type: Scale - Benchmark: Integer (MB/s, more is better):
EPYC 7302: 36616; EPYC 7302 2P: 37111; EPYC 7402: 30320; EPYC 7402 2P: 35318; EPYC 7502: 30339; EPYC 7502 2P: 33123; EPYC 7742: 30006; EPYC 7742 2P: 38045; EPYC 7302p Docker: 20726

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: f32 (ms, fewer is better):
EPYC 7302: 243.03; EPYC 7302 2P: 131.15; EPYC 7402: 173.43; EPYC 7402 2P: 97.71; EPYC 7502: 141.76; EPYC 7502 2P: 76.91; EPYC 7742: 88.95; EPYC 7742 2P: 50.66; EPYC 7302p Docker: 3244.55

RAMspeed SMP

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

RAMspeed SMP 3.5.0 - Type: Average - Benchmark: Floating Point (MB/s, more is better):
EPYC 7302: 43435; EPYC 7302 2P: 42866; EPYC 7402: 31543; EPYC 7402 2P: 45771; EPYC 7502: 31638; EPYC 7502 2P: 39326; EPYC 7742: 31314; EPYC 7742 2P: 31202; EPYC 7302p Docker: 24657

RAMspeed SMP 3.5.0 - Type: Triad - Benchmark: Integer (MB/s, more is better):
EPYC 7302: 42015; EPYC 7302 2P: 43075; EPYC 7402: 32533; EPYC 7402 2P: 46158; EPYC 7502: 44962; EPYC 7502 2P: 38564; EPYC 7742: 32009; EPYC 7742 2P: 32130; EPYC 7302p Docker: 24186

RAMspeed SMP 3.5.0 - Type: Scale - Benchmark: Floating Point (MB/s, more is better):
EPYC 7302: 39643; EPYC 7302 2P: 39240; EPYC 7402: 30363; EPYC 7402 2P: 39947; EPYC 7502: 40841; EPYC 7502 2P: 32142; EPYC 7742: 30093; EPYC 7742 2P: 30063; EPYC 7302p Docker: 22574

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video software, run on the CPU with optional GPU (OpenCL / CUDA) acceleration. Learn more via the OpenBenchmarking.org test page.

NeatBench 5 - Acceleration: CPU (FPS, more is better):
EPYC 7302: 28.97; EPYC 7302 2P: 27.40; EPYC 7402: 32.93; EPYC 7402 2P: 31.63; EPYC 7502: 33.70; EPYC 7502 2P: 31.57; EPYC 7742: 34.43; EPYC 7742 2P: 32.53

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode some sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
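The result is simply a full-speed decode of the sample clip to a null output; by hand that is roughly (input file name illustrative):
  dav1d -i summer_nature_4k.ivf -o /dev/null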

dav1d 0.3 - Video Input: Summer Nature 4K (Seconds, fewer is better):
EPYC 7302: 21.89; EPYC 7302 2P: 17.47; EPYC 7402: 16.78; EPYC 7402 2P: 14.47; EPYC 7502: 15.98; EPYC 7502 2P: 13.04; EPYC 7742: 12.01; EPYC 7742 2P: 11.41; EPYC 7302p Docker: 20.80

RAMspeed SMP

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

RAMspeed SMP 3.5.0 - Type: Copy - Benchmark: Floating Point (MB/s, more is better):
EPYC 7302: 40057; EPYC 7302 2P: 41012; EPYC 7402: 42376; EPYC 7402 2P: 44392; EPYC 7502: 41306; EPYC 7502 2P: 32581; EPYC 7742: 30615; EPYC 7742 2P: 42321; EPYC 7302p Docker: 22481

x265

This is a simple test of the x265 encoder run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.
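A comparable manual encode of a 1080p Y4M source looks like the following (input name illustrative; the encoded bitstream is discarded):
  x265 input_1080p.y4m -o /dev/null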

x265 3.0 - H.265 1080p Video Encoding (Frames Per Second, more is better):
EPYC 7302: 44.96; EPYC 7302 2P: 45.04; EPYC 7402: 45.39; EPYC 7402 2P: 45.47; EPYC 7502: 43.48; EPYC 7502 2P: 43.79; EPYC 7742: 44.15; EPYC 7742 2P: 45.03; EPYC 7302p Docker: 48.71

x265 3.1.2 - H.265 1080p Video Encoding (Frames Per Second, more is better):
EPYC 7302: 50.55; EPYC 7302 2P: 49.20; EPYC 7402: 50.81; EPYC 7402 2P: 49.41; EPYC 7502: 48.93; EPYC 7502 2P: 47.86; EPYC 7742: 48.95; EPYC 7742 2P: 48.45; EPYC 7302p Docker: 58.82

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, more is better):
EPYC 7302: 47694.14; EPYC 7402: 46418.68; EPYC 7742 2P: 100057.16

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C (Total Mop/s, more is better):
EPYC 7302: 15726.62; EPYC 7742 2P: 49246.88

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 (ms, fewer is better):
EPYC 7302: 4.79; EPYC 7302 2P: 3.59; EPYC 7402: 3.37; EPYC 7402 2P: 2.70; EPYC 7502: 3.59; EPYC 7502 2P: 2.68; EPYC 7742: 2.44; EPYC 7742 2P: 2.75; EPYC 7302p Docker: 51.80

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s, more is better):
EPYC 7302: 903.83; EPYC 7402: 1354.93; EPYC 7742 2P: 6224.78

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.5 - 1080p 8-bit YUV To AV1 Video Encode (Frames Per Second, more is better):
EPYC 7302: 44.76; EPYC 7302 2P: 60.68; EPYC 7402: 62.78; EPYC 7402 2P: 71.72; EPYC 7502: 67.00; EPYC 7502 2P: 96.82; EPYC 7742: 98.77; EPYC 7742 2P: 101.52; EPYC 7302p Docker: 40.19

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Volumetric Caustic (Seconds, fewer is better):
EPYC 7302: 7.76; EPYC 7302 2P: 5.67; EPYC 7402: 5.30; EPYC 7402 2P: 4.85; EPYC 7502: 4.66; EPYC 7502 2P: 4.49; EPYC 7742: 4.33; EPYC 7742 2P: 4.28; EPYC 7302p Docker: 7.97

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 2019-02-17 - 1080p 8-bit YUV To VP9 Video Encode (Frames Per Second, more is better):
EPYC 7302: 93.90; EPYC 7302 2P: 116.28; EPYC 7402: 120.79; EPYC 7402 2P: 162.84; EPYC 7502: 128.17; EPYC 7502 2P: 191.55; EPYC 7742: 221.23; EPYC 7742 2P: 280.22; EPYC 7302p Docker: 93.66

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Non-Exponential (Seconds, fewer is better):
EPYC 7302: 6.38; EPYC 7302 2P: 5.30; EPYC 7402: 5.74; EPYC 7402 2P: 2.97; EPYC 7502: 4.43; EPYC 7502 2P: 2.28; EPYC 7742: 2.17; EPYC 7742 2P: 1.52; EPYC 7302p Docker: 8.06

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 2019-09-09 - 1080p 8-bit YUV To VP9 Video Encode (Frames Per Second, more is better):
EPYC 7302: 273; EPYC 7302 2P: 319; EPYC 7402: 359; EPYC 7402 2P: 348; EPYC 7502: 375; EPYC 7502 2P: 345; EPYC 7742: 393; EPYC 7742 2P: 344; EPYC 7302p Docker: 220

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.
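This is equivalent in spirit to a manual CPU-only encode of the sample clip, roughly:
  x264 -o /dev/null input_1080p.y4m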

x264 2018-09-25 - H.264 Video Encoding (Frames Per Second, more is better):
EPYC 7302: 131; EPYC 7302 2P: 156; EPYC 7402: 154; EPYC 7402 2P: 153; EPYC 7502: 152; EPYC 7502 2P: 148; EPYC 7742: 156; EPYC 7742 2P: 151; EPYC 7302p Docker: 145

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 2019-02-03 - 1080p 8-bit YUV To HEVC Video Encode (Frames Per Second, more is better):
EPYC 7302: 258; EPYC 7302 2P: 329; EPYC 7402: 344; EPYC 7402 2P: 338; EPYC 7502: 360; EPYC 7502 2P: 336; EPYC 7742: 368; EPYC 7742 2P: 330; EPYC 7302p Docker: 256

Tachyon

This is a test of Tachyon, a multi-threaded parallel ray-tracing system. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.98.9 - Total Time (Seconds, fewer is better):
EPYC 7302: 3.06; EPYC 7302 2P: 1.79; EPYC 7402: 2.11; EPYC 7402 2P: 1.33; EPYC 7502: 1.74; EPYC 7502 2P: 1.21; EPYC 7742: 1.09; EPYC 7742 2P: 0.81