EPYC 2021 Benchmarks

Tests for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2102219-HA-EB716339316
Jump To Table - Results

View

Do Not Show Noisy Results
Do Not Show Results With Incomplete Data
Do Not Show Results With Little Change/Spread
List Notable Results

Limit displaying results to tests within:

AV1 2 Tests
Bioinformatics 2 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests 5 Tests
C++ Boost Tests 5 Tests
Chess Test Suite 6 Tests
Timed Code Compilation 8 Tests
C/C++ Compiler Tests 30 Tests
Compression Tests 5 Tests
CPU Massive 51 Tests
Creator Workloads 38 Tests
Cryptography 5 Tests
Database Test Suite 7 Tests
Encoding 6 Tests
Finance 2 Tests
Fortran Tests 9 Tests
Game Development 7 Tests
HPC - High Performance Computing 36 Tests
Imaging 7 Tests
Common Kernel Benchmarks 6 Tests
LAPACK (Linear Algebra Pack) Tests 2 Tests
Linear Algebra 2 Tests
Machine Learning 11 Tests
Memory Test Suite 3 Tests
Molecular Dynamics 10 Tests
MPI Benchmarks 11 Tests
Multi-Core 54 Tests
NVIDIA GPU Compute 10 Tests
Intel oneAPI 6 Tests
OpenCL 2 Tests
OpenCV Tests 2 Tests
OpenMPI Tests 20 Tests
Programmer / Developer System Benchmarks 15 Tests
Python 4 Tests
Quantum Mechanics 2 Tests
Raytracing 6 Tests
Renderers 12 Tests
Scientific Computing 19 Tests
Server 12 Tests
Server CPU Tests 33 Tests
Single-Threaded 9 Tests
Speech 2 Tests
Telephony 2 Tests
Texture Compression 3 Tests
Video Encoding 6 Tests
Common Workstation Benchmarks 8 Tests

Statistics

Show Overall Harmonic Mean(s)
Show Overall Geometric Mean
Show Geometric Means Per-Suite/Category
Show Wins / Losses Counts (Pie Chart)
Normalize Results
Remove Outliers Before Calculating Averages

Graph Settings

Force Line Graphs Where Applicable
Convert To Scalar Where Applicable
Disable Color Branding
Prefer Vertical Bar Graphs

Additional Graphs

Show Perf Per Core/Thread Calculation Graphs Where Applicable
Show Perf Per Clock Calculation Graphs Where Applicable
Show Perf Per RAM Channel Calculation Graphs Where Applicable

Multi-Way Comparison

Condense Multi-Option Tests Into Single Result Graphs
Condense Test Profiles With Multiple Version Results Into Single Result Graphs

Table

Show Detailed System Result Table

Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
EPYC 7702
February 01 2021
  19 Hours, 11 Minutes
EPYC 7402P
February 03 2021
  19 Hours
EPYC 7302P
February 04 2021
  23 Hours, 11 Minutes
EPYC 7232P
February 06 2021
  1 Day, 1 Hour, 30 Minutes
EPYC 7552
February 07 2021
  21 Hours, 8 Minutes
EPYC 7272
February 08 2021
  23 Hours, 46 Minutes
EPYC 7662
February 10 2021
  19 Hours, 2 Minutes
EPYC 7502P
February 11 2021
  21 Hours, 2 Minutes
EPYC 7F52
February 12 2021
  22 Hours, 53 Minutes
EPYC 7542
February 13 2021
  20 Hours, 7 Minutes
EPYC 7282
February 15 2021
  23 Hours, 31 Minutes
EPYC 7F32
February 16 2021
  1 Day, 1 Hour, 3 Minutes
EPYC 7532
February 17 2021
  19 Hours, 59 Minutes
EPYC 7642
February 19 2021
  22 Hours, 55 Minutes
EPYC 7742
February 20 2021
  21 Hours, 24 Minutes
Invert Hiding All Results Option
  21 Hours, 51 Minutes

Only show results where is faster than
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


EPYC 2021 BenchmarksProcessorMotherboardChipsetMemoryDiskGraphicsMonitorNetworkOSKernelDesktopDisplay ServerDisplay DriverOpenGLCompilerFile-SystemScreen ResolutionEPYC 7702EPYC 7402PEPYC 7302PEPYC 7232PEPYC 7552EPYC 7272EPYC 7662EPYC 7502PEPYC 7F52EPYC 7542EPYC 7282EPYC 7F32EPYC 7532EPYC 7642EPYC 7742AMD EPYC 7702 64-Core @ 2.00GHz (64 Cores / 128 Threads)ASRockRack EPYCD8 (P2.40 BIOS)AMD Starship/Matisse8 x 16384 MB DDR4-3200MT/s 18ASF2G72PDZ-3G2E13841GB Micron_9300_MTFDHAL3T8TDPllvmpipeVE2282 x Intel I350Ubuntu 20.045.11.0-051100rc6daily20210201-generic (x86_64) 20210131GNOME Shell 3.36.4X Server 1.20.8llvmpipe4.5 Mesa 20.2.6 (LLVM 11.0.0 256 bits)GCC 9.3.0ext41920x1080AMD EPYC 7402P 24-Core @ 2.80GHz (24 Cores / 48 Threads)AMD EPYC 7302P 16-Core @ 3.00GHz (16 Cores / 32 Threads)AMD EPYC 7232P 8-Core @ 3.10GHz (8 Cores / 16 Threads)AMD EPYC 7552 48-Core @ 2.20GHz (48 Cores / 96 Threads)AMD EPYC 7272 12-Core @ 2.90GHz (12 Cores / 24 Threads)AMD EPYC 7662 64-Core @ 2.00GHz (64 Cores / 128 Threads)AMD EPYC 7502P 32-Core @ 2.50GHz (32 Cores / 64 Threads)AMD EPYC 7F52 16-Core @ 3.50GHz (16 Cores / 32 Threads)7 x 16384 MB DDR4-3200MT/s 18ASF2G72PDZ-3G2E1AMD EPYC 7542 32-Core @ 2.90GHz (32 Cores / 64 Threads)8 x 16384 MB DDR4-3200MT/s 18ASF2G72PDZ-3G2E1AMD EPYC 7282 16-Core @ 2.80GHz (16 Cores / 32 Threads)AMD EPYC 7F32 8-Core @ 3.70GHz (8 Cores / 16 Threads)AMD EPYC 7532 32-Core @ 2.40GHz (32 Cores / 64 Threads)AMD EPYC 7642 48-Core @ 2.30GHz (48 Cores / 96 Threads)AMD EPYC 7742 64-Core @ 2.25GHz (64 Cores / 128 Threads)OpenBenchmarking.orgKernel Details- Transparent Huge Pages: madviseCompiler Details- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vDisk Details- NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Details- Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x8301034Java Details- EPYC 7702: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7402P: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7302P: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7232P: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7552: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7272: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7662: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7502P: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7F52: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7542: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7282: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7F32: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7532: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7642: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7742: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)Python Details- Python 3.8.5Security Details- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

EPYC 7702EPYC 7402PEPYC 7302PEPYC 7232PEPYC 7552EPYC 7272EPYC 7662EPYC 7502PEPYC 7F52EPYC 7542EPYC 7282EPYC 7F32EPYC 7532EPYC 7642EPYC 7742Logarithmic Result OverviewPhoronix Test SuiteCpuminer-Optm-queensStockfishCoremarkAircrack-ngC-RayJohn The RipperIndigoBenchBRL-CADTachyonPennantOSPrayasmFishNAMD7-Zip CompressionChaos Group V-RAYASTC EncoderBlenderLuxCoreRenderOpenVKLrays1benchNAS Parallel BenchmarksPOV-RayStress-NGCloverLeafFacebook RocksDBTensorFlow LiteGROMACSIntel Open Image DenoiseebizzyApache CassandraLAMMPS Molecular Dynamics SimulatorPostgreSQL pgbenchASKAPoneDNNTimed Linux Kernel CompilationAppleseedLeelaChessZeroTimed MPlayer CompilationBasis UniversalOpenFOAMFFTEGPAWTimed LLVM CompilationPlaidMLSVT-AV1KripkeSVT-VP9RodiniaSysbenchKvazaarTimed FFmpeg CompilationYafaRayminiFETimed Godot Game Engine Compilationx265x264Tungsten Rendererdav1dZstd CompressionHigh Performance Conjugate GradientIncompact3DTimed ImageMagick CompilationBuild2LULESHParboilOpenVINOAlgebraic Multi-Grid BenchmarkWebP2 Image EncodeFinanceBenchACES DGEMMOCRMyPDFctx_clockTimed PHP CompilationStreamNumenta Anomaly BenchmarkAI Benchmark AlphaXZ CompressionTimed MrBayes AnalysisMobile Neural NetworkBlogBenchQuantum ESPRESSOMonte Carlo Simulations of Ionised NebulaeRawTherapeeCaffeC-BloscNumpy BenchmarkApache CouchDBInfluxDBCraftyHierarchical INTegrationGoogle SynthMarkSwetBotansimdjsoneSpeak-NG Speech EnginePyPerformanceTSCPPyBenchQuantLibMontage Astronomical Image Mosaic EngineCrypto++Himeno BenchmarkPHPBenchEtcpakPerl BenchmarksDarmstadt Automotive Parallel Heterogeneous SuiteLibRawONNX RuntimeTinymembenchRedisHuginKeyDBLZ4 CompressionMBW

EPYC 7702EPYC 7402PEPYC 7302PEPYC 7232PEPYC 7552EPYC 7272EPYC 7662EPYC 7502PEPYC 7F52EPYC 7542EPYC 7282EPYC 7F32EPYC 7532EPYC 7642EPYC 7742Logarithmic Per Watt Result OverviewPhoronix Test SuiteACES DGEMMCpuminer-Optrays1benchOSPrayLuxCoreRenderKripkeOpenVKLBRL-CADIntel Open Image DenoiseStockfishminiFECoremarkAircrack-ngSVT-VP97-Zip CompressionJohn The RipperApache CassandraGROMACSIndigoBenchChaos Group V-RAYasmFishBlogBenchNAS Parallel BenchmarksHigh Performance Conjugate GradientebizzyPlaidMLLAMMPS Molecular Dynamics SimulatorStress-NGASKAPSysbenchSVT-AV1Facebook RocksDBFFTEC-Bloscdav1dONNX Runtimex265x264Darmstadt Automotive Parallel Heterogeneous SuiteAI Benchmark AlphaAlgebraic Multi-Grid BenchmarkKvazaarKeyDBLULESHStreamLeelaChessZeroLZ4 CompressionMBWZstd CompressionHierarchical INTegrationTinymembenchSwetCrypto++QuantLibPHPBenchBotanHimeno BenchmarkCraftyGoogle SynthMarkNumpy BenchmarkEtcpakTSCPRedisInfluxDBLibRawsimdjsonP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.M

EPYC 2021 Benchmarksqe: AUSURF112nwchem: C240 Buckyballlczero: Eigenlczero: BLASlammps: 20k Atomscaffe: GoogleNet - CPU - 200openfoam: Motorbike 60Mbuild-llvm: Time To Compileblogbench: Readopenvkl: vklBenchmarkhint: FLOATai-benchmark: Device AI Scoreai-benchmark: Device Training Scoreai-benchmark: Device Inference Scorecryptopp: Keyed Algorithmsblender: Barbershop - CPU-Onlyjpegxl: PNG - 8onnx: super-resolution-10 - OpenMP CPUngspice: C7552webp2: Quality 95, Compression Effort 7numpy: tinymembench: Standard Memsetbrl-cad: VGR Performance Metricblender: Pabellon Barcelona - CPU-Onlycryptopp: Integer + Elliptic Curve Public Key Algorithmsonnx: shufflenet-v2-10 - OpenMP CPUincompact3d: Cylinderonnx: bertsquad-10 - OpenMP CPUblender: Classroom - CPU-Onlymocassin: Dust 2D tau100.0hpcg: ngspice: C2670mnn: mobilenet-v1-1.0mnn: MobileNetV2_224mnn: resnet-v2-50mnn: SqueezeNetV1.0daphne: OpenMP - Points2Imagecaffe: AlexNet - CPU - 200asmfish: 1024 Hash Memory, 26 Depthwebp2: Quality 75, Compression Effort 7appleseed: Emilyyafaray: Total Time For Sample Sceneonnx: fcn-resnet101-11 - OpenMP CPUrodinia: OpenMP Leukocyteospray: San Miguel - Path Tracergpaw: Carbon Nanotubeonnx: yolov4 - OpenMP CPUrodinia: OpenMP LavaMDplaidml: No - Inference - VGG19 - CPUblosc: blosclzstress-ng: CPU Cachetensorflow-lite: NASNet Mobilev-ray: CPUastcenc: Exhaustiveblender: Fishy Cat - CPU-Onlycpuminer-opt: Garlicoincpuminer-opt: Deepcoincompress-lz4: 9 - Decompression Speedcompress-lz4: 9 - Compression Speedcassandra: Writesplaidml: No - Inference - VGG16 - CPUnumenta-nab: Earthgecko Skylinedav1d: Chimera 1080p 10-bitfinancebench: Bonds OpenMPjpegxl: PNG - 7mrbayes: Primate Phylogeny Analysiscouchdb: 100 - 1000 - 24v-ray: CPUmontage: Mosaic of M17, K band, 1.5 deg x 1.5 degnpb: EP.Donednn: Recurrent Neural Network Training - u8s8f32 - CPUinfluxdb: 4 - 10000 - 2,5000,1 - 10000keydb: rocksdb: Read While Writingbuild-godot: Time To Compilecompress-lz4: 3 - Decompression Speedcompress-lz4: 3 - Compression Speedospray: XFrog Forest - Path Traceronednn: Recurrent Neural Network Training - f32 - CPUinfluxdb: 64 - 10000 - 2,5000,1 - 10000tensorflow-lite: Inception ResNet V2build2: Time To Compilegromacs: Water Benchmarkblender: BMW27 - CPU-Onlytensorflow-lite: Inception V4kripke: appleseed: Disney Materialcpuminer-opt: Skeincoinonednn: Recurrent Neural Network Training - bf16bf16bf16 - CPUjpegxl-decode: 1gromacs: water_GMX50_barestockfish: Total Timeopenvino: Age Gender Recognition Retail 0013 FP16 - CPUopenvino: Age Gender Recognition Retail 0013 FP16 - CPUonednn: Recurrent Neural Network Inference - u8s8f32 - CPUonednn: Recurrent Neural Network Inference - f32 - CPUonednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPUperl-benchmark: Pod2htmlhimeno: Poisson Pressure Solveropenvino: Person Detection 0106 FP16 - CPUopenvino: Person Detection 0106 FP16 - CPUcpuminer-opt: x25xopenvino: Person Detection 0106 FP32 - CPUopenvino: Person Detection 0106 FP32 - CPUpgbench: 100 - 100 - Read Write - Average Latencypgbench: 100 - 100 - Read Writecoremark: CoreMark Size 666 - Iterations Per Secondopenvino: Face Detection 0106 FP16 - CPUopenvino: Face Detection 0106 FP16 - CPUopenvino: Face Detection 0106 FP32 - CPUopenvino: Face Detection 0106 FP32 - CPUcryptopp: Unkeyed Algorithmsrocksdb: Rand Readopenvino: Age Gender Recognition Retail 0013 FP32 - CPUopenvino: Age Gender Recognition Retail 0013 FP32 - CPUindigobench: CPU - Bedroombuild-linux-kernel: Time To Compileindigobench: CPU - Supercarluxcorerender: DLSCluxcorerender: Rainbow Colors and Prismtensorflow-lite: SqueezeNettensorflow-lite: Mobilenet Floattensorflow-lite: Mobilenet Quantrocksdb: Rand Fill Syncjohn-the-ripper: MD5simdjson: LargeRandkvazaar: Bosphorus 4K - Mediumfinancebench: Repo OpenMPrawtherapee: Total Benchmark Timembw: Memory Copy - 8192 MiBpgbench: 100 - 250 - Read Write - Average Latencypgbench: 100 - 250 - Read Writenamd: ATPase Simulation - 327,506 Atomssimdjson: PartialTweetssimdjson: DistinctUserIDhugin: Panorama Photo Assistant + Stitching Timesimdjson: Kostyaopenfoam: Motorbike 30Mjpegxl-decode: Allperl-benchmark: Interpreterbuild-php: Time To Compilecompress-7zip: Compress Speed Testospray: XFrog Forest - SciVisespeak: Text-To-Speech Synthesisredis: SETredis: GETtachyon: Total Timecompress-zstd: 19mt-dgemm: Sustained Floating-Point Ratepyperformance: regex_compileaskap: tConvolve MPI - Griddingaskap: tConvolve MPI - Degriddingospray: San Miguel - SciVisquantlib: parboil: OpenMP LBMphpbench: PHP Benchmark Suiteebizzy: libraw: Post-Processing Benchmarkm-queens: Time To Solvebuild-ffmpeg: Time To Compileetcpak: ETC2pgbench: 100 - 250 - Read Only - Average Latencypgbench: 100 - 250 - Read Onlypgbench: 100 - 100 - Read Only - Average Latencypgbench: 100 - 100 - Read Onlynumenta-nab: Bayesian Changepointpyperformance: pathlibstress-ng: Socket Activitybasis: UASTC Level 3minife: Smallc-ray: Total Time - 4K, 16 Rays Per Pixelnpb: LU.Cstress-ng: Context Switchingospray: NASA Streamlines - Path Tracerx265: Bosphorus 4Konednn: Deconvolution Batch shapes_1d - f32 - CPUcloverleaf: Lagrangian-Eulerian Hydrodynamicscompress-zstd: 3stress-ng: CPU Stresspennant: sedovbigjohn-the-ripper: Blowfishaircrack-ng: stress-ng: Cryptostress-ng: Matrix Mathstress-ng: Vector Mathsynthmark: VoiceMark_100npb: IS.Ddaphne: OpenMP - NDT Mappinglulesh: crafty: Elapsed Timepyperformance: django_templatepovray: Trace Timecompress-xz: Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9pybench: Total For Average Test Timestungsten: Water Causticpyperformance: nbodypyperformance: floatamg: pyperformance: crypto_pyaeskvazaar: Bosphorus 4K - Very Fastswet: Averagetoybrot: C++ Taskstoybrot: OpenMPbotan: AES-256jpegxl: JPEG - 5toybrot: C++ Threadstoybrot: TBBdaphne: OpenMP - Euclidean Clusterbuild-mplayer: Time To Compilepennant: leblancbigbasis: UASTC Level 2askap: tConvolve OpenMP - Degriddingaskap: tConvolve OpenMP - Griddingstream-dynamic: - Triadstream-dynamic: - Addstream-dynamic: - Scalestream-dynamic: - Copyonednn: IP Shapes 1D - f32 - CPUbuild-imagemagick: Time To Compileastcenc: Thoroughocrmypdf: Processing 60 Page PDF Documentrodinia: OpenMP Streamclusteronednn: Deconvolution Batch shapes_1d - u8s8f32 - CPUkvazaar: Bosphorus 4K - Ultra Fastsvt-av1: Enc Mode 4 - 1080pnumenta-nab: Relative Entropybotan: Blowfishbotan: Twofishospray: Magnetic Reconnection - SciVisbotan: CAST-256botan: KASUMIonednn: Matrix Multiply Batch Shapes Transformer - f32 - CPUrodinia: OpenMP CFD Solveronednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPUoidn: Memorialsysbench: Memorytungsten: Hairjpegxl: PNG - 5jpegxl: JPEG - 7stream: Copynpb: CG.Cnpb: FT.Cwebp2: Quality 100, Compression Effort 5sysbench: CPUsvt-av1: Enc Mode 8 - 1080prays1bench: Large Scenejpegxl: JPEG - 8onednn: IP Shapes 3D - f32 - CPUospray: NASA Streamlines - SciVisdav1d: Chimera 1080ponednn: Convolution Batch Shapes Auto - u8s8f32 - CPUonednn: Convolution Batch Shapes Auto - f32 - CPUnumenta-nab: Windowed Gaussiansvt-vp9: Visual Quality Optimized - Bosphorus 1080pdav1d: Summer Nature 4Kx264: H.264 Video Encodingnpb: EP.Cnpb: MG.Cetcpak: DXT1ffte: N=256, 3D Complex FFT Routineonednn: Deconvolution Batch shapes_3d - f32 - CPUonednn: Deconvolution Batch shapes_3d - u8s8f32 - CPUlammps: Rhodopsin Proteinsvt-vp9: PSNR/SSIM Optimized - Bosphorus 1080ptscp: AI Chess Performancedav1d: Summer Nature 1080pctx-clock: Context Switch Timestream: Triadstream: Addstream: ScaleEPYC 7702EPYC 7402PEPYC 7302PEPYC 7232PEPYC 7552EPYC 7272EPYC 7662EPYC 7502PEPYC 7F52EPYC 7542EPYC 7282EPYC 7F32EPYC 7532EPYC 7642EPYC 77421208.3122472686269924.818347484233.34234.2051372955492297478250.42946308911241965539.454890154.424087132.616257.005306.5017329.8660079119.174152.5975336478224.616374390103.5524217.3075177.6163.1875.06726.9248.37519062.924024905128965122849461139.362154.43673367.2527448.7164.9281.06024054.71126.498372.743.7095873.74529245.8055.409581.064611310219.544.5422756431.8984.833185.3188553.783854115.26597.9546275494.7633989.792296.531212918.2371448.21823247462.55710162.045.695.922306.331308517.270260767.9664.37340.0776426919968396067.7375496058522314.364.3711009084531.2125067.04878.199874.440875.7080.149543393762.4889305170.996.021360.065153.996.051.798556991845888.7971834029.987.884030.537.88298.7292512184698241.2225012.988.81128.13019.0957.047.8261679.635037.436468.828711541830000.3314.9451015.86588556.03015599.8984.804521810.492640.520.5455.6760.4623.350.0009708442.56126490811.0735.9271228831.791482738.9219.1218147.515.61481219820382.220185.555.561970.123.322929529609270176734.5013.83924.034137.2840.2798999920.11289745333.26820.219577.0318.86919155.412.849102548.9520917619.7616.6725.042.3351914.708248.319926.6111.9424770033143169.12011996.59179226.24397326.45635.9221971.50861.3514612.208689994857.511.50022.047116621.968313513387837533312936.076024348854499.500954.2012.9907.03398213.2977148.779509.141.7036017.1156.3518.7328.9281.8884253.526.75215.944368.930302.13540119.99577.6990.5874437.8190.80064327.676302136.65377.5070690511.815079.0855136.508.125106421.951179.824243.571.1594376.92983.613.558921.078446.930305.10437.31198.583908.2351795.611019.205145868.294648502.820151.4944619.997409.3610158581050.9912098034.897206.387714.61342.144709.51439152114.907391015314.06312.6981959615279297827555.09073347215461926538.085673291.524946254.240307.4714961.4317661219.754151.2915978810200.210118500196.7021715.56293.2135.27029.0228.97821653.91001367115469362581775138.044194.52524780.4967259.6522.45116.409280102.99523.3410178.341.1110504689.0791.814490.861936310256.345.0920070527.8185.758135.6988634.30989685.99686.6823318493.9231813.291674.661248284.1415583.96439769576.16210209.345.983.071672.241290897.8120679076.6502.74169.841366410209811150105.6919722105391673.31488047320.9212800.59974.124972.927972.7720.145735973765.8483873310.913.53862.813315.633.512.10147662902070.1513342507.584.762522.614.73298.4906041143914340.8813220.954.49441.4979.6303.834.1793404.161419.962904.121642721273330.3311.2250474.69791753.92915510.9216.067412730.937050.530.5353.3560.4633.810.0009558647.4131385755.7635.6941147451.381466557.0337.733796.97.20823520013892.512173.730.301977.329.020656542521172185434.0328.19432.065137.0890.4805216530.20149913834.31120.213570.8329.16016743.625.68374421.6110443329.128.7723.842.4897720.358033.08541.1225.889113375571372.7925847.7092207.07182716.39636.429899.458024.3706688076057.220.46422.399115923.870913513377845943313126.616000655724487.898962.8818.58816.6537518.1675726.606379.881.8295120.43011.7120.87414.7615.9220448.335.56316.124368.377302.12020.83119.99377.5700.59230811.5771.2959917.175614019.491712.262379677.27.97442008.114654.873134.194.1338241.67839.064.251113.543477.121319.05328.54174.631806.541018.971111563.844001143.346332.3830614.038413.091013320847.4016887308.486805.679147.31403.526676.41233125310.602387173313.64404.3261898428192292597759.78158311914591660530.362593411.564522301.520301.2814820.3219055317.274086.0757968975262.633291456287.8122815.63293.8565.77231.5439.50422339.84670168915253942112219164.805260.983099110.9396998.1861.65139.152277147.38217.3910476.832.72144445129.26129.812965.291269710132.143.9813526021.2488.323116.6889863.26041785.93289.4332348695.4481190.482410.231188387.2405905.78308623796.77210172.744.532.152412.791249264.6156707087.6572.014102.291742890162898387149.3357191510302407.93329737490.849316.801227.561229.691228.500.149784893792.8083213145.892.51447.013155.852.512.60638458603381.3330022414.743.312413.453.32294.437128769720900.849314.823.12953.3996.5752.682.9112394780320.582490.416324814603330.339.1551629.10286558.50615480.2908.258303131.350580.520.5355.7730.4540.870.0009547852.749951374.0236.3561173570.001438157.7354.840573.84.96371520212204.410427.220.831952.524.924913524189120846633.7142.38139.737135.1800.6793684490.28035778134.84320.49499.3339.75417082.438.48861041.277003110.186.0720.443.2837725.217699.45622.8234.025542231448284.8263871.5763930.72120826.81627.7431647.78898.217954.1518678769257.628.70223.153117226.020113713578826687513022.945985892284430.718943.2924.05821.9518823.3425509.35503.592.4108223.54616.6823.39217.7296.0349741.524.50617.183362.204297.64714.08117.87776.5100.80227215.4451.8417812.184502591.448216.617580140.115688.5737175.209.35427632.185437.04490.373.7639529.41698.085.454003.706407.645242.24245.60150.451191.9047349.291007.95084316.2310613774.726083.3466310.374316.49997133634.5118088105.787568.879703.51656.886817475.406369027573.04763.556194449696284332324.15296220610941112513.653997866.853941212.894507.741270.8314571.7106073664.293949.4362678836450.845693409558.872768.64661211.4933.9174.94431.4359.65023230.18582577315155921027795277.672487.372372186.18059148.1990.76271.157243296.6869.309386.832.532122537864262.14267.241473.316207.8910057.843.175173411.41141.24594.3692426.49479288.666107.0091113698.414579.184347.661041996.8399110.881470621172.13110063.044.930.994345.121113861.33136920145.4430.985198.453472940100898758265.544082636484348.88160349940.984012.402397.222397.562398.980.156532083589.9335913071.911.29216.583090.581.284.69021334293387.3962692298.631.752314.661.74286.188934392061560.964115.441.456101.0903.1171.381.50242480160713165426943717171750.314.7253237.68229172.47315523.14014.123177092.725530.500.5259.3100.44113.160.0010060875.329459411.8937.3961126925.501421564.35112.393537.31.5837042083879.103350.2210.311892.045.76253151190662327232.4887.19468.502131.1821.4731698320.54318425145.21221.14965.9673.43210156.679.16833816.483438510.352.829.206.1226169.915123.22708.9066.418481082523406.3971878.7431690.6058583.38608.6051083.36861.076757.9893641460959.555.77534.509121333.916414114344991646013512.985618167724311.057876.4545.58645.1546039.6662616.812840.134.7386637.71633.1533.01515.5408.6715423.112.53623.563352.291288.6986.99114.58874.2652.0183825.3743.583636.555942474.372832.538852390.09844.9521822.0615.54713398.098421.76148.617.1503014.29473.8210.95826.3766811.912107.11150.4378.21578.1230311.94976.58644242.9828328158.271846.794455.237144.32969371419.1921756788.656795.152630.11329.902716.71927196922.046399610260.78245.3971771624441293216264.80405339713482049530.795528180.944642131.530258.367303.4516494.4540558139.214085.5278937373206.583562440122.0322316.6494170.2163.0224.85225.4438.14220634.199886873171885105304578140.180154.73128464.8827947.4254.1786.73926464.20727.509155.049.7892153.23784354.3462.727811.993358910156.544.4923087132.6886.062178.7090310.903646104.51892.5895357095.5313292.841352.621208480.8394639.23718357962.48410159.745.385.101350.551288738.689351769.8593.86346.5997882821131030773.3338714682271350.13823971320.9923441.91784.815787.426782.6520.151249763793.1664664197.605.531139.384203.815.531.759569451530338.7875473325.997.203327.507.22293.8811571850950080.9923559.177.55530.20615.9846.136.7568430.448507.450401.833130035286670.3215.5051630.50390654.49215459.1644.872514440.574840.520.5255.4190.4525.170.0009615343.8412297479.5236.3971170719.461450496.7122.8100125.712.73503120217300.816990.1501945.326.474040523575245651133.6416.61025.499135.1390.3167914560.12083250933.95520.518192.3220.79618424.115.21295079.4117727598.8414.2925.362.1579415.288172.015958.1615.3474059963119683.6519992.15152001.95328019.70626.566895.5213435.303667488158.113.27322.540117822.647113713485689340013335.816056443644421.410956.8213.8208.98853814.2366302.578454.71.5993818.0637.4218.85512.0271.9970755.996.49415.972363.503297.10234.48118.32476.5750.5339618.4050.86802426.136380400.35088.7195388717.28.16280743.888862.395217.702.2861166.671095.403.840181.719677.033336.24406.93198.003252.131008.239135838.120561632.553351.5549418.506445.929974611044.2513296497.195807.086805.61520.438844.19259467.737378367530.10529.6162023180142284710986.65814267913071372514.845743554.643943137.585361.481298.2714776.4155347426.593965.0335968721325.844127403387.232509.09743174.2005.3225.93038.1029.76823807.66765484215487131791813199.541333.246095131.02770107.9671.22205.891259200.34813.679442.925.1717754012198176.04177.652192.829361.0610070.242.949315516.79101.587107.2592387.25260487.60092.3481755098.305860.013057.081143360.5418080.472357928122.41010101.043.231.553056.051185857.82242950105.7071.409136.202493747121256433195.0816241051403059.191.407243144430.896689.952032.882022.452029.280.154578903634.1742993264.491.81322.833265.931.813.13731896440879.6417372434.172.472406.222.47286.181820569289670.876803.092.28969.6124.8551.982.1417380711706012011913158310723330.326.4952936.97916762.75815482.7119.939251701.838950.500.5157.2420.4467.230.0009807460.203696552.9337.4001156241.741372468.8875.035455.23.4640032087130.896100.1815.6251893.242.94750051335688396533.1258.21849.919131.1450.9082756770.35927851038.06521.07332.8251.77110066.652.80643321.385095560.164.3917.094.4407748.296498.44078.2149.112091622435086.9512821.1946780.9487851.58608.050864.626871.9303655135160.038.41125.656120829.954514113845785535013417.835811945884290.852919.0131.14333.8085229.2443257.044004.083.8095528.21822.4026.89816.8955.2533131.393.59319.105352.072288.62210.42114.68874.2071.0934418.1342.493729.037232205.295521.995851714.411.13120091.959130.37768.514.9751221.74624.797.605055.953488.798184.83203.40117.14867.81978.85061838.5158416056.291354.527587.582239.47969066555.1120356066.655911.651978.21216.502220.72408237625.206345626232.98233.5131397475504293169027.14643334112192122529.619444154.234175136.866256.563303.2115640.1652602118.264091.7839786686206.952866395101.6823917.3863172.3713.0734.92025.2217.54219914.997832858127699126290077139.558153.46039266.4827947.3694.9978.39324354.65431.848328.244.3688093.74465845.8455.829507.954505110249.344.8021938037.3884.794190.1990672.447917116.14697.2956229595.5604019.092230.181209698.0376241.11817538362.27210170.745.196.002221.331294664.565527968.2914.54140.3370083321577122767.3263466310382203.164.533988474691.0628284.87793.228793.199792.5460.149767953837.5593184731.726.621398.774732.736.601.643609491867023.4567603562.638.973558.098.98293.8097302147177601.0728316.508.89827.97719.0627.217.9256195.532085.533024.033341241953330.3216.4451437.79166755.04215616.8974.617543180.489080.520.5355.9450.4522.830.0009745543.06027014011.2436.1971200527.311417246.6719.0841149.616.88116820220991.720993.158.821955.323.819394524751276264734.0113.75223.978135.2350.2679397500.10991552333.40620.519990.7518.65619256.712.815103985.9120981193.8816.9525.652.0223114.038287.720174.7611.8002373579143292.78612091.56179496.78400773.38624.7262006.81897.5914603.333675390258.611.50122.047118422.132213713488373520013339.156022038424429.185966.0412.9566.99532113.2066699.079509.141.4898117.1876.3818.7938.9051.7987458.646.96615.968362.990297.80840118.22576.5450.5170237.7470.78080329.416374595.12097.5509390296.515230.2355668.638.052108892.224083.399243.251.1250276.921158.083.504120.9937326.971332.56457.27210.693967.2852245.231007.223155375.984111282.523631.4312421.763448.089978091193.6612098343.297180.987848.11386.783678.81539155917.614427967320.20289.3591950080337297816529.27817351014882022539.174870237.650.694755128.923253.096309.1015097.1387792182.764139.9849678421189.659383459162.2921415.2920173.8952.9274.82025.0587.49321272.04396400216872878084684137.442170.48434570.7046457.9793.03105.66627785.00526.519808.045.2385462.82844873.3878.016104.272654610184.544.6823373032.0585.729152.1588487.56770810.0294.26286.2333708593.8552380.272743.711262530.4413354.59548606669.41910232.845.773.752738.421303437.689788474.1963.12860.75100634717679886787.5114473163502740.523.140608646160.8418612.54919.607902.259917.6140.148592733853.5022333578.094.37883.873582.564.361.935517941147733.9487312794.255.732791.545.72298.7084411393380560.8418514.605.47736.38811.8544.655.0475080.047083.447911.725935325963330.3313.9750609.03645852.74415621.6415.481456710.774390.520.5353.7220.4630.620.0009608245.4401713627.0135.7341180459.911419600.1330.8963114.39.32690919712698.212067.337.041984.527.598317531740194783634.1122.63229.124137.3440.4076141830.16461066732.64519.916396.5725.28416649.520.90778846.9612896969.6410.6025.132.1972719.077903.611121.5923.028764370387391.7947273.71112205.01237596.66637.3781884.31898.8412840.642680430556.917.14022.420117323.112613513277430480012833.255973611974497.76558.59974.4016.46214.7815716.3105788.176617.901.5930019.4059.7319.37814.3202.1519454.925.86615.847368.692302.25426.05120.06077.6880.5376369.8921.0739920.256612746.496210.574079.5958.6279342.914759.4046206.337.97256038.719457.315167.7525.032.9628350937.733.970433.462366.989334.18361.72182.422368.6944082.951022.480136276.499331933.688331.9484515.685437.881014342932.7515087239.886677.278638.61357.305684.61699175811.757382020454.84343.3861676516222346505619.64441261511591456628.207929356.900.795212122.440254.764348.6615585.7232507265.134834.0460719348230.942063372237.791977.08925151.4605.7788.17133.49910.20924205.54322741115132746342665140.611226.803596127.4466990.5231.48165.407248125.53320.0010910.146.3912674319521109.17108.403522.441533810068.552.0214412324.2975.225129.7776410.10677110.6972.42983.2792713280.9391410.592013.241247037.6433091.73301238784.20610099.652.822.342013.751438050.4134615375.8531.99583.45149918371413394118.4778591824031997.3937.942.313390434100.819663.43998.668997.853999.1420.129242854367.1185662643.392.99524.312649.352.982.69737152716393.8849031976.704.041980.564.02342.931821938831300.799866.083.46146.9147.6843.273.5710739168637.670154.517258517316670.3810.6643527.27864653.14914958.1938.876281931.143750.610.6250.590.5339.89207.630.0008257046.5401090694.4530.7531336546.211635794.5746.594276.14.89765317416399.713692.322.562300.735.422343614039147528038.0135.92033.981159.6030.6303970380.26338118429.44817.58415.1933.8016787.0632.58044891.698225794.406.8320.002.7574322.837899.76629.0625.878302634556973.8494580.0876923.39142756.35739.722934.00773.866839.1706794004750.424.18321.43499924.418811611464318092511026.316851888105226.75460.251062.3220.57116.1392919.8611406.932113.141.9803521.63214.0219.85615.0495.4960547.175.33314.596429.140351.63313.45139.70590.4170.66041913.5301.5650314.243087955.627314.132773.8360.1266901.17779.4922117.177.97332649.357142.839109.9127.592.7782629.59671.076.915612.001836.586230.31244.17173.501407.6221714.651179.01771767.9604074343.927373.0289711.521263.131178140651.3617572315.672752.267011.91317.273653.91617166618.156411681319.87275.3501923368359302406827.18194364115342107546.717564223.080.714986130.086250.407311.9715872.0412076167.994207.9103438698187.604813475148.5021515.2794165.6632.8364.71324.3037.62521441.99598117716254482716262135.973161.63510164.9236957.6833.23103.29628579.11427.679878.049.1979313.63063267.9572.276242.212649310292.545.1323652433.3383.830152.7886946.61458310.0986.72286.5723890492.7702389.762721.871162690.9422944.46556443366.56710222.445.904.042718.681206497.884060471.4643.32355.3294322918739700781.8251773244202716.5434.223.317641529290.7719922.01886.876890.477888.5350.148508123988.9426463276.524.74891.023271.264.701.915523251203386.7200632579.796.172572.366.13302.9271151474882800.7719958.415.95034.55412.7385.035.5470622.943977.945118.125875927986670.3314.9450076.27474052.12515641.0915.369466550.715490.530.5452.9870.4629.62188.060.0009548544.5481761487.5835.2651189270.871508082.4128.5901114.48.86333619812699.511976.9402016.427.639163539060213685034.3821.37227.856139.2520.3826557740.15365491432.87219.917266.2123.96816649.719.54679644.8813858394.6111.3625.212.0486618.617885.011234.6022.233044459694237.6517720.58122762.96241722.09645.8691885.98908.4613099.615698394556.715.94322.300113722.635113313577432946712935.066122184254561.18558.68983.4515.69414.1648215.6385788.176658.4890450.72290238.63481529.05782081.4281.4722218.9359.0818.65514.3162.0698356.986.09115.723374.521306.61527.78121.92378.8810.5005219.8061.0158221.556618850.23959.9284980.0558.5979674.114977.2947098.227.94956066.787359.939182.5925.232.9583055.07939.903.976993.461746.900350.63367.19188.452375.4844205.801035.225144583.872063323.441181.8962416.564459.141030295937.4417487057.086737.778399.91456.827056105110429.889388699520.45439.9962043037186283588722.34226292213841538513.598865432.920.664044135.679309.984296.9614921.3208150338.523948.9987388611270.723175400304.342399.05154183.2184.0815.85533.5879.61022574.80373997515576341583844173.717275.805876102.5287799.7191.59177.905267157.16816.619400.730.1215123515246137.78139.462991.731279310110.942.2513644220.3496.849114.4992896.6171889.0889.20487.9362202298.2381156.262834.091172021.1420324.963023239103.74110112.244.381.982836.681211354.3163916794.3451.677108.341820153112489722151.8593791502872835.3532.161.675312461370.7810017.981886.301882.641884.700.153417053756.9598093344.392.37429.513342.132.382.59638586586659.0982152527.863.162446.023.23285.253686748165600.799884.622.93257.9086.2142.532.7412929384723.586277.616353513540000.328.6152977.60286561.53915482.7357.791321151.451450.500.5156.0980.4456.57197.760.0009921055.222905883.7237.5521143594.631414840.4158.469166.44.4524192087358.196587.5719.611887.046.073278511228102199033.0843.73842.861130.8810.7293432290.29034459836.34921.39134.7241.24010003.639.65549617.926612368.675.6520.033.5017545.466706.25463.4541.687672145446017.2033760.6659431.03117080.86605.8251422.07858.546954.7964654769059.130.50624.901122126.789514213945576063313521.7057182558025683253884293.86953.842542625411928.4326.26528.9939224.2393317.934294.4557372.72957323.38053610.35153581.2653.4466825.45917.7324.23318.1896.0633339.654.22518.103351.140287.93213.89114.47474.0590.87574914.7921.9130511.347880986.585817.654267.2953.7551093.19697.8329526.979.65426778.618935.38384.5423.054.4728727.78712.266.762425.864588.105229.82239.84142.371155.8829776.81975.21576796.0449199125.083283.452909.685295.80968149644.2319655596.355437.251044.21356.26110410526.705340168362.74581.2981772037116346733990.00480247411731301627.435908683.810.794518119.444416.273345.3616357.2120622508.264835.0150679823375.703674480454.4922112.9560158.7913.6254.57827.4459.35727135.63258623414055824675753227.130381.370711172.35359130.9750.95200.413262243.84312.0611206.824.8517792810044214.44207.121769.467644.1010561.951.955811814.6798.522109.5576027.9765629.7872.89193.7921420480.848705.213345.671155871.8424600.071603673136.62310567.252.791.273345.101279851.82592970113.2601.345156.312872227129536920225.08796793653348.0638.421.348191911990.794953.651742.791745.541744.750.128195454336.4055732399.191.66262.902409.031.644.14524137356570.4067761836.512.161844.312.16343.606808483317350.784984.351.80879.6643.8831.681.842012581332361360231008438747000.385.7343548.63151062.23615666.70012.176205412.229490.620.6352.9870.5360.50183.950.0008356359.564569552.3930.8391315900.461601070.5092.157547.03.0115951718265.996907.5412.52285.227.24515562167277688038.5971.57954.149159.7871.1732133450.46421561735.66917.25638.5360.40317351.264.96943172.514265815.033.6510.205.0323636.806573.93306.3952.788431318328528.3442288.3338378.0871366.17740.5781277.15967.378720.8822793988349.145.46829.60799828.100011611580984458311115.5668639541941679414225238.76260.5741573416061039.6035.24534.5800332.6334004.084035.113.5602028.59727.1627.20219.7077.2376327.163.18418.881429.405351.9898.16139.83390.4591.1601722.2183.012927.944656182.257726.516163.5160.2582399.712350.8727284.6912.78816323.915426.13159.6027.446.6820917.24496.119.151244.301309.790124.35168.1795.50704.6945336.891179.30454800.9923541796.463985.582606.470167.081180682453.3618589567.289255.481753.01403.873653.61769173517.525441978236.99280.7781636092328293437386.10939351814752043531.138320243.580.694864132.041260.463303.3314786.9378660189.094089.1196518070193.923879466168.4021517.9677175.8473.0034.82125.2667.33820967.47929720817250874422699142.200172.91456385.5617959.4382.9793.23627187.55625.3510253.351.3984263.52840175.2778.945725.792301310188.643.6921136130.3686.698146.2990157.2812509.7096.81391.3153622395.4792333.542230.351197778.9404313.15521430269.62410118.044.693.702214.791281490.592186574.4813.26761.25103436721652513389.0114443172502221.3732.693.256583840260.9516446.88812.987813.120814.0270.149882933830.2026513757.354.20807.023754.854.192.103476151123555.3010762815.805.682814.155.68294.2737151330385500.9516443.735.41536.84511.5314.544.9277264.748084.749078.525368925340000.3213.6551428.89192754.77315503.0485.648443430.790790.510.5255.5710.4526.53171.730.0009675346.0981712426.9036.2431186674.321481804.0031.5657122.79.82887120220585.318311.735.711922.322.944527532957197732533.0623.05029.228135.0060.4265875480.17357834033.43320.415568.2425.88419645.421.42986547.2812627261.9310.4223.712.2224215.298476.310901.4718.061824280085303.3527105.73110766.79232455.68627.0591992.76895.3313938.252654521158.317.67022.092118023.553413713490953266713132.7258268433313798139264429.47454.971361913556949.1516.38310.8857616.6986455.408257.47104119.156103789.98592368.40894075.6701.6150219.2669.9919.9429.9132.2054354.075.95816.332363.098297.70525118.29076.5280.5534449.8581.1048620.144720934.255810.813176.1854.7390663.218019.0149235.928.05555254.656956.528163.0024.061.2515447.62892.423.612681.483747.220325.84349.99181.152318.4452022.761006.923135786.937654143.601301.9880916.265426.52999442880.5714499248.297912.589487.61372.732648.52311220322.442429875233.73243.9151583350446292412973.62287352814032125529.604123179.180.694506160.820254.877303.7516261.5541775137.834072.7598807221195.343160431121.2121917.6850176.5303.0564.82524.7827.47920036.194382600173812103932785138.077153.97564567.6588046.5324.2179.97726363.47229.379301.051.1386140.63921554.0461.937961.643436310229.643.8621557635.2585.032179.7290409.5859389.88102.45091.5125444595.3873322.991210.041215056.1386824.54713160061.81410123.044.845.151213.581295588.883073069.4334.11745.7893869123085178370.4853894705021211.5432.764.106835681661.0222986.05734.403732.808735.8190.148682533816.0787074132.135.671147.364138.695.671.701588891550727.1454793059.057.813069.387.79292.5985201857129821.0222996.937.59630.07516.2306.196.8165104.445436.645818.833179135576670.3215.9451397.36067754.48015528.6534.972503810.570480.520.5255.3030.4523.43136.310.0009657943.8172340629.5836.2771183717.451437534.2322.5643130.113.94246520323040.721472.3501944.521.939827521572271938832.3116.29925.213135.2550.3237756870.1283545733.60420.518451.9120.68719436.315.03999557.2317595501.8114.4925.231.9872613.668499.216152.0813.8695861119120520.68210112.22127853.09332363.94623.842903.3014015.660665269658.413.27022.006118422.554113813589224883313437.39592049225983998924425.91554.7596529723963.0013.7128.17660214.1886615.819427.17103873.438103556.13292049.48393594.6071.4677517.6727.4019.0909.1851.9304259.596.67815.887361.797296.90734.48117.98776.2170.4842198.4340.86314626.685536026.07068.5682877.7154.8091703.28.07481398.289663.205218.7323.961.2201166.671116.833.543161.108057.063346.63416.96203.743310.341008.514149967.739528632.403801.5391519.328458.249968051070.0913897940.097944.287579.91168.322123.12656262326.281332628232.40224.8981373157535302249577.83011324811792069546.248682143.940.704229132.938254.503308.3916300.8702166107.794199.9125546527213.08629337991.6722817.3763171.8383.1274.91925.4557.68418909.101207343123771131041886138.090146.84277363.2707647.1735.3578.00524650.33729.498355.948.8691245.84936441.8051.40101944879410249.545.0321519634.9983.396190.0487489.37500010.06108.22497.2346663993.1704269.262263.321217027.7380625.69859745260.37210233.146.206.522283.911294605.564315066.2534.83836.6072369921205420061.6777196293682251.6333.154.8721085431251.0728361.73808.616810.996810.1830.145541433937.2299774536.006.831429.254525.326.871.417706901985242.5041673589.698.893584.778.92301.5386022382397361.0828270.219.67126.72220.7437.778.6457509.932117.933434.633143945680000.3316.5049870.81510455.61215534.9754.403571020.446270.530.5455.5990.4622.36145.070.0009611641.87227965612.2035.3031185927.041493222.2117.5380148.816.29885219521311.921495.262.52004.423.333266540797285378334.7312.78223.025138.8940.2519976520.10298138232.62219.920955.9317.97819245.911.896105428.6722318900.9918.1825.572.0688414.408270.421092.5611.1283675234155994.02613026.17197426.57427545.66643.8172007.42902.3814778.635693609256.810.68421.979116621.642913313088099856712738.65606567122788079914549.53355.7475737517969.3112.3346.63924012.8356679.719177.92104095.678103818.18692337.60593965.3751.5152816.7575.8918.1428.7911.7906557.406.98515.572374.198306.48443.48121.70578.8610.5369167.6830.75051930.256334165.63447.0929777.3455.7890141.715249.7256055.878.061108070.729285.002269.7424.471.1482883.331113.873.531160.9559776.824322.83454.41211.394195.6251593.621035.372148358.572950792.607641.3966021.320437.2510280541166.4213598248.097124.387905.6OpenBenchmarking.org

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterQuantum ESPRESSO 6.7Input: AUSURF112EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52400800120016002000SE +/- 0.83, N = 3SE +/- 1.18, N = 3SE +/- 4.40, N = 3SE +/- 5.19, N = 3SE +/- 1.29, N = 3SE +/- 5.50, N = 3SE +/- 6.34, N = 3SE +/- 3.48, N = 3SE +/- 16.58, N = 4SE +/- 22.87, N = 9SE +/- 1.64, N = 3SE +/- 1.52, N = 3SE +/- 1.20, N = 3SE +/- 0.33, N = 3SE +/- 11.50, N = 31656.881520.431456.821403.521342.141386.781403.871317.271329.901372.731216.501208.311168.321356.261357.301. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.orgSeconds, Fewer Is BetterQuantum ESPRESSO 6.7Input: AUSURF112EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230060090012001500Min: 1655.62 / Avg: 1656.88 / Max: 1658.44Min: 1518.1 / Avg: 1520.43 / Max: 1521.97Min: 1450.92 / Avg: 1456.82 / Max: 1465.43Min: 1394.1 / Avg: 1403.52 / Max: 1411.99Min: 1339.87 / Avg: 1342.14 / Max: 1344.32Min: 1377.07 / Avg: 1386.78 / Max: 1396.11Min: 1396.5 / Avg: 1403.87 / Max: 1416.5Min: 1313.6 / Avg: 1317.27 / Max: 1324.22Min: 1306.89 / Avg: 1329.9 / Max: 1377.55Min: 1251.31 / Avg: 1372.73 / Max: 1438.64Min: 1214.11 / Avg: 1216.5 / Max: 1219.63Min: 1205.82 / Avg: 1208.31 / Max: 1211.05Min: 1166.38 / Avg: 1168.32 / Max: 1170.52Min: 1355.84 / Avg: 1356.26 / Max: 1356.92Min: 1338.99 / Avg: 1357.3 / Max: 1378.51. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNWChem 7.0.2Input: C240 BuckyballEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F522K4K6K8K10K8844.17056.06676.44709.53678.83653.63653.92716.72648.52220.72247.02123.15684.61. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lcomex -lm -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated vian neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.26Backend: EigenEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F526001200180024003000SE +/- 8.88, N = 3SE +/- 11.79, N = 9SE +/- 11.49, N = 9SE +/- 12.27, N = 9SE +/- 5.90, N = 3SE +/- 14.82, N = 9SE +/- 20.90, N = 3SE +/- 9.84, N = 3SE +/- 31.31, N = 9SE +/- 37.70, N = 9SE +/- 29.49, N = 9SE +/- 26.03, N = 3SE +/- 28.78, N = 9SE +/- 16.37, N = 9SE +/- 17.43, N = 568192510511233143915391769161719272311240826862656110416991. (CXX) g++ options: -flto -pthread
OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.26Backend: EigenEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525001000150020002500Min: 663 / Avg: 680.67 / Max: 691Min: 870 / Avg: 924.56 / Max: 993Min: 1008 / Avg: 1051.22 / Max: 1107Min: 1194 / Avg: 1232.78 / Max: 1292Min: 1432 / Avg: 1439.33 / Max: 1451Min: 1454 / Avg: 1538.78 / Max: 1593Min: 1729 / Avg: 1769.33 / Max: 1799Min: 1597 / Avg: 1616.67 / Max: 1627Min: 1773 / Avg: 1926.89 / Max: 2050Min: 2112 / Avg: 2311.33 / Max: 2527Min: 2310 / Avg: 2408 / Max: 2551Min: 2634 / Avg: 2685.67 / Max: 2717Min: 2523 / Avg: 2656.22 / Max: 2818Min: 1024 / Avg: 1103.67 / Max: 1176Min: 1639 / Avg: 1699.2 / Max: 17471. (CXX) g++ options: -flto -pthread

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.26Backend: BLASEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F526001200180024003000SE +/- 4.36, N = 3SE +/- 7.54, N = 3SE +/- 3.71, N = 3SE +/- 13.76, N = 9SE +/- 20.26, N = 3SE +/- 23.71, N = 9SE +/- 22.27, N = 9SE +/- 20.09, N = 3SE +/- 27.18, N = 3SE +/- 26.91, N = 3SE +/- 41.44, N = 9SE +/- 14.38, N = 3SE +/- 29.59, N = 9SE +/- 22.57, N = 9SE +/- 31.35, N = 974794610421253152115591735166619692203237626992623105217581. (CXX) g++ options: -flto -pthread
OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.26Backend: BLASEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525001000150020002500Min: 739 / Avg: 747 / Max: 754Min: 931 / Avg: 945.67 / Max: 956Min: 1035 / Avg: 1042.33 / Max: 1047Min: 1190 / Avg: 1253.33 / Max: 1313Min: 1492 / Avg: 1521 / Max: 1560Min: 1445 / Avg: 1559.33 / Max: 1649Min: 1624 / Avg: 1734.78 / Max: 1815Min: 1626 / Avg: 1665.67 / Max: 1691Min: 1916 / Avg: 1969 / Max: 2006Min: 2155 / Avg: 2203.33 / Max: 2248Min: 2146 / Avg: 2376 / Max: 2558Min: 2683 / Avg: 2699.33 / Max: 2728Min: 2470 / Avg: 2622.56 / Max: 2756Min: 958 / Avg: 1052.44 / Max: 1142Min: 1652 / Avg: 1758 / Max: 19431. (CXX) g++ options: -flto -pthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: 20k AtomsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52612182430SE +/- 0.007, N = 3SE +/- 0.023, N = 3SE +/- 0.021, N = 3SE +/- 0.009, N = 3SE +/- 0.010, N = 3SE +/- 0.059, N = 3SE +/- 0.016, N = 3SE +/- 0.062, N = 3SE +/- 0.112, N = 3SE +/- 0.029, N = 3SE +/- 0.056, N = 3SE +/- 0.024, N = 3SE +/- 0.042, N = 3SE +/- 0.004, N = 3SE +/- 0.053, N = 35.4067.7379.88910.60214.90717.61417.52518.15622.04622.44225.20624.81826.2816.70511.7571. (CXX) g++ options: -O3 -pthread -lm
OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: 20k AtomsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52612182430Min: 5.39 / Avg: 5.41 / Max: 5.41Min: 7.71 / Avg: 7.74 / Max: 7.78Min: 9.85 / Avg: 9.89 / Max: 9.92Min: 10.58 / Avg: 10.6 / Max: 10.62Min: 14.89 / Avg: 14.91 / Max: 14.93Min: 17.51 / Avg: 17.61 / Max: 17.71Min: 17.5 / Avg: 17.53 / Max: 17.55Min: 18.03 / Avg: 18.16 / Max: 18.22Min: 21.83 / Avg: 22.05 / Max: 22.19Min: 22.4 / Avg: 22.44 / Max: 22.5Min: 25.11 / Avg: 25.21 / Max: 25.31Min: 24.78 / Avg: 24.82 / Max: 24.86Min: 26.22 / Avg: 26.28 / Max: 26.36Min: 6.7 / Avg: 6.7 / Max: 6.71Min: 11.66 / Avg: 11.76 / Max: 11.851. (CXX) g++ options: -O3 -pthread -lm

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and Googlenet model and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: GoogleNet - Acceleration: CPU - Iterations: 200EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5290K180K270K360K450KSE +/- 533.40, N = 3SE +/- 226.68, N = 3SE +/- 456.29, N = 3SE +/- 60.14, N = 3SE +/- 338.07, N = 3SE +/- 440.86, N = 3SE +/- 422.64, N = 3SE +/- 147.16, N = 3SE +/- 18239.03, N = 9SE +/- 10886.13, N = 9SE +/- 1226.45, N = 3SE +/- 730.51, N = 3SE +/- 198.26, N = 3SE +/- 95.91, N = 3SE +/- 1699.46, N = 33690273783673886993871733910154279674419784116813996104298753456263474843326283401683820201. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: GoogleNet - Acceleration: CPU - Iterations: 200EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5280K160K240K320K400KMin: 367965 / Avg: 369027.33 / Max: 369643Min: 378044 / Avg: 378367 / Max: 378804Min: 387821 / Avg: 388699.33 / Max: 389353Min: 387058 / Avg: 387173 / Max: 387261Min: 390425 / Avg: 391015.33 / Max: 391596Min: 427345 / Avg: 427966.67 / Max: 428819Min: 441242 / Avg: 441977.67 / Max: 442706Min: 411435 / Avg: 411681.33 / Max: 411944Min: 331340 / Avg: 399610.44 / Max: 446964Min: 364614 / Avg: 429875.11 / Max: 446994Min: 344068 / Avg: 345626.33 / Max: 348046Min: 346044 / Avg: 347483.67 / Max: 348419Min: 332266 / Avg: 332628.33 / Max: 332949Min: 339980 / Avg: 340168 / Max: 340295Min: 379249 / Avg: 382019.67 / Max: 3851101. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 60MEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52120240360480600SE +/- 0.29, N = 3SE +/- 1.13, N = 3SE +/- 0.84, N = 3SE +/- 0.15, N = 3SE +/- 0.21, N = 3SE +/- 0.44, N = 3SE +/- 0.57, N = 3SE +/- 0.59, N = 3SE +/- 0.13, N = 3SE +/- 0.31, N = 3SE +/- 0.24, N = 3SE +/- 0.04, N = 3SE +/- 0.14, N = 3SE +/- 0.36, N = 3SE +/- 1.71, N = 3573.04530.10520.45313.64314.06320.20236.99319.87260.78233.73232.98233.34232.40362.74454.841. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 60MEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52100200300400500Min: 572.49 / Avg: 573.04 / Max: 573.47Min: 528.05 / Avg: 530.1 / Max: 531.94Min: 518.85 / Avg: 520.45 / Max: 521.68Min: 313.39 / Avg: 313.64 / Max: 313.91Min: 313.84 / Avg: 314.06 / Max: 314.48Min: 319.73 / Avg: 320.2 / Max: 321.08Min: 235.98 / Avg: 236.99 / Max: 237.94Min: 319.1 / Avg: 319.87 / Max: 321.04Min: 260.58 / Avg: 260.78 / Max: 261.02Min: 233.28 / Avg: 233.73 / Max: 234.33Min: 232.55 / Avg: 232.98 / Max: 233.38Min: 233.29 / Avg: 233.34 / Max: 233.42Min: 232.15 / Avg: 232.4 / Max: 232.62Min: 362.09 / Avg: 362.74 / Max: 363.35Min: 452.49 / Avg: 454.84 / Max: 458.161. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 10.0Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52160320480640800SE +/- 2.05, N = 3SE +/- 0.47, N = 3SE +/- 3.68, N = 3SE +/- 5.04, N = 4SE +/- 3.16, N = 9SE +/- 2.29, N = 3SE +/- 1.56, N = 3SE +/- 1.12, N = 3SE +/- 1.44, N = 3SE +/- 1.29, N = 3SE +/- 0.89, N = 3SE +/- 1.73, N = 3SE +/- 0.11, N = 3SE +/- 0.89, N = 3SE +/- 2.67, N = 3763.56529.62440.00404.33312.70289.36280.78275.35245.40243.92233.51234.21224.90581.30343.39
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 10.0Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52130260390520650Min: 759.57 / Avg: 763.56 / Max: 766.35Min: 528.7 / Avg: 529.62 / Max: 530.26Min: 432.97 / Avg: 440 / Max: 445.42Min: 391.86 / Avg: 404.33 / Max: 414.08Min: 297.71 / Avg: 312.7 / Max: 321.97Min: 286.12 / Avg: 289.36 / Max: 293.79Min: 278.05 / Avg: 280.78 / Max: 283.44Min: 273.43 / Avg: 275.35 / Max: 277.3Min: 242.68 / Avg: 245.4 / Max: 247.61Min: 241.53 / Avg: 243.91 / Max: 245.95Min: 232.12 / Avg: 233.51 / Max: 235.17Min: 230.9 / Avg: 234.2 / Max: 236.76Min: 224.67 / Avg: 224.9 / Max: 225.02Min: 579.68 / Avg: 581.3 / Max: 582.76Min: 340.62 / Avg: 343.39 / Max: 348.73

BlogBench

BlogBench is designed to replicate the load of a real-world busy file server by stressing the file-system with multiple threads of random reads, writes, and rewrites. The behavior is mimicked of that of a blog by creating blogs with content and pictures, modifying blog posts, adding comments to these blogs, and then reading the content of the blogs. All of these blogs generated are created locally with fake content and pictures. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFinal Score, More Is BetterBlogBench 1.1Test: ReadEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52400K800K1200K1600K2000KSE +/- 16960.13, N = 3SE +/- 20232.46, N = 3SE +/- 12175.94, N = 3SE +/- 7235.14, N = 3SE +/- 7805.03, N = 3SE +/- 7760.40, N = 3SE +/- 13194.50, N = 3SE +/- 7106.01, N = 3SE +/- 4883.16, N = 3SE +/- 14031.09, N = 3SE +/- 9874.65, N = 3SE +/- 10548.76, N = 3SE +/- 16576.40, N = 3SE +/- 14389.61, N = 3SE +/- 13461.20, N = 31944496202318020430371898428195961519500801636092192336817716241583350139747513729551373157177203716765161. (CC) gcc options: -O2 -pthread
OpenBenchmarking.orgFinal Score, More Is BetterBlogBench 1.1Test: ReadEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52400K800K1200K1600K2000KMin: 1916141 / Avg: 1944496.33 / Max: 1974796Min: 1992185 / Avg: 2023179.67 / Max: 2061206Min: 2022967 / Avg: 2043037.33 / Max: 2065016Min: 1890301 / Avg: 1898428 / Max: 1912860Min: 1944836 / Avg: 1959614.67 / Max: 1971357Min: 1935120 / Avg: 1950080.33 / Max: 1961140Min: 1610844 / Avg: 1636092.33 / Max: 1655363Min: 1909275 / Avg: 1923368 / Max: 1932004Min: 1764285 / Avg: 1771624.33 / Max: 1780874Min: 1565457 / Avg: 1583350 / Max: 1611018Min: 1384215 / Avg: 1397475 / Max: 1416780Min: 1360224 / Avg: 1372955 / Max: 1393890Min: 1355831 / Avg: 1373156.67 / Max: 1406298Min: 1743950 / Avg: 1772037 / Max: 1791514Min: 1650220 / Avg: 1676515.67 / Max: 16946651. (CC) gcc options: -O2 -pthread

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52120240360480600SE +/- 1.15, N = 3SE +/- 0.58, N = 3SE +/- 0.58, N = 3SE +/- 0.67, N = 3SE +/- 0.88, N = 3SE +/- 1.00, N = 3SE +/- 1.86, N = 396142186192279337328359441446504492535116222MIN: 1 / MAX: 343MIN: 1 / MAX: 507MIN: 1 / MAX: 658MIN: 1 / MAX: 672MIN: 1 / MAX: 983MIN: 1 / MAX: 1172MIN: 1 / MAX: 1145MIN: 1 / MAX: 1250MIN: 1 / MAX: 1539MIN: 1 / MAX: 1551MIN: 1 / MAX: 1715MIN: 1 / MAX: 1688MIN: 1 / MAX: 1809MIN: 1 / MAX: 413MIN: 1 / MAX: 779
OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5290180270360450Min: 277 / Avg: 279 / Max: 281Min: 327 / Avg: 328 / Max: 329Min: 358 / Avg: 359 / Max: 360Min: 440 / Avg: 441.33 / Max: 442Min: 444 / Avg: 445.67 / Max: 447Min: 502 / Avg: 504 / Max: 505Min: 488 / Avg: 491.67 / Max: 494

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQUIPs, More Is BetterHierarchical INTegration 1.0Test: FLOATEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5270M140M210M280M350MSE +/- 333820.85, N = 3SE +/- 380619.49, N = 3SE +/- 38644.36, N = 3SE +/- 206566.86, N = 3SE +/- 465752.47, N = 3SE +/- 131222.60, N = 3SE +/- 12833.10, N = 3SE +/- 40543.01, N = 3SE +/- 234513.76, N = 3SE +/- 426281.88, N = 3SE +/- 136985.64, N = 3SE +/- 320354.97, N = 3SE +/- 44137.19, N = 3SE +/- 60046.42, N = 3SE +/- 33906.80, N = 3284332324.15284710986.66283588722.34292597759.78297827555.09297816529.28293437386.11302406827.18293216264.80292412973.62293169027.15297478250.43302249577.83346733990.00346505619.641. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.orgQUIPs, More Is BetterHierarchical INTegration 1.0Test: FLOATEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260M120M180M240M300MMin: 283664869.62 / Avg: 284332324.15 / Max: 284679741.15Min: 284060087.32 / Avg: 284710986.66 / Max: 285378285.89Min: 283512887.38 / Avg: 283588722.34 / Max: 283639560.88Min: 292233343.29 / Avg: 292597759.78 / Max: 292948520.88Min: 297325728.53 / Avg: 297827555.09 / Max: 298758103.32Min: 297554997.89 / Avg: 297816529.28 / Max: 297966245.2Min: 293416576.95 / Avg: 293437386.11 / Max: 293460802.19Min: 302325902.16 / Avg: 302406827.18 / Max: 302451712.79Min: 292759126.74 / Avg: 293216264.8 / Max: 293535711.28Min: 291560846.18 / Avg: 292412973.62 / Max: 292862655.91Min: 292897063.96 / Avg: 293169027.15 / Max: 293333683.01Min: 296837585.98 / Avg: 297478250.43 / Max: 297805193.98Min: 302186816.9 / Avg: 302249577.83 / Max: 302334717.7Min: 346619184.22 / Avg: 346733990 / Max: 346821912.43Min: 346439918.8 / Avg: 346505619.64 / Max: 346553015.31. (CC) gcc options: -O3 -march=native -lm

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device AI ScoreEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F528001600240032004000220626792922311934723510351836413397352833413089324824742615

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device Training ScoreEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230060090012001500109413071384145915461488147515341348140312191124117911731159

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device Inference ScoreEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525001000150020002500111213721538166019262022204321072049212521221965206913011456

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Keyed AlgorithmsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52140280420560700SE +/- 0.59, N = 3SE +/- 0.07, N = 3SE +/- 0.21, N = 3SE +/- 0.13, N = 3SE +/- 0.56, N = 3SE +/- 0.31, N = 3SE +/- 0.27, N = 3SE +/- 0.77, N = 3SE +/- 0.13, N = 3SE +/- 0.31, N = 3SE +/- 0.63, N = 3SE +/- 0.06, N = 3SE +/- 0.40, N = 3SE +/- 0.29, N = 3SE +/- 0.14, N = 3513.65514.85513.60530.36538.09539.17531.14546.72530.80529.60529.62539.45546.25627.44628.211. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe
OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Keyed AlgorithmsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52110220330440550Min: 512.6 / Avg: 513.65 / Max: 514.65Min: 514.7 / Avg: 514.85 / Max: 514.94Min: 513.18 / Avg: 513.6 / Max: 513.82Min: 530.22 / Avg: 530.36 / Max: 530.61Min: 537.38 / Avg: 538.09 / Max: 539.19Min: 538.79 / Avg: 539.17 / Max: 539.79Min: 530.69 / Avg: 531.14 / Max: 531.61Min: 545.3 / Avg: 546.72 / Max: 547.95Min: 530.54 / Avg: 530.8 / Max: 530.93Min: 528.99 / Avg: 529.6 / Max: 530.03Min: 528.75 / Avg: 529.62 / Max: 530.84Min: 539.37 / Avg: 539.45 / Max: 539.56Min: 545.53 / Avg: 546.25 / Max: 546.93Min: 627.02 / Avg: 627.44 / Max: 627.99Min: 628.01 / Avg: 628.21 / Max: 628.491. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Barbershop - Compute: CPU-OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522004006008001000SE +/- 0.62, N = 3SE +/- 1.20, N = 3SE +/- 0.65, N = 3SE +/- 0.25, N = 3SE +/- 0.35, N = 3SE +/- 0.46, N = 3SE +/- 0.14, N = 3SE +/- 0.07, N = 3SE +/- 0.31, N = 3SE +/- 0.23, N = 3SE +/- 0.20, N = 3SE +/- 0.04, N = 3SE +/- 0.22, N = 3SE +/- 0.11, N = 3SE +/- 0.69, N = 3866.85554.64432.92411.56291.52237.65243.58223.08180.94179.18154.23154.42143.94683.81356.90
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Barbershop - Compute: CPU-OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52150300450600750Min: 865.62 / Avg: 866.85 / Max: 867.52Min: 552.98 / Avg: 554.64 / Max: 556.98Min: 432.21 / Avg: 432.92 / Max: 434.21Min: 411.08 / Avg: 411.56 / Max: 411.94Min: 291.13 / Avg: 291.52 / Max: 292.22Min: 237.18 / Avg: 237.65 / Max: 238.58Min: 243.32 / Avg: 243.58 / Max: 243.79Min: 222.95 / Avg: 223.08 / Max: 223.18Min: 180.32 / Avg: 180.94 / Max: 181.27Min: 178.74 / Avg: 179.18 / Max: 179.54Min: 153.83 / Avg: 154.23 / Max: 154.49Min: 154.37 / Avg: 154.42 / Max: 154.49Min: 143.49 / Avg: 143.94 / Max: 144.19Min: 683.61 / Avg: 683.81 / Max: 683.97Min: 356.09 / Avg: 356.9 / Max: 358.27

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 8EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F520.17780.35560.53340.71120.889SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.660.690.690.710.690.700.790.791. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 8EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F52246810Min: 0.66 / Avg: 0.66 / Max: 0.66Min: 0.69 / Avg: 0.69 / Max: 0.69Min: 0.68 / Avg: 0.69 / Max: 0.69Min: 0.7 / Avg: 0.71 / Max: 0.71Min: 0.69 / Avg: 0.69 / Max: 0.69Min: 0.7 / Avg: 0.7 / Max: 0.71Min: 0.78 / Avg: 0.79 / Max: 0.79Min: 0.79 / Avg: 0.79 / Max: 0.791. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: super-resolution-10 - Device: OpenMP CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5211002200330044005500SE +/- 3.69, N = 3SE +/- 16.48, N = 3SE +/- 24.57, N = 3SE +/- 73.62, N = 12SE +/- 71.97, N = 12SE +/- 62.74, N = 3SE +/- 35.49, N = 11SE +/- 67.46, N = 11SE +/- 27.98, N = 3SE +/- 52.31, N = 12SE +/- 95.52, N = 12SE +/- 84.35, N = 9SE +/- 97.77, N = 9SE +/- 5.63, N = 3SE +/- 88.35, N = 123941394340444522494647554864498646424506417540874229451852121. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: super-resolution-10 - Device: OpenMP CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529001800270036004500Min: 3934 / Avg: 3941 / Max: 3946.5Min: 3922 / Avg: 3943 / Max: 3975.5Min: 3996 / Avg: 4043.83 / Max: 4077.5Min: 4171 / Avg: 4521.92 / Max: 5107Min: 4494.5 / Avg: 4946.25 / Max: 5164.5Min: 4633.5 / Avg: 4754.67 / Max: 4843.5Min: 4539 / Avg: 4864.36 / Max: 4940.5Min: 4318 / Avg: 4986.05 / Max: 5084Min: 4586 / Avg: 4641.67 / Max: 4674.5Min: 4219.5 / Avg: 4506.42 / Max: 4722Min: 3788 / Avg: 4174.71 / Max: 4608.5Min: 3791.5 / Avg: 4086.5 / Max: 4435Min: 3682.5 / Avg: 4228.56 / Max: 4633Min: 4508 / Avg: 4517.67 / Max: 4527.5Min: 4665 / Avg: 5211.92 / Max: 55961. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552EPYC 7232PEPYC 7272EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250SE +/- 2.36, N = 3SE +/- 1.75, N = 3SE +/- 1.05, N = 3SE +/- 1.66, N = 3SE +/- 1.09, N = 12SE +/- 1.31, N = 12SE +/- 1.35, N = 3SE +/- 4.09, N = 12SE +/- 1.10, N = 3SE +/- 1.19, N = 3SE +/- 1.41, N = 3SE +/- 1.27, N = 12SE +/- 2.04, N = 12212.89137.59135.68128.92132.04130.09131.53160.82136.87132.62132.94119.44122.441. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552EPYC 7232PEPYC 7272EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524080120160200Min: 209.86 / Avg: 212.89 / Max: 217.53Min: 134.17 / Avg: 137.59 / Max: 139.97Min: 133.59 / Avg: 135.68 / Max: 136.92Min: 126.46 / Avg: 128.92 / Max: 132.08Min: 125.23 / Avg: 132.04 / Max: 137.09Min: 122.93 / Avg: 130.09 / Max: 137.37Min: 128.85 / Avg: 131.53 / Max: 133.13Min: 137.76 / Avg: 160.82 / Max: 185.31Min: 134.99 / Avg: 136.87 / Max: 138.8Min: 130.9 / Avg: 132.62 / Max: 134.89Min: 130.13 / Avg: 132.94 / Max: 134.51Min: 112.44 / Avg: 119.44 / Max: 126.77Min: 113.19 / Avg: 122.44 / Max: 133.841. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficienct lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 95, Compression Effort 7EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52110220330440550SE +/- 0.90, N = 3SE +/- 2.81, N = 3SE +/- 2.55, N = 3SE +/- 0.98, N = 3SE +/- 0.27, N = 3SE +/- 0.13, N = 3SE +/- 0.16, N = 3SE +/- 0.19, N = 3SE +/- 0.12, N = 3SE +/- 0.24, N = 3SE +/- 0.17, N = 3SE +/- 0.12, N = 3SE +/- 0.05, N = 3SE +/- 0.26, N = 3SE +/- 1.43, N = 3507.74361.48309.98301.52254.24253.10260.46250.41258.37254.88256.56257.01254.50416.27254.761. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 95, Compression Effort 7EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5290180270360450Min: 506.68 / Avg: 507.74 / Max: 509.53Min: 356.31 / Avg: 361.48 / Max: 365.98Min: 305.77 / Avg: 309.98 / Max: 314.59Min: 300.37 / Avg: 301.52 / Max: 303.47Min: 253.9 / Avg: 254.24 / Max: 254.77Min: 252.84 / Avg: 253.1 / Max: 253.26Min: 260.18 / Avg: 260.46 / Max: 260.72Min: 250.08 / Avg: 250.41 / Max: 250.74Min: 258.22 / Avg: 258.37 / Max: 258.61Min: 254.57 / Avg: 254.88 / Max: 255.36Min: 256.37 / Avg: 256.56 / Max: 256.9Min: 256.86 / Avg: 257 / Max: 257.24Min: 254.45 / Avg: 254.5 / Max: 254.61Min: 415.89 / Avg: 416.27 / Max: 416.76Min: 252.29 / Avg: 254.76 / Max: 257.231. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterNumpy BenchmarkEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5280160240320400SE +/- 0.53, N = 3SE +/- 0.30, N = 3SE +/- 0.76, N = 3SE +/- 1.35, N = 3SE +/- 0.30, N = 3SE +/- 0.31, N = 3SE +/- 0.48, N = 3SE +/- 0.40, N = 3SE +/- 0.35, N = 3SE +/- 0.24, N = 3SE +/- 0.41, N = 3SE +/- 0.70, N = 3SE +/- 0.66, N = 3SE +/- 0.65, N = 3SE +/- 0.20, N = 3270.83298.27296.96301.28307.47309.10303.33311.97303.45303.75303.21306.50308.39345.36348.66
OpenBenchmarking.orgScore, More Is BetterNumpy BenchmarkEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300Min: 269.87 / Avg: 270.83 / Max: 271.69Min: 297.91 / Avg: 298.27 / Max: 298.86Min: 295.48 / Avg: 296.96 / Max: 298.01Min: 298.67 / Avg: 301.28 / Max: 303.16Min: 307 / Avg: 307.47 / Max: 308.03Min: 308.57 / Avg: 309.1 / Max: 309.64Min: 302.7 / Avg: 303.33 / Max: 304.28Min: 311.24 / Avg: 311.97 / Max: 312.62Min: 302.78 / Avg: 303.45 / Max: 303.96Min: 303.29 / Avg: 303.75 / Max: 304.08Min: 302.77 / Avg: 303.21 / Max: 304.03Min: 305.69 / Avg: 306.5 / Max: 307.9Min: 307.21 / Avg: 308.39 / Max: 309.51Min: 344.12 / Avg: 345.36 / Max: 346.34Min: 348.4 / Avg: 348.66 / Max: 349.06

Tinymembench

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterTinymembench 2018-05-28Standard MemsetEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524K8K12K16K20KSE +/- 21.61, N = 3SE +/- 37.39, N = 3SE +/- 18.80, N = 3SE +/- 20.90, N = 3SE +/- 46.42, N = 3SE +/- 13.93, N = 3SE +/- 27.08, N = 3SE +/- 39.21, N = 3SE +/- 217.68, N = 3SE +/- 169.52, N = 3SE +/- 52.61, N = 3SE +/- 217.98, N = 3SE +/- 58.19, N = 9SE +/- 28.30, N = 3SE +/- 39.37, N = 314571.714776.414921.314820.314961.415097.114786.915872.016494.416261.515640.117329.816300.816357.215585.71. (CC) gcc options: -O2 -lm
OpenBenchmarking.orgMB/s, More Is BetterTinymembench 2018-05-28Standard MemsetEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523K6K9K12K15KMin: 14546.4 / Avg: 14571.7 / Max: 14614.7Min: 14704.8 / Avg: 14776.4 / Max: 14830.9Min: 14884.6 / Avg: 14921.27 / Max: 14946.8Min: 14781.5 / Avg: 14820.27 / Max: 14853.2Min: 14876.5 / Avg: 14961.37 / Max: 15036.4Min: 15074.8 / Avg: 15097.07 / Max: 15122.7Min: 14734.2 / Avg: 14786.93 / Max: 14824Min: 15794.6 / Avg: 15872.03 / Max: 15921.5Min: 16092.5 / Avg: 16494.43 / Max: 16840.3Min: 15924.5 / Avg: 16261.47 / Max: 16462.4Min: 15534.9 / Avg: 15640.1 / Max: 15694.5Min: 16900.6 / Avg: 17329.83 / Max: 17610.5Min: 16107.6 / Avg: 16300.84 / Max: 16573.2Min: 16301.7 / Avg: 16357.2 / Max: 16394.6Min: 15512.6 / Avg: 15585.7 / Max: 15647.61. (CC) gcc options: -O2 -lm

BRL-CAD

BRL-CAD 7.28.0 is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgVGR Performance Metric, More Is BetterBRL-CAD 7.30.8VGR Performance MetricEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52150K300K450K600K750K1060731553472081502190553176613877923786604120765405585417756526026600797021661206222325071. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Pabellon Barcelona - Compute: CPU-OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52140280420560700SE +/- 0.72, N = 3SE +/- 0.64, N = 3SE +/- 1.19, N = 3SE +/- 0.50, N = 3SE +/- 1.69, N = 3SE +/- 0.14, N = 3SE +/- 0.33, N = 3SE +/- 0.07, N = 3SE +/- 0.92, N = 3SE +/- 0.14, N = 3SE +/- 0.39, N = 3SE +/- 0.15, N = 3SE +/- 0.24, N = 3SE +/- 0.51, N = 3SE +/- 0.98, N = 3664.29426.59338.52317.27219.75182.76189.09167.99139.21137.83118.26119.17107.79508.26265.13
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Pabellon Barcelona - Compute: CPU-OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52120240360480600Min: 662.88 / Avg: 664.29 / Max: 665.27Min: 425.64 / Avg: 426.59 / Max: 427.81Min: 336.65 / Avg: 338.52 / Max: 340.74Min: 316.57 / Avg: 317.27 / Max: 318.25Min: 217.98 / Avg: 219.75 / Max: 223.14Min: 182.48 / Avg: 182.76 / Max: 182.95Min: 188.42 / Avg: 189.09 / Max: 189.45Min: 167.9 / Avg: 167.99 / Max: 168.12Min: 138.06 / Avg: 139.21 / Max: 141.03Min: 137.55 / Avg: 137.83 / Max: 138.03Min: 117.87 / Avg: 118.26 / Max: 119.05Min: 118.86 / Avg: 119.17 / Max: 119.35Min: 107.34 / Avg: 107.79 / Max: 108.17Min: 507.47 / Avg: 508.26 / Max: 509.22Min: 263.71 / Avg: 265.13 / Max: 267.02

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Integer + Elliptic Curve Public Key AlgorithmsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5210002000300040005000SE +/- 5.48, N = 3SE +/- 1.24, N = 3SE +/- 2.72, N = 3SE +/- 1.09, N = 3SE +/- 2.96, N = 3SE +/- 4.11, N = 3SE +/- 3.97, N = 3SE +/- 5.67, N = 3SE +/- 2.54, N = 3SE +/- 3.12, N = 3SE +/- 2.07, N = 3SE +/- 5.35, N = 3SE +/- 8.28, N = 3SE +/- 2.49, N = 3SE +/- 3.14, N = 33949.443965.033949.004086.084151.294139.984089.124207.914085.534072.764091.784152.604199.914835.024834.051. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe
OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Integer + Elliptic Curve Public Key AlgorithmsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F528001600240032004000Min: 3939.83 / Avg: 3949.44 / Max: 3958.81Min: 3963.31 / Avg: 3965.03 / Max: 3967.43Min: 3943.6 / Avg: 3949 / Max: 3952.3Min: 4084.12 / Avg: 4086.08 / Max: 4087.91Min: 4145.43 / Avg: 4151.29 / Max: 4154.96Min: 4132.23 / Avg: 4139.98 / Max: 4146.24Min: 4081.58 / Avg: 4089.12 / Max: 4095.03Min: 4199.27 / Avg: 4207.91 / Max: 4218.6Min: 4080.99 / Avg: 4085.53 / Max: 4089.76Min: 4068.43 / Avg: 4072.76 / Max: 4078.81Min: 4087.91 / Avg: 4091.78 / Max: 4095.01Min: 4142.26 / Avg: 4152.6 / Max: 4160.14Min: 4183.38 / Avg: 4199.91 / Max: 4208.9Min: 4830.71 / Avg: 4835.02 / Max: 4839.34Min: 4828.11 / Avg: 4834.05 / Max: 4838.821. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: shufflenet-v2-10 - Device: OpenMP CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KSE +/- 15.37, N = 3SE +/- 2.92, N = 3SE +/- 45.05, N = 3SE +/- 1.83, N = 3SE +/- 18.32, N = 3SE +/- 63.96, N = 12SE +/- 105.59, N = 12SE +/- 29.60, N = 3SE +/- 70.17, N = 12SE +/- 73.86, N = 12SE +/- 46.88, N = 3SE +/- 53.19, N = 12SE +/- 66.14, N = 6SE +/- 45.76, N = 3SE +/- 112.30, N = 48836872186118975881084218070869873737221668664786527982393481. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: shufflenet-v2-10 - Device: OpenMP CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KMin: 8814.5 / Avg: 8836.33 / Max: 8866Min: 8717.5 / Avg: 8720.67 / Max: 8726.5Min: 8522 / Avg: 8611.17 / Max: 8667Min: 8973 / Avg: 8974.83 / Max: 8978.5Min: 8780.5 / Avg: 8809.83 / Max: 8843.5Min: 7942 / Avg: 8421.42 / Max: 8675.5Min: 7492 / Avg: 8070 / Max: 8524Min: 8644 / Avg: 8698 / Max: 8746Min: 6932 / Avg: 7373.04 / Max: 7736Min: 6810.5 / Avg: 7220.58 / Max: 7710.5Min: 6618 / Avg: 6686.17 / Max: 6776Min: 6037.5 / Avg: 6477.5 / Max: 6804Min: 6262 / Avg: 6526.58 / Max: 6727.5Min: 9743 / Avg: 9822.67 / Max: 9901.5Min: 9037.5 / Avg: 9348.13 / Max: 9565.51. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Incompact3D

Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equation and as many as you need scalar transport equations. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterIncompact3D 2020-09-17Input: CylinderEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52100200300400500SE +/- 1.04, N = 3SE +/- 0.37, N = 3SE +/- 0.47, N = 3SE +/- 0.25, N = 3SE +/- 0.20, N = 3SE +/- 0.52, N = 3SE +/- 0.57, N = 3SE +/- 0.39, N = 3SE +/- 1.22, N = 3SE +/- 0.73, N = 3SE +/- 0.75, N = 3SE +/- 0.50, N = 3SE +/- 0.47, N = 3SE +/- 0.12, N = 3SE +/- 0.68, N = 3450.85325.84270.72262.63200.21189.66193.92187.60206.58195.34206.95224.62213.09375.70230.941. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.orgSeconds, Fewer Is BetterIncompact3D 2020-09-17Input: CylinderEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5280160240320400Min: 449.53 / Avg: 450.85 / Max: 452.91Min: 325.11 / Avg: 325.84 / Max: 326.32Min: 270.13 / Avg: 270.72 / Max: 271.65Min: 262.32 / Avg: 262.63 / Max: 263.13Min: 199.84 / Avg: 200.21 / Max: 200.55Min: 188.81 / Avg: 189.66 / Max: 190.61Min: 192.99 / Avg: 193.92 / Max: 194.97Min: 186.85 / Avg: 187.6 / Max: 188.17Min: 204.79 / Avg: 206.58 / Max: 208.92Min: 194.23 / Avg: 195.34 / Max: 196.73Min: 206.04 / Avg: 206.95 / Max: 208.44Min: 224.11 / Avg: 224.62 / Max: 225.62Min: 212.32 / Avg: 213.09 / Max: 213.94Min: 375.47 / Avg: 375.7 / Max: 375.88Min: 229.71 / Avg: 230.94 / Max: 232.081. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: bertsquad-10 - Device: OpenMP CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52110220330440550SE +/- 0.50, N = 3SE +/- 1.26, N = 3SE +/- 0.29, N = 3SE +/- 5.01, N = 3SE +/- 4.15, N = 9SE +/- 4.69, N = 12SE +/- 0.44, N = 3SE +/- 1.15, N = 3SE +/- 0.60, N = 3SE +/- 3.97, N = 3SE +/- 6.16, N = 12SE +/- 7.16, N = 12SE +/- 8.08, N = 9SE +/- 0.50, N = 3SE +/- 6.13, N = 124094034004565004594664754404313953903794803721. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: bertsquad-10 - Device: OpenMP CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5290180270360450Min: 408 / Avg: 409 / Max: 409.5Min: 401.5 / Avg: 403 / Max: 405.5Min: 399.5 / Avg: 400 / Max: 400.5Min: 450.5 / Avg: 456 / Max: 466Min: 468 / Avg: 500.22 / Max: 507Min: 425.5 / Avg: 459.13 / Max: 469.5Min: 465.5 / Avg: 466.33 / Max: 467Min: 472.5 / Avg: 474.5 / Max: 476.5Min: 439.5 / Avg: 440.33 / Max: 441.5Min: 424.5 / Avg: 430.5 / Max: 438Min: 343.5 / Avg: 395 / Max: 412.5Min: 335.5 / Avg: 389.54 / Max: 410.5Min: 341.5 / Avg: 379.17 / Max: 403.5Min: 479.5 / Avg: 480 / Max: 481Min: 329.5 / Avg: 371.92 / Max: 403.51. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Classroom - Compute: CPU-OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52120240360480600SE +/- 0.21, N = 3SE +/- 0.31, N = 3SE +/- 0.83, N = 3SE +/- 0.64, N = 3SE +/- 0.04, N = 3SE +/- 0.20, N = 3SE +/- 1.09, N = 3SE +/- 0.41, N = 3SE +/- 0.69, N = 3SE +/- 0.77, N = 3SE +/- 0.25, N = 3SE +/- 0.41, N = 3SE +/- 0.23, N = 3SE +/- 0.25, N = 3SE +/- 0.74, N = 3558.87387.23304.34287.81196.70162.29168.40148.50122.03121.21101.68103.5591.67454.49237.79
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Classroom - Compute: CPU-OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52100200300400500Min: 558.51 / Avg: 558.87 / Max: 559.22Min: 386.68 / Avg: 387.23 / Max: 387.74Min: 303.05 / Avg: 304.34 / Max: 305.88Min: 286.64 / Avg: 287.81 / Max: 288.83Min: 196.63 / Avg: 196.7 / Max: 196.77Min: 161.98 / Avg: 162.29 / Max: 162.65Min: 167.27 / Avg: 168.4 / Max: 170.57Min: 147.97 / Avg: 148.5 / Max: 149.3Min: 120.66 / Avg: 122.03 / Max: 122.86Min: 120.06 / Avg: 121.21 / Max: 122.68Min: 101.18 / Avg: 101.68 / Max: 101.94Min: 103.02 / Avg: 103.55 / Max: 104.35Min: 91.4 / Avg: 91.67 / Max: 92.12Min: 454.13 / Avg: 454.49 / Max: 454.97Min: 236.43 / Avg: 237.79 / Max: 238.96

Monte Carlo Simulations of Ionised Nebulae

Mocassin is the Monte Carlo Simulations of Ionised Nebulae. MOCASSIN is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMonte Carlo Simulations of Ionised Nebulae 2019-03-24Input: Dust 2D tau100.0EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300SE +/- 0.33, N = 3SE +/- 0.33, N = 3SE +/- 0.33, N = 32762502392282172142152152232192392422282211971. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.orgSeconds, Fewer Is BetterMonte Carlo Simulations of Ionised Nebulae 2019-03-24Input: Dust 2D tau100.0EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250Min: 249 / Avg: 249.67 / Max: 250Min: 216 / Avg: 216.67 / Max: 217Min: 242 / Avg: 242.33 / Max: 2431. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient and is a new scientific benchmark from Sandia National Lans focused for super-computer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGFLOP/s, More Is BetterHigh Performance Conjugate Gradient 3.1EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5248121620SE +/- 0.00542, N = 3SE +/- 0.00448, N = 3SE +/- 0.00263, N = 3SE +/- 0.07483, N = 3SE +/- 0.01008, N = 3SE +/- 0.00324, N = 3SE +/- 0.00249, N = 3SE +/- 0.00275, N = 3SE +/- 0.00667, N = 3SE +/- 0.00479, N = 3SE +/- 0.00378, N = 3SE +/- 0.03096, N = 3SE +/- 0.00350, N = 3SE +/- 0.06375, N = 3SE +/- 0.05299, N = 118.646619.097439.0515415.6329015.5629015.2920017.9677015.2794016.6494017.6850017.3863017.3075017.3763012.956007.089251. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi
OpenBenchmarking.orgGFLOP/s, More Is BetterHigh Performance Conjugate Gradient 3.1EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52510152025Min: 8.64 / Avg: 8.65 / Max: 8.66Min: 9.09 / Avg: 9.1 / Max: 9.1Min: 9.05 / Avg: 9.05 / Max: 9.05Min: 15.51 / Avg: 15.63 / Max: 15.77Min: 15.55 / Avg: 15.56 / Max: 15.58Min: 15.29 / Avg: 15.29 / Max: 15.3Min: 17.96 / Avg: 17.97 / Max: 17.97Min: 15.28 / Avg: 15.28 / Max: 15.28Min: 16.64 / Avg: 16.65 / Max: 16.66Min: 17.68 / Avg: 17.69 / Max: 17.69Min: 17.38 / Avg: 17.39 / Max: 17.39Min: 17.25 / Avg: 17.31 / Max: 17.34Min: 17.37 / Avg: 17.38 / Max: 17.38Min: 12.86 / Avg: 12.96 / Max: 13.08Min: 6.64 / Avg: 7.09 / Max: 7.271. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C2670EPYC 7232PEPYC 7272EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250SE +/- 2.54, N = 3SE +/- 1.59, N = 3SE +/- 1.23, N = 3SE +/- 1.93, N = 5SE +/- 1.85, N = 3SE +/- 1.90, N = 3SE +/- 1.89, N = 3SE +/- 2.10, N = 4SE +/- 1.33, N = 3SE +/- 1.88, N = 4SE +/- 2.27, N = 3SE +/- 1.05, N = 3SE +/- 2.33, N = 9211.49174.20183.22173.90175.85165.66170.22176.53172.37177.62171.84158.79151.461. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C2670EPYC 7232PEPYC 7272EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524080120160200Min: 206.5 / Avg: 211.49 / Max: 214.78Min: 171.02 / Avg: 174.2 / Max: 175.94Min: 181.83 / Avg: 183.22 / Max: 185.67Min: 168.11 / Avg: 173.89 / Max: 179.12Min: 172.42 / Avg: 175.85 / Max: 178.76Min: 162.62 / Avg: 165.66 / Max: 169.15Min: 168.07 / Avg: 170.22 / Max: 173.98Min: 172.54 / Avg: 176.53 / Max: 181.84Min: 170.3 / Avg: 172.37 / Max: 174.84Min: 172.48 / Avg: 177.62 / Max: 181.39Min: 169.11 / Avg: 171.84 / Max: 176.34Min: 157.45 / Avg: 158.79 / Max: 160.86Min: 142 / Avg: 151.46 / Max: 162.31. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: mobilenet-v1-1.0EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521.30012.60023.90035.20046.5005SE +/- 0.007, N = 11SE +/- 0.015, N = 4SE +/- 0.060, N = 3SE +/- 0.005, N = 15SE +/- 0.012, N = 3SE +/- 0.010, N = 3SE +/- 0.008, N = 3SE +/- 0.005, N = 15SE +/- 0.004, N = 15SE +/- 0.024, N = 3SE +/- 0.006, N = 3SE +/- 0.015, N = 3SE +/- 0.009, N = 3SE +/- 0.002, N = 14SE +/- 0.401, N = 33.9175.3224.0813.8563.2132.9273.0032.8363.0223.0563.0733.1873.1273.6255.778MIN: 3.79 / MAX: 19.86MIN: 5.22 / MAX: 13.18MIN: 3.8 / MAX: 20.13MIN: 3.74 / MAX: 20.71MIN: 3.15 / MAX: 4.81MIN: 2.87 / MAX: 5.27MIN: 2.95 / MAX: 3.25MIN: 2.77 / MAX: 4.94MIN: 2.96 / MAX: 3.4MIN: 2.98 / MAX: 3.31MIN: 3.03 / MAX: 3.22MIN: 3.13 / MAX: 3.28MIN: 3.07 / MAX: 3.45MIN: 3.57 / MAX: 17.34MIN: 4.65 / MAX: 7.241. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: mobilenet-v1-1.0EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 3.86 / Avg: 3.92 / Max: 3.95Min: 5.3 / Avg: 5.32 / Max: 5.36Min: 3.98 / Avg: 4.08 / Max: 4.18Min: 3.82 / Avg: 3.86 / Max: 3.88Min: 3.19 / Avg: 3.21 / Max: 3.23Min: 2.92 / Avg: 2.93 / Max: 2.95Min: 2.99 / Avg: 3 / Max: 3.02Min: 2.81 / Avg: 2.84 / Max: 2.88Min: 3.01 / Avg: 3.02 / Max: 3.05Min: 3.02 / Avg: 3.06 / Max: 3.1Min: 3.06 / Avg: 3.07 / Max: 3.09Min: 3.17 / Avg: 3.19 / Max: 3.22Min: 3.11 / Avg: 3.13 / Max: 3.14Min: 3.61 / Avg: 3.62 / Max: 3.64Min: 5 / Avg: 5.78 / Max: 6.341. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: MobileNetV2_224EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.030, N = 11SE +/- 0.155, N = 4SE +/- 0.033, N = 3SE +/- 0.046, N = 15SE +/- 0.026, N = 3SE +/- 0.028, N = 3SE +/- 0.022, N = 3SE +/- 0.009, N = 15SE +/- 0.008, N = 15SE +/- 0.016, N = 3SE +/- 0.022, N = 3SE +/- 0.016, N = 3SE +/- 0.020, N = 3SE +/- 0.017, N = 14SE +/- 0.391, N = 34.9445.9305.8555.7725.2704.8204.8214.7134.8524.8254.9205.0674.9194.5788.171MIN: 4.74 / MAX: 7.04MIN: 5.53 / MAX: 20.6MIN: 5.58 / MAX: 23.16MIN: 5.46 / MAX: 20.47MIN: 5.12 / MAX: 6.52MIN: 4.67 / MAX: 6.92MIN: 4.72 / MAX: 5.31MIN: 4.51 / MAX: 6.85MIN: 4.69 / MAX: 5.21MIN: 4.71 / MAX: 5.16MIN: 4.78 / MAX: 5.1MIN: 4.95 / MAX: 5.21MIN: 4.81 / MAX: 5.23MIN: 4.43 / MAX: 10.46MIN: 7.28 / MAX: 12.171. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: MobileNetV2_224EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 4.81 / Avg: 4.94 / Max: 5.12Min: 5.66 / Avg: 5.93 / Max: 6.37Min: 5.81 / Avg: 5.86 / Max: 5.92Min: 5.57 / Avg: 5.77 / Max: 6.19Min: 5.23 / Avg: 5.27 / Max: 5.32Min: 4.78 / Avg: 4.82 / Max: 4.87Min: 4.8 / Avg: 4.82 / Max: 4.87Min: 4.6 / Avg: 4.71 / Max: 4.75Min: 4.78 / Avg: 4.85 / Max: 4.89Min: 4.79 / Avg: 4.83 / Max: 4.84Min: 4.88 / Avg: 4.92 / Max: 4.96Min: 5.05 / Avg: 5.07 / Max: 5.1Min: 4.88 / Avg: 4.92 / Max: 4.95Min: 4.51 / Avg: 4.58 / Max: 4.67Min: 7.72 / Avg: 8.17 / Max: 8.951. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: resnet-v2-50EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52918273645SE +/- 0.12, N = 11SE +/- 0.11, N = 4SE +/- 0.12, N = 3SE +/- 0.09, N = 15SE +/- 0.97, N = 3SE +/- 0.12, N = 3SE +/- 0.05, N = 3SE +/- 0.05, N = 15SE +/- 0.05, N = 15SE +/- 0.03, N = 3SE +/- 0.10, N = 3SE +/- 0.07, N = 3SE +/- 0.07, N = 3SE +/- 0.01, N = 14SE +/- 0.19, N = 331.4438.1033.5931.5429.0225.0625.2724.3025.4424.7825.2226.9225.4627.4533.50MIN: 30.44 / MAX: 49.82MIN: 36.85 / MAX: 55.75MIN: 32.54 / MAX: 48.77MIN: 30.05 / MAX: 48.95MIN: 27.23 / MAX: 33.12MIN: 24.51 / MAX: 27.82MIN: 24.96 / MAX: 28.42MIN: 23.72 / MAX: 27.28MIN: 24.73 / MAX: 28.58MIN: 24.46 / MAX: 25.37MIN: 24.73 / MAX: 27.24MIN: 26.49 / MAX: 27.67MIN: 25.03 / MAX: 27.67MIN: 27.01 / MAX: 58.07MIN: 32.85 / MAX: 46.411. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: resnet-v2-50EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52816243240Min: 30.9 / Avg: 31.44 / Max: 32.5Min: 37.9 / Avg: 38.1 / Max: 38.42Min: 33.39 / Avg: 33.59 / Max: 33.8Min: 30.84 / Avg: 31.54 / Max: 32Min: 27.93 / Avg: 29.02 / Max: 30.96Min: 24.81 / Avg: 25.06 / Max: 25.18Min: 25.2 / Avg: 25.27 / Max: 25.36Min: 23.9 / Avg: 24.3 / Max: 24.57Min: 25.04 / Avg: 25.44 / Max: 25.64Min: 24.73 / Avg: 24.78 / Max: 24.82Min: 25.03 / Avg: 25.22 / Max: 25.35Min: 26.83 / Avg: 26.92 / Max: 27.06Min: 25.38 / Avg: 25.45 / Max: 25.59Min: 27.35 / Avg: 27.44 / Max: 27.53Min: 33.28 / Avg: 33.5 / Max: 33.891. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: SqueezeNetV1.0EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215SE +/- 0.071, N = 11SE +/- 0.115, N = 4SE +/- 0.027, N = 3SE +/- 0.061, N = 15SE +/- 0.114, N = 3SE +/- 0.048, N = 3SE +/- 0.016, N = 3SE +/- 0.252, N = 15SE +/- 0.271, N = 15SE +/- 0.003, N = 3SE +/- 0.018, N = 3SE +/- 0.014, N = 3SE +/- 0.002, N = 3SE +/- 0.062, N = 14SE +/- 0.064, N = 39.6509.7689.6109.5048.9787.4937.3387.6258.1427.4797.5428.3757.6849.35710.209MIN: 9.16 / MAX: 25.96MIN: 9.3 / MAX: 12.44MIN: 9.28 / MAX: 25.71MIN: 9.01 / MAX: 25.56MIN: 8.7 / MAX: 10.42MIN: 7.29 / MAX: 9.78MIN: 7.23 / MAX: 7.87MIN: 7.08 / MAX: 14.04MIN: 7.43 / MAX: 13.26MIN: 7.34 / MAX: 8.04MIN: 7.38 / MAX: 8.01MIN: 8.25 / MAX: 8.55MIN: 7.56 / MAX: 8.11MIN: 8.98 / MAX: 23.95MIN: 10.06 / MAX: 11.231. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: SqueezeNetV1.0EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 9.3 / Avg: 9.65 / Max: 10.14Min: 9.49 / Avg: 9.77 / Max: 10.05Min: 9.58 / Avg: 9.61 / Max: 9.66Min: 9.26 / Avg: 9.5 / Max: 10.27Min: 8.84 / Avg: 8.98 / Max: 9.2Min: 7.44 / Avg: 7.49 / Max: 7.59Min: 7.32 / Avg: 7.34 / Max: 7.37Min: 7.17 / Avg: 7.63 / Max: 10.86Min: 7.63 / Avg: 8.14 / Max: 11.51Min: 7.47 / Avg: 7.48 / Max: 7.48Min: 7.51 / Avg: 7.54 / Max: 7.56Min: 8.35 / Avg: 8.37 / Max: 8.4Min: 7.68 / Avg: 7.68 / Max: 7.69Min: 9.16 / Avg: 9.36 / Max: 10Min: 10.14 / Avg: 10.21 / Max: 10.341. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Points2ImageEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F526K12K18K24K30KSE +/- 130.62, N = 3SE +/- 251.88, N = 3SE +/- 162.08, N = 13SE +/- 278.67, N = 4SE +/- 228.79, N = 5SE +/- 303.68, N = 3SE +/- 233.41, N = 5SE +/- 300.22, N = 3SE +/- 149.83, N = 11SE +/- 208.12, N = 5SE +/- 203.19, N = 6SE +/- 189.69, N = 6SE +/- 248.77, N = 3SE +/- 296.05, N = 4SE +/- 205.94, N = 1323230.1923807.6722574.8022339.8521653.9121272.0420967.4821442.0020634.2020036.1919915.0019062.9218909.1027135.6324205.541. (CXX) g++ options: -O3 -std=c++11 -fopenmp
OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Points2ImageEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525K10K15K20K25KMin: 22989.39 / Avg: 23230.19 / Max: 23438.32Min: 23305.32 / Avg: 23807.67 / Max: 24091.45Min: 20715.3 / Avg: 22574.8 / Max: 23030.24Min: 21516.1 / Avg: 22339.85 / Max: 22741.4Min: 20776.27 / Avg: 21653.91 / Max: 22118.67Min: 20665.78 / Avg: 21272.04 / Max: 21606.83Min: 20100.44 / Avg: 20967.48 / Max: 21472.74Min: 20857.09 / Avg: 21442 / Max: 21851.95Min: 19629.33 / Avg: 20634.2 / Max: 21282.63Min: 19216.66 / Avg: 20036.19 / Max: 20355.61Min: 18959.86 / Avg: 19915 / Max: 20292.89Min: 18118.9 / Avg: 19062.92 / Max: 19320.87Min: 18483.38 / Avg: 18909.1 / Max: 19344.98Min: 26266.6 / Avg: 27135.63 / Max: 27598.01Min: 22569.97 / Avg: 24205.54 / Max: 25313.471. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and Googlenet model and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 200EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5240K80K120K160K200KSE +/- 139.37, N = 3SE +/- 272.29, N = 3SE +/- 76.96, N = 3SE +/- 37.21, N = 3SE +/- 92.46, N = 3SE +/- 187.42, N = 3SE +/- 106.86, N = 3SE +/- 120.93, N = 3SE +/- 1549.72, N = 7SE +/- 114.49, N = 3SE +/- 146.69, N = 3SE +/- 182.06, N = 3SE +/- 204.77, N = 3SE +/- 101.93, N = 3SE +/- 1333.26, N = 31515591548711557631525391546931687281725081625441718851738121276991289651237711405581513271. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 200EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230K60K90K120K150KMin: 151334 / Avg: 151559 / Max: 151814Min: 154328 / Avg: 154871.33 / Max: 155175Min: 155611 / Avg: 155763 / Max: 155860Min: 152490 / Avg: 152539 / Max: 152612Min: 154545 / Avg: 154693 / Max: 154863Min: 168512 / Avg: 168727.67 / Max: 169101Min: 172364 / Avg: 172508.33 / Max: 172717Min: 162302 / Avg: 162543.67 / Max: 162673Min: 162598 / Avg: 171885 / Max: 173740Min: 173597 / Avg: 173811.67 / Max: 173988Min: 127449 / Avg: 127699.33 / Max: 127957Min: 128622 / Avg: 128965.33 / Max: 129242Min: 123410 / Avg: 123771 / Max: 124119Min: 140379 / Avg: 140558 / Max: 140732Min: 149779 / Avg: 151326.67 / Max: 1539811. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230M60M90M120M150MSE +/- 57670.67, N = 3SE +/- 91373.33, N = 3SE +/- 215295.19, N = 3SE +/- 230890.74, N = 3SE +/- 395352.82, N = 3SE +/- 375595.74, N = 3SE +/- 939330.26, N = 3SE +/- 394553.31, N = 3SE +/- 373542.05, N = 3SE +/- 355704.27, N = 3SE +/- 1202983.30, N = 3SE +/- 688813.01, N = 3SE +/- 113439.47, N = 3SE +/- 83002.90, N = 3SE +/- 491333.28, N = 321027795317918134158384442112219625817757808468474422699827162621053045781039327851262900771228494611310418862467575346342665
OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220M40M60M80M100MMin: 20912491 / Avg: 21027794.67 / Max: 21087999Min: 31610049 / Avg: 31791813.33 / Max: 31899083Min: 41298645 / Avg: 41583844.33 / Max: 42005822Min: 41683109 / Avg: 42112219 / Max: 42474524Min: 61945670 / Avg: 62581775.33 / Max: 63306570Min: 77335639 / Avg: 78084684 / Max: 78508351Min: 72577275 / Avg: 74422699 / Max: 75650094Min: 82311763 / Avg: 82716261.67 / Max: 83505284Min: 104841596 / Avg: 105304577.67 / Max: 106043845Min: 103309473 / Avg: 103932784.67 / Max: 104541406Min: 124900585 / Avg: 126290076.67 / Max: 128685849Min: 122057435 / Avg: 122849461.33 / Max: 124221646Min: 130816472 / Avg: 131041886 / Max: 131176885Min: 24510321 / Avg: 24675752.67 / Max: 24770415Min: 45648904 / Avg: 46342664.67 / Max: 47292245

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficienct lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 75, Compression Effort 7EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300SE +/- 0.87, N = 3SE +/- 0.65, N = 3SE +/- 1.08, N = 3SE +/- 1.62, N = 3SE +/- 0.09, N = 3SE +/- 0.05, N = 3SE +/- 0.09, N = 3SE +/- 0.04, N = 3SE +/- 0.13, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.47, N = 3SE +/- 0.13, N = 3SE +/- 0.90, N = 3SE +/- 0.46, N = 3277.67199.54173.72164.81138.04137.44142.20135.97140.18138.08139.56139.36138.09227.13140.611. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 75, Compression Effort 7EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250Min: 275.95 / Avg: 277.67 / Max: 278.78Min: 198.32 / Avg: 199.54 / Max: 200.54Min: 172.62 / Avg: 173.72 / Max: 175.87Min: 161.8 / Avg: 164.81 / Max: 167.37Min: 137.91 / Avg: 138.04 / Max: 138.21Min: 137.39 / Avg: 137.44 / Max: 137.53Min: 142.07 / Avg: 142.2 / Max: 142.38Min: 135.9 / Avg: 135.97 / Max: 136.04Min: 140 / Avg: 140.18 / Max: 140.43Min: 138.03 / Avg: 138.08 / Max: 138.13Min: 139.49 / Avg: 139.56 / Max: 139.6Min: 138.86 / Avg: 139.36 / Max: 140.29Min: 137.89 / Avg: 138.09 / Max: 138.34Min: 225.67 / Avg: 227.13 / Max: 228.77Min: 140.07 / Avg: 140.61 / Max: 141.521. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: EmilyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52110220330440550487.37333.25275.81260.98194.53170.48172.91161.64154.73153.98153.46154.44146.84381.37226.80

YafaRay

YafaRay is an open-source physically based montecarlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterYafaRay 3.4.1Total Time For Sample SceneEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524080120160200SE +/- 0.97, N = 3SE +/- 0.80, N = 3SE +/- 1.06, N = 3SE +/- 1.37, N = 3SE +/- 1.05, N = 3SE +/- 0.38, N = 3SE +/- 0.21, N = 3SE +/- 0.43, N = 3SE +/- 0.43, N = 14SE +/- 0.60, N = 15SE +/- 0.74, N = 3SE +/- 0.72, N = 15SE +/- 0.66, N = 5SE +/- 0.41, N = 3SE +/- 1.73, N = 3186.18131.03102.53110.9480.5070.7085.5664.9264.8867.6666.4867.2563.27172.35127.451. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterYafaRay 3.4.1Total Time For Sample SceneEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150Min: 184.7 / Avg: 186.18 / Max: 188.01Min: 129.53 / Avg: 131.03 / Max: 132.24Min: 100.59 / Avg: 102.53 / Max: 104.24Min: 108.85 / Avg: 110.94 / Max: 113.51Min: 79.1 / Avg: 80.5 / Max: 82.54Min: 69.95 / Avg: 70.7 / Max: 71.18Min: 85.16 / Avg: 85.56 / Max: 85.86Min: 64.15 / Avg: 64.92 / Max: 65.62Min: 61.7 / Avg: 64.88 / Max: 67.09Min: 64.7 / Avg: 67.66 / Max: 71.97Min: 65.27 / Avg: 66.48 / Max: 67.84Min: 61.8 / Avg: 67.25 / Max: 71.05Min: 61.44 / Avg: 63.27 / Max: 64.99Min: 171.56 / Avg: 172.35 / Max: 172.91Min: 124 / Avg: 127.45 / Max: 129.281. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: fcn-resnet101-11 - Device: OpenMP CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100SE +/- 0.00, N = 3SE +/- 0.17, N = 3SE +/- 0.29, N = 3SE +/- 0.67, N = 3SE +/- 0.29, N = 3SE +/- 0.53, N = 12SE +/- 0.67, N = 3SE +/- 0.58, N = 3SE +/- 1.04, N = 3SE +/- 0.17, N = 3SE +/- 0.33, N = 3SE +/- 0.44, N = 3SE +/- 0.78, N = 5SE +/- 0.67, N = 3SE +/- 0.60, N = 35970776972647969798079747659691. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: fcn-resnet101-11 - Device: OpenMP CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521530456075Min: 58.5 / Avg: 58.5 / Max: 58.5Min: 69.5 / Avg: 69.83 / Max: 70Min: 76.5 / Avg: 77 / Max: 77.5Min: 68 / Avg: 69.33 / Max: 70Min: 71 / Avg: 71.5 / Max: 72Min: 61 / Avg: 64 / Max: 67.5Min: 78.5 / Avg: 79.17 / Max: 80.5Min: 68 / Avg: 69 / Max: 70Min: 77 / Avg: 79 / Max: 80.5Min: 80 / Avg: 80.17 / Max: 80.5Min: 78.5 / Avg: 79.17 / Max: 79.5Min: 73.5 / Avg: 74.33 / Max: 75Min: 74 / Avg: 76.2 / Max: 78.5Min: 58 / Avg: 59.33 / Max: 60Min: 68 / Avg: 69.17 / Max: 701. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LeukocyteEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150SE +/- 0.41, N = 3SE +/- 0.53, N = 3SE +/- 0.76, N = 10SE +/- 1.14, N = 3SE +/- 0.25, N = 3SE +/- 0.26, N = 3SE +/- 0.42, N = 3SE +/- 0.28, N = 3SE +/- 0.82, N = 15SE +/- 0.47, N = 15SE +/- 0.47, N = 3SE +/- 0.61, N = 14SE +/- 0.51, N = 15SE +/- 0.11, N = 3SE +/- 0.52, N = 3148.20107.9799.7298.1959.6557.9859.4457.6847.4346.5347.3748.7247.17130.9890.521. (CXX) g++ options: -O2 -lOpenCL
OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LeukocyteEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150Min: 147.55 / Avg: 148.2 / Max: 148.96Min: 106.91 / Avg: 107.97 / Max: 108.52Min: 96.18 / Avg: 99.72 / Max: 103.91Min: 96.7 / Avg: 98.19 / Max: 100.44Min: 59.34 / Avg: 59.65 / Max: 60.14Min: 57.58 / Avg: 57.98 / Max: 58.48Min: 58.61 / Avg: 59.44 / Max: 59.95Min: 57.12 / Avg: 57.68 / Max: 57.99Min: 44.96 / Avg: 47.42 / Max: 54.58Min: 44.25 / Avg: 46.53 / Max: 49.64Min: 46.63 / Avg: 47.37 / Max: 48.23Min: 46.16 / Avg: 48.72 / Max: 52.95Min: 44.36 / Avg: 47.17 / Max: 50.76Min: 130.76 / Avg: 130.98 / Max: 131.14Min: 89.7 / Avg: 90.52 / Max: 91.491. (CXX) g++ options: -O2 -lOpenCL

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: San Miguel - Renderer: Path TracerEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521.20382.40763.61144.81526.019SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.761.221.591.652.453.032.973.234.174.214.994.925.350.951.48MAX: 0.77MIN: 1.21MIN: 1.58 / MAX: 1.6MIN: 1.63MIN: 2.43 / MAX: 2.46MIN: 2.99 / MAX: 3.04MIN: 2.93 / MAX: 2.99MIN: 3.19 / MAX: 3.24MIN: 4.12 / MAX: 4.18MIN: 4.1 / MAX: 4.24MIN: 4.95 / MAX: 5.03MIN: 4.88 / MAX: 4.95MIN: 5.26 / MAX: 5.38MIN: 1.47 / MAX: 1.49
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: San Miguel - Renderer: Path TracerEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 0.76 / Avg: 0.76 / Max: 0.76Min: 1.22 / Avg: 1.22 / Max: 1.22Min: 1.59 / Avg: 1.59 / Max: 1.59Min: 1.65 / Avg: 1.65 / Max: 1.65Min: 2.44 / Avg: 2.45 / Max: 2.45Min: 3.03 / Avg: 3.03 / Max: 3.03Min: 2.97 / Avg: 2.97 / Max: 2.97Min: 3.23 / Avg: 3.23 / Max: 3.23Min: 4.17 / Avg: 4.17 / Max: 4.17Min: 4.2 / Avg: 4.21 / Max: 4.22Min: 4.98 / Avg: 4.99 / Max: 5Min: 4.9 / Avg: 4.92 / Max: 4.93Min: 5.35 / Avg: 5.35 / Max: 5.35Min: 0.95 / Avg: 0.95 / Max: 0.95Min: 1.48 / Avg: 1.48 / Max: 1.48

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGPAW 20.1Input: Carbon NanotubeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300SE +/- 0.67, N = 3SE +/- 0.37, N = 3SE +/- 0.09, N = 3SE +/- 0.19, N = 3SE +/- 0.46, N = 3SE +/- 0.20, N = 3SE +/- 0.02, N = 3SE +/- 0.27, N = 3SE +/- 0.11, N = 3SE +/- 0.26, N = 3SE +/- 0.05, N = 3SE +/- 0.15, N = 3SE +/- 0.19, N = 3SE +/- 0.20, N = 3SE +/- 0.87, N = 3271.16205.89177.91139.15116.41105.6793.24103.3086.7479.9878.3981.0678.01200.41165.411. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi
OpenBenchmarking.orgSeconds, Fewer Is BetterGPAW 20.1Input: Carbon NanotubeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250Min: 270.41 / Avg: 271.16 / Max: 272.49Min: 205.17 / Avg: 205.89 / Max: 206.37Min: 177.8 / Avg: 177.9 / Max: 178.09Min: 138.85 / Avg: 139.15 / Max: 139.52Min: 115.76 / Avg: 116.41 / Max: 117.3Min: 105.35 / Avg: 105.67 / Max: 106.04Min: 93.21 / Avg: 93.24 / Max: 93.28Min: 102.87 / Avg: 103.3 / Max: 103.79Min: 86.54 / Avg: 86.74 / Max: 86.91Min: 79.56 / Avg: 79.98 / Max: 80.47Min: 78.29 / Avg: 78.39 / Max: 78.44Min: 80.78 / Avg: 81.06 / Max: 81.29Min: 77.68 / Avg: 78 / Max: 78.33Min: 200.15 / Avg: 200.41 / Max: 200.81Min: 163.66 / Avg: 165.41 / Max: 166.31. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: yolov4 - Device: OpenMP CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300SE +/- 0.60, N = 3SE +/- 0.29, N = 3SE +/- 0.33, N = 3SE +/- 0.29, N = 3SE +/- 0.17, N = 3SE +/- 0.88, N = 3SE +/- 0.17, N = 3SE +/- 0.67, N = 3SE +/- 0.50, N = 3SE +/- 1.01, N = 3SE +/- 0.93, N = 3SE +/- 1.30, N = 3SE +/- 0.88, N = 3SE +/- 1.00, N = 32432592672772802772712852642632432402462622481. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: yolov4 - Device: OpenMP CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250Min: 242.5 / Avg: 243.33 / Max: 244.5Min: 258.5 / Avg: 259 / Max: 259.5Min: 266.5 / Avg: 267.17 / Max: 267.5Min: 276.5 / Avg: 277 / Max: 277.5Min: 279.5 / Avg: 279.67 / Max: 280Min: 275.5 / Avg: 276.83 / Max: 278.5Min: 270.5 / Avg: 270.67 / Max: 271Min: 284.5 / Avg: 285.17 / Max: 286.5Min: 263 / Avg: 263.5 / Max: 264.5Min: 261 / Avg: 262.83 / Max: 264.5Min: 242 / Avg: 243.17 / Max: 245Min: 238 / Avg: 240.33 / Max: 242.5Min: 244.5 / Avg: 245.83 / Max: 247.5Min: 260 / Avg: 262 / Max: 2631. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LavaMDEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300SE +/- 0.05, N = 3SE +/- 0.51, N = 3SE +/- 0.20, N = 3SE +/- 0.16, N = 3SE +/- 0.13, N = 3SE +/- 0.11, N = 3SE +/- 0.12, N = 3SE +/- 0.08, N = 3SE +/- 0.06, N = 3SE +/- 0.08, N = 3SE +/- 0.19, N = 3SE +/- 0.02, N = 3SE +/- 0.06, N = 3SE +/- 0.03, N = 3SE +/- 0.09, N = 3296.69200.35157.17147.38103.0085.0187.5679.1164.2163.4754.6554.7150.34243.84125.531. (CXX) g++ options: -O2 -lOpenCL
OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LavaMDEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250Min: 296.63 / Avg: 296.69 / Max: 296.79Min: 199.81 / Avg: 200.35 / Max: 201.37Min: 156.97 / Avg: 157.17 / Max: 157.56Min: 147.07 / Avg: 147.38 / Max: 147.56Min: 102.75 / Avg: 103 / Max: 103.2Min: 84.8 / Avg: 85 / Max: 85.18Min: 87.32 / Avg: 87.56 / Max: 87.75Min: 79.03 / Avg: 79.11 / Max: 79.27Min: 64.09 / Avg: 64.21 / Max: 64.3Min: 63.31 / Avg: 63.47 / Max: 63.56Min: 54.27 / Avg: 54.65 / Max: 54.89Min: 54.68 / Avg: 54.71 / Max: 54.75Min: 50.25 / Avg: 50.34 / Max: 50.45Min: 243.81 / Avg: 243.84 / Max: 243.89Min: 125.35 / Avg: 125.53 / Max: 125.641. (CXX) g++ options: -O2 -lOpenCL

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: VGG19 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52714212835SE +/- 0.06, N = 3SE +/- 0.12, N = 3SE +/- 0.06, N = 3SE +/- 0.06, N = 3SE +/- 0.12, N = 3SE +/- 0.21, N = 3SE +/- 0.04, N = 3SE +/- 0.14, N = 3SE +/- 0.02, N = 3SE +/- 0.12, N = 3SE +/- 0.26, N = 3SE +/- 0.05, N = 3SE +/- 0.10, N = 3SE +/- 0.06, N = 3SE +/- 0.12, N = 39.3013.6716.6117.3923.3426.5125.3527.6727.5029.3731.8426.4929.4912.0620.00
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: VGG19 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52714212835Min: 9.23 / Avg: 9.3 / Max: 9.41Min: 13.44 / Avg: 13.67 / Max: 13.83Min: 16.52 / Avg: 16.61 / Max: 16.72Min: 17.26 / Avg: 17.39 / Max: 17.46Min: 23.11 / Avg: 23.34 / Max: 23.52Min: 26.11 / Avg: 26.51 / Max: 26.82Min: 25.31 / Avg: 25.35 / Max: 25.42Min: 27.53 / Avg: 27.67 / Max: 27.96Min: 27.47 / Avg: 27.5 / Max: 27.53Min: 29.23 / Avg: 29.37 / Max: 29.61Min: 31.48 / Avg: 31.84 / Max: 32.35Min: 26.42 / Avg: 26.49 / Max: 26.59Min: 29.33 / Avg: 29.49 / Max: 29.66Min: 11.98 / Avg: 12.06 / Max: 12.18Min: 19.79 / Avg: 20 / Max: 20.19

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterC-Blosc 2.0 Beta 5Compressor: blosclzEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KSE +/- 13.17, N = 3SE +/- 6.54, N = 3SE +/- 21.69, N = 3SE +/- 5.79, N = 3SE +/- 4.79, N = 3SE +/- 6.00, N = 3SE +/- 8.53, N = 3SE +/- 0.44, N = 3SE +/- 22.60, N = 3SE +/- 9.12, N = 3SE +/- 18.47, N = 3SE +/- 20.77, N = 3SE +/- 17.64, N = 3SE +/- 15.51, N = 3SE +/- 27.57, N = 39386.89442.99400.710476.810178.39808.010253.39878.09155.09301.08328.28372.78355.911206.810910.11. (CXX) g++ options: -rdynamic
OpenBenchmarking.orgMB/s, More Is BetterC-Blosc 2.0 Beta 5Compressor: blosclzEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KMin: 9360.9 / Avg: 9386.83 / Max: 9403.8Min: 9434.1 / Avg: 9442.93 / Max: 9455.7Min: 9370.8 / Avg: 9400.73 / Max: 9442.9Min: 10468.4 / Avg: 10476.8 / Max: 10487.9Min: 10170 / Avg: 10178.27 / Max: 10186.6Min: 9796 / Avg: 9808 / Max: 9814.1Min: 10242.8 / Avg: 10253.3 / Max: 10270.2Min: 9877.3 / Avg: 9878 / Max: 9878.8Min: 9132 / Avg: 9155 / Max: 9200.2Min: 9282.8 / Avg: 9301 / Max: 9311.2Min: 8294.6 / Avg: 8328.2 / Max: 8358.3Min: 8331.9 / Avg: 8372.7 / Max: 8399.9Min: 8327.6 / Avg: 8355.93 / Max: 8388.3Min: 11190.3 / Avg: 11206.8 / Max: 11237.8Min: 10869.2 / Avg: 10910.13 / Max: 10962.61. (CXX) g++ options: -rdynamic

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU CacheEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521224364860SE +/- 0.49, N = 13SE +/- 0.77, N = 12SE +/- 0.37, N = 15SE +/- 0.68, N = 15SE +/- 0.69, N = 12SE +/- 1.28, N = 12SE +/- 1.06, N = 15SE +/- 0.67, N = 3SE +/- 0.96, N = 12SE +/- 0.62, N = 3SE +/- 0.36, N = 3SE +/- 0.81, N = 15SE +/- 1.24, N = 15SE +/- 0.72, N = 15SE +/- 1.23, N = 1232.5325.1730.1232.7241.1145.2351.3949.1949.7851.1344.3643.7048.8624.8546.391. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU CacheEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521020304050Min: 29.83 / Avg: 32.53 / Max: 36.16Min: 20.46 / Avg: 25.17 / Max: 29.16Min: 28.09 / Avg: 30.12 / Max: 33.69Min: 28.49 / Avg: 32.72 / Max: 38.56Min: 35.35 / Avg: 41.11 / Max: 44.78Min: 35.87 / Avg: 45.23 / Max: 51.94Min: 45.08 / Avg: 51.39 / Max: 61.54Min: 48.23 / Avg: 49.19 / Max: 50.47Min: 45.03 / Avg: 49.78 / Max: 56.32Min: 50.43 / Avg: 51.13 / Max: 52.36Min: 43.65 / Avg: 44.36 / Max: 44.75Min: 38.22 / Avg: 43.7 / Max: 48.67Min: 40.51 / Avg: 48.86 / Max: 54.53Min: 20.06 / Avg: 24.85 / Max: 30.63Min: 38.66 / Avg: 46.39 / Max: 52.291. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet MobileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250K100K150K200K250KSE +/- 42.10, N = 3SE +/- 219.52, N = 3SE +/- 183.23, N = 3SE +/- 652.17, N = 3SE +/- 414.30, N = 3SE +/- 797.17, N = 15SE +/- 115.78, N = 3SE +/- 115.93, N = 3SE +/- 275.99, N = 3SE +/- 34.51, N = 3SE +/- 833.99, N = 6SE +/- 718.53, N = 15SE +/- 898.32, N = 15SE +/- 54.60, N = 3SE +/- 244.85, N = 3212253.0177540.0151235.0144445.0105046.085462.884263.579313.692153.286140.688093.795873.791245.8177928.0126743.0
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet MobileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5240K80K120K160K200KMin: 212188 / Avg: 212253.33 / Max: 212332Min: 177299 / Avg: 177539.67 / Max: 177978Min: 150900 / Avg: 151235.33 / Max: 151531Min: 143270 / Avg: 144444.67 / Max: 145523Min: 104457 / Avg: 105045.67 / Max: 105845Min: 82675.3 / Avg: 85462.75 / Max: 91289.3Min: 84034.6 / Avg: 84263.47 / Max: 84408.4Min: 79189.4 / Avg: 79313.63 / Max: 79545.3Min: 91707.3 / Avg: 92153.17 / Max: 92657.9Min: 86071.8 / Avg: 86140.57 / Max: 86180.1Min: 85520.1 / Avg: 88093.72 / Max: 91636.7Min: 91993.6 / Avg: 95873.69 / Max: 100262Min: 86292.5 / Avg: 91245.77 / Max: 101420Min: 177860 / Avg: 177928 / Max: 178036Min: 126253 / Avg: 126742.67 / Max: 126993

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgvsamples, More Is BetterChaos Group V-RAY 5Mode: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5211K22K33K44K55KSE +/- 50.21, N = 3SE +/- 140.41, N = 4SE +/- 120.35, N = 3SE +/- 298.29, N = 3SE +/- 144.16, N = 3SE +/- 305.20, N = 6SE +/- 284.86, N = 11SE +/- 148.55, N = 3SE +/- 392.90, N = 8SE +/- 469.86, N = 3SE +/- 614.94, N = 3SE +/- 117.08, N = 4SE +/- 165.84, N = 37864121981524628448284013063237843392154465845292493641004419521
OpenBenchmarking.orgvsamples, More Is BetterChaos Group V-RAY 5Mode: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529K18K27K36K45KMin: 7769 / Avg: 7863.67 / Max: 7940Min: 11788 / Avg: 12198.25 / Max: 12425Min: 15016 / Avg: 15246.33 / Max: 15422Min: 28095 / Avg: 28448 / Max: 29041Min: 28141 / Avg: 28400.67 / Max: 28639Min: 29652 / Avg: 30632 / Max: 31469Min: 36221 / Avg: 37842.73 / Max: 38879Min: 39060 / Avg: 39215 / Max: 39512Min: 43124 / Avg: 44658.38 / Max: 46027Min: 44695 / Avg: 45292 / Max: 46219Min: 48563 / Avg: 49364.33 / Max: 50573Min: 9719 / Avg: 10044.25 / Max: 10235Min: 19190 / Avg: 19521 / Max: 19705

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ExhaustiveEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300SE +/- 0.12, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.07, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.16, N = 3262.14176.04137.78129.2689.0773.3875.2767.9554.3454.0445.8445.8041.80214.44109.171. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ExhaustiveEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250Min: 262.01 / Avg: 262.14 / Max: 262.39Min: 176.03 / Avg: 176.04 / Max: 176.05Min: 137.74 / Avg: 137.78 / Max: 137.82Min: 129.21 / Avg: 129.26 / Max: 129.36Min: 89.06 / Avg: 89.07 / Max: 89.08Min: 73.37 / Avg: 73.38 / Max: 73.4Min: 75.26 / Avg: 75.27 / Max: 75.27Min: 67.93 / Avg: 67.95 / Max: 67.96Min: 54.32 / Avg: 54.34 / Max: 54.35Min: 53.96 / Avg: 54.04 / Max: 54.19Min: 45.83 / Avg: 45.84 / Max: 45.85Min: 45.79 / Avg: 45.8 / Max: 45.82Min: 41.78 / Avg: 41.8 / Max: 41.81Min: 214.41 / Avg: 214.44 / Max: 214.47Min: 108.86 / Avg: 109.17 / Max: 109.351. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Fishy Cat - Compute: CPU-OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300SE +/- 0.21, N = 3SE +/- 0.81, N = 3SE +/- 0.24, N = 3SE +/- 0.08, N = 3SE +/- 0.46, N = 3SE +/- 0.04, N = 3SE +/- 0.30, N = 3SE +/- 0.12, N = 3SE +/- 0.22, N = 3SE +/- 0.14, N = 3SE +/- 0.03, N = 3SE +/- 0.11, N = 3SE +/- 0.14, N = 3SE +/- 0.04, N = 3SE +/- 0.20, N = 3267.24177.65139.46129.8191.8178.0178.9472.2762.7261.9355.8255.4051.40207.12108.40
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Fishy Cat - Compute: CPU-OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250Min: 267.03 / Avg: 267.24 / Max: 267.66Min: 176.74 / Avg: 177.65 / Max: 179.26Min: 139.16 / Avg: 139.46 / Max: 139.93Min: 129.68 / Avg: 129.81 / Max: 129.95Min: 90.99 / Avg: 91.81 / Max: 92.58Min: 77.95 / Avg: 78.01 / Max: 78.08Min: 78.44 / Avg: 78.94 / Max: 79.47Min: 72.03 / Avg: 72.27 / Max: 72.43Min: 62.45 / Avg: 62.72 / Max: 63.15Min: 61.67 / Avg: 61.93 / Max: 62.15Min: 55.77 / Avg: 55.82 / Max: 55.85Min: 55.2 / Avg: 55.4 / Max: 55.58Min: 51.15 / Avg: 51.4 / Max: 51.63Min: 207.06 / Avg: 207.12 / Max: 207.19Min: 108.03 / Avg: 108.4 / Max: 108.72

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: GarlicoinEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KSE +/- 24.63, N = 15SE +/- 22.88, N = 15SE +/- 46.80, N = 15SE +/- 6.92, N = 3SE +/- 8.85, N = 3SE +/- 179.55, N = 12SE +/- 135.96, N = 15SE +/- 157.39, N = 15SE +/- 49.30, N = 3SE +/- 113.98, N = 14SE +/- 72.85, N = 13SE +/- 215.48, N = 14SE +/- 106.13, N = 14SE +/- 8.24, N = 3SE +/- 3.02, N = 31473.312192.822991.732965.294490.866104.275725.796242.217811.997961.649507.959581.0610194.001769.463522.441. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: GarlicoinEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KMin: 1428.33 / Avg: 1473.31 / Max: 1776.69Min: 2113.61 / Avg: 2192.82 / Max: 2419.18Min: 2871.77 / Avg: 2991.73 / Max: 3335.77Min: 2956.9 / Avg: 2965.29 / Max: 2979.02Min: 4477.91 / Avg: 4490.86 / Max: 4507.79Min: 5581 / Avg: 6104.27 / Max: 7538.77Min: 5501.12 / Avg: 5725.79 / Max: 7108.86Min: 5942.72 / Avg: 6242.21 / Max: 7690.85Min: 7760.8 / Avg: 7811.99 / Max: 7910.57Min: 7778.26 / Avg: 7961.64 / Max: 9440.93Min: 9387.02 / Avg: 9507.95 / Max: 10330Min: 9331.1 / Avg: 9581.06 / Max: 12370Min: 10000 / Avg: 10194.29 / Max: 11570Min: 1758.86 / Avg: 1769.46 / Max: 1785.69Min: 3516.78 / Avg: 3522.44 / Max: 3527.081. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: DeepcoinEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5210K20K30K40K50KSE +/- 0.23, N = 3SE +/- 78.72, N = 3SE +/- 407.34, N = 15SE +/- 14.53, N = 3SE +/- 94.04, N = 3SE +/- 1900.37, N = 15SE +/- 38.44, N = 3SE +/- 991.42, N = 15SE +/- 964.94, N = 15SE +/- 1521.20, N = 15SE +/- 1295.28, N = 15SE +/- 2134.76, N = 15SE +/- 1097.21, N = 15SE +/- 73.09, N = 6SE +/- 333.56, N = 156207.899361.0612793.0012697.0019363.0026546.0023013.0026493.0033589.0034363.0045051.0046113.0048794.007644.1015338.001. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: DeepcoinEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F528K16K24K32K40KMin: 6207.47 / Avg: 6207.89 / Max: 6208.24Min: 9269.8 / Avg: 9361.06 / Max: 9517.79Min: 12330 / Avg: 12793.33 / Max: 18480Min: 12670 / Avg: 12696.67 / Max: 12720Min: 19250 / Avg: 19363.33 / Max: 19550Min: 23580 / Avg: 26546 / Max: 44860Min: 22940 / Avg: 23013.33 / Max: 23070Min: 25310 / Avg: 26492.67 / Max: 40340Min: 32160 / Avg: 33588.67 / Max: 46960Min: 32480 / Avg: 34363.33 / Max: 55610Min: 42590 / Avg: 45050.67 / Max: 63100Min: 42730 / Avg: 46112.67 / Max: 75810Min: 44980 / Avg: 48794 / Max: 63320Min: 7535.18 / Avg: 7644.1 / Max: 7997.88Min: 14690 / Avg: 15338 / Max: 199901. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KSE +/- 28.32, N = 5SE +/- 37.27, N = 3SE +/- 51.98, N = 3SE +/- 35.59, N = 3SE +/- 36.49, N = 3SE +/- 31.02, N = 3SE +/- 51.28, N = 3SE +/- 71.90, N = 3SE +/- 28.28, N = 5SE +/- 69.66, N = 3SE +/- 42.91, N = 3SE +/- 25.22, N = 14SE +/- 34.36, N = 4SE +/- 12.37, N = 15SE +/- 33.96, N = 310057.810070.210110.910132.110256.310184.510188.610292.510156.510229.610249.310219.510249.510561.910068.51. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KMin: 10025.8 / Avg: 10057.76 / Max: 10170.9Min: 10010.8 / Avg: 10070.17 / Max: 10138.9Min: 10010.1 / Avg: 10110.87 / Max: 10183.4Min: 10093.6 / Avg: 10132.1 / Max: 10203.2Min: 10188.2 / Avg: 10256.27 / Max: 10313.1Min: 10134.8 / Avg: 10184.5 / Max: 10241.5Min: 10089.5 / Avg: 10188.6 / Max: 10261Min: 10179.5 / Avg: 10292.47 / Max: 10426Min: 10088.8 / Avg: 10156.48 / Max: 10226.8Min: 10097.1 / Avg: 10229.57 / Max: 10333.2Min: 10163.9 / Avg: 10249.27 / Max: 10299.5Min: 10124.1 / Avg: 10219.52 / Max: 10411.5Min: 10164.3 / Avg: 10249.5 / Max: 10330.1Min: 10508.8 / Avg: 10561.89 / Max: 10609.7Min: 10031 / Avg: 10068.5 / Max: 10136.31. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521224364860SE +/- 0.44, N = 5SE +/- 0.61, N = 3SE +/- 0.13, N = 3SE +/- 0.42, N = 3SE +/- 0.34, N = 3SE +/- 0.39, N = 3SE +/- 0.02, N = 3SE +/- 0.28, N = 3SE +/- 0.48, N = 5SE +/- 0.30, N = 3SE +/- 0.42, N = 3SE +/- 0.30, N = 14SE +/- 0.51, N = 4SE +/- 0.36, N = 15SE +/- 0.42, N = 343.1742.9442.2543.9845.0944.6843.6945.1344.4943.8644.8044.5445.0351.9552.021. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521020304050Min: 42.2 / Avg: 43.17 / Max: 44.21Min: 42.28 / Avg: 42.94 / Max: 44.16Min: 42.11 / Avg: 42.25 / Max: 42.52Min: 43.51 / Avg: 43.98 / Max: 44.82Min: 44.42 / Avg: 45.09 / Max: 45.46Min: 44.24 / Avg: 44.68 / Max: 45.45Min: 43.65 / Avg: 43.69 / Max: 43.71Min: 44.77 / Avg: 45.13 / Max: 45.69Min: 43.45 / Avg: 44.49 / Max: 45.67Min: 43.26 / Avg: 43.86 / Max: 44.2Min: 44.34 / Avg: 44.8 / Max: 45.63Min: 42.74 / Avg: 44.54 / Max: 46.27Min: 44.35 / Avg: 45.03 / Max: 46.55Min: 49.68 / Avg: 51.95 / Max: 54.03Min: 51.53 / Avg: 52.02 / Max: 52.861. (CC) gcc options: -O3

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterApache Cassandra 3.11.4Test: WritesEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250K100K150K200K250KSE +/- 244.34, N = 3SE +/- 388.59, N = 3SE +/- 936.51, N = 3SE +/- 1793.27, N = 3SE +/- 2693.44, N = 3SE +/- 2790.65, N = 3SE +/- 2221.55, N = 3SE +/- 2207.46, N = 15SE +/- 1126.54, N = 3SE +/- 1690.31, N = 3SE +/- 2776.27, N = 3SE +/- 1230.73, N = 3SE +/- 910.59, N = 3SE +/- 224.51, N = 3SE +/- 1803.14, N = 3517349315513644213526020070523373021136123652423087121557621938022756421519658118144123
OpenBenchmarking.orgOp/s, More Is BetterApache Cassandra 3.11.4Test: WritesEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5240K80K120K160K200KMin: 51248 / Avg: 51734.33 / Max: 52019Min: 92380 / Avg: 93154.67 / Max: 93596Min: 134572 / Avg: 136442 / Max: 137469Min: 131775 / Avg: 135259.67 / Max: 137737Min: 197409 / Avg: 200705 / Max: 206043Min: 228668 / Avg: 233730 / Max: 238297Min: 208632 / Avg: 211361 / Max: 215762Min: 227162 / Avg: 236524.33 / Max: 255532Min: 229505 / Avg: 230871.33 / Max: 233106Min: 212562 / Avg: 215576 / Max: 218409Min: 214454 / Avg: 219380 / Max: 224062Min: 225103 / Avg: 227564.33 / Max: 228816Min: 213537 / Avg: 215196.33 / Max: 216676Min: 57842 / Avg: 58118.33 / Max: 58563Min: 141196 / Avg: 144123.33 / Max: 147411

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: VGG16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52918273645SE +/- 0.15, N = 3SE +/- 0.22, N = 3SE +/- 0.05, N = 3SE +/- 0.10, N = 3SE +/- 0.12, N = 3SE +/- 0.19, N = 3SE +/- 0.11, N = 3SE +/- 0.37, N = 3SE +/- 0.16, N = 3SE +/- 0.45, N = 3SE +/- 0.46, N = 4SE +/- 0.28, N = 3SE +/- 0.16, N = 3SE +/- 0.07, N = 3SE +/- 0.09, N = 311.4116.7920.3421.2427.8132.0530.3633.3332.6835.2537.3831.8934.9914.6724.29
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: VGG16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52816243240Min: 11.13 / Avg: 11.41 / Max: 11.63Min: 16.5 / Avg: 16.79 / Max: 17.22Min: 20.24 / Avg: 20.34 / Max: 20.43Min: 21.13 / Avg: 21.24 / Max: 21.44Min: 27.57 / Avg: 27.81 / Max: 27.96Min: 31.69 / Avg: 32.05 / Max: 32.3Min: 30.21 / Avg: 30.36 / Max: 30.58Min: 32.65 / Avg: 33.33 / Max: 33.94Min: 32.38 / Avg: 32.68 / Max: 32.94Min: 34.41 / Avg: 35.25 / Max: 35.93Min: 36.33 / Avg: 37.38 / Max: 38.57Min: 31.34 / Avg: 31.89 / Max: 32.28Min: 34.74 / Avg: 34.99 / Max: 35.29Min: 14.6 / Avg: 14.67 / Max: 14.82Min: 24.18 / Avg: 24.29 / Max: 24.46

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Earthgecko SkylineEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150SE +/- 1.44, N = 3SE +/- 0.64, N = 3SE +/- 1.07, N = 3SE +/- 0.45, N = 3SE +/- 0.58, N = 3SE +/- 0.60, N = 3SE +/- 0.65, N = 3SE +/- 0.80, N = 3SE +/- 1.10, N = 3SE +/- 0.18, N = 3SE +/- 0.45, N = 3SE +/- 0.05, N = 3SE +/- 0.27, N = 3SE +/- 0.92, N = 7SE +/- 0.59, N = 3141.25101.5996.8588.3285.7685.7386.7083.8386.0685.0384.7984.8383.4098.5275.23
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Earthgecko SkylineEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150Min: 138.42 / Avg: 141.25 / Max: 143.09Min: 100.3 / Avg: 101.59 / Max: 102.33Min: 94.95 / Avg: 96.85 / Max: 98.65Min: 87.53 / Avg: 88.32 / Max: 89.11Min: 84.6 / Avg: 85.76 / Max: 86.43Min: 85.04 / Avg: 85.73 / Max: 86.93Min: 85.78 / Avg: 86.7 / Max: 87.96Min: 82.24 / Avg: 83.83 / Max: 84.72Min: 83.86 / Avg: 86.06 / Max: 87.25Min: 84.8 / Avg: 85.03 / Max: 85.39Min: 84.12 / Avg: 84.79 / Max: 85.64Min: 84.74 / Avg: 84.83 / Max: 84.9Min: 83 / Avg: 83.4 / Max: 83.9Min: 95.01 / Avg: 98.52 / Max: 100.63Min: 74.45 / Avg: 75.23 / Max: 76.39

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Chimera 1080p 10-bitEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524080120160200SE +/- 0.13, N = 3SE +/- 0.17, N = 3SE +/- 0.38, N = 3SE +/- 0.09, N = 3SE +/- 0.20, N = 3SE +/- 0.08, N = 3SE +/- 0.13, N = 3SE +/- 0.16, N = 3SE +/- 0.12, N = 3SE +/- 0.28, N = 3SE +/- 0.56, N = 3SE +/- 0.31, N = 3SE +/- 0.55, N = 3SE +/- 0.13, N = 3SE +/- 0.17, N = 394.36107.25114.49116.68135.69152.15146.29152.78178.70179.72190.19185.31190.04109.55129.77MIN: 60.14 / MAX: 225.56MIN: 67.93 / MAX: 243.83MIN: 72.02 / MAX: 259.87MIN: 74.3 / MAX: 261.93MIN: 85.88 / MAX: 272.31MIN: 97.46 / MAX: 272.72MIN: 94.75 / MAX: 256.37MIN: 98.52 / MAX: 278.09MIN: 121.07 / MAX: 277.42MIN: 121.84 / MAX: 276.81MIN: 126.93 / MAX: 307.7MIN: 125.96 / MAX: 294.64MIN: 126.76 / MAX: 308.03MIN: 70.77 / MAX: 249.84MIN: 84.35 / MAX: 268.951. (CC) gcc options: -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Chimera 1080p 10-bitEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150Min: 94.18 / Avg: 94.36 / Max: 94.61Min: 106.92 / Avg: 107.25 / Max: 107.46Min: 113.88 / Avg: 114.49 / Max: 115.2Min: 116.54 / Avg: 116.68 / Max: 116.86Min: 135.36 / Avg: 135.69 / Max: 136.06Min: 152.02 / Avg: 152.15 / Max: 152.28Min: 146.03 / Avg: 146.29 / Max: 146.46Min: 152.47 / Avg: 152.78 / Max: 153.01Min: 178.47 / Avg: 178.7 / Max: 178.84Min: 179.44 / Avg: 179.72 / Max: 180.27Min: 189.1 / Avg: 190.19 / Max: 190.97Min: 184.71 / Avg: 185.31 / Max: 185.73Min: 188.97 / Avg: 190.04 / Max: 190.8Min: 109.41 / Avg: 109.55 / Max: 109.8Min: 129.43 / Avg: 129.77 / Max: 129.971. (CC) gcc options: -pthread

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on Black-Sholes-Merton Process with Analytic European Option engine, QMC (Sobol) Monte-Carlo method (Equity Option Example), Bonds Fixed-rate bond with flat forward curve, and Repo Securities repurchase agreement. FinanceBench was originally written by the Cavazos Lab at University of Delaware. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterFinanceBench 2016-07-25Benchmark: Bonds OpenMPEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220K40K60K80K100KSE +/- 200.17, N = 3SE +/- 203.38, N = 3SE +/- 402.31, N = 3SE +/- 461.99, N = 3SE +/- 397.99, N = 3SE +/- 294.03, N = 3SE +/- 287.97, N = 3SE +/- 26.41, N = 3SE +/- 75.52, N = 3SE +/- 380.43, N = 3SE +/- 966.88, N = 3SE +/- 232.73, N = 3SE +/- 152.13, N = 3SE +/- 185.79, N = 3SE +/- 395.61, N = 392426.4992387.2592896.6289863.2688634.3188487.5790157.2886946.6190310.9090409.5990672.4588553.7887489.3876027.9876410.111. (CXX) g++ options: -O3 -march=native -fopenmp
OpenBenchmarking.orgms, Fewer Is BetterFinanceBench 2016-07-25Benchmark: Bonds OpenMPEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5216K32K48K64K80KMin: 92207.57 / Avg: 92426.49 / Max: 92826.23Min: 92154.74 / Avg: 92387.25 / Max: 92792.54Min: 92266.52 / Avg: 92896.62 / Max: 93645.02Min: 89400.2 / Avg: 89863.26 / Max: 90787.24Min: 88093.03 / Avg: 88634.31 / Max: 89410.36Min: 88176.03 / Avg: 88487.57 / Max: 89075.27Min: 89581.38 / Avg: 90157.28 / Max: 90451.57Min: 86896.07 / Avg: 86946.61 / Max: 86985.19Min: 90159.87 / Avg: 90310.9 / Max: 90387.48Min: 89648.99 / Avg: 90409.59 / Max: 90807.3Min: 89694.92 / Avg: 90672.45 / Max: 92606.17Min: 88306.55 / Avg: 88553.78 / Max: 89018.95Min: 87189.84 / Avg: 87489.38 / Max: 87685.39Min: 75656.88 / Avg: 76027.98 / Max: 76230.05Min: 75619.2 / Avg: 76410.11 / Max: 76824.911. (CXX) g++ options: -O3 -march=native -fopenmp

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 7EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F523691215SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 39.0810.029.7010.099.8810.069.7810.691. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 7EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F523691215Min: 9.06 / Avg: 9.08 / Max: 9.09Min: 10.01 / Avg: 10.02 / Max: 10.03Min: 9.68 / Avg: 9.7 / Max: 9.71Min: 10.08 / Avg: 10.09 / Max: 10.09Min: 9.87 / Avg: 9.88 / Max: 9.89Min: 10.05 / Avg: 10.06 / Max: 10.09Min: 9.77 / Avg: 9.78 / Max: 9.79Min: 10.68 / Avg: 10.69 / Max: 10.71. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

Timed MrBayes Analysis

This test performs a bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MrBayes Analysis 3.2.7Primate Phylogeny AnalysisEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150SE +/- 0.07, N = 3SE +/- 0.05, N = 3SE +/- 0.16, N = 3SE +/- 0.21, N = 3SE +/- 0.29, N = 3SE +/- 0.21, N = 3SE +/- 0.34, N = 3SE +/- 0.07, N = 3SE +/- 1.17, N = 4SE +/- 0.17, N = 3SE +/- 1.02, N = 3SE +/- 0.25, N = 3SE +/- 1.35, N = 3SE +/- 0.18, N = 3SE +/- 0.24, N = 388.6787.6089.2085.9386.0094.2696.8186.72104.52102.45116.15115.27108.2272.8972.431. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MrBayes Analysis 3.2.7Primate Phylogeny AnalysisEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100Min: 88.6 / Avg: 88.67 / Max: 88.81Min: 87.51 / Avg: 87.6 / Max: 87.67Min: 88.88 / Avg: 89.2 / Max: 89.39Min: 85.7 / Avg: 85.93 / Max: 86.35Min: 85.64 / Avg: 86 / Max: 86.58Min: 93.87 / Avg: 94.26 / Max: 94.59Min: 96.44 / Avg: 96.81 / Max: 97.49Min: 86.61 / Avg: 86.72 / Max: 86.86Min: 103.18 / Avg: 104.52 / Max: 108.01Min: 102.2 / Avg: 102.45 / Max: 102.77Min: 115.11 / Avg: 116.15 / Max: 118.18Min: 114.96 / Avg: 115.27 / Max: 115.77Min: 106.69 / Avg: 108.22 / Max: 110.92Min: 72.56 / Avg: 72.89 / Max: 73.16Min: 72.17 / Avg: 72.43 / Max: 72.91. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterApache CouchDB 3.1.1Bulk Size: 100 - Inserts: 1000 - Rounds: 24EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100SE +/- 0.41, N = 3SE +/- 1.02, N = 4SE +/- 0.89, N = 3SE +/- 0.51, N = 3SE +/- 0.52, N = 3SE +/- 1.14, N = 3SE +/- 0.64, N = 3SE +/- 0.45, N = 3SE +/- 0.35, N = 3SE +/- 0.03, N = 3SE +/- 0.34, N = 3SE +/- 0.55, N = 3SE +/- 0.42, N = 3SE +/- 0.48, N = 3SE +/- 0.45, N = 3107.0192.3587.9489.4386.6886.2391.3286.5792.5991.5197.3097.9597.2393.7983.281. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD
OpenBenchmarking.orgSeconds, Fewer Is BetterApache CouchDB 3.1.1Bulk Size: 100 - Inserts: 1000 - Rounds: 24EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100Min: 106.25 / Avg: 107.01 / Max: 107.66Min: 90.47 / Avg: 92.35 / Max: 94.9Min: 86.34 / Avg: 87.94 / Max: 89.41Min: 88.41 / Avg: 89.43 / Max: 89.96Min: 85.65 / Avg: 86.68 / Max: 87.25Min: 84.36 / Avg: 86.23 / Max: 88.29Min: 90.24 / Avg: 91.31 / Max: 92.46Min: 86.01 / Avg: 86.57 / Max: 87.47Min: 91.9 / Avg: 92.59 / Max: 93.05Min: 91.47 / Avg: 91.51 / Max: 91.55Min: 96.62 / Avg: 97.29 / Max: 97.72Min: 96.92 / Avg: 97.95 / Max: 98.78Min: 96.6 / Avg: 97.23 / Max: 98.03Min: 92.83 / Avg: 93.79 / Max: 94.31Min: 82.43 / Avg: 83.28 / Max: 83.971. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgKsamples, More Is BetterChaos Group V-RAY 4.10.07Mode: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5214K28K42K56K70KSE +/- 44.44, N = 3SE +/- 80.88, N = 3SE +/- 122.98, N = 3SE +/- 114.11, N = 3SE +/- 87.62, N = 3SE +/- 264.62, N = 15SE +/- 397.54, N = 5SE +/- 391.02, N = 6SE +/- 328.54, N = 3SE +/- 179.29, N = 3SE +/- 467.21, N = 3SE +/- 487.40, N = 3SE +/- 500.73, N = 3SE +/- 31.21, N = 3SE +/- 71.92, N = 3111361755022022234863318437085362233890453570544456229562754666391420427132
OpenBenchmarking.orgKsamples, More Is BetterChaos Group V-RAY 4.10.07Mode: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5212K24K36K48K60KMin: 11048 / Avg: 11136.33 / Max: 11189Min: 17397 / Avg: 17550 / Max: 17672Min: 21875 / Avg: 22021.67 / Max: 22266Min: 23305 / Avg: 23486.33 / Max: 23697Min: 33018 / Avg: 33183.67 / Max: 33316Min: 35376 / Avg: 37084.67 / Max: 38925Min: 35373 / Avg: 36223.2 / Max: 37595Min: 37112 / Avg: 38903.83 / Max: 39603Min: 53078 / Avg: 53569.67 / Max: 54193Min: 54235 / Avg: 54445.33 / Max: 54802Min: 61378 / Avg: 62294.67 / Max: 62910Min: 62183 / Avg: 62754.33 / Max: 63724Min: 65746 / Avg: 66639.33 / Max: 67478Min: 14151 / Avg: 14203.67 / Max: 14259Min: 27010 / Avg: 27132 / Max: 27259

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMontage Astronomical Image Mosaic Engine 6.0Mosaic of M17, K band, 1.5 deg x 1.5 degEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100SE +/- 0.16, N = 3SE +/- 0.04, N = 3SE +/- 0.12, N = 3SE +/- 0.09, N = 3SE +/- 0.09, N = 3SE +/- 0.17, N = 3SE +/- 0.08, N = 3SE +/- 0.31, N = 3SE +/- 0.28, N = 3SE +/- 0.19, N = 3SE +/- 0.12, N = 3SE +/- 0.11, N = 3SE +/- 0.18, N = 3SE +/- 0.05, N = 3SE +/- 0.14, N = 398.4198.3198.2495.4593.9293.8695.4892.7795.5395.3995.5694.7693.1780.8580.941. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2
OpenBenchmarking.orgSeconds, Fewer Is BetterMontage Astronomical Image Mosaic Engine 6.0Mosaic of M17, K band, 1.5 deg x 1.5 degEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100Min: 98.12 / Avg: 98.41 / Max: 98.67Min: 98.25 / Avg: 98.31 / Max: 98.39Min: 98.08 / Avg: 98.24 / Max: 98.48Min: 95.27 / Avg: 95.45 / Max: 95.58Min: 93.8 / Avg: 93.92 / Max: 94.11Min: 93.53 / Avg: 93.86 / Max: 94.1Min: 95.35 / Avg: 95.48 / Max: 95.62Min: 92.29 / Avg: 92.77 / Max: 93.36Min: 95.11 / Avg: 95.53 / Max: 96.06Min: 95.12 / Avg: 95.39 / Max: 95.75Min: 95.43 / Avg: 95.56 / Max: 95.8Min: 94.65 / Avg: 94.76 / Max: 94.97Min: 92.84 / Avg: 93.17 / Max: 93.47Min: 80.76 / Avg: 80.85 / Max: 80.91Min: 80.67 / Avg: 80.94 / Max: 81.131. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.DEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529001800270036004500SE +/- 0.03, N = 3SE +/- 7.84, N = 3SE +/- 1.25, N = 3SE +/- 3.28, N = 3SE +/- 1.11, N = 3SE +/- 4.32, N = 3SE +/- 1.11, N = 3SE +/- 5.63, N = 3SE +/- 6.05, N = 3SE +/- 8.46, N = 3SE +/- 9.36, N = 3SE +/- 9.06, N = 3SE +/- 12.08, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 3579.18860.011156.261190.481813.292380.272333.542389.763292.843322.994019.093989.794269.26705.211410.591. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.DEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F527001400210028003500Min: 579.13 / Avg: 579.18 / Max: 579.23Min: 844.33 / Avg: 860.01 / Max: 868.26Min: 1153.76 / Avg: 1156.26 / Max: 1157.7Min: 1183.95 / Avg: 1190.48 / Max: 1194.25Min: 1811.43 / Avg: 1813.29 / Max: 1815.27Min: 2371.63 / Avg: 2380.27 / Max: 2384.67Min: 2331.32 / Avg: 2333.54 / Max: 2334.74Min: 2378.57 / Avg: 2389.76 / Max: 2396.35Min: 3282.03 / Avg: 3292.84 / Max: 3302.95Min: 3306.1 / Avg: 3322.99 / Max: 3332.1Min: 4001.46 / Avg: 4019.09 / Max: 4033.33Min: 3972.06 / Avg: 3989.79 / Max: 4001.86Min: 4252.23 / Avg: 4269.26 / Max: 4292.63Min: 705.16 / Avg: 705.21 / Max: 705.27Min: 1410.49 / Avg: 1410.59 / Max: 1410.661. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529001800270036004500SE +/- 1.97, N = 3SE +/- 3.17, N = 3SE +/- 2.25, N = 3SE +/- 0.38, N = 3SE +/- 1.08, N = 3SE +/- 2.18, N = 3SE +/- 5.23, N = 3SE +/- 2.43, N = 3SE +/- 1.92, N = 3SE +/- 2.03, N = 3SE +/- 11.35, N = 3SE +/- 3.86, N = 3SE +/- 18.38, N = 9SE +/- 0.89, N = 3SE +/- 6.65, N = 34347.663057.082834.092410.231674.662743.712230.352721.871352.621210.042230.182296.532263.323345.672013.24MIN: 4308.62MIN: 3044.36MIN: 2799.34MIN: 2395.31MIN: 1660.31MIN: 2727.14MIN: 2212.66MIN: 2708.11MIN: 1333.53MIN: 1190.59MIN: 2194.78MIN: 2270.01MIN: 2218.68MIN: 3327MIN: 1992.691. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F528001600240032004000Min: 4343.96 / Avg: 4347.66 / Max: 4350.69Min: 3051.79 / Avg: 3057.08 / Max: 3062.75Min: 2829.71 / Avg: 2834.09 / Max: 2837.17Min: 2409.5 / Avg: 2410.23 / Max: 2410.77Min: 1672.87 / Avg: 1674.66 / Max: 1676.6Min: 2741.09 / Avg: 2743.71 / Max: 2748.05Min: 2221.2 / Avg: 2230.35 / Max: 2239.31Min: 2718.51 / Avg: 2721.87 / Max: 2726.6Min: 1350.2 / Avg: 1352.62 / Max: 1356.42Min: 1206.71 / Avg: 1210.04 / Max: 1213.71Min: 2212.51 / Avg: 2230.18 / Max: 2251.36Min: 2288.81 / Avg: 2296.53 / Max: 2300.61Min: 2231.56 / Avg: 2263.32 / Max: 2407.42Min: 3344.19 / Avg: 3345.67 / Max: 3347.26Min: 2003.33 / Avg: 2013.24 / Max: 2025.881. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52300K600K900K1200K1500KSE +/- 1045.78, N = 3SE +/- 1165.03, N = 3SE +/- 1258.02, N = 3SE +/- 842.53, N = 3SE +/- 607.09, N = 3SE +/- 1998.87, N = 3SE +/- 2350.83, N = 3SE +/- 577.19, N = 3SE +/- 2507.53, N = 3SE +/- 472.72, N = 3SE +/- 2041.67, N = 3SE +/- 1936.89, N = 3SE +/- 2430.14, N = 3SE +/- 2229.96, N = 3SE +/- 771.81, N = 31041996.81143360.51172021.11188387.21248284.11262530.41197778.91162690.91208480.81215056.11209698.01212918.21217027.71155871.81247037.6
OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52200K400K600K800K1000KMin: 1040104.9 / Avg: 1041996.77 / Max: 1043715.1Min: 1141134.6 / Avg: 1143360.47 / Max: 1145070.1Min: 1169540.4 / Avg: 1172021.13 / Max: 1173625.2Min: 1187003 / Avg: 1188387.17 / Max: 1189911.5Min: 1247624.2 / Avg: 1248284.1 / Max: 1249496.7Min: 1259246.9 / Avg: 1262530.43 / Max: 1266147.1Min: 1194820.8 / Avg: 1197778.87 / Max: 1202422.8Min: 1161606.1 / Avg: 1162690.93 / Max: 1163575.1Min: 1203466.2 / Avg: 1208480.8 / Max: 1211047.5Min: 1214294.3 / Avg: 1215056.1 / Max: 1215921.9Min: 1206399.7 / Avg: 1209698.03 / Max: 1213431.9Min: 1209097.9 / Avg: 1212918.2 / Max: 1215383.9Min: 1212618.6 / Avg: 1217027.7 / Max: 1221003.3Min: 1152601.1 / Avg: 1155871.8 / Max: 1160133Min: 1245534.1 / Avg: 1247037.63 / Max: 1248092.1

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOps/sec, More Is BetterKeyDB 6.0.16EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5290K180K270K360K450KSE +/- 533.37, N = 3SE +/- 1365.59, N = 3SE +/- 1079.81, N = 3SE +/- 639.41, N = 3SE +/- 3460.73, N = 3SE +/- 430.27, N = 3SE +/- 2791.26, N = 3SE +/- 3270.77, N = 3SE +/- 5135.26, N = 3SE +/- 4280.31, N = 3SE +/- 4382.81, N = 3SE +/- 4507.61, N = 3SE +/- 3531.23, N = 15SE +/- 183.84, N = 3SE +/- 428.23, N = 3399110.88418080.47420324.96405905.78415583.96413354.59404313.15422944.46394639.23386824.54376241.11371448.21380625.69424600.07433091.731. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterKeyDB 6.0.16EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5280K160K240K320K400KMin: 398044.81 / Avg: 399110.88 / Max: 399676.81Min: 416043.78 / Avg: 418080.47 / Max: 420674.69Min: 419061.55 / Avg: 420324.96 / Max: 422473.5Min: 404993.42 / Avg: 405905.78 / Max: 407138Min: 412054.17 / Avg: 415583.96 / Max: 422504.97Min: 412498.94 / Avg: 413354.59 / Max: 413861.79Min: 399536.5 / Avg: 404313.15 / Max: 409203.69Min: 416505.35 / Avg: 422944.46 / Max: 427162.66Min: 385396.59 / Avg: 394639.23 / Max: 403139.08Min: 379984.9 / Avg: 386824.54 / Max: 394702.8Min: 368183.94 / Avg: 376241.11 / Max: 383259.45Min: 362545.96 / Avg: 371448.21 / Max: 377131.41Min: 366624.37 / Avg: 380625.69 / Max: 410991.61Min: 424252.48 / Avg: 424600.07 / Max: 424877.7Min: 432304.57 / Avg: 433091.73 / Max: 433777.591. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Read While WritingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522M4M6M8M10MSE +/- 1710.84, N = 3SE +/- 24217.23, N = 3SE +/- 26792.49, N = 3SE +/- 25144.15, N = 3SE +/- 43537.48, N = 3SE +/- 24292.51, N = 3SE +/- 32730.71, N = 3SE +/- 55384.00, N = 3SE +/- 44027.23, N = 3SE +/- 69440.76, N = 6SE +/- 27617.12, N = 3SE +/- 47749.78, N = 3SE +/- 61282.17, N = 12SE +/- 10805.74, N = 13SE +/- 8186.58, N = 31470621235792830232393086237439769554860665214302556443371835797131600817538382324748597452160367330123871. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Read While WritingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521.5M3M4.5M6M7.5MMin: 1467509 / Avg: 1470620.67 / Max: 1473409Min: 2309505 / Avg: 2357928 / Max: 2383052Min: 2989438 / Avg: 3023238.67 / Max: 3076148Min: 3036500 / Avg: 3086236.67 / Max: 3117538Min: 4337144 / Avg: 4397695 / Max: 4482162Min: 5439082 / Avg: 5486065.67 / Max: 5520271Min: 5150993 / Avg: 5214301.67 / Max: 5260375Min: 5463565 / Avg: 5564432.67 / Max: 5654508Min: 7101279 / Avg: 7183579 / Max: 7251844Min: 7048399 / Avg: 7131599.5 / Max: 7477000Min: 8142180 / Avg: 8175382.67 / Max: 8230211Min: 8181136 / Avg: 8232473.67 / Max: 8327881Min: 8404648 / Avg: 8597452.08 / Max: 9185015Min: 1550246 / Avg: 1603672.85 / Max: 1683898Min: 3002918 / Avg: 3012386.67 / Max: 30286891. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524080120160200SE +/- 0.30, N = 3SE +/- 0.14, N = 3SE +/- 0.26, N = 3SE +/- 0.07, N = 3SE +/- 0.12, N = 3SE +/- 0.17, N = 3SE +/- 0.10, N = 3SE +/- 0.10, N = 3SE +/- 0.17, N = 3SE +/- 0.07, N = 3SE +/- 0.17, N = 3SE +/- 0.31, N = 3SE +/- 0.18, N = 3SE +/- 0.21, N = 3SE +/- 0.32, N = 3172.13122.41103.7496.7776.1669.4269.6266.5762.4861.8162.2762.5660.37136.6284.21
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150Min: 171.71 / Avg: 172.13 / Max: 172.7Min: 122.19 / Avg: 122.41 / Max: 122.67Min: 103.26 / Avg: 103.74 / Max: 104.13Min: 96.64 / Avg: 96.77 / Max: 96.88Min: 75.93 / Avg: 76.16 / Max: 76.36Min: 69.16 / Avg: 69.42 / Max: 69.74Min: 69.45 / Avg: 69.62 / Max: 69.79Min: 66.45 / Avg: 66.57 / Max: 66.76Min: 62.24 / Avg: 62.48 / Max: 62.81Min: 61.69 / Avg: 61.81 / Max: 61.9Min: 61.99 / Avg: 62.27 / Max: 62.58Min: 62.22 / Avg: 62.56 / Max: 63.19Min: 60.14 / Avg: 60.37 / Max: 60.74Min: 136.37 / Avg: 136.62 / Max: 137.03Min: 83.58 / Avg: 84.21 / Max: 84.54

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KSE +/- 44.49, N = 3SE +/- 29.05, N = 3SE +/- 66.67, N = 3SE +/- 93.14, N = 3SE +/- 36.01, N = 4SE +/- 14.46, N = 3SE +/- 43.54, N = 3SE +/- 29.33, N = 3SE +/- 54.81, N = 3SE +/- 27.58, N = 3SE +/- 44.06, N = 3SE +/- 18.64, N = 7SE +/- 16.13, N = 13SE +/- 34.53, N = 3SE +/- 89.78, N = 310063.010101.010112.210172.710209.310232.810118.010222.410159.710123.010170.710162.010233.110567.210099.61. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KMin: 10013.3 / Avg: 10063.03 / Max: 10151.8Min: 10050 / Avg: 10101 / Max: 10150.6Min: 9978.9 / Avg: 10112.23 / Max: 10180.1Min: 10073 / Avg: 10172.67 / Max: 10358.8Min: 10125 / Avg: 10209.25 / Max: 10270.7Min: 10214 / Avg: 10232.77 / Max: 10261.2Min: 10073.2 / Avg: 10118.03 / Max: 10205.1Min: 10167.9 / Avg: 10222.43 / Max: 10268.4Min: 10076.5 / Avg: 10159.67 / Max: 10263.1Min: 10073.8 / Avg: 10123.03 / Max: 10169.2Min: 10082.6 / Avg: 10170.7 / Max: 10215.9Min: 10103.5 / Avg: 10162.01 / Max: 10237.8Min: 10146.7 / Avg: 10233.12 / Max: 10323.4Min: 10498.8 / Avg: 10567.23 / Max: 10609.5Min: 9990.9 / Avg: 10099.57 / Max: 10277.71. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521224364860SE +/- 0.43, N = 3SE +/- 0.15, N = 3SE +/- 0.15, N = 3SE +/- 0.53, N = 3SE +/- 0.55, N = 4SE +/- 0.29, N = 3SE +/- 0.37, N = 3SE +/- 0.04, N = 3SE +/- 0.65, N = 3SE +/- 0.56, N = 3SE +/- 0.31, N = 3SE +/- 0.42, N = 7SE +/- 0.31, N = 13SE +/- 0.35, N = 3SE +/- 0.01, N = 344.9343.2344.3844.5345.9845.7744.6945.9045.3844.8445.1945.6946.2052.7952.821. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521122334455Min: 44.07 / Avg: 44.93 / Max: 45.44Min: 42.93 / Avg: 43.23 / Max: 43.4Min: 44.15 / Avg: 44.38 / Max: 44.66Min: 44 / Avg: 44.53 / Max: 45.58Min: 45.35 / Avg: 45.98 / Max: 47.62Min: 45.44 / Avg: 45.77 / Max: 46.36Min: 44.12 / Avg: 44.69 / Max: 45.37Min: 45.84 / Avg: 45.9 / Max: 45.97Min: 44.32 / Avg: 45.38 / Max: 46.55Min: 44.27 / Avg: 44.84 / Max: 45.95Min: 44.56 / Avg: 45.19 / Max: 45.52Min: 44.72 / Avg: 45.69 / Max: 47.44Min: 45.1 / Avg: 46.2 / Max: 48.21Min: 52.36 / Avg: 52.79 / Max: 53.48Min: 52.79 / Avg: 52.82 / Max: 52.841. (CC) gcc options: -O3

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: XFrog Forest - Renderer: Path TracerEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.991.551.982.153.073.753.704.045.105.156.005.926.521.272.34MAX: 1MIN: 1.54 / MAX: 1.57MIN: 1.96 / MAX: 1.99MIN: 2.13 / MAX: 2.16MIN: 3.02 / MAX: 3.11MIN: 3.72 / MAX: 3.77MIN: 3.64 / MAX: 3.73MIN: 3.98 / MAX: 4.07MIN: 5.05 / MAX: 5.15MIN: 5.08 / MAX: 5.21MIN: 5.92 / MAX: 6.06MIN: 5.85 / MAX: 5.99MIN: 6.45 / MAX: 6.58MIN: 1.26 / MAX: 1.28MIN: 2.29 / MAX: 2.38
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: XFrog Forest - Renderer: Path TracerEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 0.99 / Avg: 0.99 / Max: 0.99Min: 1.55 / Avg: 1.55 / Max: 1.55Min: 1.98 / Avg: 1.98 / Max: 1.98Min: 2.15 / Avg: 2.15 / Max: 2.15Min: 3.06 / Avg: 3.07 / Max: 3.08Min: 3.75 / Avg: 3.75 / Max: 3.75Min: 3.7 / Avg: 3.7 / Max: 3.7Min: 4.03 / Avg: 4.04 / Max: 4.05Min: 5.1 / Avg: 5.1 / Max: 5.1Min: 5.13 / Avg: 5.15 / Max: 5.15Min: 5.99 / Avg: 6 / Max: 6.02Min: 5.92 / Avg: 5.92 / Max: 5.92Min: 6.49 / Avg: 6.52 / Max: 6.54Min: 1.27 / Avg: 1.27 / Max: 1.27Min: 2.33 / Avg: 2.34 / Max: 2.35

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529001800270036004500SE +/- 5.10, N = 3SE +/- 1.59, N = 3SE +/- 1.43, N = 3SE +/- 2.99, N = 3SE +/- 0.54, N = 3SE +/- 5.60, N = 3SE +/- 2.63, N = 3SE +/- 1.42, N = 3SE +/- 1.32, N = 3SE +/- 0.57, N = 3SE +/- 8.27, N = 3SE +/- 5.75, N = 3SE +/- 22.03, N = 6SE +/- 0.86, N = 3SE +/- 2.05, N = 34345.123056.052836.682412.791672.242738.422214.792718.681350.551213.582221.332306.332283.913345.102013.75MIN: 4298.14MIN: 3041.35MIN: 2798.53MIN: 2397.68MIN: 1658.48MIN: 2722.76MIN: 2191.84MIN: 2705.41MIN: 1331.05MIN: 1190.99MIN: 2189.19MIN: 2274.33MIN: 2224.95MIN: 3322.56MIN: 1995.381. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F528001600240032004000Min: 4339.98 / Avg: 4345.12 / Max: 4355.32Min: 3053.73 / Avg: 3056.05 / Max: 3059.09Min: 2834.78 / Avg: 2836.68 / Max: 2839.49Min: 2408.58 / Avg: 2412.79 / Max: 2418.57Min: 1671.4 / Avg: 1672.24 / Max: 1673.25Min: 2731.04 / Avg: 2738.42 / Max: 2749.41Min: 2209.9 / Avg: 2214.79 / Max: 2218.93Min: 2716.43 / Avg: 2718.68 / Max: 2721.31Min: 1347.94 / Avg: 1350.55 / Max: 1352.2Min: 1212.83 / Avg: 1213.58 / Max: 1214.69Min: 2206.05 / Avg: 2221.33 / Max: 2234.44Min: 2294.83 / Avg: 2306.33 / Max: 2312.48Min: 2242.08 / Avg: 2283.91 / Max: 2381.6Min: 3343.83 / Avg: 3345.1 / Max: 3346.73Min: 2010.58 / Avg: 2013.75 / Max: 2017.581. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52300K600K900K1200K1500KSE +/- 1442.60, N = 3SE +/- 1242.45, N = 3SE +/- 1061.91, N = 3SE +/- 1121.87, N = 3SE +/- 2008.07, N = 3SE +/- 702.94, N = 3SE +/- 759.90, N = 3SE +/- 295.02, N = 3SE +/- 686.09, N = 3SE +/- 1733.41, N = 3SE +/- 762.66, N = 3SE +/- 628.66, N = 3SE +/- 546.91, N = 3SE +/- 1176.92, N = 3SE +/- 2686.24, N = 31113861.31185857.81211354.31249264.61290897.81303437.61281490.51206497.81288738.61295588.81294664.51308517.21294605.51279851.81438050.4
OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52200K400K600K800K1000KMin: 1110976.3 / Avg: 1113861.27 / Max: 1115334.9Min: 1183868.7 / Avg: 1185857.83 / Max: 1188142.2Min: 1209909.6 / Avg: 1211354.27 / Max: 1213424.8Min: 1247865.2 / Avg: 1249264.6 / Max: 1251483.2Min: 1287374.1 / Avg: 1290897.83 / Max: 1294328.4Min: 1302037.7 / Avg: 1303437.6 / Max: 1304249.7Min: 1279979.8 / Avg: 1281490.47 / Max: 1282389.9Min: 1205942.3 / Avg: 1206497.83 / Max: 1206947.8Min: 1287797.7 / Avg: 1288738.57 / Max: 1290074Min: 1293258.5 / Avg: 1295588.83 / Max: 1298976.9Min: 1293775.5 / Avg: 1294664.5 / Max: 1296182.4Min: 1307504.2 / Avg: 1308517.23 / Max: 1309668.7Min: 1293513.9 / Avg: 1294605.53 / Max: 1295211.3Min: 1277678.1 / Avg: 1279851.77 / Max: 1281720.8Min: 1433192.9 / Avg: 1438050.37 / Max: 1442466.9

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception ResNet V2EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52700K1400K2100K2800K3500KSE +/- 326.24, N = 3SE +/- 518.11, N = 3SE +/- 1027.95, N = 3SE +/- 366.38, N = 3SE +/- 1259.14, N = 3SE +/- 466.83, N = 3SE +/- 386.91, N = 3SE +/- 149.07, N = 3SE +/- 1818.31, N = 3SE +/- 837.49, N = 3SE +/- 4181.60, N = 3SE +/- 6234.46, N = 7SE +/- 4362.30, N = 3SE +/- 314.32, N = 3SE +/- 1646.01, N = 33136920224295016391671567070120679089788492186584060489351783073065527970260764315025929701346153
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception ResNet V2EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52500K1000K1500K2000K2500KMin: 3136350 / Avg: 3136920 / Max: 3137480Min: 2241960 / Avg: 2242950 / Max: 2243710Min: 1637410 / Avg: 1639166.67 / Max: 1640970Min: 1566650 / Avg: 1567070 / Max: 1567800Min: 1205130 / Avg: 1206790 / Max: 1209260Min: 897006 / Avg: 897884 / Max: 898598Min: 921416 / Avg: 921864.67 / Max: 922635Min: 840370 / Avg: 840604 / Max: 840881Min: 889933 / Avg: 893516.67 / Max: 895844Min: 829366 / Avg: 830730.33 / Max: 832254Min: 649807 / Avg: 655278.67 / Max: 663492Min: 690630 / Avg: 702607 / Max: 737249Min: 636650 / Avg: 643150 / Max: 651440Min: 2592570 / Avg: 2592970 / Max: 2593590Min: 1343550 / Avg: 1346153.33 / Max: 1349200

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code and features Cargo-like features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.13Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150SE +/- 0.29, N = 3SE +/- 0.23, N = 3SE +/- 0.07, N = 3SE +/- 0.14, N = 3SE +/- 0.10, N = 3SE +/- 0.07, N = 3SE +/- 0.16, N = 3SE +/- 0.32, N = 3SE +/- 0.16, N = 3SE +/- 0.07, N = 3SE +/- 0.09, N = 3SE +/- 0.22, N = 3SE +/- 0.15, N = 3SE +/- 0.15, N = 3SE +/- 0.30, N = 3145.44105.7194.3587.6676.6574.2074.4871.4669.8669.4368.2967.9766.25113.2675.85
OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.13Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150Min: 145.08 / Avg: 145.44 / Max: 146.02Min: 105.37 / Avg: 105.71 / Max: 106.15Min: 94.22 / Avg: 94.35 / Max: 94.43Min: 87.37 / Avg: 87.66 / Max: 87.82Min: 76.5 / Avg: 76.65 / Max: 76.84Min: 74.09 / Avg: 74.2 / Max: 74.34Min: 74.18 / Avg: 74.48 / Max: 74.74Min: 70.83 / Avg: 71.46 / Max: 71.85Min: 69.58 / Avg: 69.86 / Max: 70.15Min: 69.34 / Avg: 69.43 / Max: 69.57Min: 68.11 / Avg: 68.29 / Max: 68.39Min: 67.65 / Avg: 67.97 / Max: 68.39Min: 66.01 / Avg: 66.25 / Max: 66.52Min: 113.02 / Avg: 113.26 / Max: 113.53Min: 75.26 / Avg: 75.85 / Max: 76.19

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2020.3Water BenchmarkEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521.08862.17723.26584.35445.443SE +/- 0.003, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.004, N = 3SE +/- 0.001, N = 3SE +/- 0.011, N = 3SE +/- 0.008, N = 3SE +/- 0.002, N = 3SE +/- 0.004, N = 3SE +/- 0.013, N = 3SE +/- 0.008, N = 3SE +/- 0.003, N = 3SE +/- 0.023, N = 3SE +/- 0.003, N = 3SE +/- 0.005, N = 30.9851.4091.6772.0142.7413.1283.2673.3233.8634.1174.5414.3734.8381.3451.9951. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm
OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2020.3Water BenchmarkEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 0.98 / Avg: 0.99 / Max: 0.99Min: 1.41 / Avg: 1.41 / Max: 1.41Min: 1.68 / Avg: 1.68 / Max: 1.68Min: 2.01 / Avg: 2.01 / Max: 2.02Min: 2.74 / Avg: 2.74 / Max: 2.74Min: 3.11 / Avg: 3.13 / Max: 3.14Min: 3.25 / Avg: 3.27 / Max: 3.28Min: 3.32 / Avg: 3.32 / Max: 3.33Min: 3.86 / Avg: 3.86 / Max: 3.87Min: 4.09 / Avg: 4.12 / Max: 4.13Min: 4.53 / Avg: 4.54 / Max: 4.55Min: 4.37 / Avg: 4.37 / Max: 4.38Min: 4.81 / Avg: 4.84 / Max: 4.88Min: 1.34 / Avg: 1.35 / Max: 1.35Min: 1.99 / Avg: 2 / Max: 2.011. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: BMW27 - Compute: CPU-OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524080120160200SE +/- 1.01, N = 3SE +/- 0.34, N = 3SE +/- 0.34, N = 3SE +/- 0.95, N = 3SE +/- 0.08, N = 3SE +/- 0.37, N = 3SE +/- 0.21, N = 3SE +/- 0.06, N = 3SE +/- 0.42, N = 3SE +/- 0.19, N = 3SE +/- 0.27, N = 3SE +/- 0.12, N = 3SE +/- 0.09, N = 3SE +/- 0.25, N = 3SE +/- 0.27, N = 3198.45136.20108.34102.2969.8460.7561.2555.3246.5945.7840.3340.0736.60156.3183.45
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: BMW27 - Compute: CPU-OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524080120160200Min: 196.87 / Avg: 198.45 / Max: 200.33Min: 135.6 / Avg: 136.2 / Max: 136.76Min: 107.74 / Avg: 108.34 / Max: 108.92Min: 100.63 / Avg: 102.29 / Max: 103.93Min: 69.76 / Avg: 69.84 / Max: 69.99Min: 60 / Avg: 60.75 / Max: 61.17Min: 60.93 / Avg: 61.25 / Max: 61.64Min: 55.22 / Avg: 55.32 / Max: 55.43Min: 45.95 / Avg: 46.59 / Max: 47.39Min: 45.39 / Avg: 45.78 / Max: 45.99Min: 39.99 / Avg: 40.33 / Max: 40.86Min: 39.84 / Avg: 40.07 / Max: 40.23Min: 36.43 / Avg: 36.6 / Max: 36.71Min: 155.93 / Avg: 156.31 / Max: 156.78Min: 83.1 / Avg: 83.45 / Max: 83.98

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V4EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52700K1400K2100K2800K3500KSE +/- 1129.29, N = 3SE +/- 436.36, N = 3SE +/- 565.25, N = 3SE +/- 272.21, N = 3SE +/- 1233.46, N = 3SE +/- 487.90, N = 3SE +/- 143.80, N = 3SE +/- 425.74, N = 3SE +/- 907.34, N = 3SE +/- 1781.71, N = 3SE +/- 3136.75, N = 3SE +/- 6069.08, N = 3SE +/- 5733.03, N = 3SE +/- 227.03, N = 3SE +/- 192.21, N = 3347294024937471820153174289013664101006347103436794322997882893869170083376426972369928722271499183
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V4EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52600K1200K1800K2400K3000KMin: 3471470 / Avg: 3472940 / Max: 3475160Min: 2492910 / Avg: 2493746.67 / Max: 2494380Min: 1819400 / Avg: 1820153.33 / Max: 1821260Min: 1742350 / Avg: 1742890 / Max: 1743220Min: 1365020 / Avg: 1366410 / Max: 1368870Min: 1005560 / Avg: 1006346.67 / Max: 1007240Min: 1034080 / Avg: 1034366.67 / Max: 1034530Min: 942378 / Avg: 943229 / Max: 943679Min: 977917 / Avg: 978828.33 / Max: 980643Min: 935608 / Avg: 938691 / Max: 941780Min: 695393 / Avg: 700833.33 / Max: 706259Min: 752152 / Avg: 764269 / Max: 770948Min: 713298 / Avg: 723698.67 / Max: 733079Min: 2871920 / Avg: 2872226.67 / Max: 2872670Min: 1498800 / Avg: 1499183.33 / Max: 1499400

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures effect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgThroughput FoM, More Is BetterKripke 1.2.4EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250M100M150M200M250MSE +/- 1075776.09, N = 4SE +/- 1366026.89, N = 3SE +/- 927929.45, N = 9SE +/- 3144064.63, N = 15SE +/- 481377.88, N = 4SE +/- 2175939.87, N = 3SE +/- 5452949.69, N = 15SE +/- 3811837.38, N = 15SE +/- 1883204.11, N = 15SE +/- 5572341.62, N = 12SE +/- 2208966.30, N = 15SE +/- 2966178.44, N = 15SE +/- 4938353.34, N = 12SE +/- 1036502.10, N = 15SE +/- 574642.20, N = 15100898758121256433112489722162898387209811150176798867216525133187397007211310307230851783215771227199683960212054200129536920714133941. (CXX) g++ options: -O3 -fopenmp
OpenBenchmarking.orgThroughput FoM, More Is BetterKripke 1.2.4EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5240M80M120M160M200MMin: 97675130 / Avg: 100898757.5 / Max: 102097200Min: 119380500 / Avg: 121256433.33 / Max: 123914500Min: 107412100 / Avg: 112489722.22 / Max: 117685700Min: 148262800 / Avg: 162898386.67 / Max: 178935200Min: 208902500 / Avg: 209811150 / Max: 211002600Min: 172473100 / Avg: 176798866.67 / Max: 179374000Min: 170848300 / Avg: 216525133.33 / Max: 232847700Min: 160767700 / Avg: 187397006.67 / Max: 215708300Min: 200163900 / Avg: 211310306.67 / Max: 221141000Min: 196106600 / Avg: 230851783.33 / Max: 255760700Min: 199388700 / Avg: 215771226.67 / Max: 228027900Min: 179253900 / Avg: 199683960 / Max: 214308000Min: 176234200 / Avg: 212054200 / Max: 228984100Min: 124322100 / Avg: 129536920 / Max: 133041300Min: 68795100 / Avg: 71413394 / Max: 742738401. (CXX) g++ options: -O3 -fopenmp

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Disney MaterialEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300265.54195.08151.86149.34105.6987.5189.0181.8373.3370.4967.3367.7461.68225.09118.48

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: SkeincoinEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52140K280K420K560K700KSE +/- 611.43, N = 6SE +/- 728.58, N = 3SE +/- 1328.09, N = 3SE +/- 981.65, N = 3SE +/- 3944.26, N = 15SE +/- 4313.15, N = 3SE +/- 2127.92, N = 3SE +/- 2411.69, N = 3SE +/- 6158.44, N = 12SE +/- 9107.04, N = 12SE +/- 5351.86, N = 13SE +/- 12582.41, N = 12SE +/- 9139.66, N = 12SE +/- 654.80, N = 15SE +/- 335.77, N = 363648105140150287151030210539316350317250324420468227470502631038605852629368793651824031. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: SkeincoinEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52110K220K330K440K550KMin: 61790 / Avg: 63648.33 / Max: 66140Min: 103990 / Avg: 105140 / Max: 106490Min: 148430 / Avg: 150286.67 / Max: 152860Min: 149150 / Avg: 151030 / Max: 152460Min: 187570 / Avg: 210538.67 / Max: 242310Min: 310740 / Avg: 316350 / Max: 324830Min: 313810 / Avg: 317250 / Max: 321140Min: 319750 / Avg: 324420 / Max: 327800Min: 404270 / Avg: 468226.67 / Max: 483970Min: 373060 / Avg: 470501.67 / Max: 494480Min: 570830 / Avg: 631038.46 / Max: 649140Min: 470320 / Avg: 605851.67 / Max: 630960Min: 533720 / Avg: 629367.5 / Max: 655420Min: 75330 / Avg: 79365.33 / Max: 84270Min: 181900 / Avg: 182403.33 / Max: 1830401. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529001800270036004500SE +/- 4.35, N = 3SE +/- 0.81, N = 3SE +/- 1.77, N = 3SE +/- 3.71, N = 3SE +/- 0.94, N = 3SE +/- 4.79, N = 3SE +/- 5.96, N = 3SE +/- 2.51, N = 3SE +/- 1.28, N = 3SE +/- 3.66, N = 3SE +/- 0.34, N = 3SE +/- 9.75, N = 3SE +/- 6.65, N = 3SE +/- 2.55, N = 3SE +/- 0.63, N = 34348.883059.192835.352407.931673.312740.522221.372716.541350.131211.542203.162314.362251.633348.061997.39MIN: 4311.94MIN: 3051.11MIN: 2800.15MIN: 2392.45MIN: 1660.31MIN: 2722.51MIN: 2198.48MIN: 2705.25MIN: 1333.75MIN: 1187.15MIN: 2183.71MIN: 2276.78MIN: 2223.61MIN: 3331.21MIN: 1989.11. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F528001600240032004000Min: 4341.28 / Avg: 4348.88 / Max: 4356.33Min: 3057.83 / Avg: 3059.19 / Max: 3060.62Min: 2831.82 / Avg: 2835.35 / Max: 2837.39Min: 2401.58 / Avg: 2407.93 / Max: 2414.43Min: 1671.47 / Avg: 1673.31 / Max: 1674.53Min: 2731.74 / Avg: 2740.52 / Max: 2748.21Min: 2213.13 / Avg: 2221.37 / Max: 2232.96Min: 2713.68 / Avg: 2716.54 / Max: 2721.54Min: 1347.63 / Avg: 1350.13 / Max: 1351.85Min: 1207.18 / Avg: 1211.54 / Max: 1218.81Min: 2202.51 / Avg: 2203.16 / Max: 2203.65Min: 2296.67 / Avg: 2314.36 / Max: 2330.29Min: 2238.42 / Avg: 2251.63 / Max: 2259.51Min: 3342.99 / Avg: 3348.06 / Max: 3351.09Min: 1996.27 / Avg: 1997.39 / Max: 1998.441. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to PNG output file, the pts/jpexl test is for encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding 0.3.1CPU Threads: 1EPYC 7282EPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F52918273645SE +/- 0.02, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.03, N = 332.1632.6934.2232.7633.1538.4237.94
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding 0.3.1CPU Threads: 1EPYC 7282EPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F52816243240Min: 32.13 / Avg: 32.16 / Max: 32.2Min: 32.59 / Avg: 32.69 / Max: 32.77Min: 34.2 / Avg: 34.22 / Max: 34.24Min: 32.68 / Avg: 32.76 / Max: 32.82Min: 33.13 / Avg: 33.15 / Max: 33.18Min: 38.34 / Avg: 38.42 / Max: 38.47Min: 37.88 / Avg: 37.94 / Max: 37.97

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2021Input: water_GMX50_bareEPYC 7272EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521.09622.19243.28864.38485.481SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.002, N = 3SE +/- 0.007, N = 3SE +/- 0.004, N = 3SE +/- 0.006, N = 3SE +/- 0.005, N = 3SE +/- 0.001, N = 3SE +/- 0.002, N = 3SE +/- 0.001, N = 3SE +/- 0.006, N = 31.4071.6753.1403.2563.3174.1064.5334.3714.8721.3482.3131. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2021Input: water_GMX50_bareEPYC 7272EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 1.41 / Avg: 1.41 / Max: 1.41Min: 1.67 / Avg: 1.68 / Max: 1.68Min: 3.14 / Avg: 3.14 / Max: 3.14Min: 3.24 / Avg: 3.26 / Max: 3.27Min: 3.31 / Avg: 3.32 / Max: 3.33Min: 4.1 / Avg: 4.11 / Max: 4.12Min: 4.52 / Avg: 4.53 / Max: 4.54Min: 4.37 / Avg: 4.37 / Max: 4.37Min: 4.87 / Avg: 4.87 / Max: 4.88Min: 1.35 / Avg: 1.35 / Max: 1.35Min: 2.31 / Avg: 2.31 / Max: 2.331. (CXX) g++ options: -O3 -pthread

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterStockfish 12Total TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220M40M60M80M100MSE +/- 150250.14, N = 3SE +/- 71263.56, N = 3SE +/- 236966.05, N = 10SE +/- 332801.18, N = 6SE +/- 304616.08, N = 3SE +/- 436034.06, N = 15SE +/- 708563.50, N = 4SE +/- 88880.09, N = 3SE +/- 82765.80, N = 3SE +/- 261273.92, N = 3SE +/- 1212155.43, N = 4SE +/- 597478.91, N = 3SE +/- 935189.20, N = 15SE +/- 79798.13, N = 3SE +/- 416432.88, N = 3160349942431444331246137329737494880473260864616583840266415292982397132835681669884746910090845310854312519191199390434101. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver
OpenBenchmarking.orgNodes Per Second, More Is BetterStockfish 12Total TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220M40M60M80M100MMin: 15735361 / Avg: 16034994 / Max: 16204568Min: 24195092 / Avg: 24314443.33 / Max: 24441587Min: 30342075 / Avg: 31246136.8 / Max: 32525396Min: 32141990 / Avg: 32973748.83 / Max: 34537511Min: 48420286 / Avg: 48804732.33 / Max: 49406252Min: 57227030 / Avg: 60864616.13 / Max: 63501145Min: 56735799 / Avg: 58384025.75 / Max: 60090226Min: 63992140 / Avg: 64152928.67 / Max: 64298968Min: 82260216 / Avg: 82397132 / Max: 82546157Min: 83220250 / Avg: 83568166.33 / Max: 84079775Min: 95322228 / Avg: 98847468.5 / Max: 100858323Min: 99825755 / Avg: 100908453 / Max: 101887714Min: 103465683 / Avg: 108543124.73 / Max: 118339568Min: 19064701 / Avg: 19191199.33 / Max: 19338721Min: 38530677 / Avg: 39043410.33 / Max: 398681761. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.27230.54460.81691.08921.3615SE +/- 0.01, N = 15SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 30.980.890.780.840.920.840.950.770.991.021.061.211.070.790.81
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 0.91 / Avg: 0.98 / Max: 1.03Min: 0.87 / Avg: 0.89 / Max: 0.91Min: 0.77 / Avg: 0.78 / Max: 0.79Min: 0.83 / Avg: 0.84 / Max: 0.84Min: 0.91 / Avg: 0.92 / Max: 0.93Min: 0.83 / Avg: 0.84 / Max: 0.84Min: 0.95 / Avg: 0.95 / Max: 0.95Min: 0.77 / Avg: 0.77 / Max: 0.78Min: 0.99 / Avg: 0.99 / Max: 0.99Min: 1.01 / Avg: 1.02 / Max: 1.02Min: 1.06 / Avg: 1.06 / Max: 1.07Min: 1.21 / Avg: 1.21 / Max: 1.21Min: 1.07 / Avg: 1.07 / Max: 1.08Min: 0.78 / Avg: 0.79 / Max: 0.81Min: 0.8 / Avg: 0.81 / Max: 0.82

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F526K12K18K24K30KSE +/- 41.71, N = 15SE +/- 89.67, N = 3SE +/- 50.79, N = 3SE +/- 21.57, N = 3SE +/- 95.61, N = 3SE +/- 42.04, N = 3SE +/- 22.86, N = 3SE +/- 85.72, N = 3SE +/- 55.79, N = 3SE +/- 31.70, N = 3SE +/- 46.32, N = 3SE +/- 28.57, N = 3SE +/- 136.51, N = 3SE +/- 52.69, N = 3SE +/- 43.76, N = 34012.406689.9510017.989316.8012800.5918612.5416446.8819922.0123441.9122986.0528284.8725067.0428361.734953.659663.43
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525K10K15K20K25KMin: 3827.5 / Avg: 4012.4 / Max: 4339.41Min: 6524.74 / Avg: 6689.95 / Max: 6832.97Min: 9917.01 / Avg: 10017.98 / Max: 10078.03Min: 9293.16 / Avg: 9316.8 / Max: 9359.87Min: 12615.39 / Avg: 12800.59 / Max: 12934.43Min: 18567.53 / Avg: 18612.54 / Max: 18696.54Min: 16403.4 / Avg: 16446.88 / Max: 16480.83Min: 19751.03 / Avg: 19922.01 / Max: 20018.47Min: 23347.44 / Avg: 23441.91 / Max: 23540.57Min: 22927.38 / Avg: 22986.05 / Max: 23036.21Min: 28192.24 / Avg: 28284.87 / Max: 28331.27Min: 25025.67 / Avg: 25067.04 / Max: 25121.85Min: 28090.37 / Avg: 28361.73 / Max: 28523.46Min: 4855.38 / Avg: 4953.65 / Max: 5035.74Min: 9575.98 / Avg: 9663.43 / Max: 9710.01

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525001000150020002500SE +/- 3.61, N = 3SE +/- 4.61, N = 3SE +/- 5.33, N = 3SE +/- 0.54, N = 3SE +/- 1.57, N = 3SE +/- 2.58, N = 3SE +/- 1.32, N = 3SE +/- 1.35, N = 3SE +/- 2.09, N = 3SE +/- 3.28, N = 3SE +/- 1.88, N = 3SE +/- 2.81, N = 3SE +/- 5.32, N = 3SE +/- 0.95, N = 3SE +/- 3.97, N = 32397.222032.881886.301227.56974.12919.61812.99886.88784.82734.40793.23878.20808.621742.79998.67MIN: 2370.41MIN: 2016.92MIN: 1851.55MIN: 1218.44MIN: 962.55MIN: 906.32MIN: 799.51MIN: 865.5MIN: 771.05MIN: 716.82MIN: 777.03MIN: 851.72MIN: 785.04MIN: 1728.82MIN: 987.091. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52400800120016002000Min: 2392.08 / Avg: 2397.22 / Max: 2404.17Min: 2023.66 / Avg: 2032.88 / Max: 2037.71Min: 1876.6 / Avg: 1886.3 / Max: 1894.99Min: 1226.72 / Avg: 1227.56 / Max: 1228.57Min: 971.07 / Avg: 974.12 / Max: 976.32Min: 916.98 / Avg: 919.61 / Max: 924.77Min: 810.37 / Avg: 812.99 / Max: 814.64Min: 884.18 / Avg: 886.88 / Max: 888.33Min: 780.67 / Avg: 784.82 / Max: 787.4Min: 728.04 / Avg: 734.4 / Max: 738.94Min: 791.29 / Avg: 793.23 / Max: 796.99Min: 873.24 / Avg: 878.2 / Max: 882.97Min: 801.38 / Avg: 808.62 / Max: 818.99Min: 1741.68 / Avg: 1742.79 / Max: 1744.69Min: 991.29 / Avg: 998.67 / Max: 1004.911. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525001000150020002500SE +/- 2.26, N = 3SE +/- 2.69, N = 3SE +/- 3.12, N = 3SE +/- 0.19, N = 3SE +/- 0.91, N = 3SE +/- 5.72, N = 3SE +/- 1.51, N = 3SE +/- 4.31, N = 3SE +/- 2.70, N = 3SE +/- 1.28, N = 3SE +/- 1.17, N = 3SE +/- 3.54, N = 3SE +/- 1.15, N = 3SE +/- 0.51, N = 3SE +/- 1.61, N = 32397.562022.451882.641229.69972.93902.26813.12890.48787.43732.81793.20874.44811.001745.54997.85MIN: 2367.09MIN: 2012.67MIN: 1853.21MIN: 1221.64MIN: 960.02MIN: 882.8MIN: 799.88MIN: 867.03MIN: 769.99MIN: 712.9MIN: 776.66MIN: 847.07MIN: 788.26MIN: 1733.89MIN: 987.151. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52400800120016002000Min: 2394.02 / Avg: 2397.56 / Max: 2401.75Min: 2018.73 / Avg: 2022.45 / Max: 2027.68Min: 1876.41 / Avg: 1882.64 / Max: 1885.98Min: 1229.46 / Avg: 1229.69 / Max: 1230.06Min: 971.59 / Avg: 972.93 / Max: 974.66Min: 890.83 / Avg: 902.26 / Max: 908.5Min: 810.1 / Avg: 813.12 / Max: 814.7Min: 882.05 / Avg: 890.48 / Max: 896.25Min: 782.55 / Avg: 787.43 / Max: 791.88Min: 731.06 / Avg: 732.81 / Max: 735.31Min: 790.85 / Avg: 793.2 / Max: 794.46Min: 868.68 / Avg: 874.44 / Max: 880.89Min: 808.75 / Avg: 811 / Max: 812.5Min: 1744.59 / Avg: 1745.54 / Max: 1746.31Min: 995.25 / Avg: 997.85 / Max: 1000.791. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525001000150020002500SE +/- 4.09, N = 3SE +/- 1.28, N = 3SE +/- 4.32, N = 3SE +/- 1.03, N = 3SE +/- 0.65, N = 3SE +/- 5.94, N = 3SE +/- 2.11, N = 3SE +/- 1.82, N = 3SE +/- 1.66, N = 3SE +/- 2.51, N = 3SE +/- 0.96, N = 3SE +/- 3.99, N = 3SE +/- 5.74, N = 3SE +/- 2.16, N = 3SE +/- 0.99, N = 32398.982029.281884.701228.50972.77917.61814.03888.54782.65735.82792.55875.71810.181744.75999.14MIN: 2367.55MIN: 2020.38MIN: 1853.9MIN: 1219.05MIN: 962.66MIN: 892.04MIN: 801.22MIN: 872.96MIN: 769.36MIN: 715.63MIN: 773.96MIN: 846.67MIN: 783.06MIN: 1731.94MIN: 991.161. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52400800120016002000Min: 2391.8 / Avg: 2398.98 / Max: 2405.96Min: 2026.92 / Avg: 2029.28 / Max: 2031.33Min: 1878.26 / Avg: 1884.7 / Max: 1892.91Min: 1226.65 / Avg: 1228.5 / Max: 1230.2Min: 971.9 / Avg: 972.77 / Max: 974.05Min: 907.1 / Avg: 917.61 / Max: 927.67Min: 810.97 / Avg: 814.03 / Max: 818.07Min: 885.37 / Avg: 888.54 / Max: 891.69Min: 779.51 / Avg: 782.65 / Max: 785.17Min: 731.42 / Avg: 735.82 / Max: 740.12Min: 791.43 / Avg: 792.55 / Max: 794.46Min: 869.12 / Avg: 875.71 / Max: 882.9Min: 800.07 / Avg: 810.18 / Max: 819.95Min: 1742.54 / Avg: 1744.75 / Max: 1749.07Min: 997.98 / Avg: 999.14 / Max: 1001.11. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPerl BenchmarksTest: Pod2htmlEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.03520.07040.10560.14080.176SE +/- 0.00101629, N = 3SE +/- 0.00070595, N = 3SE +/- 0.00047942, N = 3SE +/- 0.00104595, N = 3SE +/- 0.00052181, N = 3SE +/- 0.00050264, N = 3SE +/- 0.00097851, N = 3SE +/- 0.00010921, N = 3SE +/- 0.00076232, N = 3SE +/- 0.00113052, N = 3SE +/- 0.00009609, N = 3SE +/- 0.00022828, N = 3SE +/- 0.00081699, N = 3SE +/- 0.00007037, N = 3SE +/- 0.00013172, N = 30.156532080.154578900.153417050.149784890.145735970.148592730.149882930.148508120.151249760.148682530.149767950.149543390.145541430.128195450.12924285
OpenBenchmarking.orgSeconds, Fewer Is BetterPerl BenchmarksTest: Pod2htmlEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5212345Min: 0.16 / Avg: 0.16 / Max: 0.16Min: 0.15 / Avg: 0.15 / Max: 0.16Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.14 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.14 / Avg: 0.15 / Max: 0.15Min: 0.13 / Avg: 0.13 / Max: 0.13Min: 0.13 / Avg: 0.13 / Max: 0.13

Himeno Benchmark

The Himeno benchmark is a linear solver of pressure Poisson using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterHimeno Benchmark 3.0Poisson Pressure SolverEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529001800270036004500SE +/- 25.83, N = 3SE +/- 37.21, N = 3SE +/- 17.87, N = 3SE +/- 46.66, N = 4SE +/- 37.84, N = 3SE +/- 43.10, N = 4SE +/- 42.46, N = 3SE +/- 9.42, N = 3SE +/- 31.29, N = 9SE +/- 44.49, N = 4SE +/- 47.38, N = 4SE +/- 36.48, N = 3SE +/- 21.82, N = 3SE +/- 33.69, N = 3SE +/- 38.21, N = 33589.933634.173756.963792.813765.853853.503830.203988.943793.173816.083837.563762.493937.234336.414367.121. (CC) gcc options: -O3 -mavx2
OpenBenchmarking.orgMFLOPS, More Is BetterHimeno Benchmark 3.0Poisson Pressure SolverEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F528001600240032004000Min: 3541.75 / Avg: 3589.93 / Max: 3630.15Min: 3590.61 / Avg: 3634.17 / Max: 3708.22Min: 3721.38 / Avg: 3756.96 / Max: 3777.75Min: 3699.39 / Avg: 3792.81 / Max: 3917.69Min: 3719.42 / Avg: 3765.85 / Max: 3840.83Min: 3731.7 / Avg: 3853.5 / Max: 3929.87Min: 3768.21 / Avg: 3830.2 / Max: 3911.46Min: 3976.02 / Avg: 3988.94 / Max: 4007.28Min: 3632 / Avg: 3793.17 / Max: 3925.79Min: 3692 / Avg: 3816.08 / Max: 3902.55Min: 3701.86 / Avg: 3837.56 / Max: 3910.17Min: 3718.35 / Avg: 3762.49 / Max: 3834.86Min: 3909.26 / Avg: 3937.23 / Max: 3980.22Min: 4277.41 / Avg: 4336.41 / Max: 4394.1Min: 4297.85 / Avg: 4367.12 / Max: 4429.71. (CC) gcc options: -O3 -mavx2

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5211002200330044005500SE +/- 3.19, N = 3SE +/- 9.03, N = 3SE +/- 2.79, N = 3SE +/- 5.57, N = 3SE +/- 14.72, N = 3SE +/- 1.53, N = 3SE +/- 10.25, N = 3SE +/- 2.78, N = 3SE +/- 5.33, N = 3SE +/- 4.94, N = 3SE +/- 6.15, N = 3SE +/- 6.77, N = 3SE +/- 18.38, N = 3SE +/- 4.62, N = 3SE +/- 1.31, N = 33071.913264.493344.393145.893310.913578.093757.353276.524197.604132.134731.725170.994536.002399.192643.39
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529001800270036004500Min: 3065.98 / Avg: 3071.91 / Max: 3076.9Min: 3253.32 / Avg: 3264.49 / Max: 3282.36Min: 3340.81 / Avg: 3344.39 / Max: 3349.89Min: 3138.42 / Avg: 3145.89 / Max: 3156.77Min: 3286.85 / Avg: 3310.91 / Max: 3337.63Min: 3576.1 / Avg: 3578.09 / Max: 3581.09Min: 3741.86 / Avg: 3757.35 / Max: 3776.73Min: 3273.17 / Avg: 3276.52 / Max: 3282.04Min: 4187.64 / Avg: 4197.6 / Max: 4205.86Min: 4122.78 / Avg: 4132.13 / Max: 4139.56Min: 4723.88 / Avg: 4731.72 / Max: 4743.85Min: 5162.09 / Avg: 5170.99 / Max: 5184.28Min: 4505.91 / Avg: 4536 / Max: 4569.34Min: 2393.6 / Avg: 2399.19 / Max: 2408.36Min: 2641.61 / Avg: 2643.39 / Max: 2645.95

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 31.291.812.372.513.534.374.204.745.535.676.626.026.831.662.99
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 1.29 / Avg: 1.29 / Max: 1.3Min: 1.8 / Avg: 1.81 / Max: 1.82Min: 2.34 / Avg: 2.37 / Max: 2.38Min: 2.5 / Avg: 2.51 / Max: 2.52Min: 3.52 / Avg: 3.53 / Max: 3.56Min: 4.33 / Avg: 4.37 / Max: 4.4Min: 4.19 / Avg: 4.2 / Max: 4.21Min: 4.72 / Avg: 4.74 / Max: 4.75Min: 5.51 / Avg: 5.53 / Max: 5.56Min: 5.66 / Avg: 5.67 / Max: 5.68Min: 6.6 / Avg: 6.62 / Max: 6.64Min: 6.01 / Avg: 6.02 / Max: 6.03Min: 6.77 / Avg: 6.83 / Max: 6.91Min: 1.65 / Avg: 1.66 / Max: 1.67Min: 2.97 / Avg: 2.99 / Max: 3.01

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: x25xEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230060090012001500SE +/- 1.33, N = 3SE +/- 0.69, N = 3SE +/- 1.04, N = 3SE +/- 0.74, N = 3SE +/- 39.35, N = 15SE +/- 37.08, N = 15SE +/- 0.86, N = 3SE +/- 2.23, N = 3SE +/- 11.98, N = 15SE +/- 1.46, N = 3SE +/- 32.30, N = 15SE +/- 29.63, N = 14SE +/- 19.88, N = 3SE +/- 1.31, N = 3SE +/- 1.25, N = 3216.58322.83429.51447.01862.81883.87807.02891.021139.381147.361398.771360.061429.25262.90524.311. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: x25xEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522004006008001000Min: 214.04 / Avg: 216.58 / Max: 218.55Min: 321.57 / Avg: 322.83 / Max: 323.96Min: 427.95 / Avg: 429.51 / Max: 431.48Min: 445.61 / Avg: 447.01 / Max: 448.15Min: 675.58 / Avg: 862.81 / Max: 1074.46Min: 822.42 / Avg: 883.87 / Max: 1393.5Min: 806.02 / Avg: 807.02 / Max: 808.73Min: 887.85 / Avg: 891.02 / Max: 895.31Min: 1114.29 / Avg: 1139.38 / Max: 1274.78Min: 1144.44 / Avg: 1147.36 / Max: 1148.92Min: 1361.4 / Avg: 1398.77 / Max: 1850.16Min: 1313.18 / Avg: 1360.06 / Max: 1735.96Min: 1405.75 / Avg: 1429.25 / Max: 1468.78Min: 260.27 / Avg: 262.9 / Max: 264.3Min: 521.85 / Avg: 524.31 / Max: 525.941. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5211002200330044005500SE +/- 10.38, N = 3SE +/- 8.60, N = 3SE +/- 1.17, N = 3SE +/- 5.33, N = 3SE +/- 4.18, N = 3SE +/- 1.62, N = 3SE +/- 3.83, N = 3SE +/- 5.09, N = 3SE +/- 8.65, N = 3SE +/- 14.85, N = 3SE +/- 4.63, N = 3SE +/- 6.37, N = 3SE +/- 8.32, N = 3SE +/- 1.90, N = 3SE +/- 4.58, N = 33090.583265.933342.133155.853315.633582.563754.853271.264203.814138.694732.735153.994525.322409.032649.35
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529001800270036004500Min: 3072.59 / Avg: 3090.58 / Max: 3108.55Min: 3248.92 / Avg: 3265.93 / Max: 3276.67Min: 3339.82 / Avg: 3342.13 / Max: 3343.55Min: 3145.96 / Avg: 3155.85 / Max: 3164.23Min: 3309.61 / Avg: 3315.63 / Max: 3323.66Min: 3579.4 / Avg: 3582.56 / Max: 3584.73Min: 3748.56 / Avg: 3754.85 / Max: 3761.77Min: 3264.96 / Avg: 3271.26 / Max: 3281.33Min: 4186.54 / Avg: 4203.81 / Max: 4213.33Min: 4119.13 / Avg: 4138.69 / Max: 4167.82Min: 4726.94 / Avg: 4732.73 / Max: 4741.89Min: 5142.04 / Avg: 5153.99 / Max: 5163.81Min: 4510.25 / Avg: 4525.32 / Max: 4538.98Min: 2405.41 / Avg: 2409.03 / Max: 2411.84Min: 2640.2 / Avg: 2649.35 / Max: 2654.02

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 31.281.812.382.513.514.364.194.705.535.676.606.056.871.642.98
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 1.27 / Avg: 1.28 / Max: 1.29Min: 1.8 / Avg: 1.81 / Max: 1.82Min: 2.37 / Avg: 2.38 / Max: 2.39Min: 2.5 / Avg: 2.51 / Max: 2.52Min: 3.5 / Avg: 3.51 / Max: 3.53Min: 4.32 / Avg: 4.36 / Max: 4.4Min: 4.18 / Avg: 4.19 / Max: 4.2Min: 4.69 / Avg: 4.7 / Max: 4.72Min: 5.52 / Avg: 5.53 / Max: 5.54Min: 5.66 / Avg: 5.67 / Max: 5.68Min: 6.58 / Avg: 6.6 / Max: 6.63Min: 6.03 / Avg: 6.05 / Max: 6.06Min: 6.86 / Avg: 6.87 / Max: 6.88Min: 1.64 / Avg: 1.64 / Max: 1.65Min: 2.97 / Avg: 2.98 / Max: 2.99

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average LatencyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521.05532.11063.16594.22125.2765SE +/- 0.004, N = 3SE +/- 0.009, N = 3SE +/- 0.021, N = 15SE +/- 0.029, N = 12SE +/- 0.017, N = 3SE +/- 0.015, N = 15SE +/- 0.008, N = 3SE +/- 0.024, N = 3SE +/- 0.022, N = 3SE +/- 0.016, N = 3SE +/- 0.006, N = 3SE +/- 0.016, N = 3SE +/- 0.005, N = 3SE +/- 0.011, N = 3SE +/- 0.024, N = 154.6903.1372.5962.6062.1011.9352.1031.9151.7591.7011.6431.7981.4174.1452.6971. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average LatencyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 4.68 / Avg: 4.69 / Max: 4.7Min: 3.13 / Avg: 3.14 / Max: 3.16Min: 2.49 / Avg: 2.6 / Max: 2.73Min: 2.42 / Avg: 2.61 / Max: 2.71Min: 2.07 / Avg: 2.1 / Max: 2.12Min: 1.84 / Avg: 1.94 / Max: 2.02Min: 2.09 / Avg: 2.1 / Max: 2.11Min: 1.87 / Avg: 1.91 / Max: 1.95Min: 1.73 / Avg: 1.76 / Max: 1.8Min: 1.68 / Avg: 1.7 / Max: 1.73Min: 1.63 / Avg: 1.64 / Max: 1.65Min: 1.77 / Avg: 1.8 / Max: 1.82Min: 1.41 / Avg: 1.42 / Max: 1.42Min: 4.13 / Avg: 4.14 / Max: 4.16Min: 2.5 / Avg: 2.7 / Max: 2.861. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read WriteEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5215K30K45K60K75KSE +/- 18.98, N = 3SE +/- 91.21, N = 3SE +/- 309.38, N = 15SE +/- 447.90, N = 12SE +/- 387.48, N = 3SE +/- 406.89, N = 15SE +/- 174.06, N = 3SE +/- 676.58, N = 3SE +/- 709.54, N = 3SE +/- 539.78, N = 3SE +/- 229.52, N = 3SE +/- 486.09, N = 3SE +/- 283.18, N = 3SE +/- 63.04, N = 3SE +/- 337.73, N = 152133431896385863845847662517944761552325569455888960949556997069024137371521. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read WriteEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5212K24K36K48K60KMin: 21301.63 / Avg: 21333.63 / Max: 21367.31Min: 31714.82 / Avg: 31895.88 / Max: 32005.63Min: 36728.54 / Avg: 38586.13 / Max: 40215.78Min: 36956.38 / Avg: 38457.92 / Max: 41415.74Min: 47265.69 / Avg: 47661.51 / Max: 48436.42Min: 49652.94 / Avg: 51793.61 / Max: 54520.15Min: 47424.2 / Avg: 47614.55 / Max: 47962.14Min: 51433.97 / Avg: 52325.08 / Max: 53652.51Min: 55594.88 / Avg: 56944.6 / Max: 57999Min: 57853.09 / Avg: 58888.92 / Max: 59670.2Min: 60636.54 / Avg: 60949.34 / Max: 61396.7Min: 54882.69 / Avg: 55699.33 / Max: 56564.47Min: 70359.3 / Avg: 70689.79 / Max: 71253.35Min: 24025.48 / Avg: 24137.33 / Max: 24243.66Min: 35001.56 / Avg: 37152.01 / Max: 39986.561. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Coremark

This is a test of EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52400K800K1200K1600K2000KSE +/- 102.44, N = 3SE +/- 270.65, N = 3SE +/- 538.09, N = 3SE +/- 1815.81, N = 3SE +/- 1542.09, N = 3SE +/- 609.31, N = 3SE +/- 707.15, N = 3SE +/- 1150.50, N = 3SE +/- 10873.39, N = 3SE +/- 8320.13, N = 3SE +/- 473.83, N = 3SE +/- 1046.01, N = 3SE +/- 1593.22, N = 3SE +/- 818.34, N = 3SE +/- 1470.27, N = 3293387.40440879.64586659.10603381.33902070.151147733.951123555.301203386.721530338.791550727.151867023.461845888.801985242.50356570.41716393.881. (CC) gcc options: -O2 -lrt" -lrt
OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52300K600K900K1200K1500KMin: 293258.49 / Avg: 293387.4 / Max: 293589.77Min: 440356.87 / Avg: 440879.64 / Max: 441262.65Min: 585752.04 / Avg: 586659.1 / Max: 587614.19Min: 599995.31 / Avg: 603381.33 / Max: 606211.3Min: 899666.14 / Avg: 902070.15 / Max: 904945.39Min: 1146516.78 / Avg: 1147733.95 / Max: 1148394.04Min: 1122462.4 / Avg: 1123555.3 / Max: 1124879.16Min: 1201201.2 / Avg: 1203386.72 / Max: 1205102.86Min: 1517351.75 / Avg: 1530338.79 / Max: 1551938.41Min: 1537352.87 / Avg: 1550727.15 / Max: 1565988.68Min: 1866161.25 / Avg: 1867023.46 / Max: 1867795.13Min: 1844712.66 / Avg: 1845888.8 / Max: 1847975.17Min: 1982728.58 / Avg: 1985242.5 / Max: 1988195.09Min: 355244.72 / Avg: 356570.41 / Max: 358064.49Min: 713469.52 / Avg: 716393.88 / Max: 718122.781. (CC) gcc options: -O2 -lrt" -lrt

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529001800270036004500SE +/- 14.48, N = 3SE +/- 9.26, N = 3SE +/- 7.78, N = 3SE +/- 1.52, N = 3SE +/- 12.54, N = 3SE +/- 1.04, N = 3SE +/- 1.33, N = 3SE +/- 3.65, N = 3SE +/- 2.44, N = 3SE +/- 2.50, N = 3SE +/- 0.31, N = 3SE +/- 12.73, N = 3SE +/- 2.49, N = 3SE +/- 19.31, N = 3SE +/- 3.08, N = 32298.632434.172527.862414.742507.582794.252815.802579.793325.993059.053562.634029.983589.691836.511976.70
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F527001400210028003500Min: 2271.79 / Avg: 2298.63 / Max: 2321.46Min: 2418.87 / Avg: 2434.17 / Max: 2450.86Min: 2515.93 / Avg: 2527.86 / Max: 2542.47Min: 2412.09 / Avg: 2414.74 / Max: 2417.36Min: 2482.49 / Avg: 2507.58 / Max: 2520.14Min: 2792.4 / Avg: 2794.25 / Max: 2795.99Min: 2813.23 / Avg: 2815.8 / Max: 2817.7Min: 2575.39 / Avg: 2579.79 / Max: 2587.03Min: 3322.37 / Avg: 3325.99 / Max: 3330.62Min: 3056.48 / Avg: 3059.05 / Max: 3064.04Min: 3562.19 / Avg: 3562.63 / Max: 3563.22Min: 4015.57 / Avg: 4029.98 / Max: 4055.36Min: 3585.11 / Avg: 3589.69 / Max: 3593.65Min: 1797.95 / Avg: 1836.51 / Max: 1857.52Min: 1971.22 / Avg: 1976.7 / Max: 1981.88

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 31.752.473.163.314.765.735.686.177.207.818.977.888.892.164.04
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP16 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 1.72 / Avg: 1.75 / Max: 1.76Min: 2.45 / Avg: 2.47 / Max: 2.48Min: 3.14 / Avg: 3.16 / Max: 3.18Min: 3.3 / Avg: 3.31 / Max: 3.32Min: 4.74 / Avg: 4.76 / Max: 4.8Min: 5.73 / Avg: 5.73 / Max: 5.73Min: 5.67 / Avg: 5.68 / Max: 5.68Min: 6.13 / Avg: 6.17 / Max: 6.19Min: 7.19 / Avg: 7.2 / Max: 7.22Min: 7.8 / Avg: 7.81 / Max: 7.83Min: 8.97 / Avg: 8.97 / Max: 8.97Min: 7.84 / Avg: 7.88 / Max: 7.9Min: 8.86 / Avg: 8.89 / Max: 8.93Min: 2.14 / Avg: 2.16 / Max: 2.2Min: 4.02 / Avg: 4.04 / Max: 4.07

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP32 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529001800270036004500SE +/- 17.95, N = 3SE +/- 34.12, N = 3SE +/- 13.37, N = 3SE +/- 2.67, N = 3SE +/- 3.50, N = 3SE +/- 0.81, N = 3SE +/- 2.24, N = 3SE +/- 2.29, N = 3SE +/- 3.41, N = 3SE +/- 1.92, N = 3SE +/- 0.45, N = 3SE +/- 8.92, N = 3SE +/- 1.62, N = 3SE +/- 14.55, N = 3SE +/- 1.86, N = 32314.662406.222446.022413.452522.612791.542814.152572.363327.503069.383558.094030.533584.771844.311980.56
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP32 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F527001400210028003500Min: 2283.58 / Avg: 2314.66 / Max: 2345.75Min: 2355.5 / Avg: 2406.22 / Max: 2471.13Min: 2426.14 / Avg: 2446.02 / Max: 2471.46Min: 2410.34 / Avg: 2413.45 / Max: 2418.77Min: 2516.2 / Avg: 2522.61 / Max: 2528.27Min: 2789.93 / Avg: 2791.54 / Max: 2792.44Min: 2809.91 / Avg: 2814.15 / Max: 2817.55Min: 2567.84 / Avg: 2572.36 / Max: 2575.2Min: 3322.21 / Avg: 3327.5 / Max: 3333.86Min: 3065.61 / Avg: 3069.38 / Max: 3071.93Min: 3557.24 / Avg: 3558.09 / Max: 3558.76Min: 4020.18 / Avg: 4030.53 / Max: 4048.29Min: 3581.64 / Avg: 3584.77 / Max: 3587.04Min: 1815.75 / Avg: 1844.31 / Max: 1863.4Min: 1978.38 / Avg: 1980.56 / Max: 1984.26

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP32 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 31.742.473.233.324.735.725.686.137.227.798.987.888.922.164.02
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP32 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 1.71 / Avg: 1.74 / Max: 1.76Min: 2.44 / Avg: 2.47 / Max: 2.5Min: 3.2 / Avg: 3.23 / Max: 3.25Min: 3.31 / Avg: 3.32 / Max: 3.32Min: 4.68 / Avg: 4.73 / Max: 4.79Min: 5.7 / Avg: 5.72 / Max: 5.74Min: 5.67 / Avg: 5.68 / Max: 5.69Min: 6.03 / Avg: 6.13 / Max: 6.19Min: 7.19 / Avg: 7.22 / Max: 7.24Min: 7.78 / Avg: 7.79 / Max: 7.81Min: 8.97 / Avg: 8.98 / Max: 9.01Min: 7.84 / Avg: 7.88 / Max: 7.9Min: 8.9 / Avg: 8.92 / Max: 8.93Min: 2.14 / Avg: 2.16 / Max: 2.18Min: 4.02 / Avg: 4.02 / Max: 4.03

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Unkeyed AlgorithmsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5270140210280350SE +/- 0.14, N = 3SE +/- 0.17, N = 3SE +/- 0.25, N = 3SE +/- 0.12, N = 3SE +/- 0.11, N = 3SE +/- 0.11, N = 3SE +/- 0.11, N = 3SE +/- 0.11, N = 3SE +/- 0.10, N = 3SE +/- 0.93, N = 3SE +/- 0.18, N = 3SE +/- 0.01, N = 3SE +/- 0.55, N = 3SE +/- 0.04, N = 3SE +/- 0.17, N = 3286.19286.18285.25294.44298.49298.71294.27302.93293.88292.60293.81298.73301.54343.61342.931. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe
OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Unkeyed AlgorithmsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300Min: 285.95 / Avg: 286.19 / Max: 286.43Min: 286.01 / Avg: 286.18 / Max: 286.52Min: 284.78 / Avg: 285.25 / Max: 285.61Min: 294.21 / Avg: 294.44 / Max: 294.59Min: 298.33 / Avg: 298.49 / Max: 298.7Min: 298.57 / Avg: 298.71 / Max: 298.93Min: 294.13 / Avg: 294.27 / Max: 294.49Min: 302.75 / Avg: 302.93 / Max: 303.14Min: 293.7 / Avg: 293.88 / Max: 294.04Min: 290.74 / Avg: 292.6 / Max: 293.53Min: 293.61 / Avg: 293.81 / Max: 294.16Min: 298.7 / Avg: 298.73 / Max: 298.75Min: 300.44 / Avg: 301.54 / Max: 302.12Min: 343.56 / Avg: 343.61 / Max: 343.68Min: 342.6 / Avg: 342.93 / Max: 343.111. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Random ReadEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250M100M150M200M250MSE +/- 468827.18, N = 3SE +/- 532212.35, N = 3SE +/- 860580.20, N = 3SE +/- 731623.27, N = 6SE +/- 701112.41, N = 3SE +/- 298644.62, N = 3SE +/- 577476.24, N = 3SE +/- 979987.31, N = 3SE +/- 1503518.94, N = 3SE +/- 1627088.40, N = 3SE +/- 1573991.80, N = 3SE +/- 2628540.04, N = 3SE +/- 2178579.83, N = 3SE +/- 173513.15, N = 3SE +/- 33050.19, N = 33920615656928967748165607697209011439143413933805613303855014748828018509500818571298221471776021846982423823973648331735938831301. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Random ReadEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5240M80M120M160M200MMin: 38308495 / Avg: 39206155.67 / Max: 39889615Min: 55956147 / Avg: 56928967.33 / Max: 57789497Min: 73275140 / Avg: 74816560.33 / Max: 76250454Min: 75222240 / Avg: 76972090.17 / Max: 79656987Min: 112997116 / Avg: 114391434.33 / Max: 115217369Min: 138995330 / Avg: 139338056 / Max: 139933058Min: 131941123 / Avg: 133038549.67 / Max: 133898996Min: 146389489 / Avg: 147488280.33 / Max: 149443243Min: 182409655 / Avg: 185095007.67 / Max: 187609592Min: 182486843 / Avg: 185712982.33 / Max: 187695198Min: 211571907 / Avg: 214717760.33 / Max: 216390965Min: 213921113 / Avg: 218469824.33 / Max: 223026632Min: 235588085 / Avg: 238239735.67 / Max: 242559754Min: 48063107 / Avg: 48331734.67 / Max: 48656311Min: 93839853 / Avg: 93883129.67 / Max: 939480381. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP32 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.27450.5490.82351.0981.3725SE +/- 0.01, N = 4SE +/- 0.00, N = 3SE +/- 0.01, N = 4SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.960.870.790.840.880.840.950.770.991.021.071.221.080.780.79
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP32 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 0.94 / Avg: 0.96 / Max: 0.99Min: 0.87 / Avg: 0.87 / Max: 0.87Min: 0.77 / Avg: 0.79 / Max: 0.81Min: 0.83 / Avg: 0.84 / Max: 0.84Min: 0.87 / Avg: 0.88 / Max: 0.91Min: 0.83 / Avg: 0.84 / Max: 0.84Min: 0.95 / Avg: 0.95 / Max: 0.95Min: 0.77 / Avg: 0.77 / Max: 0.78Min: 0.98 / Avg: 0.99 / Max: 0.99Min: 1.01 / Avg: 1.02 / Max: 1.02Min: 1.06 / Avg: 1.07 / Max: 1.07Min: 1.21 / Avg: 1.22 / Max: 1.22Min: 1.07 / Avg: 1.08 / Max: 1.08Min: 0.78 / Avg: 0.78 / Max: 0.79Min: 0.78 / Avg: 0.79 / Max: 0.79

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP32 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F526K12K18K24K30KSE +/- 46.05, N = 4SE +/- 21.76, N = 3SE +/- 103.67, N = 4SE +/- 6.65, N = 3SE +/- 157.24, N = 3SE +/- 60.57, N = 3SE +/- 24.60, N = 3SE +/- 75.75, N = 3SE +/- 49.13, N = 3SE +/- 4.60, N = 3SE +/- 30.27, N = 3SE +/- 48.27, N = 3SE +/- 87.64, N = 3SE +/- 6.26, N = 3SE +/- 47.83, N = 34115.446803.099884.629314.8213220.9518514.6016443.7319958.4123559.1722996.9328316.5025012.9828270.214984.359866.08
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP32 - Device: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525K10K15K20K25KMin: 4003.8 / Avg: 4115.44 / Max: 4226.83Min: 6767.48 / Avg: 6803.09 / Max: 6842.56Min: 9618.02 / Avg: 9884.62 / Max: 10112.13Min: 9307.85 / Avg: 9314.82 / Max: 9328.12Min: 12906.62 / Avg: 13220.95 / Max: 13386.45Min: 18416.86 / Avg: 18514.6 / Max: 18625.46Min: 16395.26 / Avg: 16443.73 / Max: 16475.33Min: 19820.95 / Avg: 19958.41 / Max: 20082.32Min: 23467.34 / Avg: 23559.17 / Max: 23635.37Min: 22990.44 / Avg: 22996.93 / Max: 23005.82Min: 28279.61 / Avg: 28316.5 / Max: 28376.51Min: 24963.18 / Avg: 25012.98 / Max: 25109.5Min: 28157.11 / Avg: 28270.21 / Max: 28442.73Min: 4971.91 / Avg: 4984.35 / Max: 4991.8Min: 9816.8 / Avg: 9866.08 / Max: 9961.72

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215SE +/- 0.002, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.006, N = 3SE +/- 0.009, N = 3SE +/- 0.006, N = 3SE +/- 0.008, N = 3SE +/- 0.002, N = 3SE +/- 0.007, N = 3SE +/- 0.020, N = 3SE +/- 0.013, N = 3SE +/- 0.006, N = 3SE +/- 0.010, N = 3SE +/- 0.002, N = 3SE +/- 0.004, N = 31.4562.2892.9323.1294.4945.4775.4155.9507.5557.5968.8988.8119.6711.8083.461
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 1.45 / Avg: 1.46 / Max: 1.46Min: 2.29 / Avg: 2.29 / Max: 2.29Min: 2.93 / Avg: 2.93 / Max: 2.93Min: 3.12 / Avg: 3.13 / Max: 3.14Min: 4.48 / Avg: 4.49 / Max: 4.51Min: 5.47 / Avg: 5.48 / Max: 5.49Min: 5.4 / Avg: 5.42 / Max: 5.43Min: 5.95 / Avg: 5.95 / Max: 5.96Min: 7.54 / Avg: 7.56 / Max: 7.57Min: 7.57 / Avg: 7.6 / Max: 7.64Min: 8.88 / Avg: 8.9 / Max: 8.93Min: 8.8 / Avg: 8.81 / Max: 8.82Min: 9.66 / Avg: 9.67 / Max: 9.69Min: 1.8 / Avg: 1.81 / Max: 1.81Min: 3.46 / Avg: 3.46 / Max: 3.47

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.4Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100SE +/- 0.42, N = 3SE +/- 0.59, N = 3SE +/- 0.75, N = 3SE +/- 0.65, N = 4SE +/- 0.47, N = 4SE +/- 0.42, N = 4SE +/- 0.42, N = 4SE +/- 0.41, N = 4SE +/- 0.30, N = 6SE +/- 0.30, N = 6SE +/- 0.28, N = 6SE +/- 0.26, N = 7SE +/- 0.25, N = 7SE +/- 0.46, N = 3SE +/- 0.43, N = 3101.0969.6157.9153.4041.5036.3936.8534.5530.2130.0827.9828.1326.7279.6646.91
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.4Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100Min: 100.61 / Avg: 101.09 / Max: 101.92Min: 68.52 / Avg: 69.61 / Max: 70.53Min: 56.64 / Avg: 57.91 / Max: 59.25Min: 52.39 / Avg: 53.4 / Max: 55.3Min: 40.71 / Avg: 41.5 / Max: 42.85Min: 35.93 / Avg: 36.39 / Max: 37.65Min: 36.38 / Avg: 36.85 / Max: 38.1Min: 34.07 / Avg: 34.55 / Max: 35.78Min: 29.87 / Avg: 30.21 / Max: 31.71Min: 29.67 / Avg: 30.08 / Max: 31.58Min: 27.59 / Avg: 27.98 / Max: 29.37Min: 27.79 / Avg: 28.13 / Max: 29.66Min: 26.43 / Avg: 26.72 / Max: 28.22Min: 79.06 / Avg: 79.66 / Max: 80.57Min: 46.21 / Avg: 46.91 / Max: 47.7

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52510152025SE +/- 0.002, N = 3SE +/- 0.010, N = 3SE +/- 0.005, N = 3SE +/- 0.006, N = 3SE +/- 0.014, N = 3SE +/- 0.031, N = 3SE +/- 0.016, N = 3SE +/- 0.008, N = 3SE +/- 0.038, N = 3SE +/- 0.035, N = 3SE +/- 0.063, N = 3SE +/- 0.043, N = 3SE +/- 0.050, N = 3SE +/- 0.002, N = 3SE +/- 0.013, N = 33.1174.8556.2146.5759.63011.85411.53112.73815.98416.23019.06219.09520.7433.8837.684
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52510152025Min: 3.11 / Avg: 3.12 / Max: 3.12Min: 4.84 / Avg: 4.85 / Max: 4.87Min: 6.21 / Avg: 6.21 / Max: 6.22Min: 6.57 / Avg: 6.58 / Max: 6.59Min: 9.6 / Avg: 9.63 / Max: 9.65Min: 11.8 / Avg: 11.85 / Max: 11.9Min: 11.52 / Avg: 11.53 / Max: 11.56Min: 12.72 / Avg: 12.74 / Max: 12.75Min: 15.93 / Avg: 15.98 / Max: 16.06Min: 16.18 / Avg: 16.23 / Max: 16.3Min: 18.94 / Avg: 19.06 / Max: 19.14Min: 19.02 / Avg: 19.1 / Max: 19.16Min: 20.64 / Avg: 20.74 / Max: 20.8Min: 3.88 / Avg: 3.88 / Max: 3.89Min: 7.67 / Avg: 7.68 / Max: 7.71

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: DLSCEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.06, N = 3SE +/- 0.05, N = 3SE +/- 0.04, N = 3SE +/- 0.05, N = 3SE +/- 0.04, N = 3SE +/- 0.04, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 31.381.982.532.683.834.654.545.036.136.197.217.047.771.683.27MIN: 1.34 / MAX: 1.42MIN: 1.91 / MAX: 2.03MIN: 2.46 / MAX: 2.62MIN: 2.58 / MAX: 2.77MIN: 3.74 / MAX: 4.01MIN: 4.49 / MAX: 5.07MIN: 4.38 / MAX: 4.8MIN: 4.8 / MAX: 5.35MIN: 5.97 / MAX: 6.57MIN: 6.04 / MAX: 6.75MIN: 7.03 / MAX: 7.62MIN: 6.88 / MAX: 7.39MIN: 7.58 / MAX: 8.26MIN: 1.61 / MAX: 1.72MIN: 3.18 / MAX: 3.41
OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: DLSCEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 1.37 / Avg: 1.38 / Max: 1.39Min: 1.97 / Avg: 1.98 / Max: 1.99Min: 2.52 / Avg: 2.53 / Max: 2.55Min: 2.66 / Avg: 2.68 / Max: 2.71Min: 3.81 / Avg: 3.83 / Max: 3.84Min: 4.58 / Avg: 4.65 / Max: 4.76Min: 4.47 / Avg: 4.54 / Max: 4.64Min: 4.96 / Avg: 5.03 / Max: 5.09Min: 6.07 / Avg: 6.13 / Max: 6.22Min: 6.14 / Avg: 6.19 / Max: 6.26Min: 7.13 / Avg: 7.21 / Max: 7.27Min: 6.98 / Avg: 7.04 / Max: 7.07Min: 7.68 / Avg: 7.77 / Max: 7.83Min: 1.66 / Avg: 1.68 / Max: 1.69Min: 3.26 / Avg: 3.27 / Max: 3.29

OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: Rainbow Colors and PrismEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.06, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.04, N = 3SE +/- 0.07, N = 3SE +/- 0.08, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 31.502.142.742.914.175.044.925.546.756.817.927.828.641.843.57MIN: 1.48 / MAX: 1.54MIN: 2.08 / MAX: 2.17MIN: 2.69 / MAX: 2.78MIN: 2.88 / MAX: 2.96MIN: 4.11 / MAX: 4.26MIN: 4.94 / MAX: 5.12MIN: 4.81 / MAX: 5.05MIN: 5.5 / MAX: 5.57MIN: 6.64 / MAX: 6.79MIN: 6.61 / MAX: 6.89MIN: 7.72 / MAX: 8.01MIN: 7.53 / MAX: 7.98MIN: 8.32 / MAX: 8.76MIN: 1.81 / MAX: 1.92MIN: 3.48 / MAX: 3.63
OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: Rainbow Colors and PrismEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 1.5 / Avg: 1.5 / Max: 1.51Min: 2.12 / Avg: 2.14 / Max: 2.15Min: 2.73 / Avg: 2.74 / Max: 2.76Min: 2.89 / Avg: 2.91 / Max: 2.95Min: 4.11 / Avg: 4.17 / Max: 4.22Min: 4.94 / Avg: 5.04 / Max: 5.12Min: 4.81 / Avg: 4.92 / Max: 5.01Min: 5.52 / Avg: 5.54 / Max: 5.55Min: 6.73 / Avg: 6.75 / Max: 6.78Min: 6.71 / Avg: 6.81 / Max: 6.88Min: 7.83 / Avg: 7.92 / Max: 7.98Min: 7.71 / Avg: 7.82 / Max: 7.94Min: 8.49 / Avg: 8.64 / Max: 8.72Min: 1.83 / Avg: 1.84 / Max: 1.87Min: 3.53 / Avg: 3.57 / Max: 3.6

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: SqueezeNetEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250K100K150K200K250KSE +/- 202.36, N = 3SE +/- 82.39, N = 3SE +/- 38.26, N = 3SE +/- 25.85, N = 3SE +/- 6.99, N = 3SE +/- 126.18, N = 3SE +/- 39.58, N = 3SE +/- 77.46, N = 3SE +/- 74.34, N = 3SE +/- 111.10, N = 3SE +/- 174.48, N = 3SE +/- 39.50, N = 3SE +/- 8.41, N = 3SE +/- 44.74, N = 3SE +/- 132.22, N = 3242480.0173807.0129293.0123947.093404.175080.077264.770622.968430.465104.456195.561679.657509.9201258.0107391.0
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: SqueezeNetEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5240K80K120K160K200KMin: 242075 / Avg: 242479.67 / Max: 242688Min: 173701 / Avg: 173806.67 / Max: 173969Min: 129222 / Avg: 129293.33 / Max: 129353Min: 123895 / Avg: 123946.67 / Max: 123974Min: 93391.7 / Avg: 93404.1 / Max: 93415.9Min: 74862.2 / Avg: 75080 / Max: 75299.3Min: 77193.3 / Avg: 77264.7 / Max: 77330Min: 70510.4 / Avg: 70622.9 / Max: 70771.4Min: 68340.5 / Avg: 68430.4 / Max: 68577.9Min: 64904.3 / Avg: 65104.43 / Max: 65288.1Min: 55903.9 / Avg: 56195.5 / Max: 56507.3Min: 61602.1 / Avg: 61679.57 / Max: 61731.7Min: 57494.2 / Avg: 57509.87 / Max: 57523Min: 201180 / Avg: 201257.67 / Max: 201335Min: 107127 / Avg: 107390.67 / Max: 107540

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet FloatEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230K60K90K120K150KSE +/- 22.85, N = 3SE +/- 107.22, N = 3SE +/- 63.38, N = 3SE +/- 21.22, N = 3SE +/- 193.63, N = 3SE +/- 76.82, N = 3SE +/- 28.71, N = 3SE +/- 86.41, N = 3SE +/- 194.69, N = 3SE +/- 322.42, N = 3SE +/- 187.50, N = 3SE +/- 280.28, N = 3SE +/- 34.57, N = 3SE +/- 599.00, N = 3SE +/- 13.80, N = 3160713.0117060.084723.580320.561419.947083.448084.743977.948507.445436.632085.535037.432117.9133236.068637.6
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet FloatEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230K60K90K120K150KMin: 160685 / Avg: 160712.67 / Max: 160758Min: 116941 / Avg: 117060 / Max: 117274Min: 84636.8 / Avg: 84723.47 / Max: 84846.9Min: 80284 / Avg: 80320.5 / Max: 80357.5Min: 61061.3 / Avg: 61419.9 / Max: 61725.8Min: 46946.3 / Avg: 47083.37 / Max: 47212Min: 48044.8 / Avg: 48084.67 / Max: 48140.4Min: 43815.4 / Avg: 43977.87 / Max: 44110.1Min: 48205.1 / Avg: 48507.43 / Max: 48871.1Min: 44937.4 / Avg: 45436.6 / Max: 46039.7Min: 31772.4 / Avg: 32085.53 / Max: 32420.8Min: 34603.7 / Avg: 35037.37 / Max: 35561.8Min: 32050.1 / Avg: 32117.93 / Max: 32163.4Min: 132636 / Avg: 133236 / Max: 134434Min: 68610.2 / Avg: 68637.6 / Max: 68654.2

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet QuantEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5240K80K120K160K200KSE +/- 20.79, N = 3SE +/- 28.48, N = 3SE +/- 47.75, N = 3SE +/- 15.21, N = 3SE +/- 40.77, N = 3SE +/- 44.27, N = 3SE +/- 69.37, N = 3SE +/- 72.07, N = 3SE +/- 518.49, N = 3SE +/- 392.20, N = 3SE +/- 61.64, N = 3SE +/- 118.11, N = 3SE +/- 110.87, N = 3SE +/- 13.05, N = 3SE +/- 61.58, N = 3165426.0120119.086277.682490.462904.147911.749078.545118.150401.845818.833024.036468.833434.6136023.070154.5
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet QuantEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230K60K90K120K150KMin: 165394 / Avg: 165426 / Max: 165465Min: 120062 / Avg: 120118.67 / Max: 120152Min: 86198 / Avg: 86277.57 / Max: 86363.1Min: 82465 / Avg: 82490.43 / Max: 82517.6Min: 62851.8 / Avg: 62904.07 / Max: 62984.4Min: 47856.7 / Avg: 47911.7 / Max: 47999.3Min: 48987 / Avg: 49078.53 / Max: 49214.6Min: 45013.3 / Avg: 45118.1 / Max: 45256.2Min: 49377.6 / Avg: 50401.83 / Max: 51054.3Min: 45327.1 / Avg: 45818.77 / Max: 46593.9Min: 32924.1 / Avg: 33024 / Max: 33136.5Min: 36258.5 / Avg: 36468.83 / Max: 36667.1Min: 33282.2 / Avg: 33434.6 / Max: 33650.3Min: 135998 / Avg: 136023 / Max: 136042Min: 70069.3 / Avg: 70154.47 / Max: 70274.1

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Random Fill SyncEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5270K140K210K280K350KSE +/- 204.78, N = 3SE +/- 223.01, N = 3SE +/- 203.74, N = 3SE +/- 607.97, N = 3SE +/- 219.22, N = 3SE +/- 622.13, N = 3SE +/- 344.84, N = 3SE +/- 749.32, N = 3SE +/- 768.33, N = 3SE +/- 735.95, N = 3SE +/- 166.32, N = 3SE +/- 23.47, N = 3SE +/- 175.05, N = 3SE +/- 436.15, N = 3943711315831635351632482164272593532536892587593313003317913334122871153314391008431725851. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Random Fill SyncEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260K120K180K240K300KMin: 93963 / Avg: 94371 / Max: 94606Min: 131216 / Avg: 131583 / Max: 131986Min: 163163 / Avg: 163535 / Max: 163865Min: 215402 / Avg: 216427 / Max: 217506Min: 259071 / Avg: 259353.33 / Max: 259785Min: 252478 / Avg: 253689 / Max: 254542Min: 258263 / Avg: 258759 / Max: 259422Min: 329808 / Avg: 331300 / Max: 332168Min: 330767 / Avg: 331790.67 / Max: 333295Min: 332027 / Avg: 333412 / Max: 334536Min: 286867 / Avg: 287115 / Max: 287431Min: 331409 / Avg: 331438.67 / Max: 331485Min: 100631 / Avg: 100842.67 / Max: 101190Min: 171773 / Avg: 172585 / Max: 1732671. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 1.9.0-jumbo-1Test: MD5EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521000K2000K3000K4000K5000KSE +/- 798.79, N = 3SE +/- 881.92, N = 3SE +/- 2000.00, N = 3SE +/- 2603.42, N = 3SE +/- 1452.97, N = 3SE +/- 2185.81, N = 3SE +/- 1527.53, N = 3SE +/- 2905.93, N = 3SE +/- 881.92, N = 3SE +/- 881.92, N = 3SE +/- 3282.95, N = 3SE +/- 1527.53, N = 3SE +/- 525.27, N = 3SE +/- 1763.83, N = 371717510723331354000146033321273332596333253400027986673528667355766741953334183000456800087470017316671. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2
OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 1.9.0-jumbo-1Test: MD5EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52800K1600K2400K3200K4000KMin: 716134 / Avg: 717175 / Max: 718745Min: 1071000 / Avg: 1072333.33 / Max: 1074000Min: 1352000 / Avg: 1354000 / Max: 1358000Min: 1456000 / Avg: 1460333.33 / Max: 1465000Min: 2125000 / Avg: 2127333.33 / Max: 2130000Min: 2592000 / Avg: 2596333.33 / Max: 2599000Min: 2531000 / Avg: 2534000 / Max: 2536000Min: 2794000 / Avg: 2798666.67 / Max: 2804000Min: 3527000 / Avg: 3528666.67 / Max: 3530000Min: 3556000 / Avg: 3557666.67 / Max: 3559000Min: 4189000 / Avg: 4195333.33 / Max: 4200000Min: 4566000 / Avg: 4568000 / Max: 4571000Min: 873984 / Avg: 874700.33 / Max: 875724Min: 1729000 / Avg: 1731666.67 / Max: 17350001. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: LargeRandomEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.08550.1710.25650.3420.4275SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.310.320.320.330.330.330.320.330.320.320.320.330.330.380.381. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: LargeRandomEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5212345Min: 0.31 / Avg: 0.31 / Max: 0.31Min: 0.31 / Avg: 0.32 / Max: 0.32Min: 0.31 / Avg: 0.32 / Max: 0.32Min: 0.32 / Avg: 0.33 / Max: 0.33Min: 0.33 / Avg: 0.33 / Max: 0.33Min: 0.33 / Avg: 0.33 / Max: 0.33Min: 0.32 / Avg: 0.32 / Max: 0.32Min: 0.33 / Avg: 0.33 / Max: 0.34Min: 0.32 / Avg: 0.32 / Max: 0.33Min: 0.32 / Avg: 0.32 / Max: 0.33Min: 0.32 / Avg: 0.32 / Max: 0.33Min: 0.33 / Avg: 0.33 / Max: 0.33Min: 0.33 / Avg: 0.33 / Max: 0.33Min: 0.38 / Avg: 0.38 / Max: 0.39Min: 0.38 / Avg: 0.38 / Max: 0.381. (CXX) g++ options: -O3 -pthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: MediumEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5248121620SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 34.726.498.619.1511.2213.9713.6514.9415.5015.9416.4414.9416.505.7310.661. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: MediumEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5248121620Min: 4.72 / Avg: 4.72 / Max: 4.73Min: 6.48 / Avg: 6.49 / Max: 6.49Min: 8.58 / Avg: 8.61 / Max: 8.63Min: 9.13 / Avg: 9.15 / Max: 9.17Min: 11.18 / Avg: 11.22 / Max: 11.24Min: 13.95 / Avg: 13.97 / Max: 13.98Min: 13.61 / Avg: 13.65 / Max: 13.67Min: 14.93 / Avg: 14.94 / Max: 14.95Min: 15.49 / Avg: 15.5 / Max: 15.51Min: 15.92 / Avg: 15.94 / Max: 15.97Min: 16.42 / Avg: 16.44 / Max: 16.48Min: 14.93 / Avg: 14.94 / Max: 14.97Min: 16.44 / Avg: 16.5 / Max: 16.57Min: 5.71 / Avg: 5.73 / Max: 5.74Min: 10.64 / Avg: 10.66 / Max: 10.691. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on Black-Sholes-Merton Process with Analytic European Option engine, QMC (Sobol) Monte-Carlo method (Equity Option Example), Bonds Fixed-rate bond with flat forward curve, and Repo Securities repurchase agreement. FinanceBench was originally written by the Cavazos Lab at University of Delaware. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterFinanceBench 2016-07-25Benchmark: Repo OpenMPEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5211K22K33K44K55KSE +/- 275.73, N = 3SE +/- 108.67, N = 3SE +/- 109.50, N = 3SE +/- 383.79, N = 3SE +/- 17.38, N = 3SE +/- 71.59, N = 3SE +/- 133.22, N = 3SE +/- 84.26, N = 3SE +/- 241.95, N = 3SE +/- 6.70, N = 3SE +/- 71.36, N = 3SE +/- 45.54, N = 3SE +/- 5.24, N = 3SE +/- 107.65, N = 3SE +/- 174.69, N = 353237.6852936.9852977.6051629.1050474.7050609.0451428.8950076.2751630.5051397.3651437.7951015.8749870.8243548.6343527.281. (CXX) g++ options: -O3 -march=native -fopenmp
OpenBenchmarking.orgms, Fewer Is BetterFinanceBench 2016-07-25Benchmark: Repo OpenMPEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529K18K27K36K45KMin: 52860.58 / Avg: 53237.68 / Max: 53774.7Min: 52784.59 / Avg: 52936.98 / Max: 53147.39Min: 52852.14 / Avg: 52977.6 / Max: 53195.79Min: 51241.3 / Avg: 51629.1 / Max: 52396.66Min: 50450.23 / Avg: 50474.7 / Max: 50508.32Min: 50528.84 / Avg: 50609.04 / Max: 50751.85Min: 51271.63 / Avg: 51428.89 / Max: 51693.79Min: 49985.34 / Avg: 50076.27 / Max: 50244.61Min: 51347.45 / Avg: 51630.5 / Max: 52111.92Min: 51383.97 / Avg: 51397.36 / Max: 51404.23Min: 51363.7 / Avg: 51437.79 / Max: 51580.48Min: 50939.99 / Avg: 51015.87 / Max: 51097.43Min: 49860.73 / Avg: 49870.82 / Max: 49878.3Min: 43343.38 / Avg: 43548.63 / Max: 43707.56Min: 43324.05 / Avg: 43527.28 / Max: 43875.021. (CXX) g++ options: -O3 -march=native -fopenmp

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRawTherapeeTotal Benchmark TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521632486480SE +/- 0.02, N = 3SE +/- 0.05, N = 3SE +/- 0.59, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 3SE +/- 0.03, N = 3SE +/- 0.06, N = 3SE +/- 0.04, N = 3SE +/- 0.08, N = 3SE +/- 0.08, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 3SE +/- 0.04, N = 3SE +/- 0.08, N = 3SE +/- 0.12, N = 372.4762.7661.5458.5153.9352.7454.7752.1354.4954.4855.0456.0355.6162.2453.151. RawTherapee, version 5.8, command line.
OpenBenchmarking.orgSeconds, Fewer Is BetterRawTherapeeTotal Benchmark TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521428425670Min: 72.44 / Avg: 72.47 / Max: 72.51Min: 62.66 / Avg: 62.76 / Max: 62.82Min: 60.92 / Avg: 61.54 / Max: 62.73Min: 58.49 / Avg: 58.51 / Max: 58.52Min: 53.88 / Avg: 53.93 / Max: 54Min: 52.69 / Avg: 52.74 / Max: 52.79Min: 54.71 / Avg: 54.77 / Max: 54.89Min: 52.04 / Avg: 52.12 / Max: 52.19Min: 54.34 / Avg: 54.49 / Max: 54.59Min: 54.32 / Avg: 54.48 / Max: 54.59Min: 55.03 / Avg: 55.04 / Max: 55.06Min: 55.95 / Avg: 56.03 / Max: 56.09Min: 55.54 / Avg: 55.61 / Max: 55.68Min: 62.13 / Avg: 62.24 / Max: 62.4Min: 52.9 / Avg: 53.15 / Max: 53.281. RawTherapee, version 5.8, command line.

MBW

This is a basic/simple memory (RAM) bandwidth benchmark for memory copy operations. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterMBW 2018-09-08Test: Memory Copy - Array Size: 8192 MiBEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523K6K9K12K15KSE +/- 33.58, N = 3SE +/- 85.10, N = 3SE +/- 87.56, N = 3SE +/- 60.36, N = 3SE +/- 93.82, N = 3SE +/- 83.23, N = 3SE +/- 36.28, N = 3SE +/- 24.53, N = 3SE +/- 23.45, N = 3SE +/- 36.16, N = 3SE +/- 2.47, N = 3SE +/- 67.71, N = 3SE +/- 38.86, N = 3SE +/- 4.94, N = 3SE +/- 1.29, N = 315523.1415482.7115482.7415480.2915510.9215621.6415503.0515641.0915459.1615528.6515616.9015599.9015534.9815666.7014958.191. (CC) gcc options: -O3 -march=native
OpenBenchmarking.orgMiB/s, More Is BetterMBW 2018-09-08Test: Memory Copy - Array Size: 8192 MiBEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523K6K9K12K15KMin: 15473.92 / Avg: 15523.14 / Max: 15587.31Min: 15312.58 / Avg: 15482.71 / Max: 15571.59Min: 15323.81 / Avg: 15482.73 / Max: 15625.9Min: 15391.63 / Avg: 15480.29 / Max: 15595.56Min: 15344.4 / Avg: 15510.92 / Max: 15669.08Min: 15473.05 / Avg: 15621.64 / Max: 15760.91Min: 15458.28 / Avg: 15503.05 / Max: 15574.89Min: 15608.52 / Avg: 15641.09 / Max: 15689.15Min: 15413.52 / Avg: 15459.16 / Max: 15491.31Min: 15471.64 / Avg: 15528.65 / Max: 15595.7Min: 15613.31 / Avg: 15616.9 / Max: 15621.62Min: 15488.47 / Avg: 15599.9 / Max: 15722.26Min: 15458.19 / Avg: 15534.97 / Max: 15583.72Min: 15657.32 / Avg: 15666.7 / Max: 15674.1Min: 14955.9 / Avg: 14958.19 / Max: 14960.351. (CC) gcc options: -O3 -march=native

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average LatencyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5248121620SE +/- 0.004, N = 3SE +/- 0.040, N = 3SE +/- 0.011, N = 3SE +/- 0.082, N = 6SE +/- 0.075, N = 4SE +/- 0.056, N = 3SE +/- 0.063, N = 5SE +/- 0.034, N = 13SE +/- 0.069, N = 3SE +/- 0.031, N = 3SE +/- 0.063, N = 3SE +/- 0.034, N = 3SE +/- 0.064, N = 15SE +/- 0.020, N = 3SE +/- 0.029, N = 314.1239.9397.7918.2586.0675.4815.6485.3694.8724.9724.6174.8044.40312.1768.8761. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average LatencyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5248121620Min: 14.12 / Avg: 14.12 / Max: 14.13Min: 9.87 / Avg: 9.94 / Max: 10.01Min: 7.77 / Avg: 7.79 / Max: 7.81Min: 7.91 / Avg: 8.26 / Max: 8.44Min: 5.94 / Avg: 6.07 / Max: 6.22Min: 5.37 / Avg: 5.48 / Max: 5.55Min: 5.51 / Avg: 5.65 / Max: 5.86Min: 4.97 / Avg: 5.37 / Max: 5.44Min: 4.76 / Avg: 4.87 / Max: 5Min: 4.91 / Avg: 4.97 / Max: 5.01Min: 4.51 / Avg: 4.62 / Max: 4.73Min: 4.76 / Avg: 4.8 / Max: 4.87Min: 3.96 / Avg: 4.4 / Max: 4.7Min: 12.14 / Avg: 12.18 / Max: 12.2Min: 8.83 / Avg: 8.88 / Max: 8.931. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read WriteEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5212K24K36K48K60KSE +/- 5.29, N = 3SE +/- 98.97, N = 3SE +/- 44.96, N = 3SE +/- 306.40, N = 6SE +/- 510.25, N = 4SE +/- 468.05, N = 3SE +/- 489.37, N = 5SE +/- 320.52, N = 13SE +/- 725.42, N = 3SE +/- 308.59, N = 3SE +/- 734.82, N = 3SE +/- 365.68, N = 3SE +/- 858.45, N = 15SE +/- 32.87, N = 3SE +/- 92.29, N = 31770925170321153031341273456714434346655514445038154318521815710220541281931. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read WriteEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5210K20K30K40K50KMin: 17702.56 / Avg: 17709.27 / Max: 17719.71Min: 25002.83 / Avg: 25170.22 / Max: 25345.42Min: 32033.65 / Avg: 32115.45 / Max: 32188.7Min: 29631.74 / Avg: 30313.35 / Max: 31648.97Min: 40219.85 / Avg: 41272.74 / Max: 42149.61Min: 45084.17 / Avg: 45670.73 / Max: 46595.8Min: 42733.56 / Avg: 44342.61 / Max: 45416.75Min: 46038.91 / Avg: 46655.09 / Max: 50394.39Min: 50134.59 / Avg: 51444.21 / Max: 52639.74Min: 49982.78 / Avg: 50381.39 / Max: 50988.76Min: 53010.6 / Avg: 54318.48 / Max: 55552.89Min: 51458.66 / Avg: 52181.25 / Max: 52640.41Min: 53269.54 / Avg: 57102 / Max: 63379.05Min: 20498.68 / Avg: 20541.36 / Max: 20606Min: 28026.73 / Avg: 28193.29 / Max: 28345.441. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 AtomsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.61321.22641.83962.45283.066SE +/- 0.00080, N = 3SE +/- 0.00045, N = 3SE +/- 0.00125, N = 3SE +/- 0.00048, N = 3SE +/- 0.00125, N = 3SE +/- 0.00041, N = 3SE +/- 0.00074, N = 3SE +/- 0.00071, N = 3SE +/- 0.00037, N = 3SE +/- 0.00034, N = 3SE +/- 0.00014, N = 3SE +/- 0.00053, N = 3SE +/- 0.00033, N = 3SE +/- 0.00063, N = 3SE +/- 0.00173, N = 32.725531.838951.451451.350580.937050.774390.790790.715490.574840.570480.489080.492640.446272.229491.14375
OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 AtomsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 2.72 / Avg: 2.73 / Max: 2.73Min: 1.84 / Avg: 1.84 / Max: 1.84Min: 1.45 / Avg: 1.45 / Max: 1.45Min: 1.35 / Avg: 1.35 / Max: 1.35Min: 0.94 / Avg: 0.94 / Max: 0.94Min: 0.77 / Avg: 0.77 / Max: 0.78Min: 0.79 / Avg: 0.79 / Max: 0.79Min: 0.71 / Avg: 0.72 / Max: 0.72Min: 0.57 / Avg: 0.57 / Max: 0.58Min: 0.57 / Avg: 0.57 / Max: 0.57Min: 0.49 / Avg: 0.49 / Max: 0.49Min: 0.49 / Avg: 0.49 / Max: 0.49Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 2.23 / Avg: 2.23 / Max: 2.23Min: 1.14 / Avg: 1.14 / Max: 1.15

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: PartialTweetsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.13950.2790.41850.5580.6975SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.500.500.500.520.530.520.510.530.520.520.520.520.530.620.611. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: PartialTweetsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 0.49 / Avg: 0.5 / Max: 0.51Min: 0.49 / Avg: 0.5 / Max: 0.51Min: 0.49 / Avg: 0.5 / Max: 0.51Min: 0.52 / Avg: 0.52 / Max: 0.52Min: 0.52 / Avg: 0.53 / Max: 0.53Min: 0.52 / Avg: 0.52 / Max: 0.53Min: 0.51 / Avg: 0.51 / Max: 0.52Min: 0.52 / Avg: 0.53 / Max: 0.54Min: 0.51 / Avg: 0.52 / Max: 0.52Min: 0.51 / Avg: 0.52 / Max: 0.52Min: 0.51 / Avg: 0.52 / Max: 0.52Min: 0.52 / Avg: 0.52 / Max: 0.53Min: 0.52 / Avg: 0.53 / Max: 0.53Min: 0.62 / Avg: 0.62 / Max: 0.62Min: 0.61 / Avg: 0.61 / Max: 0.621. (CXX) g++ options: -O3 -pthread

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserIDEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.14180.28360.42540.56720.709SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 30.520.510.510.530.530.530.520.540.520.520.530.540.540.630.621. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserIDEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 0.51 / Avg: 0.52 / Max: 0.52Min: 0.51 / Avg: 0.51 / Max: 0.52Min: 0.51 / Avg: 0.51 / Max: 0.52Min: 0.53 / Avg: 0.53 / Max: 0.53Min: 0.53 / Avg: 0.53 / Max: 0.54Min: 0.53 / Avg: 0.53 / Max: 0.54Min: 0.52 / Avg: 0.52 / Max: 0.53Min: 0.54 / Avg: 0.54 / Max: 0.55Min: 0.52 / Avg: 0.52 / Max: 0.53Min: 0.52 / Avg: 0.52 / Max: 0.53Min: 0.52 / Avg: 0.53 / Max: 0.53Min: 0.53 / Avg: 0.54 / Max: 0.54Min: 0.54 / Avg: 0.54 / Max: 0.55Min: 0.62 / Avg: 0.63 / Max: 0.63Min: 0.61 / Avg: 0.62 / Max: 0.631. (CXX) g++ options: -O3 -pthread

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterHuginPanorama Photo Assistant + Stitching TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521326395265SE +/- 0.80, N = 3SE +/- 0.07, N = 3SE +/- 0.50, N = 3SE +/- 0.45, N = 3SE +/- 0.41, N = 3SE +/- 0.42, N = 3SE +/- 0.36, N = 3SE +/- 0.30, N = 3SE +/- 0.28, N = 3SE +/- 0.25, N = 3SE +/- 0.23, N = 3SE +/- 0.32, N = 3SE +/- 0.19, N = 3SE +/- 0.33, N = 3SE +/- 0.24, N = 359.3157.2456.1055.7753.3653.7255.5752.9955.4255.3055.9555.6855.6052.9950.59
OpenBenchmarking.orgSeconds, Fewer Is BetterHuginPanorama Photo Assistant + Stitching TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521224364860Min: 57.78 / Avg: 59.31 / Max: 60.47Min: 57.12 / Avg: 57.24 / Max: 57.34Min: 55.21 / Avg: 56.1 / Max: 56.93Min: 54.93 / Avg: 55.77 / Max: 56.46Min: 52.68 / Avg: 53.36 / Max: 54.09Min: 52.98 / Avg: 53.72 / Max: 54.43Min: 55.11 / Avg: 55.57 / Max: 56.28Min: 52.43 / Avg: 52.99 / Max: 53.44Min: 54.86 / Avg: 55.42 / Max: 55.71Min: 54.8 / Avg: 55.3 / Max: 55.59Min: 55.51 / Avg: 55.95 / Max: 56.27Min: 55.04 / Avg: 55.68 / Max: 56.12Min: 55.35 / Avg: 55.6 / Max: 55.97Min: 52.38 / Avg: 52.99 / Max: 53.51Min: 50.13 / Avg: 50.59 / Max: 50.91

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: KostyaEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.11930.23860.35790.47720.5965SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.440.440.440.450.460.460.450.460.450.450.450.460.460.530.531. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: KostyaEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 0.43 / Avg: 0.44 / Max: 0.44Min: 0.44 / Avg: 0.44 / Max: 0.44Min: 0.44 / Avg: 0.44 / Max: 0.44Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.46 / Avg: 0.46 / Max: 0.47Min: 0.53 / Avg: 0.53 / Max: 0.53Min: 0.53 / Avg: 0.53 / Max: 0.531. (CXX) g++ options: -O3 -pthread

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 30MEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150SE +/- 0.30, N = 3SE +/- 0.08, N = 3SE +/- 0.05, N = 3SE +/- 0.17, N = 3SE +/- 0.08, N = 3SE +/- 0.06, N = 3SE +/- 0.09, N = 3SE +/- 0.09, N = 3SE +/- 0.09, N = 3SE +/- 0.07, N = 3SE +/- 0.10, N = 3SE +/- 0.18, N = 3SE +/- 0.06, N = 3SE +/- 0.11, N = 3SE +/- 0.05, N = 3113.1667.2356.5740.8733.8130.6226.5329.6225.1723.4322.8323.3522.3660.5039.891. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 30MEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100Min: 112.77 / Avg: 113.16 / Max: 113.75Min: 67.08 / Avg: 67.23 / Max: 67.35Min: 56.49 / Avg: 56.57 / Max: 56.65Min: 40.67 / Avg: 40.87 / Max: 41.21Min: 33.73 / Avg: 33.81 / Max: 33.97Min: 30.52 / Avg: 30.62 / Max: 30.71Min: 26.35 / Avg: 26.53 / Max: 26.65Min: 29.5 / Avg: 29.62 / Max: 29.8Min: 25.06 / Avg: 25.17 / Max: 25.35Min: 23.3 / Avg: 23.43 / Max: 23.53Min: 22.63 / Avg: 22.83 / Max: 22.97Min: 23.06 / Avg: 23.35 / Max: 23.68Min: 22.24 / Avg: 22.36 / Max: 22.44Min: 60.32 / Avg: 60.5 / Max: 60.7Min: 39.79 / Avg: 39.89 / Max: 39.941. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to PNG output file, the pts/jpexl test is for encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding 0.3.1CPU Threads: AllEPYC 7282EPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F5250100150200250SE +/- 0.06, N = 3SE +/- 0.41, N = 3SE +/- 0.90, N = 3SE +/- 0.42, N = 3SE +/- 0.19, N = 3SE +/- 0.06, N = 3SE +/- 0.29, N = 3197.76171.73188.06136.31145.07183.95207.63
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding 0.3.1CPU Threads: AllEPYC 7282EPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F524080120160200Min: 197.65 / Avg: 197.76 / Max: 197.84Min: 171 / Avg: 171.73 / Max: 172.4Min: 186.86 / Avg: 188.06 / Max: 189.82Min: 135.58 / Avg: 136.31 / Max: 137.03Min: 144.68 / Avg: 145.07 / Max: 145.26Min: 183.83 / Avg: 183.95 / Max: 184.04Min: 207.06 / Avg: 207.63 / Max: 208.05

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPerl BenchmarksTest: InterpreterEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.00020.00040.00060.00080.001SE +/- 0.00000737, N = 3SE +/- 0.00001257, N = 3SE +/- 0.00000497, N = 3SE +/- 0.00000413, N = 3SE +/- 0.00000962, N = 3SE +/- 0.00000724, N = 3SE +/- 0.00000252, N = 3SE +/- 0.00000879, N = 3SE +/- 0.00000921, N = 3SE +/- 0.00000264, N = 3SE +/- 0.00000365, N = 3SE +/- 0.00000621, N = 3SE +/- 0.00000656, N = 3SE +/- 0.00001015, N = 3SE +/- 0.00000483, N = 30.001006080.000980740.000992100.000954780.000955860.000960820.000967530.000954850.000961530.000965790.000974550.000970840.000961160.000835630.00082570
OpenBenchmarking.orgSeconds, Fewer Is BetterPerl BenchmarksTest: InterpreterEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5212345Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed PHP Compilation 7.4.2Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.06, N = 3SE +/- 0.08, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.06, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.21, N = 375.3360.2055.2252.7547.4145.4446.1044.5543.8443.8243.0642.5641.8759.5646.54
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed PHP Compilation 7.4.2Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521428425670Min: 75.27 / Avg: 75.33 / Max: 75.44Min: 60.16 / Avg: 60.2 / Max: 60.27Min: 55.1 / Avg: 55.22 / Max: 55.29Min: 52.62 / Avg: 52.75 / Max: 52.9Min: 47.38 / Avg: 47.41 / Max: 47.44Min: 45.42 / Avg: 45.44 / Max: 45.46Min: 46.08 / Avg: 46.1 / Max: 46.13Min: 44.52 / Avg: 44.55 / Max: 44.58Min: 43.81 / Avg: 43.84 / Max: 43.91Min: 43.8 / Avg: 43.82 / Max: 43.84Min: 43 / Avg: 43.06 / Max: 43.15Min: 42.5 / Avg: 42.56 / Max: 42.67Min: 41.81 / Avg: 41.87 / Max: 41.92Min: 59.55 / Avg: 59.56 / Max: 59.58Min: 46.12 / Avg: 46.54 / Max: 46.75

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 16.02Compress Speed TestEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260K120K180K240K300KSE +/- 84.48, N = 3SE +/- 108.98, N = 3SE +/- 313.91, N = 3SE +/- 192.03, N = 3SE +/- 315.33, N = 3SE +/- 249.88, N = 3SE +/- 894.75, N = 3SE +/- 190.42, N = 3SE +/- 453.51, N = 3SE +/- 370.12, N = 3SE +/- 419.38, N = 3SE +/- 226.28, N = 3SE +/- 1266.96, N = 3SE +/- 288.04, N = 3SE +/- 558.99, N = 345941696559058895137138575171362171242176148229747234062270140264908279656569551090691. (CXX) g++ options: -pipe -lpthread
OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 16.02Compress Speed TestEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250K100K150K200K250KMin: 45801 / Avg: 45941.33 / Max: 46093Min: 69500 / Avg: 69654.67 / Max: 69865Min: 90180 / Avg: 90587.67 / Max: 91205Min: 94939 / Avg: 95137 / Max: 95521Min: 138164 / Avg: 138575.33 / Max: 139195Min: 170978 / Avg: 171362 / Max: 171831Min: 169527 / Avg: 171242 / Max: 172542Min: 175784 / Avg: 176148 / Max: 176427Min: 229189 / Avg: 229746.67 / Max: 230645Min: 233325 / Avg: 234061.67 / Max: 234493Min: 269353 / Avg: 270139.67 / Max: 270785Min: 264563 / Avg: 264907.67 / Max: 265334Min: 278191 / Avg: 279656 / Max: 282179Min: 56532 / Avg: 56954.67 / Max: 57505Min: 108136 / Avg: 109069.33 / Max: 1100691. (CXX) g++ options: -pipe -lpthread

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: XFrog Forest - Renderer: SciVisEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.04, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 31.892.933.724.025.767.016.907.589.529.5811.2411.0712.202.394.45MIN: 1.84 / MAX: 1.91MIN: 2.9 / MAX: 2.97MIN: 3.69 / MAX: 3.76MIN: 3.98 / MAX: 4.05MIN: 5.68 / MAX: 5.81MIN: 6.94 / MAX: 7.09MIN: 6.8 / MAX: 6.99MIN: 7.46 / MAX: 7.63MIN: 9.35 / MAX: 9.62MIN: 9.17 / MAX: 9.71MIN: 10.99 / MAX: 11.36MIN: 10.75 / MAX: 11.24MIN: 11.24 / MAX: 12.35MIN: 2.37 / MAX: 2.42MIN: 4.31 / MAX: 4.52
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: XFrog Forest - Renderer: SciVisEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5248121620Min: 1.89 / Avg: 1.89 / Max: 1.89Min: 2.93 / Avg: 2.93 / Max: 2.93Min: 3.72 / Avg: 3.72 / Max: 3.72Min: 4.02 / Avg: 4.02 / Max: 4.02Min: 5.75 / Avg: 5.76 / Max: 5.78Min: 6.99 / Avg: 7.01 / Max: 7.04Min: 6.9 / Avg: 6.9 / Max: 6.9Min: 7.58 / Avg: 7.58 / Max: 7.58Min: 9.52 / Avg: 9.52 / Max: 9.52Min: 9.52 / Avg: 9.58 / Max: 9.62Min: 11.24 / Avg: 11.24 / Max: 11.24Min: 10.99 / Avg: 11.07 / Max: 11.11Min: 12.2 / Avg: 12.2 / Max: 12.2Min: 2.39 / Avg: 2.39 / Max: 2.39Min: 4.44 / Avg: 4.45 / Max: 4.46

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech SynthesisEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52918273645SE +/- 0.11, N = 4SE +/- 0.08, N = 4SE +/- 0.10, N = 4SE +/- 0.12, N = 4SE +/- 0.07, N = 4SE +/- 0.08, N = 4SE +/- 0.07, N = 4SE +/- 0.04, N = 4SE +/- 0.11, N = 4SE +/- 0.06, N = 4SE +/- 0.09, N = 4SE +/- 0.12, N = 4SE +/- 0.15, N = 4SE +/- 0.07, N = 4SE +/- 0.08, N = 437.4037.4037.5536.3635.6935.7336.2435.2736.4036.2836.2035.9335.3030.8430.751. (CC) gcc options: -O2 -std=c99
OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech SynthesisEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52816243240Min: 37.13 / Avg: 37.4 / Max: 37.65Min: 37.26 / Avg: 37.4 / Max: 37.61Min: 37.36 / Avg: 37.55 / Max: 37.83Min: 36.06 / Avg: 36.36 / Max: 36.65Min: 35.55 / Avg: 35.69 / Max: 35.87Min: 35.5 / Avg: 35.73 / Max: 35.85Min: 36.12 / Avg: 36.24 / Max: 36.4Min: 35.16 / Avg: 35.27 / Max: 35.37Min: 36.11 / Avg: 36.4 / Max: 36.59Min: 36.17 / Avg: 36.28 / Max: 36.45Min: 35.97 / Avg: 36.2 / Max: 36.42Min: 35.63 / Avg: 35.93 / Max: 36.21Min: 34.92 / Avg: 35.3 / Max: 35.57Min: 30.64 / Avg: 30.84 / Max: 30.96Min: 30.52 / Avg: 30.75 / Max: 30.861. (CC) gcc options: -O2 -std=c99

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SETEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52300K600K900K1200K1500KSE +/- 11537.36, N = 4SE +/- 17229.52, N = 15SE +/- 15930.17, N = 3SE +/- 12613.83, N = 5SE +/- 4313.29, N = 3SE +/- 14615.10, N = 4SE +/- 15826.22, N = 15SE +/- 15275.38, N = 3SE +/- 14106.76, N = 3SE +/- 16083.41, N = 14SE +/- 18435.82, N = 15SE +/- 22602.36, N = 12SE +/- 9391.21, N = 3SE +/- 13442.74, N = 3SE +/- 8091.69, N = 31126925.501156241.741143594.631173570.001147451.381180459.911186674.321189270.871170719.461183717.451200527.311228831.791185927.041315900.461336546.211. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SETEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52200K400K600K800K1000KMin: 1105957.12 / Avg: 1126925.5 / Max: 1159285.88Min: 1088613.12 / Avg: 1156241.74 / Max: 1310666.25Min: 1112121 / Avg: 1143594.63 / Max: 1163617.38Min: 1141950.88 / Avg: 1173570 / Max: 1208467Min: 1143020.75 / Avg: 1147451.38 / Max: 1156076.88Min: 1147582.38 / Avg: 1180459.91 / Max: 1209351.5Min: 1117600.38 / Avg: 1186674.32 / Max: 1295341Min: 1161717.5 / Avg: 1189270.87 / Max: 1214476.5Min: 1151024 / Avg: 1170719.46 / Max: 1198062Min: 1125387.38 / Avg: 1183717.45 / Max: 1363145Min: 1107542.38 / Avg: 1200527.31 / Max: 1349007.38Min: 1142730.12 / Avg: 1228831.79 / Max: 1378956.5Min: 1176055.5 / Avg: 1185927.04 / Max: 1204701.12Min: 1289503 / Avg: 1315900.46 / Max: 1333515.38Min: 1320510.25 / Avg: 1336546.21 / Max: 1346451.621. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GETEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52400K800K1200K1600K2000KSE +/- 18495.33, N = 15SE +/- 12289.67, N = 3SE +/- 14367.89, N = 15SE +/- 13583.32, N = 6SE +/- 13170.58, N = 15SE +/- 18437.40, N = 3SE +/- 21214.43, N = 3SE +/- 13573.21, N = 15SE +/- 8071.25, N = 3SE +/- 11135.64, N = 15SE +/- 13883.40, N = 3SE +/- 18256.32, N = 3SE +/- 14975.61, N = 3SE +/- 19118.80, N = 4SE +/- 17849.12, N = 51421564.351372468.881414840.411438157.731466557.031419600.131481804.001508082.411450496.711437534.231417246.671482738.921493222.211601070.501635794.571. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GETEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52300K600K900K1200K1500KMin: 1340662.25 / Avg: 1421564.35 / Max: 1628409.75Min: 1350621.38 / Avg: 1372468.88 / Max: 1393145.75Min: 1349900.62 / Avg: 1414840.41 / Max: 1535877.12Min: 1387351.75 / Avg: 1438157.73 / Max: 1487252.88Min: 1388893.38 / Avg: 1466557.03 / Max: 1567157.5Min: 1387942.88 / Avg: 1419600.13 / Max: 1451804.88Min: 1456461.38 / Avg: 1481804 / Max: 1523945.12Min: 1433897.38 / Avg: 1508082.41 / Max: 1604899.38Min: 1439056 / Avg: 1450496.71 / Max: 1466079.5Min: 1363529.88 / Avg: 1437534.23 / Max: 1507163.75Min: 1397042.75 / Avg: 1417246.67 / Max: 1443844Min: 1456664.25 / Avg: 1482738.92 / Max: 1517911.38Min: 1469521.88 / Avg: 1493222.21 / Max: 1520932Min: 1561051.5 / Avg: 1601070.5 / Max: 1650709.75Min: 1586048 / Avg: 1635794.57 / Max: 1671133.621. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99b6Total TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150SE +/- 0.26, N = 3SE +/- 0.04, N = 3SE +/- 0.27, N = 3SE +/- 0.03, N = 3SE +/- 0.07, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.14, N = 3SE +/- 0.05, N = 3112.3975.0458.4754.8437.7330.9031.5728.5922.8122.5619.0819.1217.5492.1646.591. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99b6Total TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100Min: 112.07 / Avg: 112.39 / Max: 112.9Min: 74.96 / Avg: 75.04 / Max: 75.08Min: 58.2 / Avg: 58.47 / Max: 59Min: 54.8 / Avg: 54.84 / Max: 54.9Min: 37.62 / Avg: 37.73 / Max: 37.87Min: 30.86 / Avg: 30.9 / Max: 30.92Min: 31.53 / Avg: 31.57 / Max: 31.59Min: 28.5 / Avg: 28.59 / Max: 28.69Min: 22.76 / Avg: 22.81 / Max: 22.84Min: 22.52 / Avg: 22.56 / Max: 22.63Min: 19.06 / Avg: 19.08 / Max: 19.12Min: 19.11 / Avg: 19.12 / Max: 19.14Min: 17.52 / Avg: 17.54 / Max: 17.56Min: 91.95 / Avg: 92.16 / Max: 92.42Min: 46.49 / Avg: 46.59 / Max: 46.651. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 19EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150SE +/- 0.06, N = 3SE +/- 0.06, N = 3SE +/- 0.12, N = 3SE +/- 0.03, N = 3SE +/- 0.09, N = 3SE +/- 0.09, N = 3SE +/- 0.00, N = 3SE +/- 0.15, N = 3SE +/- 0.15, N = 3SE +/- 0.12, N = 3SE +/- 0.55, N = 3SE +/- 1.22, N = 3SE +/- 0.38, N = 3SE +/- 0.03, N = 3SE +/- 0.79, N = 337.355.266.473.896.9114.3122.7114.4125.7130.1149.6147.5148.847.076.11. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 19EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150Min: 37.2 / Avg: 37.3 / Max: 37.4Min: 55.1 / Avg: 55.2 / Max: 55.3Min: 66.2 / Avg: 66.4 / Max: 66.6Min: 73.8 / Avg: 73.83 / Max: 73.9Min: 96.8 / Avg: 96.93 / Max: 97.1Min: 114.2 / Avg: 114.33 / Max: 114.5Min: 122.7 / Avg: 122.7 / Max: 122.7Min: 114.2 / Avg: 114.4 / Max: 114.7Min: 125.4 / Avg: 125.67 / Max: 125.9Min: 129.9 / Avg: 130.1 / Max: 130.3Min: 148.7 / Avg: 149.6 / Max: 150.6Min: 145.1 / Avg: 147.53 / Max: 148.8Min: 148.2 / Avg: 148.83 / Max: 149.5Min: 47 / Avg: 47.03 / Max: 47.1Min: 74.9 / Avg: 76.13 / Max: 77.61. (CC) gcc options: -O3 -pthread -lz -llzma

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGFLOP/s, More Is BetterACES DGEMM 1.0Sustained Floating-Point RateEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5248121620SE +/- 0.010754, N = 3SE +/- 0.008716, N = 3SE +/- 0.037637, N = 3SE +/- 0.017716, N = 3SE +/- 0.025438, N = 3SE +/- 0.066319, N = 3SE +/- 0.072162, N = 3SE +/- 0.091101, N = 3SE +/- 0.083534, N = 3SE +/- 0.030087, N = 3SE +/- 0.099835, N = 4SE +/- 0.163720, N = 4SE +/- 0.190487, N = 4SE +/- 0.039059, N = 3SE +/- 0.028662, N = 31.5837043.4640034.4524194.9637157.2082359.3269099.8288718.86333612.73503113.94246516.88116815.61481216.2988523.0115954.8976531. (CC) gcc options: -O3 -march=native -fopenmp
OpenBenchmarking.orgGFLOP/s, More Is BetterACES DGEMM 1.0Sustained Floating-Point RateEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5248121620Min: 1.57 / Avg: 1.58 / Max: 1.61Min: 3.45 / Avg: 3.46 / Max: 3.48Min: 4.39 / Avg: 4.45 / Max: 4.52Min: 4.93 / Avg: 4.96 / Max: 4.99Min: 7.16 / Avg: 7.21 / Max: 7.24Min: 9.25 / Avg: 9.33 / Max: 9.46Min: 9.75 / Avg: 9.83 / Max: 9.97Min: 8.68 / Avg: 8.86 / Max: 8.96Min: 12.62 / Avg: 12.74 / Max: 12.9Min: 13.91 / Avg: 13.94 / Max: 14Min: 16.59 / Avg: 16.88 / Max: 17.02Min: 15.39 / Avg: 15.61 / Max: 16.1Min: 15.78 / Avg: 16.3 / Max: 16.69Min: 2.95 / Avg: 3.01 / Max: 3.08Min: 4.85 / Avg: 4.9 / Max: 4.951. (CC) gcc options: -O3 -march=native -fopenmp

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250SE +/- 0.33, N = 3SE +/- 0.67, N = 3SE +/- 0.33, N = 3SE +/- 0.33, N = 3SE +/- 0.67, N = 3SE +/- 0.33, N = 3SE +/- 0.58, N = 3208208208202200197202198202203202198195171174
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524080120160200Min: 208 / Avg: 208.33 / Max: 209Min: 196 / Avg: 197.33 / Max: 198Min: 197 / Avg: 197.67 / Max: 198Min: 202 / Avg: 202.67 / Max: 203Min: 201 / Avg: 201.67 / Max: 203Min: 195 / Avg: 195.33 / Max: 196Min: 170 / Avg: 171 / Max: 172

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and Convolutional Resamping Benchmark (tConvolve) as well as some previous ASKAP benchmarks being included as well for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - GriddingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525K10K15K20K25KSE +/- 50.92, N = 3SE +/- 54.85, N = 9SE +/- 69.42, N = 3SE +/- 0.00, N = 3SE +/- 82.20, N = 3SE +/- 101.60, N = 3SE +/- 233.09, N = 3SE +/- 136.29, N = 3SE +/- 0.00, N = 3SE +/- 111.87, N = 3SE +/- 197.85, N = 2SE +/- 108.20, N = 2SE +/- 66.13, N = 4SE +/- 0.00, N = 33879.107130.897358.1912204.4013892.5012698.2020585.3012699.5017300.8023040.7020991.7020382.2021311.908265.9916399.701. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - GriddingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524K8K12K16K20KMin: 3802.84 / Avg: 3879.1 / Max: 3975.69Min: 6786.09 / Avg: 7130.89 / Max: 7288.77Min: 7288.77 / Avg: 7358.19 / Max: 7497.02Min: 12204.4 / Avg: 12204.4 / Max: 12204.4Min: 13810.3 / Avg: 13892.5 / Max: 14056.9Min: 12495 / Avg: 12698.2 / Max: 12799.8Min: 20184.3 / Avg: 20585.33 / Max: 20991.7Min: 12495 / Avg: 12699.47 / Max: 12957.8Min: 17300.8 / Avg: 17300.8 / Max: 17300.8Min: 22817 / Avg: 23040.73 / Max: 23152.6Min: 20184.3 / Avg: 20382.15 / Max: 20580Min: 21203.7 / Avg: 21311.9 / Max: 21420.1Min: 8199.86 / Avg: 8265.99 / Max: 8464.38Min: 16399.7 / Avg: 16399.7 / Max: 16399.71. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - DegriddingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525K10K15K20K25KSE +/- 28.76, N = 3SE +/- 48.79, N = 9SE +/- 27.68, N = 3SE +/- 68.60, N = 3SE +/- 63.07, N = 3SE +/- 137.10, N = 3SE +/- 215.40, N = 3SE +/- 162.55, N = 3SE +/- 61.33, N = 3SE +/- 197.00, N = 3SE +/- 121.22, N = 3SE +/- 112.07, N = 3SE +/- 145.73, N = 3SE +/- 74.25, N = 4SE +/- 118.03, N = 33350.226100.186587.5710427.2012173.7012067.3018311.7011976.9016990.1021472.3020993.1020185.5021495.206907.5413692.301. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - DegriddingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524K8K12K16K20KMin: 3321.46 / Avg: 3350.22 / Max: 3407.74Min: 5788.14 / Avg: 6100.18 / Max: 6247.52Min: 6559.89 / Avg: 6587.57 / Max: 6642.93Min: 10290 / Avg: 10427.2 / Max: 10495.8Min: 12110.6 / Avg: 12173.67 / Max: 12299.8Min: 11793.1 / Avg: 12067.3 / Max: 12204.4Min: 18096.3 / Avg: 18311.7 / Max: 18742.5Min: 11662 / Avg: 11976.87 / Max: 12204.4Min: 16928.8 / Avg: 16990.13 / Max: 17112.8Min: 21275.3 / Avg: 21472.3 / Max: 21866.3Min: 20783.8 / Avg: 20993.07 / Max: 21203.7Min: 19992 / Avg: 20185.5 / Max: 20380.2Min: 21203.7 / Avg: 21495.17 / Max: 21640.9Min: 6728.09 / Avg: 6907.54 / Max: 7091.77Min: 13456.2 / Avg: 13692.27 / Max: 13810.31. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: San Miguel - Renderer: SciVisEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521428425670SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.17, N = 310.3115.6319.6120.8330.3037.0435.7140.0050.0050.0058.8255.5662.5012.5022.56MIN: 10.1 / MAX: 10.99MIN: 15.15 / MAX: 16.67MIN: 19.23 / MAX: 21.28MIN: 20.41 / MAX: 22.73MIN: 29.41 / MAX: 32.26MIN: 34.48 / MAX: 40MIN: 33.33 / MAX: 38.46MIN: 37.04 / MAX: 41.67MIN: 47.62 / MAX: 52.63MIN: 47.62 / MAX: 52.63MIN: 52.63 / MAX: 62.5MIN: 50 / MAX: 62.5MIN: 55.56 / MAX: 66.67MIN: 12.2 / MAX: 13.33MIN: 21.28 / MAX: 24.39
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: San Miguel - Renderer: SciVisEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521224364860Min: 10.31 / Avg: 10.31 / Max: 10.31Min: 15.63 / Avg: 15.63 / Max: 15.63Min: 19.61 / Avg: 19.61 / Max: 19.61Min: 20.83 / Avg: 20.83 / Max: 20.83Min: 30.3 / Avg: 30.3 / Max: 30.3Min: 37.04 / Avg: 37.04 / Max: 37.04Min: 35.71 / Avg: 35.71 / Max: 35.71Min: 58.82 / Avg: 58.82 / Max: 58.82Min: 55.56 / Avg: 55.56 / Max: 55.56Min: 62.5 / Avg: 62.5 / Max: 62.5Min: 12.5 / Avg: 12.5 / Max: 12.5Min: 22.22 / Avg: 22.56 / Max: 22.73

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.21EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525001000150020002500SE +/- 10.95, N = 3SE +/- 12.91, N = 3SE +/- 8.78, N = 3SE +/- 10.13, N = 3SE +/- 12.22, N = 3SE +/- 9.97, N = 3SE +/- 10.26, N = 3SE +/- 11.83, N = 3SE +/- 13.45, N = 3SE +/- 13.12, N = 3SE +/- 10.12, N = 3SE +/- 11.47, N = 3SE +/- 11.68, N = 3SE +/- 13.33, N = 3SE +/- 10.66, N = 31892.01893.21887.01952.51977.31984.51922.32016.41945.31944.51955.31970.12004.42285.22300.71. (CXX) g++ options: -O3 -march=native -rdynamic
OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.21EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52400800120016002000Min: 1870.4 / Avg: 1892 / Max: 1905.9Min: 1867.5 / Avg: 1893.2 / Max: 1908.2Min: 1869.4 / Avg: 1886.97 / Max: 1895.9Min: 1932.3 / Avg: 1952.53 / Max: 1963.6Min: 1953.5 / Avg: 1977.33 / Max: 1993.9Min: 1964.7 / Avg: 1984.5 / Max: 1996.4Min: 1902.5 / Avg: 1922.27 / Max: 1936.9Min: 1993.3 / Avg: 2016.43 / Max: 2032.3Min: 1918.7 / Avg: 1945.33 / Max: 1961.9Min: 1918.3 / Avg: 1944.53 / Max: 1958.4Min: 1935.1 / Avg: 1955.3 / Max: 1966.5Min: 1947.7 / Avg: 1970.13 / Max: 1985.5Min: 1981.1 / Avg: 2004.43 / Max: 2017.2Min: 2258.5 / Avg: 2285.17 / Max: 2298.8Min: 2279.8 / Avg: 2300.73 / Max: 2314.71. (CXX) g++ options: -O3 -march=native -rdynamic

Parboil

The Parboil Benchmarks from the IMPACT Research Group at University of Illinois are a set of throughput computing applications for looking at computing architecture and compilers. Parboil test-cases support OpenMP, OpenCL, and CUDA multi-processing environments. However, at this time the test profile is just making use of the OpenMP and OpenCL test workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterParboil 2.5Test: OpenMP LBMEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521020304050SE +/- 0.14, N = 3SE +/- 0.07, N = 3SE +/- 0.02, N = 3SE +/- 0.08, N = 3SE +/- 0.07, N = 3SE +/- 0.02, N = 3SE +/- 0.11, N = 3SE +/- 0.05, N = 3SE +/- 0.26, N = 3SE +/- 0.03, N = 3SE +/- 0.26, N = 5SE +/- 0.07, N = 3SE +/- 0.17, N = 11SE +/- 0.03, N = 3SE +/- 0.03, N = 345.7642.9546.0724.9229.0227.6022.9427.6426.4721.9423.8223.3223.3327.2535.421. (CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp
OpenBenchmarking.orgSeconds, Fewer Is BetterParboil 2.5Test: OpenMP LBMEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52918273645Min: 45.56 / Avg: 45.76 / Max: 46.03Min: 42.88 / Avg: 42.95 / Max: 43.08Min: 46.05 / Avg: 46.07 / Max: 46.1Min: 24.77 / Avg: 24.92 / Max: 25.05Min: 28.89 / Avg: 29.02 / Max: 29.09Min: 27.55 / Avg: 27.6 / Max: 27.63Min: 22.73 / Avg: 22.94 / Max: 23.1Min: 27.53 / Avg: 27.64 / Max: 27.7Min: 26.02 / Avg: 26.47 / Max: 26.92Min: 21.88 / Avg: 21.94 / Max: 21.99Min: 23.2 / Avg: 23.82 / Max: 24.72Min: 23.22 / Avg: 23.32 / Max: 23.46Min: 22.5 / Avg: 23.33 / Max: 24.76Min: 27.2 / Avg: 27.25 / Max: 27.28Min: 35.36 / Avg: 35.42 / Max: 35.471. (CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52130K260K390K520K650KSE +/- 569.07, N = 3SE +/- 2157.92, N = 3SE +/- 973.54, N = 3SE +/- 957.34, N = 3SE +/- 4035.67, N = 3SE +/- 948.15, N = 3SE +/- 4131.15, N = 3SE +/- 741.11, N = 3SE +/- 142.64, N = 3SE +/- 215.70, N = 3SE +/- 1174.39, N = 3SE +/- 1921.47, N = 3SE +/- 1456.19, N = 3SE +/- 7086.33, N = 3SE +/- 1476.72, N = 3511906513356511228524189542521531740532957539060523575521572524751529609540797621672614039
OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52110K220K330K440K550KMin: 511300 / Avg: 511905.67 / Max: 513043Min: 510924 / Avg: 513356.33 / Max: 517660Min: 510027 / Avg: 511228.33 / Max: 513156Min: 522517 / Avg: 524188.67 / Max: 525833Min: 534521 / Avg: 542520.67 / Max: 547450Min: 530354 / Avg: 531740.33 / Max: 533554Min: 524902 / Avg: 532956.67 / Max: 538578Min: 538010 / Avg: 539060 / Max: 540491Min: 523393 / Avg: 523574.67 / Max: 523856Min: 521324 / Avg: 521572.33 / Max: 522002Min: 523197 / Avg: 524750.67 / Max: 527053Min: 525962 / Avg: 529608.67 / Max: 532482Min: 538241 / Avg: 540796.67 / Max: 543284Min: 613060 / Avg: 621672 / Max: 635726Min: 611253 / Avg: 614039 / Max: 616281

ebizzy

This is a test of ebizzy, a program to generate workloads resembling web server workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRecords/s, More Is Betterebizzy 0.3EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52600K1200K1800K2400K3000KSE +/- 4933.88, N = 3SE +/- 906.56, N = 3SE +/- 8956.45, N = 3SE +/- 8057.83, N = 3SE +/- 15686.47, N = 3SE +/- 10663.27, N = 3SE +/- 15669.57, N = 3SE +/- 8453.72, N = 3SE +/- 24033.50, N = 3SE +/- 37169.94, N = 3SE +/- 34594.95, N = 12SE +/- 25794.01, N = 15SE +/- 24194.67, N = 15SE +/- 6904.71, N = 7SE +/- 7117.75, N = 36232728839651021990120846617218541947836197732521368502456511271938827626472701767285378377688014752801. (CC) gcc options: -pthread -lpthread -O3 -march=native
OpenBenchmarking.orgRecords/s, More Is Betterebizzy 0.3EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52500K1000K1500K2000K2500KMin: 616705 / Avg: 623272 / Max: 632934Min: 882270 / Avg: 883965 / Max: 885370Min: 1009030 / Avg: 1021990 / Max: 1039179Min: 1192406 / Avg: 1208466.33 / Max: 1217652Min: 1691610 / Avg: 1721854 / Max: 1744199Min: 1926759 / Avg: 1947835.67 / Max: 1961193Min: 1951917 / Avg: 1977324.67 / Max: 2005917Min: 2127912 / Avg: 2136850 / Max: 2153748Min: 2420178 / Avg: 2456510.67 / Max: 2501931Min: 2645052 / Avg: 2719388 / Max: 2757214Min: 2508914 / Avg: 2762646.92 / Max: 2926207Min: 2510210 / Avg: 2701767.07 / Max: 2845248Min: 2672217 / Avg: 2853782.87 / Max: 2972568Min: 747575 / Avg: 776880.43 / Max: 796470Min: 1461618 / Avg: 1475280 / Max: 14855751. (CC) gcc options: -pthread -lpthread -O3 -march=native

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpix/sec, More Is BetterLibRaw 0.20Post-Processing BenchmarkEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52918273645SE +/- 0.02, N = 3SE +/- 0.05, N = 3SE +/- 0.10, N = 3SE +/- 0.02, N = 3SE +/- 0.07, N = 3SE +/- 0.09, N = 3SE +/- 0.07, N = 3SE +/- 0.05, N = 3SE +/- 0.23, N = 3SE +/- 0.19, N = 3SE +/- 0.17, N = 3SE +/- 0.09, N = 3SE +/- 0.15, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 332.4833.1233.0833.7134.0334.1133.0634.3833.6432.3134.0134.5034.7338.5938.011. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm
OpenBenchmarking.orgMpix/sec, More Is BetterLibRaw 0.20Post-Processing BenchmarkEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52816243240Min: 32.44 / Avg: 32.48 / Max: 32.51Min: 33.04 / Avg: 33.12 / Max: 33.21Min: 32.96 / Avg: 33.08 / Max: 33.27Min: 33.67 / Avg: 33.71 / Max: 33.74Min: 33.89 / Avg: 34.03 / Max: 34.12Min: 33.96 / Avg: 34.11 / Max: 34.27Min: 32.93 / Avg: 33.06 / Max: 33.13Min: 34.31 / Avg: 34.38 / Max: 34.48Min: 33.2 / Avg: 33.64 / Max: 34Min: 32.12 / Avg: 32.31 / Max: 32.69Min: 33.71 / Avg: 34.01 / Max: 34.31Min: 34.32 / Avg: 34.5 / Max: 34.63Min: 34.46 / Avg: 34.73 / Max: 34.97Min: 38.57 / Avg: 38.59 / Max: 38.62Min: 37.96 / Avg: 38.01 / Max: 38.061. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterm-queens 1.2Time To SolveEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.08, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.06, N = 4SE +/- 0.01, N = 4SE +/- 0.03, N = 4SE +/- 0.03, N = 4SE +/- 0.03, N = 4SE +/- 0.01, N = 3SE +/- 0.01, N = 387.1958.2243.7442.3828.1922.6323.0521.3716.6116.3013.7513.8412.7871.5835.921. (CXX) g++ options: -fopenmp -O2 -march=native
OpenBenchmarking.orgSeconds, Fewer Is Betterm-queens 1.2Time To SolveEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100Min: 87.19 / Avg: 87.19 / Max: 87.2Min: 58.19 / Avg: 58.22 / Max: 58.26Min: 43.69 / Avg: 43.74 / Max: 43.81Min: 42.35 / Avg: 42.38 / Max: 42.41Min: 28.07 / Avg: 28.19 / Max: 28.33Min: 22.57 / Avg: 22.63 / Max: 22.73Min: 23.01 / Avg: 23.05 / Max: 23.09Min: 21.31 / Avg: 21.37 / Max: 21.46Min: 16.48 / Avg: 16.61 / Max: 16.78Min: 16.26 / Avg: 16.3 / Max: 16.33Min: 13.72 / Avg: 13.75 / Max: 13.84Min: 13.77 / Avg: 13.84 / Max: 13.89Min: 12.73 / Avg: 12.78 / Max: 12.87Min: 71.55 / Avg: 71.58 / Max: 71.6Min: 35.9 / Avg: 35.92 / Max: 35.931. (CXX) g++ options: -fopenmp -O2 -march=native

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521530456075SE +/- 0.06, N = 3SE +/- 0.10, N = 3SE +/- 0.06, N = 3SE +/- 0.06, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.07, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.06, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 368.5049.9242.8639.7432.0729.1229.2327.8625.5025.2123.9824.0323.0354.1533.98
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521326395265Min: 68.42 / Avg: 68.5 / Max: 68.63Min: 49.75 / Avg: 49.92 / Max: 50.11Min: 42.78 / Avg: 42.86 / Max: 42.98Min: 39.64 / Avg: 39.74 / Max: 39.84Min: 32.01 / Avg: 32.07 / Max: 32.1Min: 29.07 / Avg: 29.12 / Max: 29.19Min: 29.13 / Avg: 29.23 / Max: 29.37Min: 27.84 / Avg: 27.86 / Max: 27.89Min: 25.47 / Avg: 25.5 / Max: 25.56Min: 25.18 / Avg: 25.21 / Max: 25.25Min: 23.93 / Avg: 23.98 / Max: 24.01Min: 23.95 / Avg: 24.03 / Max: 24.14Min: 23.01 / Avg: 23.03 / Max: 23.06Min: 54.13 / Avg: 54.15 / Max: 54.16Min: 33.92 / Avg: 33.98 / Max: 34.07

Etcpak

Etcpack is the self-proclaimed "fastest ETC compressor on the planet" with focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: ETC2EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524080120160200SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.09, N = 3SE +/- 0.03, N = 3SE +/- 0.10, N = 3SE +/- 0.00, N = 3SE +/- 0.08, N = 3SE +/- 0.11, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3131.18131.15130.88135.18137.09137.34135.01139.25135.14135.26135.24137.28138.89159.79159.601. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: ETC2EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150Min: 131.16 / Avg: 131.18 / Max: 131.22Min: 131.11 / Avg: 131.15 / Max: 131.19Min: 130.77 / Avg: 130.88 / Max: 131.05Min: 135.12 / Avg: 135.18 / Max: 135.24Min: 136.98 / Avg: 137.09 / Max: 137.3Min: 137.34 / Avg: 137.34 / Max: 137.35Min: 134.91 / Avg: 135.01 / Max: 135.17Min: 139.03 / Avg: 139.25 / Max: 139.39Min: 135.11 / Avg: 135.14 / Max: 135.2Min: 135.2 / Avg: 135.26 / Max: 135.29Min: 135.2 / Avg: 135.23 / Max: 135.25Min: 137.26 / Avg: 137.28 / Max: 137.3Min: 138.88 / Avg: 138.89 / Max: 138.91Min: 159.77 / Avg: 159.79 / Max: 159.8Min: 159.56 / Avg: 159.6 / Max: 159.651. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average LatencyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.33140.66280.99421.32561.657SE +/- 0.005, N = 3SE +/- 0.002, N = 3SE +/- 0.002, N = 3SE +/- 0.002, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.002, N = 3SE +/- 0.002, N = 3SE +/- 0.002, N = 3SE +/- 0.004, N = 3SE +/- 0.001, N = 31.4730.9080.7290.6790.4800.4070.4260.3820.3160.3230.2670.2790.2511.1730.6301. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average LatencyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 1.47 / Avg: 1.47 / Max: 1.48Min: 0.9 / Avg: 0.91 / Max: 0.91Min: 0.73 / Avg: 0.73 / Max: 0.73Min: 0.68 / Avg: 0.68 / Max: 0.68Min: 0.48 / Avg: 0.48 / Max: 0.48Min: 0.41 / Avg: 0.41 / Max: 0.41Min: 0.43 / Avg: 0.43 / Max: 0.43Min: 0.38 / Avg: 0.38 / Max: 0.38Min: 0.32 / Avg: 0.32 / Max: 0.32Min: 0.32 / Avg: 0.32 / Max: 0.32Min: 0.26 / Avg: 0.27 / Max: 0.27Min: 0.28 / Avg: 0.28 / Max: 0.28Min: 0.25 / Avg: 0.25 / Max: 0.26Min: 1.17 / Avg: 1.17 / Max: 1.18Min: 0.63 / Avg: 0.63 / Max: 0.631. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52200K400K600K800K1000KSE +/- 518.37, N = 3SE +/- 696.35, N = 3SE +/- 804.68, N = 3SE +/- 891.46, N = 3SE +/- 1318.12, N = 3SE +/- 1108.32, N = 3SE +/- 853.89, N = 3SE +/- 1178.79, N = 3SE +/- 3710.21, N = 3SE +/- 1360.28, N = 3SE +/- 6923.68, N = 3SE +/- 5067.03, N = 3SE +/- 6275.15, N = 3SE +/- 681.51, N = 3SE +/- 248.26, N = 31698322756773432293684495216536141835875486557747914567756879397508999929976522133453970381. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52200K400K600K800K1000KMin: 168803.67 / Avg: 169832.3 / Max: 170458.78Min: 274390.74 / Avg: 275676.66 / Max: 276782.78Min: 341698.14 / Avg: 343228.63 / Max: 344424.83Min: 367104.95 / Avg: 368449.41 / Max: 370135.74Min: 519362.06 / Avg: 521653.42 / Max: 523928.08Min: 613074.34 / Avg: 614182.67 / Max: 616399.31Min: 585986.61 / Avg: 587547.73 / Max: 588927.93Min: 654021.41 / Avg: 655774.08 / Max: 658015.96Min: 784040.65 / Avg: 791455.71 / Max: 795407.24Min: 773949.79 / Avg: 775687.15 / Max: 778368.93Min: 926183.25 / Avg: 939750.48 / Max: 948934.04Min: 889916.01 / Avg: 899992.3 / Max: 905966.24Min: 985316.28 / Avg: 997652.34 / Max: 1005820.11Min: 212208.72 / Avg: 213344.58 / Max: 214564.99Min: 396660.01 / Avg: 397038.18 / Max: 397505.911. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average LatencyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.12220.24440.36660.48880.611SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.001, N = 3SE +/- 0.000, N = 30.5430.3590.2900.2800.2010.1640.1730.1530.1200.1200.1090.1120.1020.4640.2631. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average LatencyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 0.54 / Avg: 0.54 / Max: 0.54Min: 0.36 / Avg: 0.36 / Max: 0.36Min: 0.29 / Avg: 0.29 / Max: 0.29Min: 0.28 / Avg: 0.28 / Max: 0.28Min: 0.2 / Avg: 0.2 / Max: 0.2Min: 0.16 / Avg: 0.16 / Max: 0.16Min: 0.17 / Avg: 0.17 / Max: 0.17Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.12 / Avg: 0.12 / Max: 0.12Min: 0.12 / Avg: 0.12 / Max: 0.12Min: 0.11 / Avg: 0.11 / Max: 0.11Min: 0.11 / Avg: 0.11 / Max: 0.11Min: 0.1 / Avg: 0.1 / Max: 0.1Min: 0.46 / Avg: 0.46 / Max: 0.47Min: 0.26 / Avg: 0.26 / Max: 0.261. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52200K400K600K800K1000KSE +/- 258.28, N = 3SE +/- 527.65, N = 3SE +/- 721.66, N = 3SE +/- 406.00, N = 3SE +/- 1145.86, N = 3SE +/- 1605.99, N = 3SE +/- 1146.88, N = 3SE +/- 684.57, N = 3SE +/- 2696.01, N = 3SE +/- 492.11, N = 3SE +/- 2726.39, N = 3SE +/- 3714.01, N = 3SE +/- 1487.88, N = 3SE +/- 196.73, N = 3SE +/- 501.41, N = 31842512785103445983577814991386106675783406549148325098354579155238974539813822156173811841. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read OnlyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52200K400K600K800K1000KMin: 183809.92 / Avg: 184251.3 / Max: 184704.38Min: 277469.8 / Avg: 278509.56 / Max: 279185.65Min: 343314.32 / Avg: 344597.63 / Max: 345811.31Min: 357317.88 / Avg: 357781.31 / Max: 358590.46Min: 497224.75 / Avg: 499138.43 / Max: 501187.22Min: 608958.8 / Avg: 610666.61 / Max: 613876.39Min: 576053.18 / Avg: 578339.82 / Max: 579639.5Min: 654192.39 / Avg: 654914.4 / Max: 656282.85Min: 827135.25 / Avg: 832508.89 / Max: 835580.98Min: 834632.97 / Avg: 835456.65 / Max: 836335.05Min: 911128.88 / Avg: 915522.69 / Max: 920516.11Min: 891671.21 / Avg: 897453.24 / Max: 904382.54Min: 979697.89 / Avg: 981381.7 / Max: 984348.44Min: 215250.14 / Avg: 215616.98 / Max: 215923.6Min: 380600.92 / Avg: 381183.76 / Max: 382181.91. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Bayesian ChangepointEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521020304050SE +/- 0.11, N = 3SE +/- 0.13, N = 3SE +/- 0.48, N = 3SE +/- 0.10, N = 3SE +/- 0.24, N = 3SE +/- 0.27, N = 3SE +/- 0.11, N = 3SE +/- 0.20, N = 3SE +/- 0.14, N = 3SE +/- 0.24, N = 3SE +/- 0.35, N = 3SE +/- 0.26, N = 3SE +/- 0.40, N = 3SE +/- 0.22, N = 3SE +/- 0.20, N = 345.2138.0736.3534.8434.3132.6533.4332.8733.9633.6033.4133.2732.6235.6729.45
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Bayesian ChangepointEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52918273645Min: 45 / Avg: 45.21 / Max: 45.37Min: 37.91 / Avg: 38.07 / Max: 38.32Min: 35.71 / Avg: 36.35 / Max: 37.28Min: 34.65 / Avg: 34.84 / Max: 35.01Min: 33.83 / Avg: 34.31 / Max: 34.61Min: 32.15 / Avg: 32.64 / Max: 33.08Min: 33.26 / Avg: 33.43 / Max: 33.62Min: 32.54 / Avg: 32.87 / Max: 33.22Min: 33.82 / Avg: 33.96 / Max: 34.23Min: 33.36 / Avg: 33.6 / Max: 34.09Min: 32.9 / Avg: 33.41 / Max: 34.08Min: 32.84 / Avg: 33.27 / Max: 33.73Min: 32.22 / Avg: 32.62 / Max: 33.42Min: 35.42 / Avg: 35.67 / Max: 36.1Min: 29.13 / Avg: 29.45 / Max: 29.83

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlibEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52510152025SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 321.121.021.320.420.219.920.419.920.520.520.520.219.917.217.5
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlibEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52510152025Min: 21.1 / Avg: 21.1 / Max: 21.1Min: 20.9 / Avg: 20.97 / Max: 21Min: 21.2 / Avg: 21.27 / Max: 21.3Min: 20.4 / Avg: 20.43 / Max: 20.5Min: 20.2 / Avg: 20.23 / Max: 20.3Min: 19.9 / Avg: 19.9 / Max: 19.9Min: 20.4 / Avg: 20.43 / Max: 20.5Min: 19.8 / Avg: 19.87 / Max: 19.9Min: 20.5 / Avg: 20.5 / Max: 20.5Min: 20.5 / Avg: 20.5 / Max: 20.5Min: 20.5 / Avg: 20.53 / Max: 20.6Min: 20.2 / Avg: 20.2 / Max: 20.2Min: 19.9 / Avg: 19.9 / Max: 19.9Min: 17.2 / Avg: 17.2 / Max: 17.2Min: 17.5 / Avg: 17.5 / Max: 17.5

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Socket ActivityEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524K8K12K16K20KSE +/- 24.34, N = 3SE +/- 8.24, N = 3SE +/- 16.90, N = 3SE +/- 6.53, N = 3SE +/- 16.45, N = 3SE +/- 30.09, N = 3SE +/- 125.44, N = 9SE +/- 32.93, N = 3SE +/- 24.65, N = 3SE +/- 88.09, N = 3SE +/- 64.33, N = 3SE +/- 57.10, N = 3SE +/- 21.66, N = 3SE +/- 13.57, N = 3SE +/- 17.97, N = 34965.967332.829134.729499.3313570.8316396.5715568.2417266.2118192.3218451.9119990.7519577.0320955.935638.538415.191. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Socket ActivityEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524K8K12K16K20KMin: 4920.56 / Avg: 4965.96 / Max: 5003.89Min: 7316.76 / Avg: 7332.82 / Max: 7344.03Min: 9104 / Avg: 9134.72 / Max: 9162.29Min: 9488.78 / Avg: 9499.33 / Max: 9511.28Min: 13539.78 / Avg: 13570.83 / Max: 13595.76Min: 16363.28 / Avg: 16396.57 / Max: 16456.63Min: 14571.02 / Avg: 15568.24 / Max: 15743.07Min: 17206.64 / Avg: 17266.21 / Max: 17320.33Min: 18145.03 / Avg: 18192.32 / Max: 18228.03Min: 18293.63 / Avg: 18451.91 / Max: 18598.07Min: 19911.11 / Avg: 19990.75 / Max: 20118.07Min: 19482.9 / Avg: 19577.03 / Max: 19680.11Min: 20921.85 / Avg: 20955.93 / Max: 20996.13Min: 5624.62 / Avg: 5638.53 / Max: 5665.66Min: 8394.92 / Avg: 8415.19 / Max: 8451.031. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Basis Universal

Basis Universal is a GPU texture codoec. This test times how long it takes to convert sRGB PNGs into Basis Univeral assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 3EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521632486480SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 373.4351.7741.2439.7529.1625.2825.8823.9720.8020.6918.6618.8717.9860.4033.801. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 3EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521428425670Min: 73.43 / Avg: 73.43 / Max: 73.44Min: 51.75 / Avg: 51.77 / Max: 51.78Min: 41.2 / Avg: 41.24 / Max: 41.3Min: 39.75 / Avg: 39.75 / Max: 39.76Min: 29.15 / Avg: 29.16 / Max: 29.17Min: 25.27 / Avg: 25.28 / Max: 25.31Min: 25.87 / Avg: 25.88 / Max: 25.91Min: 23.96 / Avg: 23.97 / Max: 23.98Min: 20.77 / Avg: 20.8 / Max: 20.82Min: 20.68 / Avg: 20.69 / Max: 20.7Min: 18.65 / Avg: 18.66 / Max: 18.66Min: 18.86 / Avg: 18.87 / Max: 18.88Min: 17.94 / Avg: 17.98 / Max: 18.03Min: 60.39 / Avg: 60.4 / Max: 60.41Min: 33.79 / Avg: 33.8 / Max: 33.811. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

miniFE

MiniFE Finite Element is an application for unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgCG Mflops, More Is BetterminiFE 2.2Problem Size: SmallEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524K8K12K16K20KSE +/- 10.40, N = 3SE +/- 16.20, N = 3SE +/- 1.40, N = 3SE +/- 24.99, N = 3SE +/- 5.98, N = 3SE +/- 22.39, N = 3SE +/- 8.80, N = 3SE +/- 5.12, N = 3SE +/- 68.31, N = 3SE +/- 29.81, N = 3SE +/- 5.01, N = 3SE +/- 34.52, N = 3SE +/- 8.90, N = 3SE +/- 8.04, N = 3SE +/- 66.01, N = 1410156.6010066.6010003.6017082.4016743.6016649.5019645.4016649.7018424.1019436.3019256.7019155.4019245.9017351.206787.061. (CXX) g++ options: -O3 -fopenmp -pthread -lmpi_cxx -lmpi
OpenBenchmarking.orgCG Mflops, More Is BetterminiFE 2.2Problem Size: SmallEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523K6K9K12K15KMin: 10136.2 / Avg: 10156.6 / Max: 10170.3Min: 10034.4 / Avg: 10066.63 / Max: 10085.6Min: 10000.9 / Avg: 10003.63 / Max: 10005.5Min: 17033.7 / Avg: 17082.43 / Max: 17116.4Min: 16735 / Avg: 16743.6 / Max: 16755.1Min: 16604.7 / Avg: 16649.47 / Max: 16672.4Min: 19632.2 / Avg: 19645.43 / Max: 19662.1Min: 16643.8 / Avg: 16649.7 / Max: 16659.9Min: 18336.1 / Avg: 18424.1 / Max: 18558.6Min: 19380.5 / Avg: 19436.3 / Max: 19482.4Min: 19250.6 / Avg: 19256.67 / Max: 19266.6Min: 19088.2 / Avg: 19155.37 / Max: 19202.8Min: 19230.9 / Avg: 19245.87 / Max: 19261.7Min: 17335.9 / Avg: 17351.17 / Max: 17363.2Min: 5976.74 / Avg: 6787.06 / Max: 6965.651. (CXX) g++ options: -O3 -fopenmp -pthread -lmpi_cxx -lmpi

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterC-Ray 1.1Total Time - 4K, 16 Rays Per PixelEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 4SE +/- 0.01, N = 4SE +/- 0.02, N = 4SE +/- 0.01, N = 4SE +/- 0.03, N = 4SE +/- 0.02, N = 3SE +/- 0.02, N = 379.1752.8139.6638.4925.6820.9121.4319.5515.2115.0412.8212.8511.9064.9732.581. (CC) gcc options: -lm -lpthread -O3
OpenBenchmarking.orgSeconds, Fewer Is BetterC-Ray 1.1Total Time - 4K, 16 Rays Per PixelEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521530456075Min: 79.13 / Avg: 79.17 / Max: 79.22Min: 52.8 / Avg: 52.81 / Max: 52.82Min: 39.61 / Avg: 39.66 / Max: 39.7Min: 38.48 / Avg: 38.49 / Max: 38.51Min: 25.67 / Avg: 25.68 / Max: 25.7Min: 20.89 / Avg: 20.91 / Max: 20.94Min: 21.4 / Avg: 21.43 / Max: 21.45Min: 19.49 / Avg: 19.55 / Max: 19.6Min: 15.17 / Avg: 15.21 / Max: 15.24Min: 15.01 / Avg: 15.04 / Max: 15.06Min: 12.79 / Avg: 12.81 / Max: 12.87Min: 12.83 / Avg: 12.85 / Max: 12.87Min: 11.82 / Avg: 11.9 / Max: 11.96Min: 64.93 / Avg: 64.97 / Max: 64.99Min: 32.55 / Avg: 32.58 / Max: 32.611. (CC) gcc options: -lm -lpthread -O3

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: LU.CEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220K40K60K80K100KSE +/- 12.95, N = 3SE +/- 24.66, N = 3SE +/- 22.56, N = 3SE +/- 3.68, N = 3SE +/- 81.46, N = 3SE +/- 106.91, N = 3SE +/- 26.26, N = 3SE +/- 207.08, N = 3SE +/- 82.31, N = 3SE +/- 281.17, N = 3SE +/- 298.03, N = 3SE +/- 34.41, N = 3SE +/- 491.67, N = 3SE +/- 31.62, N = 3SE +/- 82.34, N = 333816.4843321.3849617.9261041.2774421.6178846.9686547.2879644.8895079.4199557.23103985.91102548.95105428.6743172.5144891.691. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: LU.CEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220K40K60K80K100KMin: 33792.21 / Avg: 33816.48 / Max: 33836.43Min: 43285.83 / Avg: 43321.38 / Max: 43368.77Min: 49572.87 / Avg: 49617.92 / Max: 49642.69Min: 61034.85 / Avg: 61041.27 / Max: 61047.59Min: 74315 / Avg: 74421.61 / Max: 74581.6Min: 78710.27 / Avg: 78846.96 / Max: 79057.7Min: 86498.25 / Avg: 86547.28 / Max: 86588.09Min: 79399.38 / Avg: 79644.88 / Max: 80056.49Min: 94922.44 / Avg: 95079.41 / Max: 95200.87Min: 98994.91 / Avg: 99557.23 / Max: 99842.55Min: 103567 / Avg: 103985.91 / Max: 104562.59Min: 102490.22 / Avg: 102548.95 / Max: 102609.37Min: 104525.99 / Avg: 105428.67 / Max: 106217.81Min: 43122.35 / Avg: 43172.51 / Max: 43230.93Min: 44754.21 / Avg: 44891.69 / Max: 45038.951. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Context SwitchingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525M10M15M20M25MSE +/- 9437.39, N = 3SE +/- 15626.15, N = 3SE +/- 32954.83, N = 3SE +/- 35625.01, N = 3SE +/- 28222.94, N = 3SE +/- 19374.10, N = 3SE +/- 55451.20, N = 3SE +/- 65922.96, N = 3SE +/- 29075.41, N = 3SE +/- 79112.63, N = 3SE +/- 163606.09, N = 3SE +/- 235328.27, N = 3SE +/- 206001.50, N = 7SE +/- 6120.44, N = 3SE +/- 30731.73, N = 33438510.355095560.166612368.677003110.1810443329.1212896969.6412627261.9313858394.6117727598.8417595501.8120981193.8820917619.7622318900.994265815.038225794.401. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Context SwitchingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524M8M12M16M20MMin: 3425266.43 / Avg: 3438510.35 / Max: 3456778.84Min: 5064455.01 / Avg: 5095560.16 / Max: 5113736.06Min: 6549932.25 / Avg: 6612368.67 / Max: 6661871.66Min: 6942183.09 / Avg: 7003110.18 / Max: 7065563.44Min: 10405966.38 / Avg: 10443329.12 / Max: 10498652.3Min: 12861868.91 / Avg: 12896969.64 / Max: 12928733.42Min: 12516504.31 / Avg: 12627261.93 / Max: 12687546.78Min: 13729117.41 / Avg: 13858394.61 / Max: 13945462.29Min: 17694612.55 / Avg: 17727598.84 / Max: 17785565.68Min: 17508090.68 / Avg: 17595501.81 / Max: 17753426.01Min: 20708784.06 / Avg: 20981193.88 / Max: 21274387.73Min: 20467620.04 / Avg: 20917619.76 / Max: 21262048.98Min: 21674928.96 / Avg: 22318900.99 / Max: 22997780.77Min: 4253702.33 / Avg: 4265815.03 / Max: 4273401.47Min: 8193174.36 / Avg: 8225794.4 / Max: 8287218.361. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: NASA Streamlines - Renderer: Path TracerEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5248121620SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.04, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 3SE +/- 0.02, N = 32.824.395.656.078.7710.6010.4211.3614.2914.4916.9516.6718.183.656.83MIN: 2.79 / MAX: 2.86MIN: 4.33 / MAX: 4.44MIN: 5.59 / MAX: 5.75MIN: 5.99 / MAX: 6.13MIN: 8.62 / MAX: 8.85MIN: 10.42 / MAX: 10.75MIN: 10.2 / MAX: 10.53MIN: 11.11 / MAX: 11.49MIN: 13.89 / MAX: 14.49MIN: 14.08 / MAX: 14.71MIN: 16.39 / MAX: 17.24MIN: 16.39 / MAX: 16.95MIN: 16.39 / MAX: 18.87MIN: 3.58 / MAX: 3.69MIN: 6.67 / MAX: 6.94
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: NASA Streamlines - Renderer: Path TracerEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52510152025Min: 2.82 / Avg: 2.82 / Max: 2.83Min: 4.39 / Avg: 4.39 / Max: 4.39Min: 5.65 / Avg: 5.65 / Max: 5.65Min: 6.06 / Avg: 6.07 / Max: 6.1Min: 8.77 / Avg: 8.77 / Max: 8.77Min: 10.53 / Avg: 10.6 / Max: 10.64Min: 10.42 / Avg: 10.42 / Max: 10.42Min: 11.36 / Avg: 11.36 / Max: 11.36Min: 14.29 / Avg: 14.29 / Max: 14.29Min: 14.49 / Avg: 14.49 / Max: 14.49Min: 16.95 / Avg: 16.95 / Max: 16.95Min: 16.67 / Avg: 16.67 / Max: 16.67Min: 18.18 / Avg: 18.18 / Max: 18.18Min: 3.64 / Avg: 3.65 / Max: 3.65Min: 6.8 / Avg: 6.83 / Max: 6.85

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4KEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52612182430SE +/- 0.09, N = 3SE +/- 0.07, N = 3SE +/- 0.04, N = 3SE +/- 0.09, N = 3SE +/- 0.08, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.07, N = 3SE +/- 0.06, N = 3SE +/- 0.08, N = 3SE +/- 0.21, N = 3SE +/- 0.08, N = 3SE +/- 0.01, N = 3SE +/- 0.13, N = 4SE +/- 0.12, N = 39.2017.0920.0320.4423.8425.1323.7125.2125.3625.2325.6525.0425.5710.2020.001. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4KEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52612182430Min: 9.07 / Avg: 9.2 / Max: 9.37Min: 16.96 / Avg: 17.09 / Max: 17.21Min: 19.97 / Avg: 20.03 / Max: 20.1Min: 20.34 / Avg: 20.44 / Max: 20.61Min: 23.72 / Avg: 23.84 / Max: 23.98Min: 25.08 / Avg: 25.13 / Max: 25.16Min: 23.64 / Avg: 23.71 / Max: 23.81Min: 25.09 / Avg: 25.21 / Max: 25.34Min: 25.3 / Avg: 25.36 / Max: 25.48Min: 25.14 / Avg: 25.23 / Max: 25.39Min: 25.41 / Avg: 25.65 / Max: 26.06Min: 24.94 / Avg: 25.04 / Max: 25.19Min: 25.56 / Avg: 25.57 / Max: 25.58Min: 9.85 / Avg: 10.2 / Max: 10.45Min: 19.78 / Avg: 20 / Max: 20.21. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.02020, N = 3SE +/- 0.00923, N = 3SE +/- 0.01071, N = 3SE +/- 0.01740, N = 3SE +/- 0.00832, N = 3SE +/- 0.01343, N = 3SE +/- 0.01295, N = 3SE +/- 0.00152, N = 3SE +/- 0.01668, N = 15SE +/- 0.02012, N = 3SE +/- 0.01627, N = 15SE +/- 0.00813, N = 3SE +/- 0.02021, N = 3SE +/- 0.06606, N = 3SE +/- 0.00961, N = 36.122614.440773.501753.283772.489772.197272.222422.048662.157941.987262.022312.335192.068845.032362.75743MIN: 5.93MIN: 4.2MIN: 3.1MIN: 3.08MIN: 2.4MIN: 2.12MIN: 2.06MIN: 1.99MIN: 1.99MIN: 1.87MIN: 1.84MIN: 2.08MIN: 1.89MIN: 4.86MIN: 2.631. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 6.1 / Avg: 6.12 / Max: 6.16Min: 4.43 / Avg: 4.44 / Max: 4.46Min: 3.49 / Avg: 3.5 / Max: 3.52Min: 3.25 / Avg: 3.28 / Max: 3.3Min: 2.48 / Avg: 2.49 / Max: 2.51Min: 2.18 / Avg: 2.2 / Max: 2.22Min: 2.21 / Avg: 2.22 / Max: 2.25Min: 2.05 / Avg: 2.05 / Max: 2.05Min: 2.07 / Avg: 2.16 / Max: 2.28Min: 1.95 / Avg: 1.99 / Max: 2.02Min: 1.93 / Avg: 2.02 / Max: 2.15Min: 2.32 / Avg: 2.34 / Max: 2.35Min: 2.04 / Avg: 2.07 / Max: 2.11Min: 4.94 / Avg: 5.03 / Max: 5.16Min: 2.74 / Avg: 2.76 / Max: 2.771. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterCloverLeafLagrangian-Eulerian HydrodynamicsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521632486480SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.04, N = 3SE +/- 0.09, N = 3SE +/- 0.06, N = 3SE +/- 0.04, N = 3SE +/- 0.07, N = 4SE +/- 0.03, N = 3SE +/- 0.02, N = 4SE +/- 0.08, N = 4SE +/- 0.07, N = 4SE +/- 0.05, N = 4SE +/- 0.12, N = 15SE +/- 0.09, N = 3SE +/- 0.06, N = 369.9148.2945.4625.2120.3519.0715.2918.6115.2813.6614.0314.7014.4036.8022.831. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
OpenBenchmarking.orgSeconds, Fewer Is BetterCloverLeafLagrangian-Eulerian HydrodynamicsEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521428425670Min: 69.87 / Avg: 69.91 / Max: 69.96Min: 48.21 / Avg: 48.29 / Max: 48.38Min: 45.38 / Avg: 45.46 / Max: 45.52Min: 25.12 / Avg: 25.21 / Max: 25.39Min: 20.26 / Avg: 20.35 / Max: 20.46Min: 19 / Avg: 19.07 / Max: 19.13Min: 15.1 / Avg: 15.29 / Max: 15.4Min: 18.57 / Avg: 18.61 / Max: 18.67Min: 15.23 / Avg: 15.28 / Max: 15.32Min: 13.55 / Avg: 13.66 / Max: 13.9Min: 13.91 / Avg: 14.03 / Max: 14.23Min: 14.59 / Avg: 14.7 / Max: 14.82Min: 14.05 / Avg: 14.4 / Max: 15.79Min: 36.61 / Avg: 36.8 / Max: 36.9Min: 22.76 / Avg: 22.83 / Max: 22.941. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 3EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KSE +/- 5.69, N = 3SE +/- 6.17, N = 3SE +/- 3.19, N = 3SE +/- 34.67, N = 3SE +/- 21.24, N = 3SE +/- 18.74, N = 3SE +/- 14.29, N = 3SE +/- 4.93, N = 3SE +/- 25.58, N = 3SE +/- 34.97, N = 3SE +/- 8.45, N = 3SE +/- 33.88, N = 3SE +/- 6.83, N = 3SE +/- 4.75, N = 3SE +/- 20.53, N = 35123.26498.46706.27699.48033.07903.68476.37885.08172.08499.28287.78248.38270.46573.97899.71. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 3EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5215003000450060007500Min: 5117.2 / Avg: 5123.23 / Max: 5134.6Min: 6486.1 / Avg: 6498.4 / Max: 6505.4Min: 6700.4 / Avg: 6706.17 / Max: 6711.4Min: 7651 / Avg: 7699.4 / Max: 7766.6Min: 7990.6 / Avg: 8033.03 / Max: 8056Min: 7870.9 / Avg: 7903.63 / Max: 7935.8Min: 8449.2 / Avg: 8476.3 / Max: 8497.7Min: 7875.6 / Avg: 7885 / Max: 7892.3Min: 8129 / Avg: 8172.03 / Max: 8217.5Min: 8460.8 / Avg: 8499.17 / Max: 8569Min: 8277.2 / Avg: 8287.67 / Max: 8304.4Min: 8204.5 / Avg: 8248.33 / Max: 8315Min: 8256.7 / Avg: 8270.37 / Max: 8277.3Min: 6567.3 / Avg: 6573.87 / Max: 6583.1Min: 7860.7 / Avg: 7899.7 / Max: 7930.31. (CC) gcc options: -O3 -pthread -lz -llzma

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU StressEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F525K10K15K20K25KSE +/- 8.87, N = 3SE +/- 5.85, N = 3SE +/- 4.51, N = 3SE +/- 1.11, N = 3SE +/- 3.25, N = 3SE +/- 8.85, N = 3SE +/- 14.79, N = 3SE +/- 39.53, N = 3SE +/- 23.53, N = 3SE +/- 30.57, N = 3SE +/- 45.79, N = 3SE +/- 24.42, N = 3SE +/- 63.30, N = 3SE +/- 4.39, N = 3SE +/- 1.01, N = 32708.904078.215463.455622.828541.1211121.5910901.4711234.6015958.1616152.0820174.7619926.6121092.563306.396629.061. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU StressEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524K8K12K16K20KMin: 2691.46 / Avg: 2708.9 / Max: 2720.48Min: 4067.37 / Avg: 4078.21 / Max: 4087.43Min: 5456.78 / Avg: 5463.45 / Max: 5472.04Min: 5620.68 / Avg: 5622.82 / Max: 5624.41Min: 8535.73 / Avg: 8541.12 / Max: 8546.95Min: 11109.84 / Avg: 11121.59 / Max: 11138.93Min: 10874.37 / Avg: 10901.47 / Max: 10925.3Min: 11159.1 / Avg: 11234.6 / Max: 11292.66Min: 15919.41 / Avg: 15958.16 / Max: 16000.66Min: 16091.15 / Avg: 16152.08 / Max: 16186.88Min: 20083.31 / Avg: 20174.76 / Max: 20224.79Min: 19890.13 / Avg: 19926.61 / Max: 19972.98Min: 20966.66 / Avg: 21092.56 / Max: 21167.03Min: 3301.92 / Avg: 3306.39 / Max: 3315.16Min: 6628.02 / Avg: 6629.06 / Max: 6631.081. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgHydro Cycle Time - Seconds, Fewer Is BetterPennant 1.0.1Test: sedovbigEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521530456075SE +/- 0.14, N = 3SE +/- 0.11, N = 3SE +/- 0.06, N = 3SE +/- 0.07, N = 3SE +/- 0.05, N = 3SE +/- 0.10, N = 3SE +/- 0.02, N = 3SE +/- 0.08, N = 3SE +/- 0.02, N = 4SE +/- 0.04, N = 4SE +/- 0.03, N = 4SE +/- 0.03, N = 4SE +/- 0.06, N = 4SE +/- 0.02, N = 3SE +/- 0.02, N = 366.4249.1141.6934.0325.8923.0318.0622.2315.3513.8711.8011.9411.1352.7925.881. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi
OpenBenchmarking.orgHydro Cycle Time - Seconds, Fewer Is BetterPennant 1.0.1Test: sedovbigEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521326395265Min: 66.13 / Avg: 66.42 / Max: 66.58Min: 48.99 / Avg: 49.11 / Max: 49.32Min: 41.58 / Avg: 41.69 / Max: 41.79Min: 33.93 / Avg: 34.03 / Max: 34.16Min: 25.8 / Avg: 25.89 / Max: 25.96Min: 22.87 / Avg: 23.03 / Max: 23.23Min: 18.03 / Avg: 18.06 / Max: 18.09Min: 22.07 / Avg: 22.23 / Max: 22.32Min: 15.3 / Avg: 15.35 / Max: 15.39Min: 13.77 / Avg: 13.87 / Max: 13.95Min: 11.75 / Avg: 11.8 / Max: 11.87Min: 11.88 / Avg: 11.94 / Max: 12Min: 10.98 / Avg: 11.13 / Max: 11.24Min: 52.75 / Avg: 52.79 / Max: 52.81Min: 25.85 / Avg: 25.88 / Max: 25.921. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 1.9.0-jumbo-1Test: BlowfishEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5216K32K48K64K80KSE +/- 2.31, N = 3SE +/- 4.10, N = 3SE +/- 12.17, N = 3SE +/- 3.06, N = 3SE +/- 1.00, N = 3SE +/- 8.37, N = 3SE +/- 9.84, N = 3SE +/- 17.06, N = 3SE +/- 72.25, N = 3SE +/- 3.00, N = 3SE +/- 4.67, N = 3SE +/- 15.77, N = 3SE +/- 34.82, N = 3SE +/- 1.67, N = 3SE +/- 3.33, N = 31082516224214542231433755437034280044596599636111973579700337523413183263451. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2
OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 1.9.0-jumbo-1Test: BlowfishEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5213K26K39K52K65KMin: 10821 / Avg: 10825 / Max: 10829Min: 16216 / Avg: 16223.67 / Max: 16230Min: 21436 / Avg: 21453.67 / Max: 21477Min: 22310 / Avg: 22314 / Max: 22320Min: 33753 / Avg: 33755 / Max: 33756Min: 43689 / Avg: 43703.33 / Max: 43718Min: 42782 / Avg: 42799.67 / Max: 42816Min: 44563 / Avg: 44596 / Max: 44620Min: 59835 / Avg: 59963.33 / Max: 60085Min: 61113 / Avg: 61119 / Max: 61122Min: 73574 / Avg: 73578.67 / Max: 73588Min: 70003 / Avg: 70033.33 / Max: 70056Min: 75187 / Avg: 75234 / Max: 75302Min: 13181 / Avg: 13182.67 / Max: 13186Min: 26342 / Avg: 26345.33 / Max: 263521. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgk/s, More Is BetterAircrack-ng 1.5.2EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230K60K90K120K150KSE +/- 6.91, N = 3SE +/- 28.65, N = 3SE +/- 30.64, N = 3SE +/- 34.38, N = 3SE +/- 16.32, N = 3SE +/- 15.49, N = 3SE +/- 7.45, N = 3SE +/- 28.55, N = 3SE +/- 87.07, N = 3SE +/- 45.55, N = 3SE +/- 32.87, N = 3SE +/- 31.81, N = 3SE +/- 39.30, N = 3SE +/- 28.30, N = 3SE +/- 28.27, N = 323406.4035086.9546017.2048284.8371372.7987391.7985303.3594237.65119683.65120520.68143292.79143169.12155994.0328528.3456973.851. (CXX) g++ options: -O3 -fvisibility=hidden -masm=intel -fcommon -rdynamic -lpthread -lz -lcrypto -lhwloc -ldl -lm -pthread
OpenBenchmarking.orgk/s, More Is BetterAircrack-ng 1.5.2EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230K60K90K120K150KMin: 23398.23 / Avg: 23406.4 / Max: 23420.13Min: 35057.87 / Avg: 35086.95 / Max: 35144.26Min: 45958.39 / Avg: 46017.2 / Max: 46061.5Min: 48218.5 / Avg: 48284.83 / Max: 48333.69Min: 71350.26 / Avg: 71372.79 / Max: 71404.51Min: 87370.61 / Avg: 87391.79 / Max: 87421.95Min: 85294.1 / Avg: 85303.35 / Max: 85318.1Min: 94189.9 / Avg: 94237.65 / Max: 94288.64Min: 119543.75 / Avg: 119683.65 / Max: 119843.41Min: 120433.36 / Avg: 120520.68 / Max: 120586.84Min: 143233.45 / Avg: 143292.79 / Max: 143346.95Min: 143136.28 / Avg: 143169.12 / Max: 143232.73Min: 155919.59 / Avg: 155994.03 / Max: 156053.11Min: 28484.88 / Avg: 28528.34 / Max: 28581.47Min: 56938.61 / Avg: 56973.85 / Max: 57029.761. (CXX) g++ options: -O3 -fvisibility=hidden -masm=intel -fcommon -rdynamic -lpthread -lz -lcrypto -lhwloc -ldl -lm -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CryptoEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523K6K9K12K15KSE +/- 0.98, N = 3SE +/- 0.42, N = 3SE +/- 0.60, N = 3SE +/- 5.77, N = 3SE +/- 0.38, N = 3SE +/- 6.32, N = 3SE +/- 2.43, N = 3SE +/- 2.98, N = 3SE +/- 6.05, N = 3SE +/- 11.28, N = 3SE +/- 2.16, N = 3SE +/- 5.99, N = 3SE +/- 14.83, N = 3SE +/- 2.21, N = 3SE +/- 2.53, N = 31878.742821.193760.663871.575847.707273.717105.737720.589992.1510112.2212091.5611996.5913026.172288.334580.081. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CryptoEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KMin: 1877.16 / Avg: 1878.74 / Max: 1880.55Min: 2820.54 / Avg: 2821.19 / Max: 2821.99Min: 3759.79 / Avg: 3760.66 / Max: 3761.8Min: 3864.76 / Avg: 3871.57 / Max: 3883.04Min: 5846.99 / Avg: 5847.7 / Max: 5848.3Min: 7261.15 / Avg: 7273.71 / Max: 7281.24Min: 7100.89 / Avg: 7105.73 / Max: 7108.61Min: 7715.51 / Avg: 7720.58 / Max: 7725.83Min: 9980.12 / Avg: 9992.15 / Max: 9999.28Min: 10089.84 / Avg: 10112.22 / Max: 10125.85Min: 12087.35 / Avg: 12091.56 / Max: 12094.48Min: 11984.94 / Avg: 11996.59 / Max: 12004.82Min: 12998.72 / Avg: 13026.17 / Max: 13049.62Min: 2285.96 / Avg: 2288.33 / Max: 2292.74Min: 4577.39 / Avg: 4580.08 / Max: 4585.141. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Matrix MathEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5240K80K120K160K200KSE +/- 186.09, N = 3SE +/- 183.06, N = 3SE +/- 294.16, N = 3SE +/- 123.36, N = 3SE +/- 377.08, N = 3SE +/- 1288.21, N = 3SE +/- 270.71, N = 3SE +/- 157.32, N = 3SE +/- 457.19, N = 3SE +/- 217.28, N = 3SE +/- 1795.92, N = 3SE +/- 558.83, N = 3SE +/- 602.57, N = 3SE +/- 382.53, N = 3SE +/- 417.25, N = 331690.6046780.9459431.0363930.7292207.07112205.01110766.79122762.96152001.95127853.09179496.78179226.24197426.5738378.0876923.391. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Matrix MathEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230K60K90K120K150KMin: 31367.04 / Avg: 31690.6 / Max: 32011.65Min: 46417.91 / Avg: 46780.94 / Max: 47003.53Min: 59024.95 / Avg: 59431.03 / Max: 60002.73Min: 63684.02 / Avg: 63930.72 / Max: 64057.16Min: 91633.53 / Avg: 92207.07 / Max: 92917.94Min: 109640.44 / Avg: 112205.01 / Max: 113701Min: 110323.78 / Avg: 110766.79 / Max: 111257.86Min: 122497.64 / Avg: 122762.96 / Max: 123042.09Min: 151506.09 / Avg: 152001.95 / Max: 152915.2Min: 127425.29 / Avg: 127853.09 / Max: 128133.12Min: 175929.7 / Avg: 179496.78 / Max: 181644.9Min: 178344.47 / Avg: 179226.24 / Max: 180261.88Min: 196236.3 / Avg: 197426.57 / Max: 198185.13Min: 37633.18 / Avg: 38378.08 / Max: 38901.65Min: 76194.59 / Avg: 76923.39 / Max: 77639.821. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Vector MathEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5290K180K270K360K450KSE +/- 4.97, N = 3SE +/- 1.61, N = 3SE +/- 4.86, N = 3SE +/- 3.30, N = 3SE +/- 100.32, N = 3SE +/- 66.32, N = 3SE +/- 47.74, N = 3SE +/- 32.95, N = 3SE +/- 189.45, N = 3SE +/- 9.93, N = 3SE +/- 137.59, N = 3SE +/- 205.94, N = 3SE +/- 31.36, N = 3SE +/- 4.12, N = 3SE +/- 3.54, N = 358583.3887851.58117080.86120826.81182716.39237596.66232455.68241722.09328019.70332363.94400773.38397326.45427545.6671366.17142756.351. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Vector MathEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5270K140K210K280K350KMin: 58573.44 / Avg: 58583.38 / Max: 58588.42Min: 87849.93 / Avg: 87851.58 / Max: 87854.8Min: 117074.78 / Avg: 117080.86 / Max: 117090.47Min: 120822.19 / Avg: 120826.81 / Max: 120833.2Min: 182574.85 / Avg: 182716.39 / Max: 182910.32Min: 237474.07 / Avg: 237596.66 / Max: 237701.83Min: 232394.78 / Avg: 232455.68 / Max: 232549.81Min: 241657.95 / Avg: 241722.09 / Max: 241767.28Min: 327828.63 / Avg: 328019.7 / Max: 328398.59Min: 332347.41 / Avg: 332363.94 / Max: 332381.73Min: 400607.54 / Avg: 400773.38 / Max: 401046.47Min: 397034.87 / Avg: 397326.45 / Max: 397724.18Min: 427483.29 / Avg: 427545.66 / Max: 427582.63Min: 71360.29 / Avg: 71366.17 / Max: 71374.11Min: 142752.27 / Avg: 142756.35 / Max: 142763.41. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgVoices, More Is BetterGoogle SynthMark 20201109Test: VoiceMark_100EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52160320480640800SE +/- 0.27, N = 3SE +/- 0.12, N = 3SE +/- 0.36, N = 3SE +/- 0.13, N = 3SE +/- 0.32, N = 3SE +/- 0.11, N = 3SE +/- 0.46, N = 3SE +/- 0.70, N = 3SE +/- 0.70, N = 3SE +/- 0.76, N = 3SE +/- 0.57, N = 3SE +/- 0.57, N = 3SE +/- 0.66, N = 3SE +/- 0.17, N = 3SE +/- 0.59, N = 3608.61608.05605.83627.74636.43637.38627.06645.87626.57623.84624.73635.92643.82740.58739.721. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast
OpenBenchmarking.orgVoices, More Is BetterGoogle SynthMark 20201109Test: VoiceMark_100EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52130260390520650Min: 608.15 / Avg: 608.61 / Max: 609.07Min: 607.84 / Avg: 608.05 / Max: 608.27Min: 605.12 / Avg: 605.82 / Max: 606.33Min: 627.49 / Avg: 627.74 / Max: 627.91Min: 635.86 / Avg: 636.43 / Max: 636.98Min: 637.17 / Avg: 637.38 / Max: 637.55Min: 626.15 / Avg: 627.06 / Max: 627.62Min: 644.48 / Avg: 645.87 / Max: 646.66Min: 625.29 / Avg: 626.57 / Max: 627.72Min: 622.33 / Avg: 623.84 / Max: 624.72Min: 623.79 / Avg: 624.73 / Max: 625.77Min: 634.78 / Avg: 635.92 / Max: 636.53Min: 642.67 / Avg: 643.82 / Max: 644.97Min: 740.39 / Avg: 740.58 / Max: 740.92Min: 738.56 / Avg: 739.72 / Max: 740.431. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: IS.DEPYC 7232PEPYC 7282EPYC 7302PEPYC 7502PEPYC 7532EPYC 7542EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52400800120016002000SE +/- 2.89, N = 3SE +/- 1.26, N = 3SE +/- 2.26, N = 3SE +/- 3.36, N = 3SE +/- 1.95, N = 3SE +/- 5.74, N = 3SE +/- 11.89, N = 3SE +/- 4.04, N = 3SE +/- 3.31, N = 3SE +/- 5.70, N = 3SE +/- 9.24, N = 31083.361422.071647.781884.311992.761885.982006.811971.502007.421277.15934.001. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: IS.DEPYC 7232PEPYC 7282EPYC 7302PEPYC 7502PEPYC 7532EPYC 7542EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230060090012001500Min: 1077.67 / Avg: 1083.36 / Max: 1087.08Min: 1420.09 / Avg: 1422.07 / Max: 1424.4Min: 1643.54 / Avg: 1647.78 / Max: 1651.27Min: 1880.37 / Avg: 1884.31 / Max: 1890.99Min: 1988.95 / Avg: 1992.76 / Max: 1995.4Min: 1876.61 / Avg: 1885.98 / Max: 1896.4Min: 1983.12 / Avg: 2006.81 / Max: 2020.44Min: 1965.15 / Avg: 1971.5 / Max: 1979Min: 2003.75 / Avg: 2007.42 / Max: 2014.03Min: 1269.25 / Avg: 1277.15 / Max: 1288.21Min: 920.59 / Avg: 934 / Max: 951.731. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: NDT MappingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522004006008001000SE +/- 1.85, N = 3SE +/- 3.25, N = 3SE +/- 3.07, N = 3SE +/- 1.97, N = 3SE +/- 2.49, N = 3SE +/- 3.41, N = 3SE +/- 1.89, N = 3SE +/- 3.23, N = 3SE +/- 3.34, N = 3SE +/- 1.66, N = 3SE +/- 7.17, N = 3SE +/- 3.26, N = 3SE +/- 6.63, N = 3SE +/- 2.91, N = 3SE +/- 5.28, N = 3861.07864.62858.54898.21899.45898.84895.33908.46895.52903.30897.59861.35902.38967.37773.861. (CXX) g++ options: -O3 -std=c++11 -fopenmp
OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: NDT MappingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522004006008001000Min: 857.36 / Avg: 861.07 / Max: 863.04Min: 858.29 / Avg: 864.62 / Max: 869.06Min: 852.55 / Avg: 858.54 / Max: 862.73Min: 894.42 / Avg: 898.21 / Max: 901.07Min: 895.16 / Avg: 899.45 / Max: 903.77Min: 892.02 / Avg: 898.84 / Max: 902.4Min: 892.13 / Avg: 895.33 / Max: 898.66Min: 902.91 / Avg: 908.46 / Max: 914.1Min: 888.84 / Avg: 895.52 / Max: 899.09Min: 900.01 / Avg: 903.3 / Max: 905.33Min: 885.88 / Avg: 897.59 / Max: 910.61Min: 854.83 / Avg: 861.35 / Max: 864.7Min: 889.15 / Avg: 902.38 / Max: 909.81Min: 961.58 / Avg: 967.37 / Max: 970.7Min: 763.33 / Avg: 773.86 / Max: 779.831. (CXX) g++ options: -O3 -std=c++11 -fopenmp

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgz/s, More Is BetterLULESH 2.0.3EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523K6K9K12K15KSE +/- 11.79, N = 6SE +/- 38.61, N = 6SE +/- 15.52, N = 6SE +/- 8.39, N = 6SE +/- 9.05, N = 6SE +/- 131.70, N = 5SE +/- 43.36, N = 4SE +/- 157.59, N = 4SE +/- 60.62, N = 4SE +/- 58.13, N = 4SE +/- 142.13, N = 3SE +/- 146.93, N = 12SE +/- 138.93, N = 7SE +/- 12.34, N = 6SE +/- 40.17, N = 66757.996871.936954.807954.158024.3712840.6413938.2513099.6213435.3014015.6614603.3314612.2114778.648720.886839.171. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi
OpenBenchmarking.orgz/s, More Is BetterLULESH 2.0.3EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523K6K9K12K15KMin: 6718.89 / Avg: 6757.99 / Max: 6804.97Min: 6730.47 / Avg: 6871.93 / Max: 6957.51Min: 6907.71 / Avg: 6954.8 / Max: 7005.82Min: 7941.34 / Avg: 7954.15 / Max: 7995.56Min: 7986.3 / Avg: 8024.37 / Max: 8053.74Min: 12361.8 / Avg: 12840.64 / Max: 13103.71Min: 13879.03 / Avg: 13938.25 / Max: 14067.14Min: 12632.05 / Avg: 13099.62 / Max: 13300.62Min: 13262.26 / Avg: 13435.3 / Max: 13545.43Min: 13885.31 / Avg: 14015.66 / Max: 14137.76Min: 14340.08 / Avg: 14603.33 / Max: 14827.83Min: 13024.64 / Avg: 14612.21 / Max: 14895.69Min: 13961.68 / Avg: 14778.64 / Max: 15005.98Min: 8666.52 / Avg: 8720.88 / Max: 8753.04Min: 6733.72 / Avg: 6839.17 / Max: 6954.281. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522M4M6M8M10MSE +/- 8448.82, N = 3SE +/- 4235.69, N = 3SE +/- 18409.69, N = 3SE +/- 16881.14, N = 3SE +/- 5954.89, N = 3SE +/- 2921.91, N = 3SE +/- 8887.25, N = 3SE +/- 15198.80, N = 3SE +/- 15133.11, N = 3SE +/- 10773.68, N = 3SE +/- 6430.53, N = 3SE +/- 4763.31, N = 3SE +/- 6345.73, N = 3SE +/- 9056.97, N = 3SE +/- 3201.01, N = 36414609655135165476906787692688076068043056545211698394566748816652696675390268999486936092793988379400471. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm
OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521.4M2.8M4.2M5.6M7MMin: 6399116 / Avg: 6414609 / Max: 6428197Min: 6543694 / Avg: 6551351.33 / Max: 6558318Min: 6524614 / Avg: 6547690 / Max: 6584075Min: 6763111 / Avg: 6787691.67 / Max: 6820026Min: 6872751 / Avg: 6880759.67 / Max: 6892398Min: 6798906 / Avg: 6804305.33 / Max: 6808941Min: 6532471 / Avg: 6545211 / Max: 6562315Min: 6962515 / Avg: 6983944.67 / Max: 7013330Min: 6646698 / Avg: 6674880.67 / Max: 6698529Min: 6632241 / Avg: 6652695.67 / Max: 6668790Min: 6743220 / Avg: 6753902.33 / Max: 6765446Min: 6892591 / Avg: 6899948 / Max: 6908868Min: 6928402 / Avg: 6936092.33 / Max: 6948681Min: 7924088 / Avg: 7939883 / Max: 7955460Min: 7934298 / Avg: 7940047 / Max: 79453611. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_templateEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521326395265SE +/- 0.40, N = 3SE +/- 0.15, N = 3SE +/- 0.27, N = 3SE +/- 0.44, N = 3SE +/- 0.03, N = 3SE +/- 0.37, N = 3SE +/- 0.26, N = 3SE +/- 0.22, N = 3SE +/- 0.52, N = 3SE +/- 0.37, N = 3SE +/- 0.19, N = 3SE +/- 0.09, N = 3SE +/- 0.19, N = 3SE +/- 0.45, N = 3SE +/- 0.23, N = 359.560.059.157.657.256.958.356.758.158.458.657.556.849.150.4
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_templateEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521224364860Min: 58.7 / Avg: 59.5 / Max: 59.9Min: 59.8 / Avg: 60.03 / Max: 60.3Min: 58.6 / Avg: 59.13 / Max: 59.4Min: 56.8 / Avg: 57.63 / Max: 58.3Min: 57.2 / Avg: 57.23 / Max: 57.3Min: 56.2 / Avg: 56.93 / Max: 57.4Min: 57.8 / Avg: 58.27 / Max: 58.7Min: 56.4 / Avg: 56.67 / Max: 57.1Min: 57.2 / Avg: 58.1 / Max: 59Min: 57.9 / Avg: 58.37 / Max: 59.1Min: 58.2 / Avg: 58.57 / Max: 58.8Min: 57.3 / Avg: 57.47 / Max: 57.6Min: 56.6 / Avg: 56.83 / Max: 57.2Min: 48.2 / Avg: 49.07 / Max: 49.7Min: 50.2 / Avg: 50.43 / Max: 50.9

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPOV-Ray 3.7.0.7Trace TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521326395265SE +/- 0.24, N = 3SE +/- 0.04, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 4SE +/- 0.04, N = 4SE +/- 0.03, N = 4SE +/- 0.02, N = 4SE +/- 0.04, N = 4SE +/- 0.02, N = 3SE +/- 0.02, N = 355.7838.4130.5128.7020.4617.1417.6715.9413.2713.2711.5011.5010.6845.4724.181. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lSM -lICE -lX11 -lIlmImf -lImath -lHalf -lIex -lIexMath -lIlmThread -lpthread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system
OpenBenchmarking.orgSeconds, Fewer Is BetterPOV-Ray 3.7.0.7Trace TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521122334455Min: 55.44 / Avg: 55.78 / Max: 56.24Min: 38.36 / Avg: 38.41 / Max: 38.49Min: 30.42 / Avg: 30.51 / Max: 30.57Min: 28.69 / Avg: 28.7 / Max: 28.72Min: 20.41 / Avg: 20.46 / Max: 20.56Min: 17.12 / Avg: 17.14 / Max: 17.17Min: 17.65 / Avg: 17.67 / Max: 17.7Min: 15.92 / Avg: 15.94 / Max: 15.96Min: 13.17 / Avg: 13.27 / Max: 13.35Min: 13.19 / Avg: 13.27 / Max: 13.35Min: 11.4 / Avg: 11.5 / Max: 11.56Min: 11.45 / Avg: 11.5 / Max: 11.55Min: 10.6 / Avg: 10.68 / Max: 10.78Min: 45.44 / Avg: 45.47 / Max: 45.5Min: 24.15 / Avg: 24.18 / Max: 24.231. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lSM -lICE -lX11 -lIlmImf -lImath -lHalf -lIex -lIexMath -lIlmThread -lpthread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterXZ Compression 5.2.4Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52816243240SE +/- 0.02, N = 3SE +/- 0.07, N = 3SE +/- 0.08, N = 3SE +/- 0.19, N = 9SE +/- 0.07, N = 3SE +/- 0.09, N = 3SE +/- 0.06, N = 3SE +/- 0.07, N = 3SE +/- 0.05, N = 3SE +/- 0.08, N = 3SE +/- 0.05, N = 3SE +/- 0.12, N = 3SE +/- 0.08, N = 3SE +/- 0.07, N = 3SE +/- 0.11, N = 334.5125.6624.9023.1522.4022.4222.0922.3022.5422.0122.0522.0521.9829.6121.431. (CC) gcc options: -pthread -fvisibility=hidden -O2
OpenBenchmarking.orgSeconds, Fewer Is BetterXZ Compression 5.2.4Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52714212835Min: 34.46 / Avg: 34.51 / Max: 34.53Min: 25.58 / Avg: 25.66 / Max: 25.79Min: 24.75 / Avg: 24.9 / Max: 25.01Min: 22.45 / Avg: 23.15 / Max: 23.78Min: 22.31 / Avg: 22.4 / Max: 22.53Min: 22.33 / Avg: 22.42 / Max: 22.59Min: 22.03 / Avg: 22.09 / Max: 22.22Min: 22.23 / Avg: 22.3 / Max: 22.43Min: 22.49 / Avg: 22.54 / Max: 22.64Min: 21.89 / Avg: 22.01 / Max: 22.17Min: 22 / Avg: 22.05 / Max: 22.15Min: 21.86 / Avg: 22.05 / Max: 22.28Min: 21.88 / Avg: 21.98 / Max: 22.15Min: 29.47 / Avg: 29.61 / Max: 29.69Min: 21.32 / Avg: 21.43 / Max: 21.651. (CC) gcc options: -pthread -fvisibility=hidden -O2

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test TimesEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230060090012001500SE +/- 9.68, N = 3SE +/- 8.01, N = 3SE +/- 1.86, N = 3SE +/- 1.20, N = 3SE +/- 10.82, N = 3SE +/- 7.33, N = 3SE +/- 1.20, N = 3SE +/- 9.35, N = 3SE +/- 7.55, N = 3SE +/- 9.54, N = 3SE +/- 6.84, N = 3SE +/- 6.89, N = 3SE +/- 8.33, N = 3SE +/- 5.46, N = 31213120812211172115911731180113711781184118411661166998999
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test TimesEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522004006008001000Min: 1202 / Avg: 1212.67 / Max: 1232Min: 1209 / Avg: 1220.67 / Max: 1236Min: 1168 / Avg: 1171.67 / Max: 1174Min: 1157 / Avg: 1159.33 / Max: 1161Min: 1152 / Avg: 1173 / Max: 1188Min: 1173 / Avg: 1180.33 / Max: 1195Min: 1135 / Avg: 1137.33 / Max: 1139Min: 1168 / Avg: 1178.33 / Max: 1197Min: 1175 / Avg: 1184 / Max: 1199Min: 1165 / Avg: 1184 / Max: 1195Min: 1159 / Avg: 1166.33 / Max: 1180Min: 1156 / Avg: 1165.67 / Max: 1179Min: 990 / Avg: 998.33 / Max: 1015Min: 992 / Avg: 999.33 / Max: 1010

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTungsten Renderer 0.2.2Scene: Water CausticEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52816243240SE +/- 0.41, N = 3SE +/- 0.25, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.22, N = 3SE +/- 0.07, N = 333.9229.9526.7926.0223.8723.1123.5522.6422.6522.5522.1321.9721.6428.1024.421. (CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterTungsten Renderer 0.2.2Scene: Water CausticEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52714212835Min: 33.49 / Avg: 33.92 / Max: 34.73Min: 29.5 / Avg: 29.95 / Max: 30.34Min: 26.76 / Avg: 26.79 / Max: 26.84Min: 25.95 / Avg: 26.02 / Max: 26.08Min: 23.81 / Avg: 23.87 / Max: 23.93Min: 23.07 / Avg: 23.11 / Max: 23.16Min: 23.54 / Avg: 23.55 / Max: 23.58Min: 22.58 / Avg: 22.64 / Max: 22.73Min: 22.58 / Avg: 22.65 / Max: 22.7Min: 22.48 / Avg: 22.55 / Max: 22.63Min: 22.09 / Avg: 22.13 / Max: 22.15Min: 21.94 / Avg: 21.97 / Max: 21.99Min: 21.63 / Avg: 21.64 / Max: 21.65Min: 27.88 / Avg: 28.1 / Max: 28.53Min: 24.31 / Avg: 24.42 / Max: 24.541. (CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lpthread -ldl

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbodyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150SE +/- 0.33, N = 3SE +/- 0.33, N = 3141141142137135135137133137138137135133116116
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbodyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150Min: 141 / Avg: 141.67 / Max: 142Min: 137 / Avg: 137.67 / Max: 138

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: floatEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150SE +/- 0.33, N = 3SE +/- 0.58, N = 3SE +/- 0.67, N = 3SE +/- 0.58, N = 3SE +/- 0.58, N = 3143138139135133132134135134135134133130115114
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: floatEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150Min: 131 / Avg: 131.67 / Max: 132Min: 134 / Avg: 135 / Max: 136Min: 133 / Avg: 133.67 / Max: 135Min: 132 / Avg: 133 / Max: 134Min: 129 / Avg: 130 / Max: 131

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.2EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52200M400M600M800M1000MSE +/- 295157.17, N = 5SE +/- 252656.85, N = 4SE +/- 314598.13, N = 3SE +/- 405072.19, N = 4SE +/- 643499.89, N = 3SE +/- 998206.04, N = 3SE +/- 422202.49, N = 3SE +/- 142607.92, N = 3SE +/- 989060.38, N = 3SE +/- 857688.98, N = 3SE +/- 229126.62, N = 3SE +/- 507394.80, N = 3SE +/- 1151151.66, N = 3SE +/- 322962.48, N = 6SE +/- 3149876.45, N = 44499164604578553504557606337882668757784594337743048009095326677743294678568934008922488338837352008783753338809985678098445836431809251. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi
OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.2EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52160M320M480M640M800MMin: 448913300 / Avg: 449916460 / Max: 450629100Min: 457151100 / Avg: 457855350 / Max: 458299300Min: 455133300 / Avg: 455760633.33 / Max: 456116200Min: 787578900 / Avg: 788266875 / Max: 789313600Min: 777565100 / Avg: 778459433.33 / Max: 779708100Min: 772483400 / Avg: 774304800 / Max: 775923400Min: 908914700 / Avg: 909532666.67 / Max: 910340000Min: 774119000 / Avg: 774329466.67 / Max: 774601400Min: 854937100 / Avg: 856893400 / Max: 858125300Min: 890552000 / Avg: 892248833.33 / Max: 893315100Min: 883287900 / Avg: 883735200 / Max: 884045100Min: 877829000 / Avg: 878375333.33 / Max: 879389100Min: 878726400 / Avg: 880998566.67 / Max: 882456200Min: 808702900 / Avg: 809844583.33 / Max: 810642700Min: 634035200 / Avg: 643180925 / Max: 6474285001. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150SE +/- 0.33, N = 3135134135130131128131129133134133129127111110
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150Min: 110 / Avg: 110.33 / Max: 111

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very FastEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52918273645SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 4SE +/- 0.01, N = 4SE +/- 0.04, N = 3SE +/- 0.05, N = 4SE +/- 0.02, N = 3SE +/- 0.01, N = 312.9817.8321.7022.9426.6133.2532.7235.0635.8137.3939.1536.0738.6515.5626.311. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very FastEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52816243240Min: 12.97 / Avg: 12.98 / Max: 12.99Min: 17.82 / Avg: 17.83 / Max: 17.84Min: 21.66 / Avg: 21.7 / Max: 21.75Min: 22.92 / Avg: 22.94 / Max: 22.97Min: 26.56 / Avg: 26.61 / Max: 26.7Min: 33.24 / Avg: 33.25 / Max: 33.26Min: 32.69 / Avg: 32.72 / Max: 32.73Min: 35.04 / Avg: 35.06 / Max: 35.09Min: 35.77 / Avg: 35.81 / Max: 35.84Min: 37.31 / Avg: 37.39 / Max: 37.44Min: 39.12 / Avg: 39.15 / Max: 39.19Min: 36 / Avg: 36.07 / Max: 36.11Min: 38.55 / Avg: 38.65 / Max: 38.8Min: 15.54 / Avg: 15.56 / Max: 15.61Min: 26.29 / Avg: 26.31 / Max: 26.331. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Swet

Swet is a synthetic CPU/RAM benchmark, includes multi-processor test cases. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOperations Per Second, More Is BetterSwet 1.5.16AverageEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52150M300M450M600M750MSE +/- 6957648.38, N = 3SE +/- 2920022.24, N = 3SE +/- 5177314.46, N = 3SE +/- 3828695.03, N = 3SE +/- 2164773.96, N = 3SE +/- 4052005.46, N = 3SE +/- 1400113.05, N = 3SE +/- 8506660.76, N = 3SE +/- 807930.96, N = 3SE +/- 3622759.74, N = 3SE +/- 5706451.14, N = 3SE +/- 2469610.76, N = 3SE +/- 4045781.60, N = 3SE +/- 2325206.90, N = 3SE +/- 2209414.80, N = 35618167725811945885718255805985892286000655725973611975826843336122184256056443645920492256022038426024348856065671226863954196851888101. (CC) gcc options: -lm -lpthread -lcurses -lrt
OpenBenchmarking.orgOperations Per Second, More Is BetterSwet 1.5.16AverageEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52120M240M360M480M600MMin: 547924209 / Avg: 561816771.67 / Max: 569451631Min: 575619977 / Avg: 581194588 / Max: 585489333Min: 566389680 / Avg: 571825580.33 / Max: 582175830Min: 593041731 / Avg: 598589227.67 / Max: 605934155Min: 595836568 / Avg: 600065571.67 / Max: 602983432Min: 589487365 / Avg: 597361197.33 / Max: 602958932Min: 579892857 / Avg: 582684332.67 / Max: 584271636Min: 596416785 / Avg: 612218425.33 / Max: 625580102Min: 604227306 / Avg: 605644364.33 / Max: 607025364Min: 586617205 / Avg: 592049224.67 / Max: 598917708Min: 590802604 / Avg: 602203842.33 / Max: 608351202Min: 597611652 / Avg: 602434885.33 / Max: 605768048Min: 600677355 / Avg: 606567122 / Max: 614317006Min: 682854921 / Avg: 686395419.33 / Max: 690776878Min: 680933266 / Avg: 685188809.67 / Max: 6883472621. (CC) gcc options: -lm -lpthread -lcurses -lrt

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: C++ TasksEPYC 7282EPYC 7532EPYC 7642EPYC 7742EPYC 7F329K18K27K36K45KSE +/- 97.58, N = 3SE +/- 28.15, N = 4SE +/- 24.48, N = 5SE +/- 46.30, N = 6SE +/- 11.72, N = 3256831379898397880416791. (CXX) g++ options: -O3 -lpthread
OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: C++ TasksEPYC 7282EPYC 7532EPYC 7642EPYC 7742EPYC 7F327K14K21K28K35KMin: 25513 / Avg: 25683 / Max: 25851Min: 13745 / Avg: 13798 / Max: 13860Min: 9772 / Avg: 9839 / Max: 9907Min: 7742 / Avg: 7879.83 / Max: 8016Min: 41656 / Avg: 41679.33 / Max: 416931. (CXX) g++ options: -O3 -lpthread

OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: OpenMPEPYC 7282EPYC 7532EPYC 7642EPYC 7742EPYC 7F329K18K27K36K45KSE +/- 4.48, N = 3SE +/- 17.05, N = 4SE +/- 11.42, N = 5SE +/- 8.17, N = 6SE +/- 1.20, N = 3253881392698927991414221. (CXX) g++ options: -O3 -lpthread
OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: OpenMPEPYC 7282EPYC 7532EPYC 7642EPYC 7742EPYC 7F327K14K21K28K35KMin: 25379 / Avg: 25387.67 / Max: 25394Min: 13904 / Avg: 13925.75 / Max: 13976Min: 9863 / Avg: 9891.6 / Max: 9930Min: 7962 / Avg: 7990.83 / Max: 8010Min: 41420 / Avg: 41421.67 / Max: 414241. (CXX) g++ options: -O3 -lpthread

Botan

Botan is a cross-platform open-source C++ crypto library that supports most all publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: AES-256EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5211002200330044005500SE +/- 13.87, N = 3SE +/- 7.02, N = 3SE +/- 0.80, N = 3SE +/- 0.26, N = 3SE +/- 0.83, N = 3SE +/- 3.31, N = 3SE +/- 1.33, N = 3SE +/- 2.98, N = 3SE +/- 3.88, N = 3SE +/- 3.58, N = 3SE +/- 0.85, N = 3SE +/- 1.00, N = 3SE +/- 2.46, N = 3SE +/- 1.69, N = 3SE +/- 1.30, N = 34311.064290.854293.874430.724487.904497.774429.474561.194421.414425.924429.194499.504549.535238.765226.751. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: AES-256EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529001800270036004500Min: 4293.42 / Avg: 4311.06 / Max: 4338.43Min: 4276.91 / Avg: 4290.85 / Max: 4299.27Min: 4292.82 / Avg: 4293.87 / Max: 4295.45Min: 4430.21 / Avg: 4430.72 / Max: 4431.05Min: 4486.88 / Avg: 4487.9 / Max: 4489.54Min: 4492.53 / Avg: 4497.77 / Max: 4503.89Min: 4426.94 / Avg: 4429.47 / Max: 4431.46Min: 4555.36 / Avg: 4561.19 / Max: 4565.19Min: 4413.66 / Avg: 4421.41 / Max: 4425.33Min: 4421.28 / Avg: 4425.91 / Max: 4432.96Min: 4427.63 / Avg: 4429.19 / Max: 4430.55Min: 4497.86 / Avg: 4499.5 / Max: 4501.3Min: 4544.76 / Avg: 4549.53 / Max: 4552.93Min: 5235.4 / Avg: 5238.76 / Max: 5240.69Min: 5224.38 / Avg: 5226.75 / Max: 5228.881. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 5EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F521428425670SE +/- 0.16, N = 3SE +/- 0.17, N = 3SE +/- 0.08, N = 3SE +/- 0.04, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.12, N = 3SE +/- 0.18, N = 353.8458.5954.9758.6854.7555.7460.5760.251. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 5EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F521224364860Min: 53.64 / Avg: 53.84 / Max: 54.15Min: 58.4 / Avg: 58.59 / Max: 58.93Min: 54.84 / Avg: 54.97 / Max: 55.11Min: 58.62 / Avg: 58.68 / Max: 58.75Min: 54.72 / Avg: 54.75 / Max: 54.81Min: 55.68 / Avg: 55.74 / Max: 55.79Min: 60.37 / Avg: 60.57 / Max: 60.77Min: 59.93 / Avg: 60.25 / Max: 60.571. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: C++ ThreadsEPYC 7282EPYC 7532EPYC 7642EPYC 7742EPYC 7F329K18K27K36K45KSE +/- 18.66, N = 3SE +/- 29.85, N = 4SE +/- 10.01, N = 5SE +/- 10.89, N = 6SE +/- 8.41, N = 3254261361996527573415731. (CXX) g++ options: -O3 -lpthread
OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: C++ ThreadsEPYC 7282EPYC 7532EPYC 7642EPYC 7742EPYC 7F327K14K21K28K35KMin: 25402 / Avg: 25426.33 / Max: 25463Min: 13567 / Avg: 13618.5 / Max: 13689Min: 9633 / Avg: 9651.6 / Max: 9690Min: 7545 / Avg: 7573 / Max: 7614Min: 41561 / Avg: 41572.67 / Max: 415891. (CXX) g++ options: -O3 -lpthread

OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: TBBEPYC 7282EPYC 7532EPYC 7642EPYC 7742EPYC 7F329K18K27K36K45KSE +/- 162.21, N = 3SE +/- 60.14, N = 4SE +/- 87.69, N = 5SE +/- 56.97, N = 6SE +/- 233.59, N = 3254111355697237517416061. (CXX) g++ options: -O3 -lpthread
OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: TBBEPYC 7282EPYC 7532EPYC 7642EPYC 7742EPYC 7F327K14K21K28K35KMin: 25214 / Avg: 25411.33 / Max: 25733Min: 13434 / Avg: 13555.5 / Max: 13696Min: 9473 / Avg: 9722.8 / Max: 9928Min: 7367 / Avg: 7516.83 / Max: 7745Min: 41361 / Avg: 41606 / Max: 420731. (CXX) g++ options: -O3 -lpthread

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Euclidean ClusterEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522004006008001000SE +/- 0.40, N = 3SE +/- 0.49, N = 3SE +/- 1.22, N = 3SE +/- 0.19, N = 3SE +/- 0.40, N = 3SE +/- 0.42, N = 3SE +/- 0.79, N = 3SE +/- 0.71, N = 3SE +/- 0.82, N = 3SE +/- 1.18, N = 3SE +/- 0.53, N = 3SE +/- 1.46, N = 3SE +/- 1.50, N = 3SE +/- 1.50, N = 3SE +/- 0.51, N = 3876.45919.01928.43943.29962.88974.40949.15983.45956.82963.00966.04954.20969.311039.601062.321. (CXX) g++ options: -O3 -std=c++11 -fopenmp
OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Euclidean ClusterEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522004006008001000Min: 876.05 / Avg: 876.45 / Max: 877.26Min: 918.04 / Avg: 919.01 / Max: 919.66Min: 926.64 / Avg: 928.43 / Max: 930.77Min: 942.97 / Avg: 943.29 / Max: 943.63Min: 962.13 / Avg: 962.88 / Max: 963.5Min: 973.64 / Avg: 974.4 / Max: 975.08Min: 947.85 / Avg: 949.15 / Max: 950.57Min: 982.72 / Avg: 983.45 / Max: 984.87Min: 955.86 / Avg: 956.82 / Max: 958.45Min: 961.75 / Avg: 963 / Max: 965.37Min: 965.08 / Avg: 966.04 / Max: 966.9Min: 951.37 / Avg: 954.2 / Max: 956.21Min: 966.48 / Avg: 969.31 / Max: 971.59Min: 1037.63 / Avg: 1039.6 / Max: 1042.55Min: 1061.29 / Avg: 1062.32 / Max: 1062.841. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MPlayer Compilation 1.4Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521020304050SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.07, N = 4SE +/- 0.03, N = 4SE +/- 0.04, N = 4SE +/- 0.03, N = 4SE +/- 0.04, N = 4SE +/- 0.02, N = 4SE +/- 0.02, N = 3SE +/- 0.07, N = 345.5931.1426.2724.0618.5916.4616.3815.6913.8213.7112.9612.9912.3335.2520.57
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MPlayer Compilation 1.4Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52918273645Min: 45.54 / Avg: 45.59 / Max: 45.62Min: 31.1 / Avg: 31.14 / Max: 31.21Min: 26.23 / Avg: 26.27 / Max: 26.29Min: 24.05 / Avg: 24.06 / Max: 24.07Min: 18.5 / Avg: 18.59 / Max: 18.68Min: 16.42 / Avg: 16.46 / Max: 16.52Min: 16.35 / Avg: 16.38 / Max: 16.42Min: 15.6 / Avg: 15.69 / Max: 15.9Min: 13.75 / Avg: 13.82 / Max: 13.87Min: 13.64 / Avg: 13.71 / Max: 13.81Min: 12.87 / Avg: 12.96 / Max: 13.01Min: 12.91 / Avg: 12.99 / Max: 13.09Min: 12.28 / Avg: 12.33 / Max: 12.38Min: 35.21 / Avg: 35.25 / Max: 35.28Min: 20.43 / Avg: 20.57 / Max: 20.67

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgHydro Cycle Time - Seconds, Fewer Is BetterPennant 1.0.1Test: leblancbigEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521020304050SE +/- 0.039126, N = 3SE +/- 0.292336, N = 3SE +/- 0.019691, N = 3SE +/- 0.036573, N = 3SE +/- 0.019214, N = 3SE +/- 0.026648, N = 4SE +/- 0.014397, N = 5SE +/- 0.043495, N = 4SE +/- 0.095577, N = 5SE +/- 0.071740, N = 5SE +/- 0.054901, N = 6SE +/- 0.058701, N = 6SE +/- 0.035543, N = 6SE +/- 0.037928, N = 3SE +/- 0.007758, N = 445.15460033.80852028.99392021.95188016.65375014.78157010.88576014.1648208.9885388.1766026.9953217.0339826.63924034.58003016.1392901. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi
OpenBenchmarking.orgHydro Cycle Time - Seconds, Fewer Is BetterPennant 1.0.1Test: leblancbigEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52918273645Min: 45.1 / Avg: 45.15 / Max: 45.23Min: 33.47 / Avg: 33.81 / Max: 34.39Min: 28.96 / Avg: 28.99 / Max: 29.03Min: 21.9 / Avg: 21.95 / Max: 22.02Min: 16.63 / Avg: 16.65 / Max: 16.69Min: 14.73 / Avg: 14.78 / Max: 14.84Min: 10.85 / Avg: 10.89 / Max: 10.92Min: 14.12 / Avg: 14.16 / Max: 14.3Min: 8.8 / Avg: 8.99 / Max: 9.33Min: 7.98 / Avg: 8.18 / Max: 8.41Min: 6.85 / Avg: 7 / Max: 7.2Min: 6.86 / Avg: 7.03 / Max: 7.24Min: 6.51 / Avg: 6.64 / Max: 6.75Min: 34.52 / Avg: 34.58 / Max: 34.65Min: 16.12 / Avg: 16.14 / Max: 16.161. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi

Basis Universal

Basis Universal is a GPU texture codoec. This test times how long it takes to convert sRGB PNGs into Basis Univeral assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 2EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52918273645SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 4SE +/- 0.01, N = 3SE +/- 0.00, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.00, N = 4SE +/- 0.01, N = 4SE +/- 0.02, N = 4SE +/- 0.00, N = 3SE +/- 0.00, N = 339.6729.2424.2423.3418.1716.3116.7015.6414.2414.1913.2113.3012.8432.6319.861. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 2EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52816243240Min: 39.66 / Avg: 39.67 / Max: 39.67Min: 29.23 / Avg: 29.24 / Max: 29.27Min: 24.19 / Avg: 24.24 / Max: 24.27Min: 23.32 / Avg: 23.34 / Max: 23.38Min: 18.16 / Avg: 18.17 / Max: 18.17Min: 16.28 / Avg: 16.31 / Max: 16.38Min: 16.69 / Avg: 16.7 / Max: 16.71Min: 15.63 / Avg: 15.64 / Max: 15.64Min: 14.23 / Avg: 14.24 / Max: 14.25Min: 14.17 / Avg: 14.19 / Max: 14.21Min: 13.2 / Avg: 13.21 / Max: 13.22Min: 13.28 / Avg: 13.3 / Max: 13.33Min: 12.79 / Avg: 12.83 / Max: 12.88Min: 32.63 / Avg: 32.63 / Max: 32.64Min: 19.86 / Avg: 19.86 / Max: 19.871. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and Convolutional Resamping Benchmark (tConvolve) as well as some previous ASKAP benchmarks being included as well for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve OpenMP - DegriddingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5215003000450060007500SE +/- 6.46, N = 4SE +/- 10.02, N = 4SE +/- 10.27, N = 4SE +/- 37.73, N = 3SE +/- 35.55, N = 4SE +/- 0.00, N = 4SE +/- 38.66, N = 4SE +/- 0.00, N = 4SE +/- 36.86, N = 4SE +/- 40.59, N = 4SE +/- 42.67, N = 4SE +/- 47.34, N = 4SE +/- 22.42, N = 15SE +/- 30.11, N = 2SE +/- 3.57, N = 42616.813257.043317.935509.305726.605788.176455.405788.176302.576615.816699.077148.776679.714004.081406.931. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve OpenMP - DegriddingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5212002400360048006000Min: 2610.35 / Avg: 2616.81 / Max: 2636.2Min: 3247.02 / Avg: 3257.04 / Max: 3287.11Min: 3287.11 / Avg: 3317.93 / Max: 3328.2Min: 5433.8 / Avg: 5509.27 / Max: 5547Min: 5665.02 / Avg: 5726.6 / Max: 5788.17Min: 5788.17 / Avg: 5788.17 / Max: 5788.17Min: 6339.43 / Avg: 6455.4 / Max: 6494.05Min: 5788.17 / Avg: 5788.17 / Max: 5788.17Min: 6192 / Avg: 6302.57 / Max: 6339.43Min: 6494.05 / Avg: 6615.81 / Max: 6656.4Min: 6656.4 / Avg: 6699.07 / Max: 6827.08Min: 7006.74 / Avg: 7148.77 / Max: 7196.11Min: 6494.05 / Avg: 6679.71 / Max: 6827.08Min: 3973.97 / Avg: 4004.08 / Max: 4034.18Min: 1401.35 / Avg: 1406.93 / Max: 1416.261. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve OpenMP - GriddingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522K4K6K8K10KSE +/- 7.61, N = 4SE +/- 17.38, N = 4SE +/- 0.00, N = 4SE +/- 57.68, N = 5SE +/- 72.46, N = 4SE +/- 79.54, N = 4SE +/- 63.04, N = 4SE +/- 67.99, N = 4SE +/- 77.48, N = 4SE +/- 81.97, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 211.93, N = 15SE +/- 35.30, N = 4SE +/- 0.00, N = 42840.134004.084294.455503.596379.886617.908257.476658.488454.709427.179509.149509.149177.924035.112113.141. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve OpenMP - GriddingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5217003400510068008500Min: 2832.51 / Avg: 2840.13 / Max: 2862.97Min: 3973.97 / Avg: 4004.08 / Max: 4034.18Min: 4294.45 / Avg: 4294.45 / Max: 4294.45Min: 5325.12 / Avg: 5503.59 / Max: 5665.02Min: 6192 / Avg: 6379.88 / Max: 6494.05Min: 6494.05 / Avg: 6617.9 / Max: 6827.08Min: 8068.36 / Avg: 8257.47 / Max: 8320.5Min: 6494.05 / Avg: 6658.48 / Max: 6827.08Min: 8320.5 / Avg: 8454.7 / Max: 8588.9Min: 9181.24 / Avg: 9427.17 / Max: 9509.14Min: 9509.14 / Avg: 9509.14 / Max: 9509.14Min: 9509.14 / Avg: 9509.14 / Max: 9509.14Min: 7006.74 / Avg: 9177.92 / Max: 9861.33Min: 3973.97 / Avg: 4035.11 / Max: 4096.25Min: 2113.14 / Avg: 2113.14 / Max: 2113.141. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Stream-Dynamic

This is an open-source AMD modified copy of the Stream memory benchmark catered towards running the RAM benchmark on systems with the AMD Optimizing C/C++ Compiler (AOCC) among other by-default optimizations aiming for an easy and standardized deployment. This test profile though will attempt to fall-back to GCC / Clang for systems lacking AOCC, otherwise there is the existing "stream" test profile. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- TriadEPYC 7282EPYC 7532EPYC 7542EPYC 7642EPYC 774220K40K60K80K100KSE +/- 9.56, N = 3SE +/- 22.34, N = 3SE +/- 9.40, N = 3SE +/- 46.49, N = 3SE +/- 22.26, N = 357372.73104119.1690450.72103873.44104095.681. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- TriadEPYC 7282EPYC 7532EPYC 7542EPYC 7642EPYC 774220K40K60K80K100KMin: 57357.79 / Avg: 57372.73 / Max: 57390.53Min: 104077.01 / Avg: 104119.16 / Max: 104153.07Min: 90432.15 / Avg: 90450.72 / Max: 90462.55Min: 103785.36 / Avg: 103873.44 / Max: 103943.29Min: 104053.45 / Avg: 104095.68 / Max: 104128.991. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- AddEPYC 7282EPYC 7532EPYC 7542EPYC 7642EPYC 774220K40K60K80K100KSE +/- 10.53, N = 3SE +/- 8.82, N = 3SE +/- 6.47, N = 3SE +/- 40.00, N = 3SE +/- 34.53, N = 357323.38103789.9990238.63103556.13103818.191. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- AddEPYC 7282EPYC 7532EPYC 7542EPYC 7642EPYC 774220K40K60K80K100KMin: 57311.27 / Avg: 57323.38 / Max: 57344.35Min: 103778.66 / Avg: 103789.99 / Max: 103807.37Min: 90231.4 / Avg: 90238.63 / Max: 90251.53Min: 103485.19 / Avg: 103556.13 / Max: 103623.61Min: 103749.97 / Avg: 103818.19 / Max: 103861.631. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- ScaleEPYC 7282EPYC 7532EPYC 7542EPYC 7642EPYC 774220K40K60K80K100KSE +/- 11.66, N = 3SE +/- 25.57, N = 3SE +/- 12.91, N = 3SE +/- 32.33, N = 3SE +/- 28.79, N = 353610.3592368.4181529.0692049.4892337.611. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- ScaleEPYC 7282EPYC 7532EPYC 7542EPYC 7642EPYC 774216K32K48K64K80KMin: 53588 / Avg: 53610.35 / Max: 53627.31Min: 92337.02 / Avg: 92368.41 / Max: 92419.07Min: 81504.71 / Avg: 81529.06 / Max: 81548.69Min: 91997.69 / Avg: 92049.48 / Max: 92108.89Min: 92280.83 / Avg: 92337.61 / Max: 92374.331. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- CopyEPYC 7282EPYC 7532EPYC 7542EPYC 7642EPYC 774220K40K60K80K100KSE +/- 10.71, N = 3SE +/- 15.65, N = 3SE +/- 16.48, N = 3SE +/- 35.22, N = 3SE +/- 32.55, N = 353581.2794075.6782081.4393594.6193965.381. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- CopyEPYC 7282EPYC 7532EPYC 7542EPYC 7642EPYC 774216K32K48K64K80KMin: 53561.74 / Avg: 53581.27 / Max: 53598.65Min: 94054.25 / Avg: 94075.67 / Max: 94106.13Min: 82062.83 / Avg: 82081.43 / Max: 82114.3Min: 93528.24 / Avg: 93594.61 / Max: 93648.23Min: 93901.29 / Avg: 93965.38 / Max: 94007.331. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521.06622.13243.19864.26485.331SE +/- 0.00331, N = 4SE +/- 0.00316, N = 4SE +/- 0.00488, N = 4SE +/- 0.00144, N = 4SE +/- 0.00090, N = 4SE +/- 0.00185, N = 4SE +/- 0.00891, N = 4SE +/- 0.00341, N = 4SE +/- 0.00439, N = 4SE +/- 0.00906, N = 4SE +/- 0.01406, N = 4SE +/- 0.00178, N = 4SE +/- 0.01153, N = 10SE +/- 0.00382, N = 4SE +/- 0.01053, N = 44.738663.809553.446682.410821.829511.593001.615021.472221.599381.467751.489811.703601.515283.560201.98035MIN: 4.55MIN: 3.44MIN: 2.92MIN: 2.2MIN: 1.74MIN: 1.51MIN: 1.5MIN: 1.41MIN: 1.51MIN: 1.39MIN: 1.33MIN: 1.48MIN: 1.35MIN: 3.44MIN: 1.861. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 4.73 / Avg: 4.74 / Max: 4.74Min: 3.8 / Avg: 3.81 / Max: 3.81Min: 3.43 / Avg: 3.45 / Max: 3.45Min: 2.41 / Avg: 2.41 / Max: 2.41Min: 1.83 / Avg: 1.83 / Max: 1.83Min: 1.59 / Avg: 1.59 / Max: 1.6Min: 1.6 / Avg: 1.62 / Max: 1.64Min: 1.46 / Avg: 1.47 / Max: 1.48Min: 1.59 / Avg: 1.6 / Max: 1.61Min: 1.45 / Avg: 1.47 / Max: 1.49Min: 1.45 / Avg: 1.49 / Max: 1.52Min: 1.7 / Avg: 1.7 / Max: 1.71Min: 1.46 / Avg: 1.52 / Max: 1.57Min: 3.55 / Avg: 3.56 / Max: 3.57Min: 1.96 / Avg: 1.98 / Max: 2.011. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Timed ImageMagick Compilation

This test times how long it takes to build ImageMagick. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed ImageMagick Compilation 6.9.0Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52918273645SE +/- 0.17, N = 3SE +/- 0.02, N = 3SE +/- 0.16, N = 3SE +/- 0.04, N = 3SE +/- 0.07, N = 3SE +/- 0.02, N = 3SE +/- 0.06, N = 3SE +/- 0.05, N = 3SE +/- 0.09, N = 3SE +/- 0.09, N = 3SE +/- 0.10, N = 3SE +/- 0.08, N = 3SE +/- 0.05, N = 3SE +/- 0.10, N = 3SE +/- 0.15, N = 337.7228.2225.4623.5520.4319.4119.2718.9418.0617.6717.1917.1216.7628.6021.63
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed ImageMagick Compilation 6.9.0Time To CompileEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52816243240Min: 37.4 / Avg: 37.72 / Max: 38.01Min: 28.17 / Avg: 28.22 / Max: 28.26Min: 25.14 / Avg: 25.46 / Max: 25.64Min: 23.47 / Avg: 23.55 / Max: 23.6Min: 20.3 / Avg: 20.43 / Max: 20.52Min: 19.38 / Avg: 19.4 / Max: 19.45Min: 19.16 / Avg: 19.27 / Max: 19.35Min: 18.83 / Avg: 18.93 / Max: 19.02Min: 17.88 / Avg: 18.06 / Max: 18.19Min: 17.49 / Avg: 17.67 / Max: 17.79Min: 16.99 / Avg: 17.19 / Max: 17.31Min: 16.99 / Avg: 17.11 / Max: 17.27Min: 16.66 / Avg: 16.76 / Max: 16.81Min: 28.39 / Avg: 28.6 / Max: 28.73Min: 21.4 / Avg: 21.63 / Max: 21.91

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ThoroughEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52816243240SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.00, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 3SE +/- 0.01, N = 333.1522.4017.7316.6811.719.739.999.087.427.406.386.355.8927.1614.021. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ThoroughEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52714212835Min: 33.13 / Avg: 33.15 / Max: 33.18Min: 22.39 / Avg: 22.4 / Max: 22.42Min: 17.72 / Avg: 17.73 / Max: 17.74Min: 16.67 / Avg: 16.68 / Max: 16.69Min: 11.71 / Avg: 11.71 / Max: 11.72Min: 9.73 / Avg: 9.73 / Max: 9.74Min: 9.99 / Avg: 9.99 / Max: 9.99Min: 9.08 / Avg: 9.08 / Max: 9.08Min: 7.41 / Avg: 7.42 / Max: 7.42Min: 7.39 / Avg: 7.4 / Max: 7.41Min: 6.36 / Avg: 6.38 / Max: 6.4Min: 6.35 / Avg: 6.35 / Max: 6.36Min: 5.87 / Avg: 5.89 / Max: 5.9Min: 27.15 / Avg: 27.16 / Max: 27.17Min: 14 / Avg: 14.02 / Max: 14.031. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

OCRMyPDF

OCRMyPDF is an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with the text now selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOCRMyPDF 9.6.0+dfsgProcessing 60 Page PDF DocumentEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52816243240SE +/- 0.06, N = 3SE +/- 0.05, N = 3SE +/- 0.06, N = 3SE +/- 0.10, N = 3SE +/- 0.11, N = 3SE +/- 0.11, N = 3SE +/- 0.01, N = 3SE +/- 0.11, N = 3SE +/- 0.07, N = 3SE +/- 0.18, N = 3SE +/- 0.05, N = 3SE +/- 0.20, N = 3SE +/- 0.11, N = 3SE +/- 0.11, N = 3SE +/- 0.05, N = 333.0226.9024.2323.3920.8719.3819.9418.6618.8619.0918.7918.7318.1427.2019.86
OpenBenchmarking.orgSeconds, Fewer Is BetterOCRMyPDF 9.6.0+dfsgProcessing 60 Page PDF DocumentEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52714212835Min: 32.94 / Avg: 33.02 / Max: 33.14Min: 26.81 / Avg: 26.9 / Max: 26.96Min: 24.11 / Avg: 24.23 / Max: 24.33Min: 23.21 / Avg: 23.39 / Max: 23.54Min: 20.74 / Avg: 20.87 / Max: 21.08Min: 19.15 / Avg: 19.38 / Max: 19.51Min: 19.92 / Avg: 19.94 / Max: 19.97Min: 18.45 / Avg: 18.66 / Max: 18.82Min: 18.73 / Avg: 18.85 / Max: 18.96Min: 18.76 / Avg: 19.09 / Max: 19.39Min: 18.7 / Avg: 18.79 / Max: 18.88Min: 18.39 / Avg: 18.73 / Max: 19.1Min: 17.95 / Avg: 18.14 / Max: 18.35Min: 26.98 / Avg: 27.2 / Max: 27.33Min: 19.76 / Avg: 19.86 / Max: 19.94

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP StreamclusterEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52510152025SE +/- 0.040, N = 4SE +/- 0.079, N = 3SE +/- 0.159, N = 8SE +/- 0.034, N = 3SE +/- 0.042, N = 4SE +/- 0.007, N = 4SE +/- 0.016, N = 5SE +/- 0.022, N = 4SE +/- 0.119, N = 4SE +/- 0.074, N = 15SE +/- 0.030, N = 5SE +/- 0.017, N = 5SE +/- 0.008, N = 5SE +/- 0.017, N = 3SE +/- 0.049, N = 415.54016.89518.18917.72914.76114.3209.91314.31612.0279.1858.9058.9288.79119.70715.0491. (CXX) g++ options: -O2 -lOpenCL
OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP StreamclusterEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52510152025Min: 15.47 / Avg: 15.54 / Max: 15.65Min: 16.76 / Avg: 16.9 / Max: 17.03Min: 17.99 / Avg: 18.19 / Max: 19.3Min: 17.66 / Avg: 17.73 / Max: 17.77Min: 14.65 / Avg: 14.76 / Max: 14.83Min: 14.31 / Avg: 14.32 / Max: 14.34Min: 9.88 / Avg: 9.91 / Max: 9.97Min: 14.25 / Avg: 14.32 / Max: 14.36Min: 11.69 / Avg: 12.03 / Max: 12.26Min: 9.06 / Avg: 9.19 / Max: 10.22Min: 8.85 / Avg: 8.91 / Max: 9.02Min: 8.9 / Avg: 8.93 / Max: 8.99Min: 8.78 / Avg: 8.79 / Max: 8.82Min: 19.68 / Avg: 19.71 / Max: 19.74Min: 14.94 / Avg: 15.05 / Max: 15.151. (CXX) g++ options: -O2 -lOpenCL

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.02294, N = 3SE +/- 0.00535, N = 3SE +/- 0.02527, N = 3SE +/- 0.00946, N = 3SE +/- 0.02587, N = 3SE +/- 0.00464, N = 3SE +/- 0.00309, N = 3SE +/- 0.00531, N = 3SE +/- 0.00201, N = 3SE +/- 0.00608, N = 3SE +/- 0.00798, N = 3SE +/- 0.00645, N = 3SE +/- 0.00691, N = 3SE +/- 0.00611, N = 3SE +/- 0.02104, N = 38.671545.253316.063336.034975.922042.151942.205432.069831.997071.930421.798741.888421.790657.237635.49605MIN: 8.41MIN: 5.07MIN: 5.87MIN: 5.92MIN: 5.79MIN: 2.02MIN: 2.05MIN: 2MIN: 1.91MIN: 1.82MIN: 1.69MIN: 1.8MIN: 1.71MIN: 7.01MIN: 5.321. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 8.63 / Avg: 8.67 / Max: 8.71Min: 5.25 / Avg: 5.25 / Max: 5.26Min: 6.02 / Avg: 6.06 / Max: 6.1Min: 6.02 / Avg: 6.03 / Max: 6.04Min: 5.89 / Avg: 5.92 / Max: 5.97Min: 2.15 / Avg: 2.15 / Max: 2.16Min: 2.2 / Avg: 2.21 / Max: 2.21Min: 2.06 / Avg: 2.07 / Max: 2.08Min: 1.99 / Avg: 2 / Max: 2Min: 1.92 / Avg: 1.93 / Max: 1.94Min: 1.78 / Avg: 1.8 / Max: 1.81Min: 1.88 / Avg: 1.89 / Max: 1.9Min: 1.78 / Avg: 1.79 / Max: 1.8Min: 7.23 / Avg: 7.24 / Max: 7.25Min: 5.46 / Avg: 5.5 / Max: 5.541. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra FastEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521326395265SE +/- 0.02, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 4SE +/- 0.03, N = 4SE +/- 0.05, N = 4SE +/- 0.05, N = 5SE +/- 0.03, N = 5SE +/- 0.05, N = 5SE +/- 0.48, N = 5SE +/- 0.16, N = 5SE +/- 0.44, N = 5SE +/- 0.54, N = 5SE +/- 0.47, N = 15SE +/- 0.04, N = 3SE +/- 0.02, N = 423.1131.3939.6541.5248.3354.9254.0756.9855.9959.5958.6453.5257.4027.1647.171. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra FastEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521224364860Min: 23.08 / Avg: 23.11 / Max: 23.14Min: 31.34 / Avg: 31.39 / Max: 31.49Min: 39.58 / Avg: 39.65 / Max: 39.69Min: 41.47 / Avg: 41.52 / Max: 41.57Min: 48.24 / Avg: 48.33 / Max: 48.45Min: 54.78 / Avg: 54.92 / Max: 55.03Min: 54 / Avg: 54.07 / Max: 54.14Min: 56.85 / Avg: 56.98 / Max: 57.12Min: 55.17 / Avg: 55.99 / Max: 57.85Min: 59.13 / Avg: 59.59 / Max: 59.95Min: 57.87 / Avg: 58.64 / Max: 60.27Min: 52.03 / Avg: 53.52 / Max: 54.9Min: 54.58 / Avg: 57.4 / Max: 61.01Min: 27.11 / Avg: 27.16 / Max: 27.23Min: 47.11 / Avg: 47.17 / Max: 47.211. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 4 - Input: 1080pEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.008, N = 3SE +/- 0.008, N = 3SE +/- 0.006, N = 3SE +/- 0.017, N = 3SE +/- 0.043, N = 3SE +/- 0.032, N = 3SE +/- 0.048, N = 4SE +/- 0.032, N = 4SE +/- 0.012, N = 3SE +/- 0.039, N = 4SE +/- 0.060, N = 3SE +/- 0.003, N = 3SE +/- 0.026, N = 3SE +/- 0.023, N = 3SE +/- 0.035, N = 42.5363.5934.2254.5065.5635.8665.9586.0916.4946.6786.9666.7526.9853.1845.3331. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 4 - Input: 1080pEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 2.52 / Avg: 2.54 / Max: 2.55Min: 3.58 / Avg: 3.59 / Max: 3.61Min: 4.22 / Avg: 4.23 / Max: 4.24Min: 4.47 / Avg: 4.51 / Max: 4.53Min: 5.48 / Avg: 5.56 / Max: 5.61Min: 5.8 / Avg: 5.87 / Max: 5.91Min: 5.88 / Avg: 5.96 / Max: 6.09Min: 6.04 / Avg: 6.09 / Max: 6.18Min: 6.47 / Avg: 6.49 / Max: 6.51Min: 6.57 / Avg: 6.68 / Max: 6.75Min: 6.88 / Avg: 6.97 / Max: 7.08Min: 6.75 / Avg: 6.75 / Max: 6.76Min: 6.95 / Avg: 6.99 / Max: 7.04Min: 3.14 / Avg: 3.18 / Max: 3.21Min: 5.23 / Avg: 5.33 / Max: 5.391. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Relative EntropyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52612182430SE +/- 0.17, N = 3SE +/- 0.17, N = 3SE +/- 0.10, N = 3SE +/- 0.17, N = 3SE +/- 0.05, N = 4SE +/- 0.15, N = 4SE +/- 0.08, N = 4SE +/- 0.06, N = 4SE +/- 0.08, N = 4SE +/- 0.13, N = 4SE +/- 0.10, N = 4SE +/- 0.03, N = 4SE +/- 0.05, N = 4SE +/- 0.09, N = 3SE +/- 0.08, N = 423.5619.1118.1017.1816.1215.8516.3315.7215.9715.8915.9715.9415.5718.8814.60
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Relative EntropyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52612182430Min: 23.35 / Avg: 23.56 / Max: 23.91Min: 18.77 / Avg: 19.11 / Max: 19.31Min: 17.96 / Avg: 18.1 / Max: 18.29Min: 17 / Avg: 17.18 / Max: 17.52Min: 16.01 / Avg: 16.12 / Max: 16.23Min: 15.61 / Avg: 15.85 / Max: 16.23Min: 16.13 / Avg: 16.33 / Max: 16.51Min: 15.6 / Avg: 15.72 / Max: 15.88Min: 15.82 / Avg: 15.97 / Max: 16.13Min: 15.65 / Avg: 15.89 / Max: 16.17Min: 15.81 / Avg: 15.97 / Max: 16.23Min: 15.88 / Avg: 15.94 / Max: 16.04Min: 15.48 / Avg: 15.57 / Max: 15.7Min: 18.71 / Avg: 18.88 / Max: 19.01Min: 14.38 / Avg: 14.6 / Max: 14.74

Botan

Botan is a cross-platform open-source C++ crypto library that supports most all publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: BlowfishEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5290180270360450SE +/- 0.13, N = 3SE +/- 0.16, N = 3SE +/- 0.25, N = 3SE +/- 0.15, N = 3SE +/- 0.40, N = 3SE +/- 0.14, N = 3SE +/- 0.13, N = 3SE +/- 0.10, N = 3SE +/- 0.10, N = 3SE +/- 0.20, N = 3SE +/- 0.16, N = 3SE +/- 0.16, N = 3SE +/- 0.22, N = 3SE +/- 0.15, N = 3SE +/- 0.07, N = 3352.29352.07351.14362.20368.38368.69363.10374.52363.50361.80362.99368.93374.20429.41429.141. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: BlowfishEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5280160240320400Min: 352.1 / Avg: 352.29 / Max: 352.53Min: 351.75 / Avg: 352.07 / Max: 352.26Min: 350.66 / Avg: 351.14 / Max: 351.45Min: 362.03 / Avg: 362.2 / Max: 362.49Min: 367.96 / Avg: 368.38 / Max: 369.17Min: 368.54 / Avg: 368.69 / Max: 368.97Min: 362.85 / Avg: 363.1 / Max: 363.25Min: 374.32 / Avg: 374.52 / Max: 374.67Min: 363.36 / Avg: 363.5 / Max: 363.69Min: 361.43 / Avg: 361.8 / Max: 362.1Min: 362.68 / Avg: 362.99 / Max: 363.15Min: 368.71 / Avg: 368.93 / Max: 369.25Min: 373.85 / Avg: 374.2 / Max: 374.6Min: 429.18 / Avg: 429.41 / Max: 429.7Min: 429.06 / Avg: 429.14 / Max: 429.271. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: TwofishEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5280160240320400SE +/- 0.10, N = 3SE +/- 0.07, N = 3SE +/- 0.10, N = 3SE +/- 0.11, N = 3SE +/- 0.18, N = 3SE +/- 0.04, N = 3SE +/- 0.03, N = 3SE +/- 0.37, N = 3SE +/- 0.05, N = 3SE +/- 0.05, N = 3SE +/- 0.08, N = 3SE +/- 0.04, N = 3SE +/- 0.15, N = 3SE +/- 0.08, N = 3SE +/- 0.04, N = 3288.70288.62287.93297.65302.12302.25297.71306.62297.10296.91297.81302.14306.48351.99351.631. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: TwofishEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300Min: 288.49 / Avg: 288.7 / Max: 288.83Min: 288.49 / Avg: 288.62 / Max: 288.69Min: 287.77 / Avg: 287.93 / Max: 288.1Min: 297.5 / Avg: 297.65 / Max: 297.86Min: 301.76 / Avg: 302.12 / Max: 302.35Min: 302.19 / Avg: 302.25 / Max: 302.32Min: 297.66 / Avg: 297.7 / Max: 297.77Min: 305.88 / Avg: 306.62 / Max: 307Min: 297.02 / Avg: 297.1 / Max: 297.2Min: 296.81 / Avg: 296.91 / Max: 296.96Min: 297.64 / Avg: 297.81 / Max: 297.92Min: 302.1 / Avg: 302.14 / Max: 302.21Min: 306.27 / Avg: 306.48 / Max: 306.76Min: 351.85 / Avg: 351.99 / Max: 352.11Min: 351.56 / Avg: 351.63 / Max: 351.711. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: Magnetic Reconnection - Renderer: SciVisEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521020304050SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 5SE +/- 0.17, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 6SE +/- 0.00, N = 6SE +/- 0.00, N = 7SE +/- 0.08, N = 6SE +/- 0.06, N = 36.9910.4213.8914.0820.8326.0525.0027.7834.4834.4840.0040.0043.488.1613.45MIN: 5.95 / MAX: 7.04MIN: 10.1 / MAX: 10.64MIN: 13.33 / MAX: 14.08MIN: 12.99 / MAX: 14.29MIN: 20.41 / MAX: 21.28MIN: 25.64 / MAX: 26.32MIN: 23.81 / MAX: 25.64MIN: 27.03MIN: 33.33 / MAX: 35.71MIN: 33.33 / MAX: 35.71MIN: 38.46 / MAX: 41.67MIN: 34.48 / MAX: 41.67MIN: 35.71 / MAX: 45.45MIN: 6.85 / MAX: 8.4MIN: 12.82 / MAX: 13.89
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: Magnetic Reconnection - Renderer: SciVisEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52918273645Min: 6.99 / Avg: 6.99 / Max: 6.99Min: 10.42 / Avg: 10.42 / Max: 10.42Min: 13.89 / Avg: 13.89 / Max: 13.89Min: 14.08 / Avg: 14.08 / Max: 14.08Min: 20.83 / Avg: 20.83 / Max: 20.83Min: 25.64 / Avg: 26.05 / Max: 26.32Min: 27.78 / Avg: 27.78 / Max: 27.78Min: 34.48 / Avg: 34.48 / Max: 34.48Min: 34.48 / Avg: 34.48 / Max: 34.48Min: 43.48 / Avg: 43.48 / Max: 43.48Min: 7.75 / Avg: 8.16 / Max: 8.26Min: 13.33 / Avg: 13.45 / Max: 13.51

Botan

Botan is a cross-platform open-source C++ crypto library that supports most all publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: CAST-256EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150SE +/- 0.04, N = 3SE +/- 0.06, N = 3SE +/- 0.09, N = 3SE +/- 0.28, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.11, N = 3SE +/- 0.09, N = 3SE +/- 0.01, N = 3SE +/- 0.10, N = 3SE +/- 0.02, N = 3SE +/- 0.04, N = 3114.59114.69114.47117.88119.99120.06118.29121.92118.32117.99118.23120.00121.71139.83139.711. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: CAST-256EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52306090120150Min: 114.51 / Avg: 114.59 / Max: 114.64Min: 114.63 / Avg: 114.69 / Max: 114.8Min: 114.34 / Avg: 114.47 / Max: 114.63Min: 117.32 / Avg: 117.88 / Max: 118.21Min: 119.95 / Avg: 119.99 / Max: 120.04Min: 119.97 / Avg: 120.06 / Max: 120.12Min: 118.22 / Avg: 118.29 / Max: 118.36Min: 121.89 / Avg: 121.92 / Max: 121.94Min: 118.29 / Avg: 118.32 / Max: 118.36Min: 117.81 / Avg: 117.99 / Max: 118.19Min: 118.06 / Avg: 118.23 / Max: 118.33Min: 119.99 / Avg: 119.99 / Max: 120.01Min: 121.56 / Avg: 121.71 / Max: 121.89Min: 139.8 / Avg: 139.83 / Max: 139.86Min: 139.63 / Avg: 139.71 / Max: 139.741. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: KASUMIEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.07, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.06, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.05, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 374.2774.2174.0676.5177.5777.6976.5378.8876.5876.2276.5577.7078.8690.4690.421. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: KASUMIEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100Min: 74.23 / Avg: 74.27 / Max: 74.29Min: 74.18 / Avg: 74.21 / Max: 74.22Min: 73.98 / Avg: 74.06 / Max: 74.21Min: 76.49 / Avg: 76.51 / Max: 76.52Min: 77.5 / Avg: 77.57 / Max: 77.66Min: 77.68 / Avg: 77.69 / Max: 77.7Min: 76.42 / Avg: 76.53 / Max: 76.6Min: 78.84 / Avg: 78.88 / Max: 78.91Min: 76.57 / Avg: 76.57 / Max: 76.58Min: 76.12 / Avg: 76.22 / Max: 76.31Min: 76.54 / Avg: 76.55 / Max: 76.55Min: 77.66 / Avg: 77.7 / Max: 77.77Min: 78.82 / Avg: 78.86 / Max: 78.88Min: 90.45 / Avg: 90.46 / Max: 90.47Min: 90.37 / Avg: 90.42 / Max: 90.451. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.45410.90821.36231.81642.2705SE +/- 0.005714, N = 4SE +/- 0.000447, N = 4SE +/- 0.000258, N = 4SE +/- 0.000863, N = 4SE +/- 0.000660, N = 4SE +/- 0.000902, N = 4SE +/- 0.005528, N = 6SE +/- 0.000738, N = 4SE +/- 0.004962, N = 15SE +/- 0.003528, N = 4SE +/- 0.005359, N = 5SE +/- 0.002887, N = 4SE +/- 0.002953, N = 4SE +/- 0.001580, N = 4SE +/- 0.001140, N = 42.0183801.0934400.8757490.8022720.5923080.5376360.5534440.5005210.5339610.4842190.5170230.5874430.5369161.1601700.660419MIN: 1.95MIN: 0.96MIN: 0.74MIN: 0.72MIN: 0.56MIN: 0.51MIN: 0.49MIN: 0.48MIN: 0.49MIN: 0.46MIN: 0.47MIN: 0.54MIN: 0.49MIN: 1.13MIN: 0.621. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 2 / Avg: 2.02 / Max: 2.03Min: 1.09 / Avg: 1.09 / Max: 1.09Min: 0.88 / Avg: 0.88 / Max: 0.88Min: 0.8 / Avg: 0.8 / Max: 0.8Min: 0.59 / Avg: 0.59 / Max: 0.59Min: 0.54 / Avg: 0.54 / Max: 0.54Min: 0.54 / Avg: 0.55 / Max: 0.58Min: 0.5 / Avg: 0.5 / Max: 0.5Min: 0.51 / Avg: 0.53 / Max: 0.57Min: 0.48 / Avg: 0.48 / Max: 0.49Min: 0.5 / Avg: 0.52 / Max: 0.53Min: 0.58 / Avg: 0.59 / Max: 0.6Min: 0.53 / Avg: 0.54 / Max: 0.54Min: 1.16 / Avg: 1.16 / Max: 1.16Min: 0.66 / Avg: 0.66 / Max: 0.661. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP CFD SolverEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52612182430SE +/- 0.030, N = 3SE +/- 0.065, N = 3SE +/- 0.009, N = 4SE +/- 0.022, N = 4SE +/- 0.007, N = 4SE +/- 0.030, N = 5SE +/- 0.057, N = 5SE +/- 0.005, N = 5SE +/- 0.083, N = 6SE +/- 0.073, N = 5SE +/- 0.051, N = 15SE +/- 0.066, N = 8SE +/- 0.061, N = 6SE +/- 0.078, N = 3SE +/- 0.022, N = 425.37418.13414.79215.44511.5779.8929.8589.8068.4058.4347.7477.8197.68322.21813.5301. (CXX) g++ options: -O2 -lOpenCL
OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP CFD SolverEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52612182430Min: 25.33 / Avg: 25.37 / Max: 25.43Min: 18.02 / Avg: 18.13 / Max: 18.24Min: 14.76 / Avg: 14.79 / Max: 14.8Min: 15.41 / Avg: 15.44 / Max: 15.51Min: 11.57 / Avg: 11.58 / Max: 11.6Min: 9.81 / Avg: 9.89 / Max: 9.96Min: 9.76 / Avg: 9.86 / Max: 10.08Min: 9.79 / Avg: 9.81 / Max: 9.82Min: 8.14 / Avg: 8.41 / Max: 8.71Min: 8.16 / Avg: 8.43 / Max: 8.59Min: 7.37 / Avg: 7.75 / Max: 8.02Min: 7.45 / Avg: 7.82 / Max: 8.05Min: 7.4 / Avg: 7.68 / Max: 7.86Min: 22.12 / Avg: 22.22 / Max: 22.37Min: 13.49 / Avg: 13.53 / Max: 13.591. (CXX) g++ options: -O2 -lOpenCL

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F520.80631.61262.41893.22524.0315SE +/- 0.001594, N = 4SE +/- 0.001298, N = 4SE +/- 0.001269, N = 4SE +/- 0.001219, N = 4SE +/- 0.009261, N = 15SE +/- 0.003967, N = 4SE +/- 0.001948, N = 4SE +/- 0.005244, N = 4SE +/- 0.000788, N = 4SE +/- 0.001659, N = 4SE +/- 0.000537, N = 4SE +/- 0.002127, N = 4SE +/- 0.005312, N = 4SE +/- 0.000640, N = 4SE +/- 0.001361, N = 43.5836302.4937201.9130501.8417801.2959901.0739901.1048601.0158200.8680240.8631460.7808030.8006430.7505193.0129201.565030MIN: 3.54MIN: 2.45MIN: 1.87MIN: 1.82MIN: 1.25MIN: 0.98MIN: 0.99MIN: 0.98MIN: 0.81MIN: 0.77MIN: 0.7MIN: 0.76MIN: 0.71MIN: 2.73MIN: 1.541. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810Min: 3.58 / Avg: 3.58 / Max: 3.59Min: 2.49 / Avg: 2.49 / Max: 2.5Min: 1.91 / Avg: 1.91 / Max: 1.92Min: 1.84 / Avg: 1.84 / Max: 1.85Min: 1.27 / Avg: 1.3 / Max: 1.37Min: 1.07 / Avg: 1.07 / Max: 1.08Min: 1.1 / Avg: 1.1 / Max: 1.11Min: 1 / Avg: 1.02 / Max: 1.03Min: 0.87 / Avg: 0.87 / Max: 0.87Min: 0.86 / Avg: 0.86 / Max: 0.87Min: 0.78 / Avg: 0.78 / Max: 0.78Min: 0.79 / Avg: 0.8 / Max: 0.8Min: 0.74 / Avg: 0.75 / Max: 0.77Min: 3.01 / Avg: 3.01 / Max: 3.01Min: 1.56 / Avg: 1.57 / Max: 1.571. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 1.2.0Scene: MemorialEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52714212835SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.12, N = 6SE +/- 0.01, N = 4SE +/- 0.01, N = 5SE +/- 0.01, N = 5SE +/- 0.01, N = 5SE +/- 0.02, N = 6SE +/- 0.05, N = 6SE +/- 0.04, N = 6SE +/- 0.03, N = 6SE +/- 0.02, N = 6SE +/- 0.01, N = 3SE +/- 0.02, N = 46.559.0311.3412.1817.1720.2520.1421.5526.1326.6829.4127.6730.257.9414.24
OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 1.2.0Scene: MemorialEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52714212835Min: 6.54 / Avg: 6.55 / Max: 6.55Min: 9.02 / Avg: 9.03 / Max: 9.03Min: 11.34 / Avg: 11.34 / Max: 11.36Min: 11.56 / Avg: 12.18 / Max: 12.32Min: 17.16 / Avg: 17.17 / Max: 17.19Min: 20.23 / Avg: 20.25 / Max: 20.28Min: 20.11 / Avg: 20.14 / Max: 20.18Min: 21.52 / Avg: 21.55 / Max: 21.6Min: 26.05 / Avg: 26.13 / Max: 26.17Min: 26.53 / Avg: 26.68 / Max: 26.79Min: 29.29 / Avg: 29.41 / Max: 29.54Min: 27.55 / Avg: 27.67 / Max: 27.73Min: 30.17 / Avg: 30.25 / Max: 30.32Min: 7.92 / Avg: 7.94 / Max: 7.96Min: 14.2 / Avg: 14.24 / Max: 14.27

Sysbench

This is a benchmark of Sysbench with CPU and memory sub-tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 2018-07-28Test: MemoryEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522M4M6M8M10MSE +/- 12382.78, N = 5SE +/- 1918.40, N = 5SE +/- 11575.58, N = 5SE +/- 780.60, N = 5SE +/- 6709.34, N = 5SE +/- 5458.69, N = 5SE +/- 61191.25, N = 15SE +/- 8561.16, N = 5SE +/- 14871.56, N = 5SE +/- 25894.00, N = 5SE +/- 35042.66, N = 5SE +/- 11992.02, N = 5SE +/- 18226.83, N = 5SE +/- 717.93, N = 5SE +/- 17904.29, N = 55942474.377232205.307880986.594502591.455614019.496612746.504720934.266618850.246380400.355536026.076374595.126302136.656334165.634656182.263087955.631. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm
OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 2018-07-28Test: MemoryEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521.4M2.8M4.2M5.6M7MMin: 5895950.69 / Avg: 5942474.37 / Max: 5962905.57Min: 7225906.31 / Avg: 7232205.3 / Max: 7236184.18Min: 7856776.88 / Avg: 7880986.59 / Max: 7920212.37Min: 4500022.71 / Avg: 4502591.45 / Max: 4504708.71Min: 5602311.72 / Avg: 5614019.49 / Max: 5638809.43Min: 6604337.31 / Avg: 6612746.5 / Max: 6634214.25Min: 4407088.34 / Avg: 4720934.26 / Max: 5255661.79Min: 6607713.8 / Avg: 6618850.24 / Max: 6652865.09Min: 6349678.61 / Avg: 6380400.35 / Max: 6428305.03Min: 5462715.81 / Avg: 5536026.07 / Max: 5604522.17Min: 6319346.34 / Avg: 6374595.12 / Max: 6505001.22Min: 6269344.44 / Avg: 6302136.65 / Max: 6328806.52Min: 6274890.2 / Avg: 6334165.63 / Max: 6379485.7Min: 4654731.01 / Avg: 4656182.26 / Max: 4658759.57Min: 3067897.69 / Avg: 3087955.63 / Max: 3159400.311. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTungsten Renderer 0.2.2Scene: HairEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52816243240SE +/- 0.03115, N = 3SE +/- 0.03918, N = 3SE +/- 0.03890, N = 3SE +/- 0.02745, N = 3SE +/- 0.02622, N = 4SE +/- 0.00983, N = 5SE +/- 0.03618, N = 5SE +/- 0.02895, N = 5SE +/- 0.04134, N = 5SE +/- 0.03455, N = 5SE +/- 0.02077, N = 6SE +/- 0.02278, N = 6SE +/- 0.03180, N = 6SE +/- 0.01111, N = 3SE +/- 0.03705, N = 432.5388021.9958017.6542016.6175012.2623010.5740010.813109.928498.719538.568287.550937.507067.0929726.5161014.132701. (CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterTungsten Renderer 0.2.2Scene: HairEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52714212835Min: 32.48 / Avg: 32.54 / Max: 32.59Min: 21.93 / Avg: 22 / Max: 22.07Min: 17.6 / Avg: 17.65 / Max: 17.73Min: 16.56 / Avg: 16.62 / Max: 16.65Min: 12.21 / Avg: 12.26 / Max: 12.33Min: 10.56 / Avg: 10.57 / Max: 10.61Min: 10.71 / Avg: 10.81 / Max: 10.91Min: 9.86 / Avg: 9.93 / Max: 10.01Min: 8.63 / Avg: 8.72 / Max: 8.86Min: 8.43 / Avg: 8.57 / Max: 8.61Min: 7.5 / Avg: 7.55 / Max: 7.63Min: 7.46 / Avg: 7.51 / Max: 7.59Min: 6.99 / Avg: 7.09 / Max: 7.21Min: 26.5 / Avg: 26.52 / Max: 26.54Min: 14.08 / Avg: 14.13 / Max: 14.231. (CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lpthread -ldl

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 5EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F5220406080100SE +/- 0.03, N = 3SE +/- 0.13, N = 3SE +/- 0.21, N = 3SE +/- 0.06, N = 3SE +/- 0.06, N = 3SE +/- 0.05, N = 3SE +/- 0.02, N = 3SE +/- 0.12, N = 367.2979.5976.1880.0577.7177.3463.5173.831. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 5EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F521530456075Min: 67.24 / Avg: 67.29 / Max: 67.35Min: 79.43 / Avg: 79.59 / Max: 79.84Min: 75.81 / Avg: 76.18 / Max: 76.52Min: 79.94 / Avg: 80.05 / Max: 80.11Min: 77.6 / Avg: 77.71 / Max: 77.78Min: 77.24 / Avg: 77.34 / Max: 77.39Min: 63.47 / Avg: 63.51 / Max: 63.54Min: 73.58 / Avg: 73.83 / Max: 73.981. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 7EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F521326395265SE +/- 0.13, N = 3SE +/- 0.18, N = 3SE +/- 0.12, N = 3SE +/- 0.03, N = 3SE +/- 0.14, N = 3SE +/- 0.15, N = 3SE +/- 0.15, N = 4SE +/- 0.24, N = 453.7558.6254.7358.5954.8055.7860.2560.121. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 7EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F521224364860Min: 53.53 / Avg: 53.75 / Max: 53.97Min: 58.33 / Avg: 58.62 / Max: 58.95Min: 54.6 / Avg: 54.73 / Max: 54.97Min: 58.56 / Avg: 58.59 / Max: 58.65Min: 54.64 / Avg: 54.8 / Max: 55.08Min: 55.61 / Avg: 55.78 / Max: 56.08Min: 59.9 / Avg: 60.25 / Max: 60.56Min: 59.67 / Avg: 60.12 / Max: 60.741. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

Stream

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: CopyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220K40K60K80K100KSE +/- 9.94, N = 5SE +/- 7.06, N = 5SE +/- 2.37, N = 5SE +/- 8.02, N = 5SE +/- 5.59, N = 5SE +/- 14.85, N = 5SE +/- 10.31, N = 5SE +/- 5.66, N = 5SE +/- 15.41, N = 5SE +/- 47.63, N = 5SE +/- 13.27, N = 5SE +/- 31.65, N = 5SE +/- 14.28, N = 5SE +/- 52.59, N = 5SE +/- 330.72, N = 552390.051714.451093.180140.179677.279342.990663.279674.188717.291703.290296.590511.890141.782399.766901.11. (CC) gcc options: -O3 -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: CopyEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5216K32K48K64K80KMin: 52373.2 / Avg: 52390.04 / Max: 52428Min: 51688.2 / Avg: 51714.44 / Max: 51726.5Min: 51087.4 / Avg: 51093.14 / Max: 51100.2Min: 80119.5 / Avg: 80140.14 / Max: 80160.6Min: 79657.3 / Avg: 79677.16 / Max: 79688.5Min: 79306.1 / Avg: 79342.9 / Max: 79388.7Min: 90631.3 / Avg: 90663.16 / Max: 90687.7Min: 79657.3 / Avg: 79674.1 / Max: 79689.4Min: 88676.8 / Avg: 88717.18 / Max: 88755.4Min: 91548.7 / Avg: 91703.18 / Max: 91816.8Min: 90262 / Avg: 90296.46 / Max: 90328.8Min: 90390.8 / Avg: 90511.82 / Max: 90564Min: 90094.7 / Avg: 90141.7 / Max: 90180.7Min: 82189.9 / Avg: 82399.72 / Max: 82465.6Min: 65648.8 / Avg: 66901.12 / Max: 67581.91. (CC) gcc options: -O3 -march=native -fopenmp

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: CG.CEPYC 7232PEPYC 7282EPYC 7302PEPYC 7502PEPYC 7532EPYC 7542EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524K8K12K16K20KSE +/- 22.89, N = 4SE +/- 18.07, N = 4SE +/- 15.81, N = 5SE +/- 69.28, N = 5SE +/- 25.61, N = 5SE +/- 30.70, N = 5SE +/- 27.38, N = 5SE +/- 15.75, N = 5SE +/- 12.30, N = 5SE +/- 18.04, N = 4SE +/- 43.06, N = 39844.959697.8315688.5714759.4018019.0114977.2915230.2315079.0815249.7212350.877779.491. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: CG.CEPYC 7232PEPYC 7282EPYC 7302PEPYC 7502PEPYC 7532EPYC 7542EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523K6K9K12K15KMin: 9819.42 / Avg: 9844.95 / Max: 9913.54Min: 9644.91 / Avg: 9697.83 / Max: 9725.86Min: 15654.67 / Avg: 15688.57 / Max: 15744.56Min: 14524.99 / Avg: 14759.4 / Max: 14889.39Min: 17953.88 / Avg: 18019.01 / Max: 18084.24Min: 14873.66 / Avg: 14977.29 / Max: 15062.42Min: 15146.06 / Avg: 15230.23 / Max: 15316.01Min: 15027.23 / Avg: 15079.08 / Max: 15122.17Min: 15214.37 / Avg: 15249.72 / Max: 15282.02Min: 12309.98 / Avg: 12350.87 / Max: 12395.51Min: 7733.94 / Avg: 7779.49 / Max: 7865.571. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: FT.CEPYC 7232PEPYC 7282EPYC 7302PEPYC 7502PEPYC 7532EPYC 7542EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5212K24K36K48K60KSE +/- 23.87, N = 3SE +/- 23.10, N = 4SE +/- 19.15, N = 4SE +/- 36.50, N = 5SE +/- 17.78, N = 5SE +/- 17.91, N = 5SE +/- 50.60, N = 5SE +/- 40.27, N = 5SE +/- 93.63, N = 6SE +/- 10.84, N = 4SE +/- 44.41, N = 321822.0629526.9737175.2046206.3349235.9247098.2255668.6355136.5056055.8727284.6922117.171. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: FT.CEPYC 7232PEPYC 7282EPYC 7302PEPYC 7502PEPYC 7532EPYC 7542EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5210K20K30K40K50KMin: 21777.28 / Avg: 21822.06 / Max: 21858.76Min: 29462.38 / Avg: 29526.97 / Max: 29571Min: 37140.1 / Avg: 37175.2 / Max: 37216.01Min: 46062.44 / Avg: 46206.33 / Max: 46260.89Min: 49168.13 / Avg: 49235.92 / Max: 49272.59Min: 47048.85 / Avg: 47098.22 / Max: 47153.11Min: 55491.76 / Avg: 55668.63 / Max: 55799.49Min: 55002.69 / Avg: 55136.5 / Max: 55211.78Min: 55606.59 / Avg: 56055.87 / Max: 56253.95Min: 27272.21 / Avg: 27284.69 / Max: 27317.11Min: 22040.65 / Avg: 22117.17 / Max: 22194.481. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficienct lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 100, Compression Effort 5EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5248121620SE +/- 0.032, N = 4SE +/- 0.018, N = 5SE +/- 0.012, N = 5SE +/- 0.020, N = 5SE +/- 0.009, N = 6SE +/- 0.027, N = 6SE +/- 0.010, N = 6SE +/- 0.015, N = 6SE +/- 0.013, N = 6SE +/- 0.006, N = 6SE +/- 0.009, N = 6SE +/- 0.012, N = 6SE +/- 0.015, N = 6SE +/- 0.031, N = 4SE +/- 0.012, N = 615.54711.1319.6549.3547.9747.9728.0557.9498.1628.0748.0528.1258.06112.7887.9731. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 100, Compression Effort 5EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5248121620Min: 15.46 / Avg: 15.55 / Max: 15.62Min: 11.09 / Avg: 11.13 / Max: 11.19Min: 9.63 / Avg: 9.65 / Max: 9.69Min: 9.31 / Avg: 9.35 / Max: 9.43Min: 7.94 / Avg: 7.97 / Max: 8Min: 7.91 / Avg: 7.97 / Max: 8.08Min: 8.01 / Avg: 8.06 / Max: 8.08Min: 7.92 / Avg: 7.95 / Max: 8.02Min: 8.12 / Avg: 8.16 / Max: 8.21Min: 8.05 / Avg: 8.07 / Max: 8.09Min: 8.04 / Avg: 8.05 / Max: 8.09Min: 8.1 / Avg: 8.13 / Max: 8.17Min: 8.01 / Avg: 8.06 / Max: 8.11Min: 12.74 / Avg: 12.79 / Max: 12.88Min: 7.94 / Avg: 7.97 / Max: 8.021. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux

Sysbench

This is a benchmark of Sysbench with CPU and memory sub-tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 2018-07-28Test: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220K40K60K80K100KSE +/- 1.09, N = 5SE +/- 5.53, N = 5SE +/- 10.65, N = 5SE +/- 9.93, N = 5SE +/- 12.89, N = 5SE +/- 9.32, N = 5SE +/- 7.32, N = 5SE +/- 20.39, N = 5SE +/- 23.00, N = 5SE +/- 54.62, N = 5SE +/- 52.65, N = 5SE +/- 97.15, N = 5SE +/- 117.22, N = 5SE +/- 2.64, N = 5SE +/- 7.33, N = 513398.1020091.9626778.6227632.1942008.1156038.7255254.6656066.7980743.8981398.29108892.22106421.95108070.7316323.9232649.361. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm
OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 2018-07-28Test: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220K40K60K80K100KMin: 13395.23 / Avg: 13398.1 / Max: 13400.85Min: 20074.61 / Avg: 20091.96 / Max: 20104.41Min: 26746.06 / Avg: 26778.62 / Max: 26804.47Min: 27594.22 / Avg: 27632.19 / Max: 27651.61Min: 41974.26 / Avg: 42008.11 / Max: 42044Min: 56017.18 / Avg: 56038.72 / Max: 56065.58Min: 55238.42 / Avg: 55254.66 / Max: 55276.83Min: 55989.07 / Avg: 56066.79 / Max: 56109.25Min: 80668.55 / Avg: 80743.89 / Max: 80805.58Min: 81221.7 / Avg: 81398.29 / Max: 81557.28Min: 108721.1 / Avg: 108892.22 / Max: 109052.86Min: 106076.46 / Avg: 106421.95 / Max: 106643.26Min: 107743.97 / Avg: 108070.73 / Max: 108295.29Min: 16316.25 / Avg: 16323.92 / Max: 16331.8Min: 32621.61 / Avg: 32649.36 / Max: 32663.31. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 8 - Input: 1080pEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100SE +/- 0.07, N = 4SE +/- 0.11, N = 4SE +/- 0.09, N = 5SE +/- 0.17, N = 5SE +/- 0.03, N = 6SE +/- 0.25, N = 6SE +/- 0.17, N = 6SE +/- 0.18, N = 6SE +/- 0.20, N = 5SE +/- 0.16, N = 5SE +/- 0.32, N = 5SE +/- 0.27, N = 5SE +/- 0.41, N = 5SE +/- 0.03, N = 4SE +/- 0.21, N = 521.7630.3835.3837.0454.8757.3256.5359.9462.4063.2183.4079.8285.0026.1342.841. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 8 - Input: 1080pEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521632486480Min: 21.59 / Avg: 21.76 / Max: 21.91Min: 30.14 / Avg: 30.38 / Max: 30.6Min: 35.07 / Avg: 35.38 / Max: 35.57Min: 36.48 / Avg: 37.04 / Max: 37.5Min: 54.76 / Avg: 54.87 / Max: 54.97Min: 56.32 / Avg: 57.32 / Max: 57.97Min: 56.08 / Avg: 56.53 / Max: 57.07Min: 59.31 / Avg: 59.94 / Max: 60.53Min: 61.9 / Avg: 62.4 / Max: 62.94Min: 62.71 / Avg: 63.21 / Max: 63.66Min: 82.2 / Avg: 83.4 / Max: 84.06Min: 79.13 / Avg: 79.82 / Max: 80.5Min: 84.01 / Avg: 85 / Max: 86.21Min: 26.07 / Avg: 26.13 / Max: 26.22Min: 42.28 / Avg: 42.84 / Max: 43.391. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

rays1bench

This is a test of rays1bench, a simple path-tracer / ray-tracing that supports SSE and AVX instructions, multi-threading, and other features. This test profile is measuring the performance of the "large scene" in rays1bench. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmrays/s, More Is Betterrays1bench 2020-01-09Large SceneEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300SE +/- 0.08, N = 3SE +/- 0.04, N = 4SE +/- 0.06, N = 4SE +/- 0.08, N = 5SE +/- 0.04, N = 6SE +/- 0.06, N = 6SE +/- 0.06, N = 6SE +/- 0.03, N = 6SE +/- 0.08, N = 7SE +/- 0.11, N = 7SE +/- 0.21, N = 7SE +/- 0.20, N = 7SE +/- 0.16, N = 7SE +/- 0.01, N = 4SE +/- 0.10, N = 548.6168.5184.5490.37134.19167.75163.00182.59217.70218.73243.25243.57269.7459.60109.91
OpenBenchmarking.orgmrays/s, More Is Betterrays1bench 2020-01-09Large SceneEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250Min: 48.46 / Avg: 48.61 / Max: 48.73Min: 68.41 / Avg: 68.51 / Max: 68.59Min: 84.42 / Avg: 84.54 / Max: 84.72Min: 90.09 / Avg: 90.37 / Max: 90.53Min: 134.04 / Avg: 134.19 / Max: 134.3Min: 167.52 / Avg: 167.75 / Max: 167.95Min: 162.8 / Avg: 163 / Max: 163.17Min: 182.51 / Avg: 182.59 / Max: 182.66Min: 217.39 / Avg: 217.7 / Max: 217.92Min: 218.36 / Avg: 218.73 / Max: 219.13Min: 242.31 / Avg: 243.25 / Max: 243.89Min: 242.82 / Avg: 243.57 / Max: 244.37Min: 269 / Avg: 269.74 / Max: 270.12Min: 59.57 / Avg: 59.6 / Max: 59.62Min: 109.61 / Avg: 109.91 / Max: 110.19

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 8EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F52612182430SE +/- 0.02, N = 5SE +/- 0.03, N = 3SE +/- 0.02, N = 5SE +/- 0.03, N = 5SE +/- 0.03, N = 5SE +/- 0.03, N = 5SE +/- 0.02, N = 5SE +/- 0.04, N = 523.0525.0324.0625.2323.9624.4727.4427.591. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 8EPYC 7282EPYC 7502PEPYC 7532EPYC 7542EPYC 7642EPYC 7742EPYC 7F32EPYC 7F52612182430Min: 22.99 / Avg: 23.05 / Max: 23.11Min: 24.98 / Avg: 25.03 / Max: 25.07Min: 24.02 / Avg: 24.06 / Max: 24.15Min: 25.13 / Avg: 25.23 / Max: 25.3Min: 23.88 / Avg: 23.96 / Max: 24.04Min: 24.43 / Avg: 24.47 / Max: 24.58Min: 27.38 / Avg: 27.44 / Max: 27.47Min: 27.44 / Avg: 27.59 / Max: 27.671. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.01715, N = 5SE +/- 0.00853, N = 5SE +/- 0.00189, N = 5SE +/- 0.00471, N = 5SE +/- 0.01609, N = 5SE +/- 0.00256, N = 5SE +/- 0.00475, N = 5SE +/- 0.00208, N = 5SE +/- 0.00795, N = 5SE +/- 0.01041, N = 5SE +/- 0.01221, N = 5SE +/- 0.00630, N = 5SE +/- 0.00213, N = 5SE +/- 0.00869, N = 5SE +/- 0.01324, N = 57.150304.975124.472873.763954.133822.962831.251542.958302.286111.220111.125021.159431.148286.682092.77826MIN: 6.45MIN: 4.75MIN: 4.3MIN: 3.68MIN: 3.91MIN: 2.89MIN: 1.2MIN: 2.88MIN: 2.2MIN: 6.52MIN: 2.681. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 7.11 / Avg: 7.15 / Max: 7.19Min: 4.94 / Avg: 4.98 / Max: 4.99Min: 4.47 / Avg: 4.47 / Max: 4.48Min: 3.75 / Avg: 3.76 / Max: 3.78Min: 4.08 / Avg: 4.13 / Max: 4.18Min: 2.96 / Avg: 2.96 / Max: 2.97Min: 1.24 / Avg: 1.25 / Max: 1.26Min: 2.95 / Avg: 2.96 / Max: 2.97Min: 2.26 / Avg: 2.29 / Max: 2.3Min: 1.19 / Avg: 1.22 / Max: 1.25Min: 1.1 / Avg: 1.13 / Max: 1.17Min: 1.14 / Avg: 1.16 / Max: 1.17Min: 1.14 / Avg: 1.15 / Max: 1.16Min: 6.66 / Avg: 6.68 / Max: 6.71Min: 2.75 / Avg: 2.78 / Max: 2.831. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: NASA Streamlines - Renderer: SciVisEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220406080100SE +/- 0.00, N = 3SE +/- 0.00, N = 4SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 6SE +/- 0.00, N = 6SE +/- 0.49, N = 6SE +/- 0.00, N = 7SE +/- 0.00, N = 7SE +/- 0.00, N = 7SE +/- 0.00, N = 7SE +/- 0.00, N = 7SE +/- 0.00, N = 4SE +/- 0.18, N = 514.2921.7427.7829.4141.6750.0047.6255.0766.6766.6776.9276.9283.3317.2429.59MIN: 13.89 / MAX: 14.49MIN: 20.83 / MAX: 22.22MIN: 26.32MIN: 27.78MIN: 38.46 / MAX: 43.48MIN: 45.45 / MAX: 52.63MIN: 45.45 / MAX: 50MIN: 50 / MAX: 55.56MIN: 58.82 / MAX: 71.43MIN: 62.5 / MAX: 71.43MIN: 62.5 / MAX: 83.33MIN: 71.43 / MAX: 83.33MIN: 55.56 / MAX: 90.91MIN: 16.67 / MAX: 17.86MIN: 27.78 / MAX: 31.25
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: NASA Streamlines - Renderer: SciVisEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F521632486480Min: 14.29 / Avg: 14.29 / Max: 14.29Min: 21.74 / Avg: 21.74 / Max: 21.74Min: 27.78 / Avg: 27.78 / Max: 27.78Min: 29.41 / Avg: 29.41 / Max: 29.41Min: 41.67 / Avg: 41.67 / Max: 41.67Min: 47.62 / Avg: 47.62 / Max: 47.62Min: 52.63 / Avg: 55.07 / Max: 55.56Min: 66.67 / Avg: 66.67 / Max: 66.67Min: 66.67 / Avg: 66.67 / Max: 66.67Min: 76.92 / Avg: 76.92 / Max: 76.92Min: 76.92 / Avg: 76.92 / Max: 76.92Min: 83.33 / Avg: 83.33 / Max: 83.33Min: 17.24 / Avg: 17.24 / Max: 17.24Min: 29.41 / Avg: 29.59 / Max: 30.3

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Chimera 1080pEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522004006008001000SE +/- 0.34, N = 3SE +/- 1.45, N = 3SE +/- 0.66, N = 3SE +/- 0.72, N = 3SE +/- 1.22, N = 3SE +/- 0.95, N = 3SE +/- 2.20, N = 3SE +/- 0.52, N = 3SE +/- 2.26, N = 3SE +/- 0.45, N = 3SE +/- 2.82, N = 3SE +/- 3.47, N = 3SE +/- 2.57, N = 3SE +/- 0.38, N = 3SE +/- 1.86, N = 3473.82624.79712.26698.08839.06937.73892.42939.901095.401116.831158.08983.611113.87496.11671.07MIN: 365.77 / MAX: 674.59MIN: 483.27 / MAX: 784.91MIN: 546.18 / MAX: 898.36MIN: 539.13 / MAX: 873.36MIN: 646.27 / MAX: 1067.76MIN: 687.68 / MAX: 1200.87MIN: 641.23 / MAX: 1144.96MIN: 689.36 / MAX: 1207.51MIN: 671.87 / MAX: 1406.03MIN: 672.49 / MAX: 1434.66MIN: 644.06 / MAX: 1489.95MIN: 616.84 / MAX: 1260.07MIN: 632.86 / MAX: 1429.43MIN: 390.79 / MAX: 704.31MIN: 524.48 / MAX: 829.641. (CC) gcc options: -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Chimera 1080pEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522004006008001000Min: 473.15 / Avg: 473.82 / Max: 474.25Min: 622.12 / Avg: 624.79 / Max: 627.09Min: 711 / Avg: 712.26 / Max: 713.24Min: 697.03 / Avg: 698.08 / Max: 699.47Min: 837.5 / Avg: 839.06 / Max: 841.47Min: 935.96 / Avg: 937.73 / Max: 939.19Min: 888.17 / Avg: 892.42 / Max: 895.56Min: 938.86 / Avg: 939.9 / Max: 940.47Min: 1091.59 / Avg: 1095.4 / Max: 1099.4Min: 1116.14 / Avg: 1116.83 / Max: 1117.68Min: 1153.84 / Avg: 1158.08 / Max: 1163.42Min: 977.12 / Avg: 983.61 / Max: 988.98Min: 1108.89 / Avg: 1113.87 / Max: 1117.47Min: 495.72 / Avg: 496.11 / Max: 496.87Min: 667.49 / Avg: 671.07 / Max: 673.751. (CC) gcc options: -pthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215SE +/- 0.04608, N = 7SE +/- 0.00984, N = 7SE +/- 0.03193, N = 7SE +/- 0.00742, N = 7SE +/- 0.00525, N = 7SE +/- 0.01750, N = 7SE +/- 0.01167, N = 7SE +/- 0.00813, N = 7SE +/- 0.00723, N = 7SE +/- 0.00965, N = 7SE +/- 0.01107, N = 7SE +/- 0.00551, N = 7SE +/- 0.01095, N = 7SE +/- 0.04127, N = 7SE +/- 0.03514, N = 710.958207.605056.762425.454004.251113.970433.612683.976993.840183.543163.504123.558923.531169.151246.91561MIN: 10.24MIN: 7.15MIN: 6.41MIN: 5.12MIN: 4.06MIN: 3.86MIN: 3.49MIN: 3.87MIN: 3.75MIN: 3.43MIN: 3.39MIN: 3.47MIN: 3.44MIN: 8.56MIN: 6.671. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 10.75 / Avg: 10.96 / Max: 11.12Min: 7.56 / Avg: 7.61 / Max: 7.64Min: 6.64 / Avg: 6.76 / Max: 6.85Min: 5.41 / Avg: 5.45 / Max: 5.47Min: 4.23 / Avg: 4.25 / Max: 4.27Min: 3.92 / Avg: 3.97 / Max: 4.03Min: 3.55 / Avg: 3.61 / Max: 3.64Min: 3.94 / Avg: 3.98 / Max: 4Min: 3.81 / Avg: 3.84 / Max: 3.86Min: 3.52 / Avg: 3.54 / Max: 3.59Min: 3.47 / Avg: 3.5 / Max: 3.54Min: 3.54 / Avg: 3.56 / Max: 3.58Min: 3.5 / Avg: 3.53 / Max: 3.58Min: 9.06 / Avg: 9.15 / Max: 9.39Min: 6.79 / Avg: 6.92 / Max: 7.051. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.004801, N = 7SE +/- 0.005350, N = 7SE +/- 0.008434, N = 7SE +/- 0.006442, N = 7SE +/- 0.005689, N = 7SE +/- 0.008223, N = 7SE +/- 0.001701, N = 7SE +/- 0.011195, N = 7SE +/- 0.003838, N = 7SE +/- 0.002747, N = 7SE +/- 0.002146, N = 7SE +/- 0.002719, N = 7SE +/- 0.002278, N = 7SE +/- 0.007190, N = 7SE +/- 0.002179, N = 76.3766805.9534805.8645803.7064003.5434703.4623601.4837403.4617401.7196701.1080500.9937321.0784400.9559774.3013002.001830MIN: 6.27MIN: 5.82MIN: 5.63MIN: 3.6MIN: 3.47MIN: 3.38MIN: 1.28MIN: 3.38MIN: 1.63MIN: 4.22MIN: 1.911. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 6.36 / Avg: 6.38 / Max: 6.4Min: 5.94 / Avg: 5.95 / Max: 5.98Min: 5.83 / Avg: 5.86 / Max: 5.9Min: 3.67 / Avg: 3.71 / Max: 3.73Min: 3.52 / Avg: 3.54 / Max: 3.56Min: 3.43 / Avg: 3.46 / Max: 3.5Min: 1.47 / Avg: 1.48 / Max: 1.49Min: 3.44 / Avg: 3.46 / Max: 3.53Min: 1.7 / Avg: 1.72 / Max: 1.73Min: 1.1 / Avg: 1.11 / Max: 1.12Min: 0.99 / Avg: 0.99 / Max: 1Min: 1.07 / Avg: 1.08 / Max: 1.09Min: 0.95 / Avg: 0.96 / Max: 0.96Min: 4.26 / Avg: 4.3 / Max: 4.32Min: 1.99 / Avg: 2 / Max: 2.011. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Windowed GaussianEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215SE +/- 0.047, N = 4SE +/- 0.043, N = 5SE +/- 0.033, N = 6SE +/- 0.018, N = 6SE +/- 0.041, N = 6SE +/- 0.020, N = 6SE +/- 0.044, N = 6SE +/- 0.016, N = 6SE +/- 0.025, N = 6SE +/- 0.036, N = 6SE +/- 0.038, N = 6SE +/- 0.029, N = 6SE +/- 0.030, N = 6SE +/- 0.034, N = 5SE +/- 0.016, N = 611.9128.7988.1057.6457.1216.9897.2206.9007.0337.0636.9716.9306.8249.7906.586
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Windowed GaussianEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 11.83 / Avg: 11.91 / Max: 12.03Min: 8.7 / Avg: 8.8 / Max: 8.93Min: 8 / Avg: 8.1 / Max: 8.23Min: 7.59 / Avg: 7.65 / Max: 7.7Min: 6.99 / Avg: 7.12 / Max: 7.23Min: 6.93 / Avg: 6.99 / Max: 7.06Min: 7.06 / Avg: 7.22 / Max: 7.34Min: 6.84 / Avg: 6.9 / Max: 6.95Min: 6.95 / Avg: 7.03 / Max: 7.12Min: 6.98 / Avg: 7.06 / Max: 7.22Min: 6.85 / Avg: 6.97 / Max: 7.12Min: 6.86 / Avg: 6.93 / Max: 7.04Min: 6.76 / Avg: 6.82 / Max: 6.94Min: 9.66 / Avg: 9.79 / Max: 9.84Min: 6.53 / Avg: 6.59 / Max: 6.62

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: Visual Quality Optimized - Input: Bosphorus 1080pEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5280160240320400SE +/- 0.51, N = 7SE +/- 1.04, N = 9SE +/- 1.82, N = 15SE +/- 2.50, N = 15SE +/- 5.48, N = 15SE +/- 5.89, N = 15SE +/- 5.85, N = 15SE +/- 6.75, N = 15SE +/- 6.16, N = 15SE +/- 6.69, N = 15SE +/- 5.77, N = 15SE +/- 4.45, N = 15SE +/- 5.67, N = 15SE +/- 0.53, N = 7SE +/- 2.97, N = 15107.11184.83229.82242.24319.05334.18325.84350.63336.24346.63332.56305.10322.83124.35230.311. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: Visual Quality Optimized - Input: Bosphorus 1080pEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5260120180240300Min: 104.11 / Avg: 107.11 / Max: 107.86Min: 176.68 / Avg: 184.83 / Max: 187.03Min: 204.43 / Avg: 229.82 / Max: 232.47Min: 207.33 / Avg: 242.24 / Max: 245.7Min: 242.72 / Avg: 319.05 / Max: 326.26Min: 252.42 / Avg: 334.18 / Max: 348.23Min: 244.6 / Avg: 325.84 / Max: 336.51Min: 256.63 / Avg: 350.63 / Max: 362.54Min: 250.31 / Avg: 336.24 / Max: 345.82Min: 253.27 / Avg: 346.63 / Max: 357.36Min: 253.16 / Avg: 332.56 / Max: 350.88Min: 248.76 / Avg: 305.1 / Max: 317.63Min: 248.14 / Avg: 322.83 / Max: 336.7Min: 121.26 / Avg: 124.35 / Max: 125.39Min: 188.8 / Avg: 230.31 / Max: 234.561. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 4KEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52100200300400500SE +/- 0.14, N = 3SE +/- 0.16, N = 3SE +/- 0.22, N = 3SE +/- 0.23, N = 3SE +/- 0.61, N = 3SE +/- 0.16, N = 3SE +/- 0.29, N = 3SE +/- 0.75, N = 3SE +/- 0.66, N = 3SE +/- 0.17, N = 3SE +/- 0.74, N = 3SE +/- 0.34, N = 3SE +/- 0.33, N = 3SE +/- 0.27, N = 3SE +/- 0.46, N = 3150.43203.40239.84245.60328.54361.72349.99367.19406.93416.96457.27437.31454.41168.17244.17MIN: 141.09 / MAX: 170.16MIN: 187.66 / MAX: 232.21MIN: 211 / MAX: 275.93MIN: 210.68 / MAX: 281.84MIN: 249.05 / MAX: 376.65MIN: 287.95 / MAX: 416.23MIN: 266.02 / MAX: 399.72MIN: 288.65 / MAX: 424.55MIN: 257.69 / MAX: 456.39MIN: 251.44 / MAX: 465MIN: 225.33 / MAX: 494.73MIN: 217.67 / MAX: 473.18MIN: 219.4 / MAX: 490.36MIN: 157.91 / MAX: 191.26MIN: 200.55 / MAX: 270.421. (CC) gcc options: -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 4KEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5280160240320400Min: 150.28 / Avg: 150.43 / Max: 150.7Min: 203.1 / Avg: 203.4 / Max: 203.66Min: 239.57 / Avg: 239.84 / Max: 240.27Min: 245.22 / Avg: 245.6 / Max: 246Min: 327.42 / Avg: 328.54 / Max: 329.53Min: 361.4 / Avg: 361.72 / Max: 361.91Min: 349.42 / Avg: 349.99 / Max: 350.39Min: 365.83 / Avg: 367.19 / Max: 368.43Min: 405.7 / Avg: 406.93 / Max: 407.97Min: 416.77 / Avg: 416.96 / Max: 417.29Min: 455.84 / Avg: 457.27 / Max: 458.3Min: 436.66 / Avg: 437.31 / Max: 437.79Min: 453.74 / Avg: 454.41 / Max: 454.75Min: 167.67 / Avg: 168.17 / Max: 168.6Min: 243.51 / Avg: 244.17 / Max: 245.051. (CC) gcc options: -pthread

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx264 2019-12-17H.264 Video EncodingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250SE +/- 0.37, N = 6SE +/- 0.60, N = 7SE +/- 0.71, N = 8SE +/- 0.59, N = 8SE +/- 0.75, N = 9SE +/- 1.00, N = 9SE +/- 0.75, N = 9SE +/- 0.83, N = 9SE +/- 1.18, N = 15SE +/- 1.24, N = 15SE +/- 1.83, N = 15SE +/- 1.78, N = 15SE +/- 2.10, N = 15SE +/- 0.33, N = 7SE +/- 0.76, N = 978.21117.14142.37150.45174.63182.42181.15188.45198.00203.74210.69198.58211.3995.50173.501. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize
OpenBenchmarking.orgFrames Per Second, More Is Betterx264 2019-12-17H.264 Video EncodingEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F524080120160200Min: 76.63 / Avg: 78.21 / Max: 79.16Min: 113.91 / Avg: 117.14 / Max: 119.03Min: 137.77 / Avg: 142.37 / Max: 143.62Min: 146.52 / Avg: 150.45 / Max: 151.9Min: 169.05 / Avg: 174.63 / Max: 176.72Min: 175.43 / Avg: 182.42 / Max: 186.12Min: 175.43 / Avg: 181.15 / Max: 183.15Min: 183.08 / Avg: 188.45 / Max: 190.79Min: 182.68 / Avg: 198 / Max: 201.65Min: 187.17 / Avg: 203.74 / Max: 207.43Min: 187.36 / Avg: 210.69 / Max: 218.31Min: 179.11 / Avg: 198.58 / Max: 206.01Min: 186.29 / Avg: 211.39 / Max: 220.81Min: 94.18 / Avg: 95.5 / Max: 96.44Min: 167.66 / Avg: 173.5 / Max: 175.11. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.CEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529001800270036004500SE +/- 0.60, N = 4SE +/- 0.17, N = 5SE +/- 0.78, N = 6SE +/- 0.45, N = 6SE +/- 1.48, N = 7SE +/- 3.24, N = 8SE +/- 5.08, N = 8SE +/- 4.60, N = 8SE +/- 9.01, N = 9SE +/- 6.29, N = 9SE +/- 10.02, N = 10SE +/- 13.78, N = 10SE +/- 12.44, N = 10SE +/- 0.27, N = 4SE +/- 0.69, N = 6578.12867.811155.881191.901806.542368.692318.442375.483252.133310.343967.283908.234195.62704.691407.621. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.CEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F527001400210028003500Min: 576.33 / Avg: 578.12 / Max: 578.84Min: 867.17 / Avg: 867.81 / Max: 868.12Min: 1152.71 / Avg: 1155.88 / Max: 1157.48Min: 1190.44 / Avg: 1191.9 / Max: 1193.16Min: 1799.65 / Avg: 1806.54 / Max: 1812.17Min: 2349.79 / Avg: 2368.69 / Max: 2379.63Min: 2296.55 / Avg: 2318.44 / Max: 2338.82Min: 2361.46 / Avg: 2375.48 / Max: 2394.12Min: 3210.93 / Avg: 3252.13 / Max: 3290.8Min: 3286.05 / Avg: 3310.34 / Max: 3336.05Min: 3922.41 / Avg: 3967.28 / Max: 4021.83Min: 3840.34 / Avg: 3908.23 / Max: 3966.84Min: 4128.28 / Avg: 4195.62 / Max: 4245.05Min: 703.93 / Avg: 704.69 / Max: 705.19Min: 1404.92 / Avg: 1407.62 / Max: 1409.441. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: MG.CEPYC 7232PEPYC 7282EPYC 7302PEPYC 7502PEPYC 7532EPYC 7542EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5211K22K33K44K55KSE +/- 37.32, N = 7SE +/- 23.57, N = 6SE +/- 17.34, N = 8SE +/- 76.06, N = 8SE +/- 218.93, N = 8SE +/- 86.13, N = 8SE +/- 122.43, N = 8SE +/- 224.56, N = 8SE +/- 292.47, N = 8SE +/- 20.16, N = 8SE +/- 80.46, N = 530311.9429776.8147349.2944082.9552022.7644205.8052245.2351795.6151593.6245336.8921714.651. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: MG.CEPYC 7232PEPYC 7282EPYC 7302PEPYC 7502PEPYC 7532EPYC 7542EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F529K18K27K36K45KMin: 30136.45 / Avg: 30311.94 / Max: 30404.24Min: 29684.13 / Avg: 29776.81 / Max: 29837.83Min: 47278.3 / Avg: 47349.29 / Max: 47425.88Min: 43772.91 / Avg: 44082.95 / Max: 44328.66Min: 50688.93 / Avg: 52022.76 / Max: 52586.47Min: 43844.26 / Avg: 44205.8 / Max: 44502.94Min: 51776.39 / Avg: 52245.23 / Max: 52860.72Min: 50825.97 / Avg: 51795.61 / Max: 52696.05Min: 50186.95 / Avg: 51593.62 / Max: 52586.54Min: 45233.69 / Avg: 45336.89 / Max: 45424.87Min: 21536.77 / Avg: 21714.65 / Max: 22018.311. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

Etcpak

Etcpack is the self-proclaimed "fastest ETC compressor on the planet" with focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: DXT1EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230060090012001500SE +/- 1.98, N = 7SE +/- 1.94, N = 7SE +/- 2.23, N = 7SE +/- 1.74, N = 7SE +/- 1.99, N = 7SE +/- 1.96, N = 7SE +/- 2.27, N = 7SE +/- 1.94, N = 7SE +/- 1.97, N = 7SE +/- 1.84, N = 7SE +/- 2.20, N = 7SE +/- 1.34, N = 7SE +/- 2.22, N = 7SE +/- 0.24, N = 8SE +/- 2.05, N = 8976.59978.85975.221007.951018.971022.481006.921035.231008.241008.511007.221019.211035.371179.301179.021. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: DXT1EPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522004006008001000Min: 970.13 / Avg: 976.59 / Max: 981.11Min: 972.88 / Avg: 978.85 / Max: 983.32Min: 970.09 / Avg: 975.22 / Max: 983.12Min: 1001.21 / Avg: 1007.95 / Max: 1011.76Min: 1012.27 / Avg: 1018.97 / Max: 1026.96Min: 1017.68 / Avg: 1022.48 / Max: 1028.23Min: 999.92 / Avg: 1006.92 / Max: 1013.6Min: 1031.59 / Avg: 1035.22 / Max: 1042.84Min: 1002.46 / Avg: 1008.24 / Max: 1013.3Min: 1001.29 / Avg: 1008.51 / Max: 1011.8Min: 1002.3 / Avg: 1007.22 / Max: 1013.81Min: 1017.42 / Avg: 1019.21 / Max: 1027.18Min: 1028.15 / Avg: 1035.37 / Max: 1041.31Min: 1178.67 / Avg: 1179.3 / Max: 1180.64Min: 1173.37 / Avg: 1179.02 / Max: 1186.771. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterFFTE 7.0N=256, 3D Complex FFT RoutineEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230K60K90K120K150KSE +/- 139.55, N = 9SE +/- 22.37, N = 11SE +/- 71.59, N = 9SE +/- 80.44, N = 9SE +/- 134.09, N = 10SE +/- 160.95, N = 8SE +/- 154.09, N = 8SE +/- 143.74, N = 9SE +/- 375.36, N = 8SE +/- 641.51, N = 9SE +/- 1366.04, N = 15SE +/- 1033.87, N = 15SE +/- 1018.84, N = 15SE +/- 37.94, N = 10SE +/- 308.97, N = 944242.9861838.5276796.0484316.23111563.84136276.50135786.94144583.87135838.12149967.74155375.98145868.29148358.5754800.9971767.961. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp
OpenBenchmarking.orgMFLOPS, More Is BetterFFTE 7.0N=256, 3D Complex FFT RoutineEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230K60K90K120K150KMin: 43253.52 / Avg: 44242.98 / Max: 44735.98Min: 61730.3 / Avg: 61838.52 / Max: 61950.09Min: 76508.11 / Avg: 76796.04 / Max: 77062.73Min: 83921.86 / Avg: 84316.23 / Max: 84643.19Min: 110544.87 / Avg: 111563.84 / Max: 112037.3Min: 135695.57 / Avg: 136276.5 / Max: 136949.63Min: 135207.86 / Avg: 135786.94 / Max: 136435.92Min: 143957.47 / Avg: 144583.87 / Max: 145193.47Min: 133822.4 / Avg: 135838.12 / Max: 137193.41Min: 148789.93 / Avg: 149967.74 / Max: 154980.74Min: 151616.85 / Avg: 155375.98 / Max: 173605.93Min: 143378.84 / Avg: 145868.29 / Max: 159924Min: 145615.22 / Avg: 148358.57 / Max: 161715.14Min: 54543.54 / Avg: 54800.99 / Max: 54955.54Min: 70448.73 / Avg: 71767.96 / Max: 73371.091. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.00735, N = 9SE +/- 0.00724, N = 9SE +/- 0.00482, N = 9SE +/- 0.00474, N = 9SE +/- 0.00636, N = 9SE +/- 0.00414, N = 9SE +/- 0.00881, N = 9SE +/- 0.00582, N = 9SE +/- 0.00950, N = 9SE +/- 0.01930, N = 15SE +/- 0.02912, N = 15SE +/- 0.03302, N = 15SE +/- 0.02450, N = 15SE +/- 0.00118, N = 9SE +/- 0.00511, N = 98.271846.291355.083284.726083.346333.688333.601303.441182.553352.403802.523632.820152.607646.463983.92737MIN: 8MIN: 6.07MIN: 4.73MIN: 4.52MIN: 3.15MIN: 3.26MIN: 3.22MIN: 3.03MIN: 2.41MIN: 2.23MIN: 2.25MIN: 2.49MIN: 2.34MIN: 6.38MIN: 3.811. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 8.23 / Avg: 8.27 / Max: 8.29Min: 6.26 / Avg: 6.29 / Max: 6.32Min: 5.06 / Avg: 5.08 / Max: 5.11Min: 4.71 / Avg: 4.73 / Max: 4.74Min: 3.33 / Avg: 3.35 / Max: 3.39Min: 3.67 / Avg: 3.69 / Max: 3.7Min: 3.57 / Avg: 3.6 / Max: 3.64Min: 3.42 / Avg: 3.44 / Max: 3.47Min: 2.52 / Avg: 2.55 / Max: 2.62Min: 2.29 / Avg: 2.4 / Max: 2.52Min: 2.43 / Avg: 2.52 / Max: 2.85Min: 2.71 / Avg: 2.82 / Max: 3.12Min: 2.5 / Avg: 2.61 / Max: 2.79Min: 6.46 / Avg: 6.46 / Max: 6.47Min: 3.89 / Avg: 3.93 / Max: 3.941. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52246810SE +/- 0.00235, N = 9SE +/- 0.00336, N = 9SE +/- 0.00324, N = 9SE +/- 0.00304, N = 9SE +/- 0.00168, N = 9SE +/- 0.00338, N = 9SE +/- 0.00104, N = 9SE +/- 0.00215, N = 9SE +/- 0.00157, N = 9SE +/- 0.00625, N = 9SE +/- 0.00834, N = 9SE +/- 0.00637, N = 9SE +/- 0.00429, N = 9SE +/- 0.00176, N = 9SE +/- 0.00628, N = 96.794454.527583.452903.346632.383061.948451.988091.896241.554941.539151.431241.494461.396605.582603.02897MIN: 6.75MIN: 4.47MIN: 3.34MIN: 3.25MIN: 2.33MIN: 1.82MIN: 1.83MIN: 1.77MIN: 1.42MIN: 1.37MIN: 1.26MIN: 1.36MIN: 1.27MIN: 5.52MIN: 2.921. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F523691215Min: 6.79 / Avg: 6.79 / Max: 6.81Min: 4.51 / Avg: 4.53 / Max: 4.54Min: 3.44 / Avg: 3.45 / Max: 3.47Min: 3.33 / Avg: 3.35 / Max: 3.36Min: 2.38 / Avg: 2.38 / Max: 2.39Min: 1.94 / Avg: 1.95 / Max: 1.97Min: 1.98 / Avg: 1.99 / Max: 1.99Min: 1.89 / Avg: 1.9 / Max: 1.91Min: 1.55 / Avg: 1.55 / Max: 1.56Min: 1.52 / Avg: 1.54 / Max: 1.57Min: 1.41 / Avg: 1.43 / Max: 1.48Min: 1.48 / Avg: 1.49 / Max: 1.53Min: 1.38 / Avg: 1.4 / Max: 1.42Min: 5.57 / Avg: 5.58 / Max: 5.59Min: 2.99 / Avg: 3.03 / Max: 3.051. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin ProteinEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52510152025SE +/- 0.002, N = 8SE +/- 0.008, N = 9SE +/- 0.026, N = 10SE +/- 0.024, N = 10SE +/- 0.089, N = 15SE +/- 0.286, N = 15SE +/- 0.178, N = 15SE +/- 0.252, N = 15SE +/- 0.286, N = 15SE +/- 0.291, N = 15SE +/- 0.301, N = 15SE +/- 0.257, N = 15SE +/- 0.158, N = 15SE +/- 0.003, N = 9SE +/- 0.028, N = 115.2377.5829.68510.37414.03815.68516.26516.56418.50619.32821.76319.99721.3206.47011.5211. (CXX) g++ options: -O3 -pthread -lm
OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin ProteinEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52510152025Min: 5.23 / Avg: 5.24 / Max: 5.25Min: 7.55 / Avg: 7.58 / Max: 7.61Min: 9.59 / Avg: 9.69 / Max: 9.82Min: 10.19 / Avg: 10.37 / Max: 10.46Min: 13.39 / Avg: 14.04 / Max: 14.46Min: 13.34 / Avg: 15.69 / Max: 17.35Min: 14.97 / Avg: 16.27 / Max: 17.08Min: 14.32 / Avg: 16.56 / Max: 18.32Min: 17.37 / Avg: 18.51 / Max: 20.7Min: 18.26 / Avg: 19.33 / Max: 21.71Min: 20.67 / Avg: 21.76 / Max: 24.47Min: 19.07 / Avg: 20 / Max: 22.21Min: 20.58 / Avg: 21.32 / Max: 22.8Min: 6.45 / Avg: 6.47 / Max: 6.49Min: 11.4 / Avg: 11.52 / Max: 11.651. (CXX) g++ options: -O3 -pthread -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080pEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52100200300400500SE +/- 0.15, N = 8SE +/- 0.45, N = 9SE +/- 0.35, N = 10SE +/- 0.29, N = 10SE +/- 0.86, N = 11SE +/- 0.76, N = 11SE +/- 1.03, N = 11SE +/- 0.74, N = 11SE +/- 0.97, N = 10SE +/- 0.61, N = 10SE +/- 0.96, N = 10SE +/- 3.60, N = 15SE +/- 2.40, N = 10SE +/- 0.26, N = 8SE +/- 0.36, N = 10144.32239.47295.80316.49413.09437.88426.52459.14445.92458.24448.08409.36437.25167.08263.131. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080pEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5280160240320400Min: 143.85 / Avg: 144.32 / Max: 145.1Min: 237.06 / Avg: 239.47 / Max: 241.16Min: 293.54 / Avg: 295.8 / Max: 297.47Min: 315.62 / Avg: 316.49 / Max: 317.97Min: 407.89 / Avg: 413.09 / Max: 416.67Min: 434.47 / Avg: 437.88 / Max: 443.13Min: 419.29 / Avg: 426.52 / Max: 430.73Min: 454.55 / Avg: 459.14 / Max: 462.25Min: 441.83 / Avg: 445.92 / Max: 451.13Min: 456.27 / Avg: 458.24 / Max: 462.61Min: 442.15 / Avg: 448.08 / Max: 452.83Min: 379.51 / Avg: 409.36 / Max: 426.74Min: 424.03 / Avg: 437.25 / Max: 446.43Min: 165.84 / Avg: 167.08 / Max: 168.07Min: 260.87 / Avg: 263.13 / Max: 264.671. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterTSCP 1.81AI Chess PerformanceEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52300K600K900K1200K1500KSE +/- 504.67, N = 11SE +/- 863.61, N = 11SE +/- 1355.09, N = 11SE +/- 609.53, N = 11SE +/- 1181.68, N = 11SE +/- 1366.07, N = 11SE +/- 986.25, N = 11SE +/- 372.53, N = 12SE +/- 587.46, N = 11SE +/- 661.25, N = 11SE +/- 1511.23, N = 11SE +/- 658.02, N = 11SE +/- 584.52, N = 12SE +/- 1531.45, N = 12SE +/- 791.31, N = 1296937196906696814999713310133201014342999442103029599746199680599780910158581028054118068211781401. (CC) gcc options: -O3 -march=native
OpenBenchmarking.orgNodes Per Second, More Is BetterTSCP 1.81AI Chess PerformanceEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F52200K400K600K800K1000KMin: 966277 / Avg: 969371.36 / Max: 971389Min: 961218 / Avg: 969066.36 / Max: 971389Min: 956211 / Avg: 968148.55 / Max: 971389Min: 994184 / Avg: 997132.73 / Max: 999597Min: 1003238 / Avg: 1013319.55 / Max: 1016195Min: 1001414 / Avg: 1014342.27 / Max: 1018073Min: 990607 / Avg: 999441.64 / Max: 1003238Min: 1029491 / Avg: 1030294.92 / Max: 1033354Min: 994184 / Avg: 997460.73 / Max: 999597Min: 994184 / Avg: 996805.09 / Max: 1001414Min: 987057 / Avg: 997809 / Max: 1001414Min: 1010601 / Avg: 1015858.45 / Max: 1018073Min: 1025657 / Avg: 1028053.5 / Max: 1031419Min: 1166902 / Avg: 1180681.92 / Max: 1187021Min: 1174366 / Avg: 1178140.42 / Max: 11819271. (CC) gcc options: -O3 -march=native

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 1080pEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5230060090012001500SE +/- 0.82, N = 3SE +/- 0.90, N = 3SE +/- 0.46, N = 3SE +/- 0.73, N = 3SE +/- 2.68, N = 3SE +/- 1.87, N = 3SE +/- 1.15, N = 3SE +/- 0.47, N = 3SE +/- 3.10, N = 3SE +/- 1.10, N = 3SE +/- 3.75, N = 3SE +/- 1.60, N = 3SE +/- 3.10, N = 3SE +/- 0.28, N = 3SE +/- 1.44, N = 3419.19555.11644.23634.51847.40932.75880.57937.441044.251070.091193.661050.991166.42453.36651.36MIN: 370.7 / MAX: 456.84MIN: 463.4 / MAX: 605.59MIN: 503.48 / MAX: 704.1MIN: 482.67 / MAX: 693.38MIN: 566.75 / MAX: 939.62MIN: 659.65 / MAX: 1047.61MIN: 567.79 / MAX: 985.57MIN: 659.51 / MAX: 1052.41MIN: 554.63 / MAX: 1157.48MIN: 574.58 / MAX: 1186.8MIN: 500.62 / MAX: 1329.43MIN: 478.51 / MAX: 1166.67MIN: 498.33 / MAX: 1299.42MIN: 400.76 / MAX: 488.59MIN: 480.87 / MAX: 706.631. (CC) gcc options: -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 1080pEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F522004006008001000Min: 417.79 / Avg: 419.19 / Max: 420.62Min: 553.68 / Avg: 555.11 / Max: 556.76Min: 643.31 / Avg: 644.23 / Max: 644.7Min: 633.06 / Avg: 634.51 / Max: 635.27Min: 842.07 / Avg: 847.4 / Max: 850.64Min: 929.58 / Avg: 932.75 / Max: 936.06Min: 879.23 / Avg: 880.57 / Max: 882.86Min: 936.52 / Avg: 937.44 / Max: 938.02Min: 1038.12 / Avg: 1044.25 / Max: 1048.12Min: 1068.71 / Avg: 1070.09 / Max: 1072.27Min: 1186.16 / Avg: 1193.66 / Max: 1197.47Min: 1048.15 / Avg: 1050.99 / Max: 1053.7Min: 1162.99 / Avg: 1166.42 / Max: 1172.62Min: 452.94 / Avg: 453.36 / Max: 453.9Min: 648.52 / Avg: 651.36 / Max: 653.251. (CC) gcc options: -pthread

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgClocks, Fewer Is Betterctx_clockContext Switch TimeEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5250100150200250217203196180168150144174132138120120135185175

Stream

OpenBenchmarking.orgMB/s Per Watt, More Is BetterStream 2013-01-17Type: TriadEPYC 7232PEPYC 7302PEPYC 7402P60012001800240030002060.052743.302788.51

OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: TriadEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220K40K60K80K100KSE +/- 65.26, N = 5SE +/- 2.27, N = 5SE +/- 8.84, N = 5SE +/- 5.88, N = 5SE +/- 7.48, N = 5SE +/- 19.43, N = 5SE +/- 16.47, N = 5SE +/- 36.22, N = 5SE +/- 11.91, N = 5SE +/- 33.62, N = 5SE +/- 21.10, N = 5SE +/- 26.33, N = 5SE +/- 20.07, N = 5SE +/- 59.32, N = 5SE +/- 194.23, N = 556788.656066.655596.388105.787308.487239.899248.287057.096497.197940.098343.298034.898248.089567.272315.61. (CC) gcc options: -O3 -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: TriadEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220K40K60K80K100KMin: 56528 / Avg: 56788.62 / Max: 56863.9Min: 56061.4 / Avg: 56066.58 / Max: 56074.5Min: 55564.4 / Avg: 55596.28 / Max: 55617.5Min: 88086.3 / Avg: 88105.74 / Max: 88121.8Min: 87291.9 / Avg: 87308.42 / Max: 87333.6Min: 87168 / Avg: 87239.76 / Max: 87282.1Min: 99194.2 / Avg: 99248.22 / Max: 99293.1Min: 86972.9 / Avg: 87057.04 / Max: 87161.9Min: 96474.4 / Avg: 96497.14 / Max: 96541Min: 97851.1 / Avg: 97940.04 / Max: 98039.8Min: 98280 / Avg: 98343.22 / Max: 98393.4Min: 97939.6 / Avg: 98034.8 / Max: 98096.1Min: 98207.1 / Avg: 98247.98 / Max: 98320.3Min: 89468.9 / Avg: 89567.18 / Max: 89800.2Min: 71942.4 / Avg: 72315.56 / Max: 73068.31. (CC) gcc options: -O3 -march=native -fopenmp

OpenBenchmarking.orgMB/s Per Watt, More Is BetterStream 2013-01-17Type: AddEPYC 7702EPYC 7F5250010001500200025002144.891709.94

OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: AddEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220K40K60K80K100KSE +/- 61.95, N = 5SE +/- 9.00, N = 5SE +/- 6.11, N = 5SE +/- 17.97, N = 5SE +/- 9.19, N = 5SE +/- 13.36, N = 5SE +/- 7.96, N = 5SE +/- 16.89, N = 5SE +/- 16.17, N = 5SE +/- 23.20, N = 5SE +/- 29.12, N = 5SE +/- 25.81, N = 5SE +/- 23.05, N = 5SE +/- 18.23, N = 5SE +/- 39.00, N = 556795.155911.655437.287568.886805.686677.297912.586737.795807.097944.297180.997206.397124.389255.472752.21. (CC) gcc options: -O3 -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: AddEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220K40K60K80K100KMin: 56547.6 / Avg: 56795.1 / Max: 56866.8Min: 55883.1 / Avg: 55911.58 / Max: 55937.4Min: 55417 / Avg: 55437.24 / Max: 55454.2Min: 87527.2 / Avg: 87568.78 / Max: 87617.1Min: 86780.2 / Avg: 86805.64 / Max: 86837.1Min: 86645.7 / Avg: 86677.24 / Max: 86711.4Min: 97891 / Avg: 97912.52 / Max: 97938.6Min: 86695.7 / Avg: 86737.72 / Max: 86780.2Min: 95762.2 / Avg: 95807.04 / Max: 95857.9Min: 97867.2 / Avg: 97944.18 / Max: 97991.1Min: 97111 / Avg: 97180.94 / Max: 97268.6Min: 97146.6 / Avg: 97206.28 / Max: 97299.6Min: 97079.1 / Avg: 97124.28 / Max: 97213.2Min: 89196.2 / Avg: 89255.38 / Max: 89299.1Min: 72639.1 / Avg: 72752.2 / Max: 72844.71. (CC) gcc options: -O3 -march=native -fopenmp

OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: ScaleEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5220K40K60K80K100KSE +/- 61.32, N = 5SE +/- 4.61, N = 5SE +/- 3.08, N = 5SE +/- 10.74, N = 5SE +/- 7.47, N = 5SE +/- 12.22, N = 5SE +/- 18.59, N = 5SE +/- 14.00, N = 5SE +/- 32.31, N = 5SE +/- 55.03, N = 5SE +/- 21.55, N = 5SE +/- 27.01, N = 5SE +/- 19.79, N = 5SE +/- 30.51, N = 5SE +/- 31.64, N = 552630.151978.251044.279703.579147.378638.689487.678399.986805.687579.987848.187714.687905.681753.067011.91. (CC) gcc options: -O3 -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: ScaleEPYC 7232PEPYC 7272EPYC 7282EPYC 7302PEPYC 7402PEPYC 7502PEPYC 7532EPYC 7542EPYC 7552EPYC 7642EPYC 7662EPYC 7702EPYC 7742EPYC 7F32EPYC 7F5216K32K48K64K80KMin: 52385 / Avg: 52630.06 / Max: 52697.2Min: 51965.2 / Avg: 51978.22 / Max: 51992.1Min: 51038.4 / Avg: 51044.22 / Max: 51055.9Min: 79669.6 / Avg: 79703.46 / Max: 79733Min: 79125.7 / Avg: 79147.34 / Max: 79168.6Min: 78592.9 / Avg: 78638.56 / Max: 78666.6Min: 89445.1 / Avg: 89487.56 / Max: 89550.1Min: 78366.2 / Avg: 78399.86 / Max: 78439.4Min: 86725.2 / Avg: 86805.6 / Max: 86880.2Min: 87440.5 / Avg: 87579.9 / Max: 87776.8Min: 87786 / Avg: 87848.06 / Max: 87907.9Min: 87618.6 / Avg: 87714.62 / Max: 87786Min: 87840 / Avg: 87905.6 / Max: 87960.9Min: 81720.5 / Avg: 81753 / Max: 81875Min: 66926.2 / Avg: 67011.88 / Max: 67120.31. (CC) gcc options: -O3 -march=native -fopenmp

272 Results Shown

Quantum ESPRESSO
NWChem
LeelaChessZero:
  Eigen
  BLAS
LAMMPS Molecular Dynamics Simulator
Caffe
OpenFOAM
Timed LLVM Compilation
BlogBench
OpenVKL
Hierarchical INTegration
AI Benchmark Alpha:
  Device AI Score
  Device Training Score
  Device Inference Score
Crypto++
Blender
JPEG XL
ONNX Runtime
Ngspice
WebP2 Image Encode
Numpy Benchmark
Tinymembench
BRL-CAD
Blender
Crypto++
ONNX Runtime
Incompact3D
ONNX Runtime
Blender
Monte Carlo Simulations of Ionised Nebulae
High Performance Conjugate Gradient
Ngspice
Mobile Neural Network:
  mobilenet-v1-1.0
  MobileNetV2_224
  resnet-v2-50
  SqueezeNetV1.0
Darmstadt Automotive Parallel Heterogeneous Suite
Caffe
asmFish
WebP2 Image Encode
Appleseed
YafaRay
ONNX Runtime
Rodinia
OSPray
GPAW
ONNX Runtime
Rodinia
PlaidML
C-Blosc
Stress-NG
TensorFlow Lite
Chaos Group V-RAY
ASTC Encoder
Blender
Cpuminer-Opt:
  Garlicoin
  Deepcoin
LZ4 Compression:
  9 - Decompression Speed
  9 - Compression Speed
Apache Cassandra
PlaidML
Numenta Anomaly Benchmark
dav1d
FinanceBench
JPEG XL
Timed MrBayes Analysis
Apache CouchDB
Chaos Group V-RAY
Montage Astronomical Image Mosaic Engine
NAS Parallel Benchmarks
oneDNN
InfluxDB
KeyDB
Facebook RocksDB
Timed Godot Game Engine Compilation
LZ4 Compression:
  3 - Decompression Speed
  3 - Compression Speed
OSPray
oneDNN
InfluxDB
TensorFlow Lite
Build2
GROMACS
Blender
TensorFlow Lite
Kripke
Appleseed
Cpuminer-Opt
oneDNN
JPEG XL Decoding
GROMACS
Stockfish
OpenVINO:
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
oneDNN:
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
Perl Benchmarks
Himeno Benchmark
OpenVINO:
  Person Detection 0106 FP16 - CPU:
    ms
    FPS
Cpuminer-Opt
OpenVINO:
  Person Detection 0106 FP32 - CPU:
    ms
    FPS
PostgreSQL pgbench:
  100 - 100 - Read Write - Average Latency
  100 - 100 - Read Write
Coremark
OpenVINO:
  Face Detection 0106 FP16 - CPU:
    ms
    FPS
  Face Detection 0106 FP32 - CPU:
    ms
    FPS
Crypto++
Facebook RocksDB
OpenVINO:
  Age Gender Recognition Retail 0013 FP32 - CPU:
    ms
    FPS
IndigoBench
Timed Linux Kernel Compilation
IndigoBench
LuxCoreRender:
  DLSC
  Rainbow Colors and Prism
TensorFlow Lite:
  SqueezeNet
  Mobilenet Float
  Mobilenet Quant
Facebook RocksDB
John The Ripper
simdjson
Kvazaar
FinanceBench
RawTherapee
MBW
PostgreSQL pgbench:
  100 - 250 - Read Write - Average Latency
  100 - 250 - Read Write
NAMD
simdjson:
  PartialTweets
  DistinctUserID
Hugin
simdjson
OpenFOAM
JPEG XL Decoding
Perl Benchmarks
Timed PHP Compilation
7-Zip Compression
OSPray
eSpeak-NG Speech Engine
Redis:
  SET
  GET
Tachyon
Zstd Compression
ACES DGEMM
PyPerformance
ASKAP:
  tConvolve MPI - Gridding
  tConvolve MPI - Degridding
OSPray
QuantLib
Parboil
PHPBench
ebizzy
LibRaw
m-queens
Timed FFmpeg Compilation
Etcpak
PostgreSQL pgbench:
  100 - 250 - Read Only - Average Latency
  100 - 250 - Read Only
  100 - 100 - Read Only - Average Latency
  100 - 100 - Read Only
Numenta Anomaly Benchmark
PyPerformance
Stress-NG
Basis Universal
miniFE
C-Ray
NAS Parallel Benchmarks
Stress-NG
OSPray
x265
oneDNN
CloverLeaf
Zstd Compression
Stress-NG
Pennant
John The Ripper
Aircrack-ng
Stress-NG:
  Crypto
  Matrix Math
  Vector Math
Google SynthMark
NAS Parallel Benchmarks
Darmstadt Automotive Parallel Heterogeneous Suite
LULESH
Crafty
PyPerformance
POV-Ray
XZ Compression
PyBench
Tungsten Renderer
PyPerformance:
  nbody
  float
Algebraic Multi-Grid Benchmark
PyPerformance
Kvazaar
Swet
toyBrot Fractal Generator:
  C++ Tasks
  OpenMP
Botan
JPEG XL
toyBrot Fractal Generator:
  C++ Threads
  TBB
Darmstadt Automotive Parallel Heterogeneous Suite
Timed MPlayer Compilation
Pennant
Basis Universal
ASKAP:
  tConvolve OpenMP - Degridding
  tConvolve OpenMP - Gridding
Stream-Dynamic:
  - Triad
  - Add
  - Scale
  - Copy
oneDNN
Timed ImageMagick Compilation
ASTC Encoder
OCRMyPDF
Rodinia
oneDNN
Kvazaar
SVT-AV1
Numenta Anomaly Benchmark
Botan:
  Blowfish
  Twofish
OSPray
Botan:
  CAST-256
  KASUMI
oneDNN
Rodinia
oneDNN
Intel Open Image Denoise
Sysbench
Tungsten Renderer
JPEG XL:
  PNG - 5
  JPEG - 7
Stream
NAS Parallel Benchmarks:
  CG.C
  FT.C
WebP2 Image Encode
Sysbench
SVT-AV1
rays1bench
JPEG XL
oneDNN
OSPray
dav1d
oneDNN:
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
Numenta Anomaly Benchmark
SVT-VP9
dav1d
x264
NAS Parallel Benchmarks:
  EP.C
  MG.C
Etcpak
FFTE
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
LAMMPS Molecular Dynamics Simulator
SVT-VP9
TSCP
dav1d
ctx_clock
Stream
Stream
Stream
Stream:
  Add
  Scale