EPYC 2021 Benchmarks

Tests for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2102219-HA-EB716339316
Jump To Table - Results

View

Do Not Show Noisy Results
Do Not Show Results With Incomplete Data
Do Not Show Results With Little Change/Spread
List Notable Results

Limit displaying results to tests within:

AV1 2 Tests
Bioinformatics 2 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests 5 Tests
C++ Boost Tests 5 Tests
Chess Test Suite 6 Tests
Timed Code Compilation 8 Tests
C/C++ Compiler Tests 30 Tests
Compression Tests 5 Tests
CPU Massive 51 Tests
Creator Workloads 38 Tests
Cryptography 5 Tests
Database Test Suite 7 Tests
Encoding 6 Tests
Finance 2 Tests
Fortran Tests 9 Tests
Game Development 7 Tests
HPC - High Performance Computing 36 Tests
Imaging 7 Tests
Common Kernel Benchmarks 6 Tests
LAPACK (Linear Algebra Pack) Tests 2 Tests
Linear Algebra 2 Tests
Machine Learning 11 Tests
Memory Test Suite 3 Tests
Molecular Dynamics 10 Tests
MPI Benchmarks 11 Tests
Multi-Core 54 Tests
NVIDIA GPU Compute 10 Tests
Intel oneAPI 6 Tests
OpenCL 2 Tests
OpenCV Tests 2 Tests
OpenMPI Tests 20 Tests
Programmer / Developer System Benchmarks 15 Tests
Python 4 Tests
Quantum Mechanics 2 Tests
Raytracing 6 Tests
Renderers 12 Tests
Scientific Computing 19 Tests
Server 12 Tests
Server CPU Tests 33 Tests
Single-Threaded 9 Tests
Speech 2 Tests
Telephony 2 Tests
Texture Compression 3 Tests
Video Encoding 6 Tests
Common Workstation Benchmarks 8 Tests

Statistics

Show Overall Harmonic Mean(s)
Show Overall Geometric Mean
Show Geometric Means Per-Suite/Category
Show Wins / Losses Counts (Pie Chart)
Normalize Results
Remove Outliers Before Calculating Averages

Graph Settings

Force Line Graphs Where Applicable
Convert To Scalar Where Applicable
Disable Color Branding
Prefer Vertical Bar Graphs

Additional Graphs

Show Perf Per Core/Thread Calculation Graphs Where Applicable
Show Perf Per Clock Calculation Graphs Where Applicable
Show Perf Per RAM Channel Calculation Graphs Where Applicable

Multi-Way Comparison

Condense Multi-Option Tests Into Single Result Graphs
Condense Test Profiles With Multiple Version Results Into Single Result Graphs

Table

Show Detailed System Result Table

Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
EPYC 7702
February 01 2021
  19 Hours, 11 Minutes
EPYC 7402P
February 03 2021
  19 Hours
EPYC 7302P
February 04 2021
  23 Hours, 11 Minutes
EPYC 7232P
February 06 2021
  1 Day, 1 Hour, 30 Minutes
EPYC 7552
February 07 2021
  21 Hours, 8 Minutes
EPYC 7272
February 08 2021
  23 Hours, 46 Minutes
EPYC 7662
February 10 2021
  19 Hours, 2 Minutes
EPYC 7502P
February 11 2021
  21 Hours, 2 Minutes
EPYC 7F52
February 12 2021
  22 Hours, 53 Minutes
EPYC 7542
February 13 2021
  20 Hours, 7 Minutes
EPYC 7282
February 15 2021
  23 Hours, 31 Minutes
EPYC 7F32
February 16 2021
  1 Day, 1 Hour, 3 Minutes
EPYC 7532
February 17 2021
  19 Hours, 59 Minutes
EPYC 7642
February 19 2021
  22 Hours, 55 Minutes
EPYC 7742
February 20 2021
  21 Hours, 24 Minutes
Invert Hiding All Results Option
  21 Hours, 51 Minutes

Only show results where is faster than
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


EPYC 2021 BenchmarksProcessorMotherboardChipsetMemoryDiskGraphicsMonitorNetworkOSKernelDesktopDisplay ServerDisplay DriverOpenGLCompilerFile-SystemScreen ResolutionEPYC 7702EPYC 7402PEPYC 7302PEPYC 7232PEPYC 7552EPYC 7272EPYC 7662EPYC 7502PEPYC 7F52EPYC 7542EPYC 7282EPYC 7F32EPYC 7532EPYC 7642EPYC 7742AMD EPYC 7702 64-Core @ 2.00GHz (64 Cores / 128 Threads)ASRockRack EPYCD8 (P2.40 BIOS)AMD Starship/Matisse8 x 16384 MB DDR4-3200MT/s 18ASF2G72PDZ-3G2E13841GB Micron_9300_MTFDHAL3T8TDPllvmpipeVE2282 x Intel I350Ubuntu 20.045.11.0-051100rc6daily20210201-generic (x86_64) 20210131GNOME Shell 3.36.4X Server 1.20.8llvmpipe4.5 Mesa 20.2.6 (LLVM 11.0.0 256 bits)GCC 9.3.0ext41920x1080AMD EPYC 7402P 24-Core @ 2.80GHz (24 Cores / 48 Threads)AMD EPYC 7302P 16-Core @ 3.00GHz (16 Cores / 32 Threads)AMD EPYC 7232P 8-Core @ 3.10GHz (8 Cores / 16 Threads)AMD EPYC 7552 48-Core @ 2.20GHz (48 Cores / 96 Threads)AMD EPYC 7272 12-Core @ 2.90GHz (12 Cores / 24 Threads)AMD EPYC 7662 64-Core @ 2.00GHz (64 Cores / 128 Threads)AMD EPYC 7502P 32-Core @ 2.50GHz (32 Cores / 64 Threads)AMD EPYC 7F52 16-Core @ 3.50GHz (16 Cores / 32 Threads)7 x 16384 MB DDR4-3200MT/s 18ASF2G72PDZ-3G2E1AMD EPYC 7542 32-Core @ 2.90GHz (32 Cores / 64 Threads)8 x 16384 MB DDR4-3200MT/s 18ASF2G72PDZ-3G2E1AMD EPYC 7282 16-Core @ 2.80GHz (16 Cores / 32 Threads)AMD EPYC 7F32 8-Core @ 3.70GHz (8 Cores / 16 Threads)AMD EPYC 7532 32-Core @ 2.40GHz (32 Cores / 64 Threads)AMD EPYC 7642 48-Core @ 2.30GHz (48 Cores / 96 Threads)AMD EPYC 7742 64-Core @ 2.25GHz (64 Cores / 128 Threads)OpenBenchmarking.orgKernel Details- Transparent Huge Pages: madviseCompiler Details- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vDisk Details- NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Details- Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x8301034Java Details- EPYC 7702: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7402P: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7302P: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7232P: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7552: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7272: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7662: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)- EPYC 7502P: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7F52: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7542: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7282: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7F32: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7532: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7642: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)- EPYC 7742: OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.20.04)Python Details- Python 3.8.5Security Details- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

EPYC 7702EPYC 7402PEPYC 7302PEPYC 7232PEPYC 7552EPYC 7272EPYC 7662EPYC 7502PEPYC 7F52EPYC 7542EPYC 7282EPYC 7F32EPYC 7532EPYC 7642EPYC 7742Logarithmic Result OverviewPhoronix Test SuiteCpuminer-Optm-queensStockfishCoremarkAircrack-ngC-RayJohn The RipperIndigoBenchBRL-CADTachyonPennantOSPrayasmFishNAMD7-Zip CompressionChaos Group V-RAYASTC EncoderBlenderLuxCoreRenderOpenVKLrays1benchNAS Parallel BenchmarksPOV-RayStress-NGCloverLeafFacebook RocksDBTensorFlow LiteGROMACSIntel Open Image DenoiseebizzyApache CassandraLAMMPS Molecular Dynamics SimulatorPostgreSQL pgbenchASKAPoneDNNTimed Linux Kernel CompilationAppleseedLeelaChessZeroTimed MPlayer CompilationBasis UniversalOpenFOAMFFTEGPAWTimed LLVM CompilationPlaidMLSVT-AV1KripkeSVT-VP9RodiniaSysbenchKvazaarTimed FFmpeg CompilationYafaRayminiFETimed Godot Game Engine Compilationx265x264Tungsten Rendererdav1dZstd CompressionHigh Performance Conjugate GradientIncompact3DTimed ImageMagick CompilationBuild2LULESHParboilOpenVINOAlgebraic Multi-Grid BenchmarkWebP2 Image EncodeFinanceBenchACES DGEMMOCRMyPDFctx_clockTimed PHP CompilationStreamNumenta Anomaly BenchmarkAI Benchmark AlphaXZ CompressionTimed MrBayes AnalysisMobile Neural NetworkBlogBenchQuantum ESPRESSOMonte Carlo Simulations of Ionised NebulaeRawTherapeeCaffeC-BloscNumpy BenchmarkApache CouchDBInfluxDBCraftyHierarchical INTegrationGoogle SynthMarkSwetBotansimdjsoneSpeak-NG Speech EnginePyPerformanceTSCPPyBenchQuantLibMontage Astronomical Image Mosaic EngineCrypto++Himeno BenchmarkPHPBenchEtcpakPerl BenchmarksDarmstadt Automotive Parallel Heterogeneous SuiteLibRawONNX RuntimeTinymembenchRedisHuginKeyDBLZ4 CompressionMBW

EPYC 7702EPYC 7402PEPYC 7302PEPYC 7232PEPYC 7552EPYC 7272EPYC 7662EPYC 7502PEPYC 7F52EPYC 7542EPYC 7282EPYC 7F32EPYC 7532EPYC 7642EPYC 7742Logarithmic Per Watt Result OverviewPhoronix Test SuiteACES DGEMMCpuminer-Optrays1benchOSPrayLuxCoreRenderKripkeOpenVKLBRL-CADIntel Open Image DenoiseStockfishminiFECoremarkAircrack-ngSVT-VP97-Zip CompressionJohn The RipperApache CassandraGROMACSIndigoBenchChaos Group V-RAYasmFishBlogBenchNAS Parallel BenchmarksHigh Performance Conjugate GradientebizzyPlaidMLLAMMPS Molecular Dynamics SimulatorStress-NGASKAPSysbenchSVT-AV1Facebook RocksDBFFTEC-Bloscdav1dONNX Runtimex265x264Darmstadt Automotive Parallel Heterogeneous SuiteAI Benchmark AlphaAlgebraic Multi-Grid BenchmarkKvazaarKeyDBLULESHStreamLeelaChessZeroLZ4 CompressionMBWZstd CompressionHierarchical INTegrationTinymembenchSwetCrypto++QuantLibPHPBenchBotanHimeno BenchmarkCraftyGoogle SynthMarkNumpy BenchmarkEtcpakTSCPRedisInfluxDBLibRawsimdjsonP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.MP.W.G.M

EPYC 2021 Benchmarkscpuminer-opt: Skeincoinsysbench: CPUstress-ng: CPU Stressnpb: EP.Dstress-ng: Vector Mathnpb: EP.Copenvino: Age Gender Recognition Retail 0013 FP16 - CPUospray: San Miguel - Path Tracerjohn-the-ripper: Blowfishstress-ng: Cryptoopenvino: Age Gender Recognition Retail 0013 FP32 - CPUm-queens: Time To Solvepennant: leblancbigstockfish: Total Timecoremark: CoreMark Size 666 - Iterations Per Secondonednn: Convolution Batch Shapes Auto - f32 - CPUaircrack-ng: c-ray: Total Time - 4K, 16 Rays Per Pixelindigobench: CPU - Supercarindigobench: CPU - Bedroombrl-cad: VGR Performance Metricospray: XFrog Forest - Path Tracerstress-ng: Context Switchingospray: XFrog Forest - SciVisospray: NASA Streamlines - Path Traceraskap: tConvolve MPI - Degriddingtachyon: Total Timejohn-the-ripper: MD5onednn: IP Shapes 3D - f32 - CPUv-ray: CPUastcenc: Exhaustiveasmfish: 1024 Hash Memory, 26 Depthstress-ng: Matrix Mathospray: Magnetic Reconnection - SciVisblender: Pabellon Barcelona - CPU-Onlynamd: ATPase Simulation - 327,506 Atomsblender: Classroom - CPU-Onlycompress-7zip: Compress Speed Testrocksdb: Rand Readospray: San Miguel - SciVisblender: Barbershop - CPU-Onlyv-ray: CPUpennant: sedovbigaskap: tConvolve MPI - Griddingrodinia: OpenMP LavaMDpgbench: 100 - 250 - Read Onlypgbench: 100 - 250 - Read Only - Average Latencyrocksdb: Read While Writingospray: NASA Streamlines - SciVisluxcorerender: Rainbow Colors and Prismluxcorerender: DLSCastcenc: Thoroughopenvkl: vklBenchmarkrays1bench: Large Scenetoybrot: TBBtoybrot: C++ Threadsblender: BMW27 - CPU-Onlyopenvino: Person Detection 0106 FP32 - CPUpgbench: 100 - 100 - Read Onlypgbench: 100 - 100 - Read Only - Average Latencyopenvino: Person Detection 0106 FP16 - CPUtoybrot: C++ Taskspovray: Trace Timeblender: Fishy Cat - CPU-Onlytoybrot: OpenMPopenvino: Face Detection 0106 FP32 - CPUopenvino: Face Detection 0106 FP16 - CPUcloverleaf: Lagrangian-Eulerian Hydrodynamicsaskap: tConvolve OpenMP - Degriddingopenfoam: Motorbike 30Mtensorflow-lite: Mobilenet Quanttensorflow-lite: Mobilenet Floattensorflow-lite: Inception V4gromacs: Water Benchmarktensorflow-lite: Inception ResNet V2onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPUlammps: 20k Atomsonednn: Deconvolution Batch shapes_1d - u8s8f32 - CPUonednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPUoidn: Memorialtungsten: Hairebizzy: cassandra: Writesaskap: tConvolve OpenMP - Griddingtensorflow-lite: SqueezeNetappleseed: Disney Materialstress-ng: Socket Activityonednn: Matrix Multiply Batch Shapes Transformer - f32 - CPUnwchem: C240 Buckyballlammps: Rhodopsin Proteinbasis: UASTC Level 3compress-zstd: 19lczero: Eigensvt-av1: Enc Mode 8 - 1080pbuild-linux-kernel: Time To Compilebuild-mplayer: Time To Compilegromacs: water_GMX50_barelczero: BLASonednn: Recurrent Neural Network Training - u8s8f32 - CPUonednn: Recurrent Neural Network Training - bf16bf16bf16 - CPUonednn: Recurrent Neural Network Training - f32 - CPUrocksdb: Rand Fill Syncffte: N=256, 3D Complex FFT Routinekvazaar: Bosphorus 4K - Mediumgpaw: Carbon Nanotubeonednn: Deconvolution Batch shapes_3d - f32 - CPUplaidml: No - Inference - VGG19 - CPUbuild-llvm: Time To Compileappleseed: Emilypgbench: 100 - 100 - Read Writepgbench: 100 - 100 - Read Write - Average Latencyrodinia: OpenMP CFD Solverplaidml: No - Inference - VGG16 - CPUonednn: Recurrent Neural Network Inference - f32 - CPUonednn: Recurrent Neural Network Inference - u8s8f32 - CPUonednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPUonednn: IP Shapes 1D - f32 - CPUpgbench: 100 - 250 - Read Writepgbench: 100 - 250 - Read Write - Average Latencyrodinia: OpenMP Leukocytesvt-vp9: PSNR/SSIM Optimized - Bosphorus 1080ponednn: Convolution Batch Shapes Auto - u8s8f32 - CPUnpb: LU.Cbasis: UASTC Level 2onednn: Deconvolution Batch shapes_1d - f32 - CPUdav1d: Summer Nature 4Kkvazaar: Bosphorus 4K - Very Fastbuild-ffmpeg: Time To Compileyafaray: Total Time For Sample Sceneminife: Smallbuild-godot: Time To Compiledav1d: Summer Nature 1080px265: Bosphorus 4Ksvt-av1: Enc Mode 4 - 1080px264: H.264 Video Encodingtensorflow-lite: NASNet Mobilekvazaar: Bosphorus 4K - Ultra Fastnpb: FT.Csysbench: Memoryhpcg: openfoam: Motorbike 60Mdav1d: Chimera 1080pnpb: MG.Cincompact3d: Cylindernpb: CG.Cbuild-imagemagick: Time To Compilerodinia: OpenMP Streamclusterbuild2: Time To Compileopenvino: Face Detection 0106 FP16 - CPUlulesh: openvino: Face Detection 0106 FP32 - CPUopenvino: Person Detection 0106 FP16 - CPUnpb: IS.Dopenvino: Person Detection 0106 FP32 - CPUparboil: OpenMP LBMwebp2: Quality 75, Compression Effort 7mnn: mobilenet-v1-1.0webp2: Quality 95, Compression Effort 7amg: dav1d: Chimera 1080p 10-bitmt-dgemm: Sustained Floating-Point Ratewebp2: Quality 100, Compression Effort 5ai-benchmark: Device Inference Scorenumenta-nab: Earthgecko Skylineocrmypdf: Processing 60 Page PDF Documentstream-dynamic: - Triadstream-dynamic: - Addnumenta-nab: Windowed Gaussianctx-clock: Context Switch Timebuild-php: Time To Compilestream: Copystream: Triadmnn: MobileNetV2_224ngspice: C7552stream: Addstream-dynamic: - Copystream: Scalestream-dynamic: - Scalecompress-zstd: 3ai-benchmark: Device AI Scorenumenta-nab: Relative Entropycompress-xz: Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9mrbayes: Primate Phylogeny Analysisopenvino: Age Gender Recognition Retail 0013 FP32 - CPUopenvino: Age Gender Recognition Retail 0013 FP16 - CPUmnn: resnet-v2-50tungsten: Water Causticnumenta-nab: Bayesian Changepointjpegxl-decode: Allonnx: shufflenet-v2-10 - OpenMP CPUblogbench: Readdaphne: OpenMP - Points2Imageqe: AUSURF112ai-benchmark: Device Training Scorecaffe: AlexNet - CPU - 200mocassin: Dust 2D tau100.0ngspice: C2670mnn: SqueezeNetV1.0rawtherapee: Total Benchmark Timeonnx: fcn-resnet101-11 - OpenMP CPUblosc: blosclzonnx: bertsquad-10 - OpenMP CPUcaffe: GoogleNet - CPU - 200onnx: super-resolution-10 - OpenMP CPUinfluxdb: 64 - 10000 - 2,5000,1 - 10000numpy: couchdb: 100 - 1000 - 24jpegxl: PNG - 5pyperformance: floatdaphne: OpenMP - NDT Mappingsimdjson: PartialTweetspyperformance: pathlibcrafty: Elapsed Timesimdjson: DistinctUserIDcompress-lz4: 9 - Compression Speedpyperformance: crypto_pyaessimdjson: LargeRandcryptopp: Integer + Elliptic Curve Public Key Algorithmspyperformance: nbodypybench: Total For Average Test Timescryptopp: Keyed Algorithmsfinancebench: Repo OpenMPbotan: Blowfishhint: FLOATbotan: Twofishsynthmark: VoiceMark_100pyperformance: django_templatefinancebench: Bonds OpenMPcompress-lz4: 3 - Compression Speedswet: Averagebotan: CAST-256botan: KASUMIespeak: Text-To-Speech Synthesisperl-benchmark: Pod2htmlbotan: AES-256etcpak: ETC2tscp: AI Chess Performancequantlib: perl-benchmark: Interpretermontage: Mosaic of M17, K band, 1.5 deg x 1.5 deghimeno: Poisson Pressure Solverpyperformance: regex_compilephpbench: PHP Benchmark Suitedaphne: OpenMP - Euclidean Clusterinfluxdb: 4 - 10000 - 2,5000,1 - 10000etcpak: DXT1cryptopp: Unkeyed Algorithmssimdjson: Kostyajpegxl: PNG - 8jpegxl: JPEG - 8jpegxl-decode: 1libraw: Post-Processing Benchmarkredis: GETtinymembench: Standard Memsetonnx: yolov4 - OpenMP CPUredis: SETjpegxl: PNG - 7hugin: Panorama Photo Assistant + Stitching Timekeydb: jpegxl: JPEG - 5jpegxl: JPEG - 7compress-lz4: 9 - Decompression Speedcompress-lz4: 3 - Decompression Speedmbw: Memory Copy - 8192 MiBstress-ng: CPU Cachesvt-vp9: Visual Quality Optimized - Bosphorus 1080pcpuminer-opt: Deepcoincpuminer-opt: Garlicoincpuminer-opt: x25xkripke: EPYC 7702EPYC 7402PEPYC 7302PEPYC 7232PEPYC 7552EPYC 7272EPYC 7662EPYC 7502PEPYC 7F52EPYC 7542EPYC 7282EPYC 7F32EPYC 7532EPYC 7642EPYC 7742605852106421.951119926.613989.79397326.453908.2325067.044.927003311996.5925012.9813.8397.0339821009084531845888.7971831.07844143169.12012.84919.0958.8116600795.9220917619.7611.0716.6720185.519.121841830001.159434529245.80122849461179226.2440119.170.49264103.5526490821846982455.56154.426275411.9424720382.254.7118999920.279823247476.927.827.046.35492243.5740.076.058974530.1126.0211.50055.407.887.8814.707148.7723.3536468.835037.47642694.3737026071.4944624.8181.888420.80064327.677.5070627017672275649509.1461679.667.73754919577.030.587443224719.99718.869147.5268679.82428.13012.9904.37126992296.532314.362306.33287115145868.2946485014.9481.0602.8201526.49234.205154.436733556991.7987.81931.89874.440878.199875.7081.70360521814.80448.716409.363.55892102548.9513.2972.33519437.3136.0724.03467.25219155.462.5571050.9925.046.752198.5895873.753.5255136.506302136.653717.3075233.34983.6151795.61224.61637415079.0817.1158.92867.9664029.9814612.2084030.535170.991971.505153.9923.322929139.3623.187257.005878375333185.3115.6148128.125196584.83318.7326.93012042.56190511.898034.85.067132.61697206.387714.68248.3308915.94422.047115.2651.221.2126.92421.968333.2686478137295519062.9240249051208.311124128965242177.6168.37556.030748372.739034748440871308517.2306.5097.954133861.350.5220.268999480.5444.541290.334152.5975331351166539.45489051015.865885368.930297478250.42946302.135635.92257.588553.78385445.69602434885119.99577.69935.9270.149543394499.500137.28410158581970.10.0009708494.7633762.488930198529609954.201212918.21019.205298.7292510.4634.501482738.9217329.82401228831.7955.676371448.2110219.510162.015599.89843.70305.10461139581.061360.0619968396021053942008.11468541.121813.29182716.391806.5412800.592.45337555847.7013220.9528.19416.6537548804732902070.1513343.5434771372.79225.6839.6304.4943176613.0710443329.125.768.7712173.737.733721273334.1338289.076258177592207.0720.83219.750.93705196.7013857511439143430.30291.523318425.8891113892.5102.9955216530.480439769541.674.173.8311.71279134.1969.843.514991380.2013.5320.46491.814.734.7620.355726.6033.8162904.161419.913664102.74112067902.3830614.9075.922041.2959917.1712.262317218542007056379.8893404.1105.69197213570.830.5923084709.514.03829.16096.9143954.87341.49718.58815211674.661673.311672.24216427111563.8440011411.22116.4093.3463323.34312.698194.525247476622.10111.57727.81972.927974.124972.7721.82951412736.06759.652413.094.2511174421.6118.1672.48977328.5426.6132.06580.49616743.676.162847.4023.845.563174.6310504648.335614019.491715.5629314.06839.06200.21011820.43014.76176.6502507.588024.37062522.613310.913315.6329.020656138.0443.213254.240778459433135.697.2082357.974192685.75820.8747.12116847.41379677.287308.45.27086805.679147.38033.0347216.12422.39985.9960.880.9229.02223.870934.3118810195961521653.9100136711342.1415461546932178.97853.9297210178.350039101549461290897.8307.4786.682133899.450.5320.268807600.5345.091310.334151.2915971351159538.08567350474.697917368.377297827555.09073302.120636.42957.288634.30989645.98600065572119.99377.57035.6940.145735974487.898137.08910133201977.30.0009558693.9233765.848387200542521962.881248284.11018.971298.4906040.4634.031466557.0314961.42801147451.3853.356415583.9610256.310209.315510.92141.11319.05193634490.86862.8120981115015103027632.18545622.821190.48120826.811191.909316.801.65223143871.579314.8242.38121.9518832973749603381.3330023.7064048284.82638.4886.5753.1292190552.157003110.184.026.0710427.254.840514603333.76395129.264211221963930.7214.08317.271.35058287.81951377697209020.83411.562348634.0255412204.4147.3823684490.679308623729.412.912.6816.6819290.37102.292.513577810.2802.5128.702129.813.323.3125.215509.340.8782490.480320.517428902.01415670703.3466310.6026.034971.8417812.1816.617512084661352605503.59123947149.3357199499.330.8022726676.410.37439.75473.8123337.04453.39924.05812532410.232407.932412.7916324884316.2310613779.15139.1524.7260817.39404.326260.983099384582.60615.44521.241229.691227.561228.502.41082303138.25898.186316.495.4540061041.2723.3423.28377245.6022.9439.737110.93917082.496.772634.5120.444.506150.4514444541.5237175.204502591.448215.6329313.64698.0847349.29262.63329115688.5723.54617.72987.6572414.747954.15182413.453145.891647.783155.8524.924913164.8053.856301.520788266875116.684.9637159.354166088.32323.3927.64518052.74980140.188105.75.77287568.879703.57699.4311917.18323.15385.9320.840.8431.54326.020134.8438975189842822339.8467016891403.5214591525392289.50458.5066910476.845638717345221249264.6301.2889.433135898.210.5220.467876920.5343.981300.334086.0757961371172530.36259351629.102865362.204292597759.78158297.647627.74357.689863.26041744.53598589228117.87776.51036.3560.149784894430.718135.1809971331952.50.0009547895.4483792.808321202524189943.291188387.21007.950294.4371280.4533.711438157.7314820.32771173570.0055.773405905.7810132.110172.715480.29032.72242.24126972965.29447.011628983876364813398.09842708.90579.1858583.38578.124012.400.76108251878.744115.4487.19445.1546016034994293387.3962696.3766823406.39779.1683.1171.4561060730.993438510.351.892.823350.22112.39357171757.150307864262.142102779531690.606.99664.292.72553558.87459413920615610.31866.851113666.418483879.10296.6861698321.473147062114.291.501.3833.159648.61198.451.281842510.5431.2955.775267.241.741.7569.912616.81113.1616542616071334729400.98531369206.794455.4068.671543.583636.5532.5388623272517342840.13242480265.5440824965.962.018385.23773.43237.368121.761101.09045.5867474347.664348.884345.129437144242.9828328154.72271.1578.271849.30763.556487.372372213344.69025.37411.412397.562397.222398.984.738661770914.123148.199144.3210.958233816.4839.6666.12261150.4312.9868.502186.18010156.6172.131419.199.202.53678.2121225323.1121822.065942474.37288.64661573.04473.8230311.94450.8456939844.9537.71615.540145.4432298.636757.98932314.663071.911083.363090.5845.762531277.6723.917507.74144991646094.361.58370415.5471112141.24533.01511.91221775.32952390.056788.64.944212.89456795.152630.15123.2220623.56334.50988.6660.960.9831.43533.916445.2128836194449623230.1858257731656.881094151559276211.4939.65072.473599386.840936902739411113861.3270.83107.009143861.070.5021.164146090.5243.171350.313949.4362671411213513.65399753237.682291352.291284332324.15296288.698608.60559.592426.49479244.93561816772114.58874.26537.3960.156532084311.057131.1829693711892.00.0010060898.4143589.933591208511906876.451041996.8976.586286.1889340.4432.481421564.3514571.72431126925.5059.310399110.8810057.810063.015523.14032.53107.116207.891473.31216.5810089875846822780743.888815958.163292.84328019.703252.1323441.914.17599639992.1523559.1716.6108.988538823971321530338.7875471.71967119683.65115.21215.9847.5555405585.1017727598.849.5214.2916990.122.810035286672.286113784354.34105304578152001.9534.48139.210.57484122.0322974718509500850180.945357015.3474017300.864.2077914560.316718357966.676.756.137.42441217.7046.595.538325090.1205.5313.27362.727.227.2015.286302.5725.1750401.848507.49788283.8638935171.5549422.0461.997070.86802426.138.7195324565112308718454.768430.473.33387118192.320.5339612716.718.50620.796125.7192762.39530.20613.82019691352.621350.131350.55331300135838.1205616315.5086.7392.5533527.50245.397154.731284569451.7598.40532.68787.426784.815782.6521.59938514444.87247.425445.923.8401895079.4114.2362.15794406.9335.8125.49964.88218424.162.4841044.2525.366.494198.0092153.255.996380400.350816.6494260.781095.40206.58356218.06312.02769.8593325.9913435.3033327.504197.604203.8126.474040140.1803.022258.367856893400178.7012.7350318.162204986.06218.8557.03313243.84188717.296497.14.852131.53095807.086805.68172.0339715.97222.540104.5180.990.9925.44322.647133.9557373177162420634.1998868731329.901348171885223170.2168.14254.492799155.044039961046421288738.6303.4592.589134895.520.5220.566748810.5244.491330.324085.5278931371178530.79552851630.503906363.503293216264.80405297.102626.56658.190310.90364645.38605644364118.32476.57536.3970.151249764421.410135.1399974611945.30.0009615395.5313793.166466202523575956.821208480.81008.239293.8811570.4533.641450496.7116494.42641170719.4655.419394639.2310156.510159.715459.16449.78336.24335897811.991139.3821131030710514020091.95914078.21860.0187851.58867.816689.951.22162242821.196803.0958.21833.8085224314443440879.6417375.9534835086.95152.8064.8552.2891553471.555095560.162.934.396100.1875.035410723334.9751212198176.043179181346780.9410.42426.591.83895387.23696555692896715.625554.641755049.112097130.89200.3482756770.908235792821.742.141.9822.4014268.51136.201.812785100.3591.8138.411177.652.472.4748.293257.0467.2312011911706024937471.40922429504.527587.7375.253312.493729.0321.9958883965931554004.08173807195.0816247332.821.093448844.17.58251.77155.292530.37769.61231.1431.4079463057.083059.193056.0513158361838.5158416056.49205.8916.2913513.67529.616333.246095318963.13718.13416.792022.452032.882029.283.80955251709.939107.967239.477.6050543321.3829.2444.44077203.4017.8349.919131.02710066.6122.410555.1117.093.593117.1417754031.397232205.29559.09743530.10624.79325.84412728.21816.895105.7072434.176871.93032406.223264.493265.9342.947500199.5415.322361.481457855350107.253.46400311.1311372101.58726.8988.79820360.20351714.456066.65.930137.58555911.651978.26498.4267919.10525.65687.6000.870.8938.10229.954538.0658721202318023807.6676548421520.431307154871250174.2009.76862.758709442.940337836739431185857.8298.2792.348138864.620.5021.065513510.5142.941340.323965.0335961411208514.84574352936.979167352.072284710986.65814288.622608.05060.092387.25260443.23581194588114.68874.20737.4000.154578904290.852131.1459690661893.20.0009807498.3053634.174299208513356919.011143360.5978.850286.1818200.4433.121372468.8814776.42591156241.7457.242418080.4710070.210101.015482.71125.17184.839361.062192.82322.83121256433631038108892.224020174.764019.09400773.383967.2828284.874.997357912091.5628316.5013.7526.995321988474691867023.4567600.993732143292.78612.81519.0628.8986526026.0020981193.8811.2416.9520993.119.084141953331.125024465845.84126290077179496.7840118.260.48908101.6827014021471776058.82154.236229511.8002320991.754.6549397500.267817538376.927.927.216.38504243.2540.336.609155230.1096.6211.50155.828.988.9714.036699.0722.8333024.032085.57008334.5416552791.4312425.2061.798740.78080329.417.5509327626472193809509.1456195.567.32634619990.750.5170232220.721.76318.656149.6240883.39927.97712.9564.53323762230.182203.162221.33333412155375.9841112816.4478.3932.5236331.84233.513153.460392609491.6437.74737.38793.199793.228792.5461.48981543184.61747.369448.083.50412103985.9113.2062.02231457.2739.1523.97866.48219256.762.2721193.6625.656.966210.6988093.758.6455668.636374595.120917.3863232.981158.0852245.23206.95286615230.2317.1878.90568.2913562.6314603.3333558.094731.722006.814732.7323.819394139.5583.073256.563883735200190.1916.8811688.052212284.79418.7936.97112043.06090296.598343.24.920136.86697180.987848.18287.7334115.96822.047116.1461.071.0625.22122.132233.4066686139747519914.9978328581216.501219127699239172.3717.54255.042798328.239534562641751294664.5303.2197.295134897.590.5220.567539020.5344.801330.324091.7839781371184529.61944451437.791667362.990293169027.14643297.808624.72658.690672.44791745.19602203842118.22576.54536.1970.149767954429.185135.2359978091955.30.0009745595.5603837.559318202524751966.041209698.01007.223293.8097300.4534.011417246.6715640.12431200527.3155.945376241.1110249.310170.715616.89744.36332.56450519507.951398.7721577122731635056038.719411121.592380.27237596.662368.6918612.543.03437037273.7118514.6022.63214.78157608646161147733.9487313.4623687391.79420.90711.8545.4773877923.7512896969.647.0110.6012067.330.896325963332.962832844873.3878084684112205.0126.05182.760.77439162.2917136213933805637.04237.653708523.0287612698.285.0056141830.4075486066505.044.659.73337167.7560.754.366106670.1644.3717.14078.015.725.7319.075788.1730.6247911.747083.410063473.1288978841.9484517.6142.151941.0739920.2510.574019478362337306617.9075080.087.51144716396.570.5376363678.815.68525.284114.3153957.31536.38816.4623.14015592743.712740.522738.42259353136276.4993319313.97105.6663.6883326.51289.359170.484345517941.9359.89232.05902.259919.607917.6141.59300456715.48157.979437.883.9704378846.9616.3102.19727361.7233.2529.12470.70416649.569.419932.7525.135.866182.4285462.854.9246206.336612746.496215.2920320.20937.7344082.95189.65938314759.4019.40514.32074.1962794.2512840.6422791.543578.091884.313582.5627.598317137.4422.927253.096774304800152.159.3269097.972202285.72919.3786.98915045.44079342.987239.84.820128.92386677.278638.67903.6351015.84722.42094.2620.840.8425.05823.112632.6458421195008021272.0439640021386.781488168728214173.8957.49352.744649808.045942796747551303437.6309.1086.23379.59132898.840.5219.968043050.5344.681280.334139.9849671351173539.17487050609.036458368.692297816529.27817302.254637.37856.988487.56770845.77597361197120.06077.68835.7340.148592734497.765137.34410143421984.50.0009608293.8553853.502233197531740974.401262530.41022.480298.7084410.460.6925.0334.111419600.1315097.12771180459.9110.0253.722413354.5958.5958.6210184.510232.815621.64145.23334.18265466104.27883.8717679886718240332649.35716629.061410.59142756.351407.629663.431.48263454580.089866.0835.92016.1392939043410716393.8849032.0018356973.84932.5807.6843.4612325072.348225794.404.456.8313692.346.594217316672.7782619521109.174634266576923.3913.45265.131.14375237.791090699388313022.56356.902713225.8783016399.7125.5333970380.630301238729.593.573.2714.02222109.9183.452.983811840.2632.9924.183108.404.024.0422.831406.9339.8970154.568637.614991831.99513461533.0289711.7575.496051.5650314.2414.132714752801441232113.14107391118.4778598415.190.6604195684.611.52133.80176.1169942.83946.91420.5712.31317582013.241997.392013.7517258571767.96040743410.66165.4073.9273720.00343.386226.803596371522.69713.53024.29997.853998.668999.1421.98035281938.87690.523263.136.9156144891.6919.8612.75743244.1726.3133.981127.4466787.0684.206651.3620.005.333173.5012674347.1722117.173087955.62737.08925454.84671.0721714.65230.9420637779.4921.63215.04975.8531976.706839.17061980.562643.39934.002649.3535.422343140.6115.778254.764643180925129.774.8976537.973145675.22519.8566.58617546.54066901.172315.68.171122.44072752.267011.97899.7261514.59621.43472.4290.790.8133.49924.418829.448207.639348167651624205.5432274111357.301159151327197151.46010.20953.1496910910.137238202052121438050.4348.6683.27973.83114773.860.6117.579400470.6252.021100.384834.046071116999628.20792943527.278646429.140346505619.64441351.633739.72250.476410.10677152.82685188810139.70590.41730.7530.129242855226.754159.60311781402300.70.0008257080.9394367.1185661746140391062.321247037.61179.017342.9318210.530.7927.5937.9438.011635794.5715585.72481336546.2110.6950.59433091.7360.2560.1210068.510099.614958.19346.39230.31153383522.44524.317141339432442056066.787311234.602389.76241722.092375.4819922.013.23445967720.5819958.4121.37214.16482641529291203386.7200633.4617494237.65119.54612.7385.9504120764.0413858394.617.5811.3611976.928.590127986672.958303063267.9582716262122762.9627.78167.990.71549148.5017614814748828040223.083890422.2330412699.579.1146557740.382556443355.075.545.039.08359182.5955.324.706549140.1534.7415.94372.276.136.1718.615788.1729.6245118.143977.99432293.3238406041.8962418.1562.069831.0158221.559.9284921368502365246658.4870622.981.82517717266.210.5005213653.916.56423.968114.4161759.93934.55415.6943.31716662721.872716.542718.68258759144583.8720633214.94103.2963.4411827.67275.350161.635101523251.9159.80633.33890.477886.876888.5351.47222466555.36957.683459.143.9769979644.8815.6382.04866367.1935.0627.85664.92316649.766.567937.4425.216.091188.4579313.656.9847098.226618850.239515.2794319.87939.9044205.80187.60481314977.2918.93514.31671.4642579.7913099.6152572.363276.521885.983271.2627.639163135.9732.836250.407774329467152.788.8633367.949210783.83018.65590450.72290238.6346.90017444.54879674.187057.04.713130.08686737.782081.42878399.981529.0577885.0364115.72322.30086.7220.770.7724.30322.635132.872188.068698192336821441.9959811771317.271534162544215165.6637.62552.125699878.047541168149861206497.8311.9786.57280.05135908.460.5319.969839450.5445.131290.334207.9103431331137546.71756450076.274740374.521302406827.18194306.615645.86956.786946.61458345.90612218425121.92378.88135.2650.148508124561.185139.25210302952016.40.0009548592.7703988.942646198539060983.451162690.91035.225302.9271150.460.7125.2334.2234.381508082.4115872.02851189270.8710.0952.987422944.4658.6858.5910292.510222.415641.09149.19350.63264936242.21891.0218739700715028726778.61895463.451156.26117080.861155.8810017.981.59214543760.669884.6243.73828.9939231246137586659.0982155.8645846017.20339.6556.2142.9322081501.986612368.673.725.656587.5758.469113540004.4728715246137.784158384459431.0313.89338.521.45145304.34905887481656019.61432.922202241.687677358.19157.1683432290.729302323927.782.742.5317.7318684.542541125426108.342.383445980.2902.372568330.506139.46253883.233.1645.463317.9356.5786277.684723.518201531.67716391673.452909.8896.063331.9130511.3417.654210219901364424294.45129293151.8593799134.720.87574970569.68541.24066.4105135.38357.90826.2651.67510422834.092835.352836.6816353576796.0449199128.61177.9055.0832816.61439.996275.805876385862.59614.79220.341882.641886.301884.703.44668321157.79199.719295.806.7624249617.9224.2393.50175239.8421.7042.861102.52810003.6103.741644.2320.034.225142.3715123539.6529526.977880986.58589.05154520.45712.2629776.81270.7231759697.8325.45918.18994.3452527.866954.79642446.023344.391422.073342.1346.073278173.7174.081309.984455760633114.494.4524199.654153896.84924.23357372.72957323.3808.10519655.22251093.155596.35.855135.67955437.253581.26551044.253610.3516706.2292218.10324.90189.2040.790.7833.58726.789536.349197.768611204303722574.8037399751456.821384155763239183.2189.61061.539779400.740038869940441211354.3296.9687.93667.29139858.540.5021.365476900.5142.251350.323948.9987381421221513.59886552977.602865351.140283588722.34226287.932605.82559.192896.61718844.38571825580114.47474.05937.5520.153417054293.869130.8819681491887.00.0009921098.2383756.959809208511228928.431172021.1975.215285.2536860.440.6623.0532.1633.081414840.4114921.32671143594.639.0856.098420324.9653.8453.7510110.910112.215482.73530.12229.82127932991.73429.511124897227936516323.91543306.39705.2171366.17704.694953.650.95131832288.334984.3571.57934.5800319191199356570.4067764.3013028528.34464.9693.8831.8081206221.274265815.032.393.656907.5492.15758747006.6820910044214.442467575338378.088.16508.262.22949454.49569554833173512.5683.811420452.788438265.99243.8432133451.173160367317.241.841.6827.1611659.604160641573156.311.642156170.4641.664167945.468207.12414222.162.1636.804004.0860.5013602313323628722271.34525929705.582606.7057.237633.012927.9426.5161776880581184035.11201258225.087965638.531.160176.47060.40347.0110426.13179.66435.2451.34810523345.673348.063345.1010084354800.9923541795.73200.4136.4639812.06581.298381.370711241374.14522.21814.671745.541742.791744.753.560202054112.176130.975167.089.1512443172.5132.6335.03236168.1715.5654.149172.35317351.2136.623453.3610.203.18495.5017792827.1627284.694656182.257712.9560362.74496.1145336.89375.70367412350.8728.59719.707113.2601836.518720.88221844.312399.191277.152409.0327.245155227.1303.625416.273809844583109.553.01159512.788130198.52227.2029.79018559.56482399.789567.24.578119.44489255.481753.06573.9247418.88129.60772.8910.780.7927.44528.100035.669183.959823177203727135.6325862341356.261173140558221158.7919.35762.2365911206.848034016845181279851.8345.3693.79263.51115967.370.6217.279398830.6351.951110.384835.015067116998627.43590843548.631510429.405346733990.00480351.989740.57849.176027.97656252.79686395419139.83390.45930.8390.128195455238.762159.78711806822285.20.0008356380.8484336.4055731716216721039.601155871.81179.304343.6068080.530.7927.4438.4238.591601070.5016357.22621315900.469.7852.987424600.0760.5760.2510561.910567.215666.70024.85124.357644.101769.46262.9012953692031725055254.656910901.472333.54232455.682318.4416446.882.97428007105.7316443.7323.05010.88576583840261123555.3010761.4837485303.35221.42911.5315.4153786603.7012627261.936.9010.4218311.731.565725340001.251542840175.2774422699110766.7925189.090.79079168.4017124213303855035.71243.583622318.0618220585.387.5565875480.426521430247.624.924.549.99328163.00135561361961.254.195783400.1734.201379817.67078.94139265.685.6815.296455.4026.5349078.548084.710343673.2679218651.9880917.5252.205431.1048620.1410.813119773252113618257.4777264.789.01144415568.240.5534443653.616.26525.884122.7176956.52836.84516.3833.25617352230.352221.372214.79253689135786.9376541413.6593.2363.6013025.35280.778172.914563476152.1039.85830.36813.120812.987814.0271.61502443435.64859.438426.523.6126886547.2816.6982.22242349.9932.7229.22885.56119645.469.624880.5723.715.958181.1584263.554.0749235.924720934.255817.9677236.99892.4252022.76193.92387918019.0119.2669.91374.4812815.8013938.2522814.153757.351992.763754.8522.944527142.2003.003260.463909532667146.299.8288718.055204386.69819.942104119.156103789.9857.22014446.09890663.299248.24.821132.04197912.594075.67089487.692368.4088476.3351816.33222.09296.8130.950.9525.26623.553433.433171.738070163609220967.4792972081403.871475172508215175.8477.33854.7737910253.346644197848641281490.5303.3391.31576.18134895.330.5120.465452110.5243.691310.324089.1196511371180531.13832051428.891927363.098293437386.10939297.705627.05958.390157.28125044.69582684333118.29076.52836.2430.149882934429.474135.0069994421922.30.0009675395.4793830.202651202532957949.151197778.91006.923294.2737150.450.6924.0632.6933.061481804.0014786.92711186674.329.7055.571404313.1554.9754.7310188.610118.015503.04851.39325.84230135725.79807.0221652513347050281398.289616152.083322.99332363.943310.3422986.054.216111910112.2222996.9316.2998.176602835681661550727.1454791.10805120520.68215.03916.2307.5965417755.1517595501.819.5814.4921472.322.564335576671.220113921554.04103932785127853.0934.48137.830.57048121.2123406218571298250179.185444513.8695823040.763.4727756870.323713160066.676.816.197.40446218.739723965245.785.678354570.125.67983913.27061.9398927.797.8113.666615.8123.4345818.845436.69386914.1178307301.5391522.4421.930420.86314626.688.5682827193882155769427.1765104.470.48538918451.910.4842192648.519.32820.687130.1231163.20530.07513.7124.10622031210.041211.541213.58331791149967.7395286315.9479.9772.4038029.37243.915153.975645588891.7018.43435.25732.808734.403735.8191.46775503814.97246.532458.243.5431699557.2314.1881.98726416.9637.3925.21367.65819436.361.8141070.0925.236.678203.7486140.659.595536026.070617.6850233.731116.83195.34316017.6729.18569.4333059.0514015.6603069.384132.134138.6921.939827138.0773.056254.877892248833179.7213.9424658.074212585.03219.090103873.438103556.1327.06313843.81791703.297940.04.825160.82097944.293594.60787579.992049.4838499.2352815.88722.006102.4501.021.0224.78222.554133.604136.317221158335020036.1943826001372.731403173812219176.5307.47954.480809301.043142987545061295588.8303.7591.51277.71135903.300.5220.566526960.5243.861340.324072.7598801381184529.60412351397.360677361.797292412973.62287296.907623.84258.490409.58593844.84592049225117.98776.21736.2770.148682534425.915135.2559968051944.50.0009657995.3873816.078707203521572963.001215056.11008.514292.5985200.450.6923.9632.7632.311437534.2316261.52631183717.459.8855.303386824.5454.7554.8010229.610123.015528.65351.13346.63343637961.641147.36230851783629368108070.729221092.564269.26427545.664195.6228361.735.357523413026.1728270.2112.7826.6392401085431251985242.5041670.955977155994.02611.89620.7439.6717021666.5222318900.9912.2018.1821495.217.538045680001.148284936441.80131041886197426.5743.48107.790.4462791.6727965623823973662.5143.946663911.1283621311.950.3379976520.251859745283.338.647.775.89535269.747517757336.606.879813820.1026.83788010.68451.4079918.928.8914.406679.7122.3633434.632117.97236994.8386431501.3966026.2811.790650.75051930.257.0929728537832151969177.9257509.961.67771920955.930.5369162123.121.32017.978148.8265685.00226.72212.3344.87226232263.322251.632283.91331439148358.5729507916.5078.0052.6076429.49224.898146.842773706901.4177.68334.99810.996808.616810.1831.51528571024.40347.173437.253.53116105428.6712.8352.06884454.4138.6523.02563.27019245.960.3721166.4225.576.985211.3991245.857.4056055.876334165.634417.3763232.401113.8751593.62213.08629315249.7216.7578.79166.2533589.6914778.6353584.774536.002007.424525.3223.333266138.0903.127254.503880998567190.0416.2988528.061206983.39618.142104095.678103818.1866.82413541.87290141.798248.04.919132.93897124.393965.37587905.692337.6058270.4324815.57221.979108.2241.081.0725.45521.642932.622145.076527137315718909.1012073431168.321179123771228171.8387.68455.612768355.937933262842291294605.5308.3997.23477.34130902.380.5319.969360920.5445.031270.334199.9125541331166546.24868249870.815104374.198302249577.83011306.484643.81756.887489.37500046.20606567122121.70578.86135.3030.145541434549.533138.89410280542004.40.0009611693.1703937.229977195540797969.311217027.71035.372301.5386020.460.7024.4733.1534.731493222.2116300.82461185927.0410.0655.599380625.6955.7455.7810249.510233.115534.97548.86322.8348794101941429.25212054200OpenBenchmarking.org

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: SkeincoinEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P140K280K420K560K700KSE +/- 335.77, N = 3SE +/- 654.80, N = 15SE +/- 9139.66, N = 12SE +/- 12582.41, N = 12SE +/- 5351.86, N = 13SE +/- 9107.04, N = 12SE +/- 6158.44, N = 12SE +/- 2411.69, N = 3SE +/- 2127.92, N = 3SE +/- 4313.15, N = 3SE +/- 3944.26, N = 15SE +/- 981.65, N = 3SE +/- 1328.09, N = 3SE +/- 728.58, N = 3SE +/- 611.43, N = 618240379365629368605852631038470502468227324420317250316350210539151030150287105140636481. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: SkeincoinEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P110K220K330K440K550KMin: 181900 / Avg: 182403.33 / Max: 183040Min: 75330 / Avg: 79365.33 / Max: 84270Min: 533720 / Avg: 629367.5 / Max: 655420Min: 470320 / Avg: 605851.67 / Max: 630960Min: 570830 / Avg: 631038.46 / Max: 649140Min: 373060 / Avg: 470501.67 / Max: 494480Min: 404270 / Avg: 468226.67 / Max: 483970Min: 319750 / Avg: 324420 / Max: 327800Min: 313810 / Avg: 317250 / Max: 321140Min: 310740 / Avg: 316350 / Max: 324830Min: 187570 / Avg: 210538.67 / Max: 242310Min: 149150 / Avg: 151030 / Max: 152460Min: 148430 / Avg: 150286.67 / Max: 152860Min: 103990 / Avg: 105140 / Max: 106490Min: 61790 / Avg: 63648.33 / Max: 661401. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Sysbench

This is a benchmark of Sysbench with CPU and memory sub-tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 2018-07-28Test: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20K40K60K80K100KSE +/- 7.33, N = 5SE +/- 2.64, N = 5SE +/- 117.22, N = 5SE +/- 97.15, N = 5SE +/- 52.65, N = 5SE +/- 54.62, N = 5SE +/- 23.00, N = 5SE +/- 20.39, N = 5SE +/- 7.32, N = 5SE +/- 9.32, N = 5SE +/- 12.89, N = 5SE +/- 9.93, N = 5SE +/- 10.65, N = 5SE +/- 5.53, N = 5SE +/- 1.09, N = 532649.3616323.92108070.73106421.95108892.2281398.2980743.8956066.7955254.6656038.7242008.1127632.1926778.6220091.9613398.101. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm
OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 2018-07-28Test: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20K40K60K80K100KMin: 32621.61 / Avg: 32649.36 / Max: 32663.3Min: 16316.25 / Avg: 16323.92 / Max: 16331.8Min: 107743.97 / Avg: 108070.73 / Max: 108295.29Min: 106076.46 / Avg: 106421.95 / Max: 106643.26Min: 108721.1 / Avg: 108892.22 / Max: 109052.86Min: 81221.7 / Avg: 81398.29 / Max: 81557.28Min: 80668.55 / Avg: 80743.89 / Max: 80805.58Min: 55989.07 / Avg: 56066.79 / Max: 56109.25Min: 55238.42 / Avg: 55254.66 / Max: 55276.83Min: 56017.18 / Avg: 56038.72 / Max: 56065.58Min: 41974.26 / Avg: 42008.11 / Max: 42044Min: 27594.22 / Avg: 27632.19 / Max: 27651.61Min: 26746.06 / Avg: 26778.62 / Max: 26804.47Min: 20074.61 / Avg: 20091.96 / Max: 20104.41Min: 13395.23 / Avg: 13398.1 / Max: 13400.851. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU StressEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5K10K15K20K25KSE +/- 1.01, N = 3SE +/- 4.39, N = 3SE +/- 63.30, N = 3SE +/- 24.42, N = 3SE +/- 45.79, N = 3SE +/- 30.57, N = 3SE +/- 23.53, N = 3SE +/- 39.53, N = 3SE +/- 14.79, N = 3SE +/- 8.85, N = 3SE +/- 3.25, N = 3SE +/- 1.11, N = 3SE +/- 4.51, N = 3SE +/- 5.85, N = 3SE +/- 8.87, N = 36629.063306.3921092.5619926.6120174.7616152.0815958.1611234.6010901.4711121.598541.125622.825463.454078.212708.901. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU StressEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4K8K12K16K20KMin: 6628.02 / Avg: 6629.06 / Max: 6631.08Min: 3301.92 / Avg: 3306.39 / Max: 3315.16Min: 20966.66 / Avg: 21092.56 / Max: 21167.03Min: 19890.13 / Avg: 19926.61 / Max: 19972.98Min: 20083.31 / Avg: 20174.76 / Max: 20224.79Min: 16091.15 / Avg: 16152.08 / Max: 16186.88Min: 15919.41 / Avg: 15958.16 / Max: 16000.66Min: 11159.1 / Avg: 11234.6 / Max: 11292.66Min: 10874.37 / Avg: 10901.47 / Max: 10925.3Min: 11109.84 / Avg: 11121.59 / Max: 11138.93Min: 8535.73 / Avg: 8541.12 / Max: 8546.95Min: 5620.68 / Avg: 5622.82 / Max: 5624.41Min: 5456.78 / Avg: 5463.45 / Max: 5472.04Min: 4067.37 / Avg: 4078.21 / Max: 4087.43Min: 2691.46 / Avg: 2708.9 / Max: 2720.481. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.DEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9001800270036004500SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 12.08, N = 3SE +/- 9.06, N = 3SE +/- 9.36, N = 3SE +/- 8.46, N = 3SE +/- 6.05, N = 3SE +/- 5.63, N = 3SE +/- 1.11, N = 3SE +/- 4.32, N = 3SE +/- 1.11, N = 3SE +/- 3.28, N = 3SE +/- 1.25, N = 3SE +/- 7.84, N = 3SE +/- 0.03, N = 31410.59705.214269.263989.794019.093322.993292.842389.762333.542380.271813.291190.481156.26860.01579.181. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.DEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P7001400210028003500Min: 1410.49 / Avg: 1410.59 / Max: 1410.66Min: 705.16 / Avg: 705.21 / Max: 705.27Min: 4252.23 / Avg: 4269.26 / Max: 4292.63Min: 3972.06 / Avg: 3989.79 / Max: 4001.86Min: 4001.46 / Avg: 4019.09 / Max: 4033.33Min: 3306.1 / Avg: 3322.99 / Max: 3332.1Min: 3282.03 / Avg: 3292.84 / Max: 3302.95Min: 2378.57 / Avg: 2389.76 / Max: 2396.35Min: 2331.32 / Avg: 2333.54 / Max: 2334.74Min: 2371.63 / Avg: 2380.27 / Max: 2384.67Min: 1811.43 / Avg: 1813.29 / Max: 1815.27Min: 1183.95 / Avg: 1190.48 / Max: 1194.25Min: 1153.76 / Avg: 1156.26 / Max: 1157.7Min: 844.33 / Avg: 860.01 / Max: 868.26Min: 579.13 / Avg: 579.18 / Max: 579.231. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Vector MathEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P90K180K270K360K450KSE +/- 3.54, N = 3SE +/- 4.12, N = 3SE +/- 31.36, N = 3SE +/- 205.94, N = 3SE +/- 137.59, N = 3SE +/- 9.93, N = 3SE +/- 189.45, N = 3SE +/- 32.95, N = 3SE +/- 47.74, N = 3SE +/- 66.32, N = 3SE +/- 100.32, N = 3SE +/- 3.30, N = 3SE +/- 4.86, N = 3SE +/- 1.61, N = 3SE +/- 4.97, N = 3142756.3571366.17427545.66397326.45400773.38332363.94328019.70241722.09232455.68237596.66182716.39120826.81117080.8687851.5858583.381. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Vector MathEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P70K140K210K280K350KMin: 142752.27 / Avg: 142756.35 / Max: 142763.4Min: 71360.29 / Avg: 71366.17 / Max: 71374.11Min: 427483.29 / Avg: 427545.66 / Max: 427582.63Min: 397034.87 / Avg: 397326.45 / Max: 397724.18Min: 400607.54 / Avg: 400773.38 / Max: 401046.47Min: 332347.41 / Avg: 332363.94 / Max: 332381.73Min: 327828.63 / Avg: 328019.7 / Max: 328398.59Min: 241657.95 / Avg: 241722.09 / Max: 241767.28Min: 232394.78 / Avg: 232455.68 / Max: 232549.81Min: 237474.07 / Avg: 237596.66 / Max: 237701.83Min: 182574.85 / Avg: 182716.39 / Max: 182910.32Min: 120822.19 / Avg: 120826.81 / Max: 120833.2Min: 117074.78 / Avg: 117080.86 / Max: 117090.47Min: 87849.93 / Avg: 87851.58 / Max: 87854.8Min: 58573.44 / Avg: 58583.38 / Max: 58588.421. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.CEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9001800270036004500SE +/- 0.69, N = 6SE +/- 0.27, N = 4SE +/- 12.44, N = 10SE +/- 13.78, N = 10SE +/- 10.02, N = 10SE +/- 6.29, N = 9SE +/- 9.01, N = 9SE +/- 4.60, N = 8SE +/- 5.08, N = 8SE +/- 3.24, N = 8SE +/- 1.48, N = 7SE +/- 0.45, N = 6SE +/- 0.78, N = 6SE +/- 0.17, N = 5SE +/- 0.60, N = 41407.62704.694195.623908.233967.283310.343252.132375.482318.442368.691806.541191.901155.88867.81578.121. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.CEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P7001400210028003500Min: 1404.92 / Avg: 1407.62 / Max: 1409.44Min: 703.93 / Avg: 704.69 / Max: 705.19Min: 4128.28 / Avg: 4195.62 / Max: 4245.05Min: 3840.34 / Avg: 3908.23 / Max: 3966.84Min: 3922.41 / Avg: 3967.28 / Max: 4021.83Min: 3286.05 / Avg: 3310.34 / Max: 3336.05Min: 3210.93 / Avg: 3252.13 / Max: 3290.8Min: 2361.46 / Avg: 2375.48 / Max: 2394.12Min: 2296.55 / Avg: 2318.44 / Max: 2338.82Min: 2349.79 / Avg: 2368.69 / Max: 2379.63Min: 1799.65 / Avg: 1806.54 / Max: 1812.17Min: 1190.44 / Avg: 1191.9 / Max: 1193.16Min: 1152.71 / Avg: 1155.88 / Max: 1157.48Min: 867.17 / Avg: 867.81 / Max: 868.12Min: 576.33 / Avg: 578.12 / Max: 578.841. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P6K12K18K24K30KSE +/- 43.76, N = 3SE +/- 52.69, N = 3SE +/- 136.51, N = 3SE +/- 28.57, N = 3SE +/- 46.32, N = 3SE +/- 31.70, N = 3SE +/- 55.79, N = 3SE +/- 85.72, N = 3SE +/- 22.86, N = 3SE +/- 42.04, N = 3SE +/- 95.61, N = 3SE +/- 21.57, N = 3SE +/- 50.79, N = 3SE +/- 89.67, N = 3SE +/- 41.71, N = 159663.434953.6528361.7325067.0428284.8722986.0523441.9119922.0116446.8818612.5412800.599316.8010017.986689.954012.40
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5K10K15K20K25KMin: 9575.98 / Avg: 9663.43 / Max: 9710.01Min: 4855.38 / Avg: 4953.65 / Max: 5035.74Min: 28090.37 / Avg: 28361.73 / Max: 28523.46Min: 25025.67 / Avg: 25067.04 / Max: 25121.85Min: 28192.24 / Avg: 28284.87 / Max: 28331.27Min: 22927.38 / Avg: 22986.05 / Max: 23036.21Min: 23347.44 / Avg: 23441.91 / Max: 23540.57Min: 19751.03 / Avg: 19922.01 / Max: 20018.47Min: 16403.4 / Avg: 16446.88 / Max: 16480.83Min: 18567.53 / Avg: 18612.54 / Max: 18696.54Min: 12615.39 / Avg: 12800.59 / Max: 12934.43Min: 9293.16 / Avg: 9316.8 / Max: 9359.87Min: 9917.01 / Avg: 10017.98 / Max: 10078.03Min: 6524.74 / Avg: 6689.95 / Max: 6832.97Min: 3827.5 / Avg: 4012.4 / Max: 4339.41

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: San Miguel - Renderer: Path TracerEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1.20382.40763.61144.81526.019SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 31.480.955.354.924.994.214.173.232.973.032.451.651.591.220.76MIN: 1.47 / MAX: 1.49MIN: 5.26 / MAX: 5.38MIN: 4.88 / MAX: 4.95MIN: 4.95 / MAX: 5.03MIN: 4.1 / MAX: 4.24MIN: 4.12 / MAX: 4.18MIN: 3.19 / MAX: 3.24MIN: 2.93 / MAX: 2.99MIN: 2.99 / MAX: 3.04MIN: 2.43 / MAX: 2.46MIN: 1.63MIN: 1.58 / MAX: 1.6MIN: 1.21MAX: 0.77
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: San Miguel - Renderer: Path TracerEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 1.48 / Avg: 1.48 / Max: 1.48Min: 0.95 / Avg: 0.95 / Max: 0.95Min: 5.35 / Avg: 5.35 / Max: 5.35Min: 4.9 / Avg: 4.92 / Max: 4.93Min: 4.98 / Avg: 4.99 / Max: 5Min: 4.2 / Avg: 4.21 / Max: 4.22Min: 4.17 / Avg: 4.17 / Max: 4.17Min: 3.23 / Avg: 3.23 / Max: 3.23Min: 2.97 / Avg: 2.97 / Max: 2.97Min: 3.03 / Avg: 3.03 / Max: 3.03Min: 2.44 / Avg: 2.45 / Max: 2.45Min: 1.65 / Avg: 1.65 / Max: 1.65Min: 1.59 / Avg: 1.59 / Max: 1.59Min: 1.22 / Avg: 1.22 / Max: 1.22Min: 0.76 / Avg: 0.76 / Max: 0.76

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 1.9.0-jumbo-1Test: BlowfishEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P16K32K48K64K80KSE +/- 3.33, N = 3SE +/- 1.67, N = 3SE +/- 34.82, N = 3SE +/- 15.77, N = 3SE +/- 4.67, N = 3SE +/- 3.00, N = 3SE +/- 72.25, N = 3SE +/- 17.06, N = 3SE +/- 9.84, N = 3SE +/- 8.37, N = 3SE +/- 1.00, N = 3SE +/- 3.06, N = 3SE +/- 12.17, N = 3SE +/- 4.10, N = 3SE +/- 2.31, N = 32634513183752347003373579611195996344596428004370333755223142145416224108251. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2
OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 1.9.0-jumbo-1Test: BlowfishEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P13K26K39K52K65KMin: 26342 / Avg: 26345.33 / Max: 26352Min: 13181 / Avg: 13182.67 / Max: 13186Min: 75187 / Avg: 75234 / Max: 75302Min: 70003 / Avg: 70033.33 / Max: 70056Min: 73574 / Avg: 73578.67 / Max: 73588Min: 61113 / Avg: 61119 / Max: 61122Min: 59835 / Avg: 59963.33 / Max: 60085Min: 44563 / Avg: 44596 / Max: 44620Min: 42782 / Avg: 42799.67 / Max: 42816Min: 43689 / Avg: 43703.33 / Max: 43718Min: 33753 / Avg: 33755 / Max: 33756Min: 22310 / Avg: 22314 / Max: 22320Min: 21436 / Avg: 21453.67 / Max: 21477Min: 16216 / Avg: 16223.67 / Max: 16230Min: 10821 / Avg: 10825 / Max: 108291. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CryptoEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3K6K9K12K15KSE +/- 2.53, N = 3SE +/- 2.21, N = 3SE +/- 14.83, N = 3SE +/- 5.99, N = 3SE +/- 2.16, N = 3SE +/- 11.28, N = 3SE +/- 6.05, N = 3SE +/- 2.98, N = 3SE +/- 2.43, N = 3SE +/- 6.32, N = 3SE +/- 0.38, N = 3SE +/- 5.77, N = 3SE +/- 0.60, N = 3SE +/- 0.42, N = 3SE +/- 0.98, N = 34580.082288.3313026.1711996.5912091.5610112.229992.157720.587105.737273.715847.703871.573760.662821.191878.741. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CryptoEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KMin: 4577.39 / Avg: 4580.08 / Max: 4585.14Min: 2285.96 / Avg: 2288.33 / Max: 2292.74Min: 12998.72 / Avg: 13026.17 / Max: 13049.62Min: 11984.94 / Avg: 11996.59 / Max: 12004.82Min: 12087.35 / Avg: 12091.56 / Max: 12094.48Min: 10089.84 / Avg: 10112.22 / Max: 10125.85Min: 9980.12 / Avg: 9992.15 / Max: 9999.28Min: 7715.51 / Avg: 7720.58 / Max: 7725.83Min: 7100.89 / Avg: 7105.73 / Max: 7108.61Min: 7261.15 / Avg: 7273.71 / Max: 7281.24Min: 5846.99 / Avg: 5847.7 / Max: 5848.3Min: 3864.76 / Avg: 3871.57 / Max: 3883.04Min: 3759.79 / Avg: 3760.66 / Max: 3761.8Min: 2820.54 / Avg: 2821.19 / Max: 2821.99Min: 1877.16 / Avg: 1878.74 / Max: 1880.551. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP32 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P6K12K18K24K30KSE +/- 47.83, N = 3SE +/- 6.26, N = 3SE +/- 87.64, N = 3SE +/- 48.27, N = 3SE +/- 30.27, N = 3SE +/- 4.60, N = 3SE +/- 49.13, N = 3SE +/- 75.75, N = 3SE +/- 24.60, N = 3SE +/- 60.57, N = 3SE +/- 157.24, N = 3SE +/- 6.65, N = 3SE +/- 103.67, N = 4SE +/- 21.76, N = 3SE +/- 46.05, N = 49866.084984.3528270.2125012.9828316.5022996.9323559.1719958.4116443.7318514.6013220.959314.829884.626803.094115.44
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP32 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5K10K15K20K25KMin: 9816.8 / Avg: 9866.08 / Max: 9961.72Min: 4971.91 / Avg: 4984.35 / Max: 4991.8Min: 28157.11 / Avg: 28270.21 / Max: 28442.73Min: 24963.18 / Avg: 25012.98 / Max: 25109.5Min: 28279.61 / Avg: 28316.5 / Max: 28376.51Min: 22990.44 / Avg: 22996.93 / Max: 23005.82Min: 23467.34 / Avg: 23559.17 / Max: 23635.37Min: 19820.95 / Avg: 19958.41 / Max: 20082.32Min: 16395.26 / Avg: 16443.73 / Max: 16475.33Min: 18416.86 / Avg: 18514.6 / Max: 18625.46Min: 12906.62 / Avg: 13220.95 / Max: 13386.45Min: 9307.85 / Avg: 9314.82 / Max: 9328.12Min: 9618.02 / Avg: 9884.62 / Max: 10112.13Min: 6767.48 / Avg: 6803.09 / Max: 6842.56Min: 4003.8 / Avg: 4115.44 / Max: 4226.83

m-queens

A solver for the N-queens problem with multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterm-queens 1.2Time To SolveEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 4SE +/- 0.03, N = 4SE +/- 0.03, N = 4SE +/- 0.01, N = 4SE +/- 0.06, N = 4SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.08, N = 3SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 335.9271.5812.7813.8413.7516.3016.6121.3723.0522.6328.1942.3843.7458.2287.191. (CXX) g++ options: -fopenmp -O2 -march=native
OpenBenchmarking.orgSeconds, Fewer Is Betterm-queens 1.2Time To SolveEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100Min: 35.9 / Avg: 35.92 / Max: 35.93Min: 71.55 / Avg: 71.58 / Max: 71.6Min: 12.73 / Avg: 12.78 / Max: 12.87Min: 13.77 / Avg: 13.84 / Max: 13.89Min: 13.72 / Avg: 13.75 / Max: 13.84Min: 16.26 / Avg: 16.3 / Max: 16.33Min: 16.48 / Avg: 16.61 / Max: 16.78Min: 21.31 / Avg: 21.37 / Max: 21.46Min: 23.01 / Avg: 23.05 / Max: 23.09Min: 22.57 / Avg: 22.63 / Max: 22.73Min: 28.07 / Avg: 28.19 / Max: 28.33Min: 42.35 / Avg: 42.38 / Max: 42.41Min: 43.69 / Avg: 43.74 / Max: 43.81Min: 58.19 / Avg: 58.22 / Max: 58.26Min: 87.19 / Avg: 87.19 / Max: 87.21. (CXX) g++ options: -fopenmp -O2 -march=native

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgHydro Cycle Time - Seconds, Fewer Is BetterPennant 1.0.1Test: leblancbigEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1020304050SE +/- 0.007758, N = 4SE +/- 0.037928, N = 3SE +/- 0.035543, N = 6SE +/- 0.058701, N = 6SE +/- 0.054901, N = 6SE +/- 0.071740, N = 5SE +/- 0.095577, N = 5SE +/- 0.043495, N = 4SE +/- 0.014397, N = 5SE +/- 0.026648, N = 4SE +/- 0.019214, N = 3SE +/- 0.036573, N = 3SE +/- 0.019691, N = 3SE +/- 0.292336, N = 3SE +/- 0.039126, N = 316.13929034.5800306.6392407.0339826.9953218.1766028.98853814.16482010.88576014.78157016.65375021.95188028.99392033.80852045.1546001. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi
OpenBenchmarking.orgHydro Cycle Time - Seconds, Fewer Is BetterPennant 1.0.1Test: leblancbigEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P918273645Min: 16.12 / Avg: 16.14 / Max: 16.16Min: 34.52 / Avg: 34.58 / Max: 34.65Min: 6.51 / Avg: 6.64 / Max: 6.75Min: 6.86 / Avg: 7.03 / Max: 7.24Min: 6.85 / Avg: 7 / Max: 7.2Min: 7.98 / Avg: 8.18 / Max: 8.41Min: 8.8 / Avg: 8.99 / Max: 9.33Min: 14.12 / Avg: 14.16 / Max: 14.3Min: 10.85 / Avg: 10.89 / Max: 10.92Min: 14.73 / Avg: 14.78 / Max: 14.84Min: 16.63 / Avg: 16.65 / Max: 16.69Min: 21.9 / Avg: 21.95 / Max: 22.02Min: 28.96 / Avg: 28.99 / Max: 29.03Min: 33.47 / Avg: 33.81 / Max: 34.39Min: 45.1 / Avg: 45.15 / Max: 45.231. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterStockfish 12Total TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20M40M60M80M100MSE +/- 416432.88, N = 3SE +/- 79798.13, N = 3SE +/- 935189.20, N = 15SE +/- 597478.91, N = 3SE +/- 1212155.43, N = 4SE +/- 261273.92, N = 3SE +/- 82765.80, N = 3SE +/- 88880.09, N = 3SE +/- 708563.50, N = 4SE +/- 436034.06, N = 15SE +/- 304616.08, N = 3SE +/- 332801.18, N = 6SE +/- 236966.05, N = 10SE +/- 71263.56, N = 3SE +/- 150250.14, N = 3390434101919119910854312510090845398847469835681668239713264152929583840266086461648804732329737493124613724314443160349941. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver
OpenBenchmarking.orgNodes Per Second, More Is BetterStockfish 12Total TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20M40M60M80M100MMin: 38530677 / Avg: 39043410.33 / Max: 39868176Min: 19064701 / Avg: 19191199.33 / Max: 19338721Min: 103465683 / Avg: 108543124.73 / Max: 118339568Min: 99825755 / Avg: 100908453 / Max: 101887714Min: 95322228 / Avg: 98847468.5 / Max: 100858323Min: 83220250 / Avg: 83568166.33 / Max: 84079775Min: 82260216 / Avg: 82397132 / Max: 82546157Min: 63992140 / Avg: 64152928.67 / Max: 64298968Min: 56735799 / Avg: 58384025.75 / Max: 60090226Min: 57227030 / Avg: 60864616.13 / Max: 63501145Min: 48420286 / Avg: 48804732.33 / Max: 49406252Min: 32141990 / Avg: 32973748.83 / Max: 34537511Min: 30342075 / Avg: 31246136.8 / Max: 32525396Min: 24195092 / Avg: 24314443.33 / Max: 24441587Min: 15735361 / Avg: 16034994 / Max: 162045681. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

Coremark

This is a test of EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P400K800K1200K1600K2000KSE +/- 1470.27, N = 3SE +/- 818.34, N = 3SE +/- 1593.22, N = 3SE +/- 1046.01, N = 3SE +/- 473.83, N = 3SE +/- 8320.13, N = 3SE +/- 10873.39, N = 3SE +/- 1150.50, N = 3SE +/- 707.15, N = 3SE +/- 609.31, N = 3SE +/- 1542.09, N = 3SE +/- 1815.81, N = 3SE +/- 538.09, N = 3SE +/- 270.65, N = 3SE +/- 102.44, N = 3716393.88356570.411985242.501845888.801867023.461550727.151530338.791203386.721123555.301147733.95902070.15603381.33586659.10440879.64293387.401. (CC) gcc options: -O2 -lrt" -lrt
OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P300K600K900K1200K1500KMin: 713469.52 / Avg: 716393.88 / Max: 718122.78Min: 355244.72 / Avg: 356570.41 / Max: 358064.49Min: 1982728.58 / Avg: 1985242.5 / Max: 1988195.09Min: 1844712.66 / Avg: 1845888.8 / Max: 1847975.17Min: 1866161.25 / Avg: 1867023.46 / Max: 1867795.13Min: 1537352.87 / Avg: 1550727.15 / Max: 1565988.68Min: 1517351.75 / Avg: 1530338.79 / Max: 1551938.41Min: 1201201.2 / Avg: 1203386.72 / Max: 1205102.86Min: 1122462.4 / Avg: 1123555.3 / Max: 1124879.16Min: 1146516.78 / Avg: 1147733.95 / Max: 1148394.04Min: 899666.14 / Avg: 902070.15 / Max: 904945.39Min: 599995.31 / Avg: 603381.33 / Max: 606211.3Min: 585752.04 / Avg: 586659.1 / Max: 587614.19Min: 440356.87 / Avg: 440879.64 / Max: 441262.65Min: 293258.49 / Avg: 293387.4 / Max: 293589.771. (CC) gcc options: -O2 -lrt" -lrt

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.002179, N = 7SE +/- 0.007190, N = 7SE +/- 0.002278, N = 7SE +/- 0.002719, N = 7SE +/- 0.002146, N = 7SE +/- 0.002747, N = 7SE +/- 0.003838, N = 7SE +/- 0.011195, N = 7SE +/- 0.001701, N = 7SE +/- 0.008223, N = 7SE +/- 0.005689, N = 7SE +/- 0.006442, N = 7SE +/- 0.008434, N = 7SE +/- 0.005350, N = 7SE +/- 0.004801, N = 72.0018304.3013000.9559771.0784400.9937321.1080501.7196703.4617401.4837403.4623603.5434703.7064005.8645805.9534806.376680MIN: 1.91MIN: 4.22MIN: 1.63MIN: 3.38MIN: 1.28MIN: 3.38MIN: 3.47MIN: 3.6MIN: 5.63MIN: 5.82MIN: 6.271. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 1.99 / Avg: 2 / Max: 2.01Min: 4.26 / Avg: 4.3 / Max: 4.32Min: 0.95 / Avg: 0.96 / Max: 0.96Min: 1.07 / Avg: 1.08 / Max: 1.09Min: 0.99 / Avg: 0.99 / Max: 1Min: 1.1 / Avg: 1.11 / Max: 1.12Min: 1.7 / Avg: 1.72 / Max: 1.73Min: 3.44 / Avg: 3.46 / Max: 3.53Min: 1.47 / Avg: 1.48 / Max: 1.49Min: 3.43 / Avg: 3.46 / Max: 3.5Min: 3.52 / Avg: 3.54 / Max: 3.56Min: 3.67 / Avg: 3.71 / Max: 3.73Min: 5.83 / Avg: 5.86 / Max: 5.9Min: 5.94 / Avg: 5.95 / Max: 5.98Min: 6.36 / Avg: 6.38 / Max: 6.41. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgk/s, More Is BetterAircrack-ng 1.5.2EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30K60K90K120K150KSE +/- 28.27, N = 3SE +/- 28.30, N = 3SE +/- 39.30, N = 3SE +/- 31.81, N = 3SE +/- 32.87, N = 3SE +/- 45.55, N = 3SE +/- 87.07, N = 3SE +/- 28.55, N = 3SE +/- 7.45, N = 3SE +/- 15.49, N = 3SE +/- 16.32, N = 3SE +/- 34.38, N = 3SE +/- 30.64, N = 3SE +/- 28.65, N = 3SE +/- 6.91, N = 356973.8528528.34155994.03143169.12143292.79120520.68119683.6594237.6585303.3587391.7971372.7948284.8346017.2035086.9523406.401. (CXX) g++ options: -O3 -fvisibility=hidden -masm=intel -fcommon -rdynamic -lpthread -lz -lcrypto -lhwloc -ldl -lm -pthread
OpenBenchmarking.orgk/s, More Is BetterAircrack-ng 1.5.2EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30K60K90K120K150KMin: 56938.61 / Avg: 56973.85 / Max: 57029.76Min: 28484.88 / Avg: 28528.34 / Max: 28581.47Min: 155919.59 / Avg: 155994.03 / Max: 156053.11Min: 143136.28 / Avg: 143169.12 / Max: 143232.73Min: 143233.45 / Avg: 143292.79 / Max: 143346.95Min: 120433.36 / Avg: 120520.68 / Max: 120586.84Min: 119543.75 / Avg: 119683.65 / Max: 119843.41Min: 94189.9 / Avg: 94237.65 / Max: 94288.64Min: 85294.1 / Avg: 85303.35 / Max: 85318.1Min: 87370.61 / Avg: 87391.79 / Max: 87421.95Min: 71350.26 / Avg: 71372.79 / Max: 71404.51Min: 48218.5 / Avg: 48284.83 / Max: 48333.69Min: 45958.39 / Avg: 46017.2 / Max: 46061.5Min: 35057.87 / Avg: 35086.95 / Max: 35144.26Min: 23398.23 / Avg: 23406.4 / Max: 23420.131. (CXX) g++ options: -O3 -fvisibility=hidden -masm=intel -fcommon -rdynamic -lpthread -lz -lcrypto -lhwloc -ldl -lm -pthread

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterC-Ray 1.1Total Time - 4K, 16 Rays Per PixelEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 4SE +/- 0.01, N = 4SE +/- 0.02, N = 4SE +/- 0.01, N = 4SE +/- 0.02, N = 4SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 332.5864.9711.9012.8512.8215.0415.2119.5521.4320.9125.6838.4939.6652.8179.171. (CC) gcc options: -lm -lpthread -O3
OpenBenchmarking.orgSeconds, Fewer Is BetterC-Ray 1.1Total Time - 4K, 16 Rays Per PixelEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1530456075Min: 32.55 / Avg: 32.58 / Max: 32.61Min: 64.93 / Avg: 64.97 / Max: 64.99Min: 11.82 / Avg: 11.9 / Max: 11.96Min: 12.83 / Avg: 12.85 / Max: 12.87Min: 12.79 / Avg: 12.81 / Max: 12.87Min: 15.01 / Avg: 15.04 / Max: 15.06Min: 15.17 / Avg: 15.21 / Max: 15.24Min: 19.49 / Avg: 19.55 / Max: 19.6Min: 21.4 / Avg: 21.43 / Max: 21.45Min: 20.89 / Avg: 20.91 / Max: 20.94Min: 25.67 / Avg: 25.68 / Max: 25.7Min: 38.48 / Avg: 38.49 / Max: 38.51Min: 39.61 / Avg: 39.66 / Max: 39.7Min: 52.8 / Avg: 52.81 / Max: 52.82Min: 79.13 / Avg: 79.17 / Max: 79.221. (CC) gcc options: -lm -lpthread -O3

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P510152025SE +/- 0.013, N = 3SE +/- 0.002, N = 3SE +/- 0.050, N = 3SE +/- 0.043, N = 3SE +/- 0.063, N = 3SE +/- 0.035, N = 3SE +/- 0.038, N = 3SE +/- 0.008, N = 3SE +/- 0.016, N = 3SE +/- 0.031, N = 3SE +/- 0.014, N = 3SE +/- 0.006, N = 3SE +/- 0.005, N = 3SE +/- 0.010, N = 3SE +/- 0.002, N = 37.6843.88320.74319.09519.06216.23015.98412.73811.53111.8549.6306.5756.2144.8553.117
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P510152025Min: 7.67 / Avg: 7.68 / Max: 7.71Min: 3.88 / Avg: 3.88 / Max: 3.89Min: 20.64 / Avg: 20.74 / Max: 20.8Min: 19.02 / Avg: 19.1 / Max: 19.16Min: 18.94 / Avg: 19.06 / Max: 19.14Min: 16.18 / Avg: 16.23 / Max: 16.3Min: 15.93 / Avg: 15.98 / Max: 16.06Min: 12.72 / Avg: 12.74 / Max: 12.75Min: 11.52 / Avg: 11.53 / Max: 11.56Min: 11.8 / Avg: 11.85 / Max: 11.9Min: 9.6 / Avg: 9.63 / Max: 9.65Min: 6.57 / Avg: 6.58 / Max: 6.59Min: 6.21 / Avg: 6.21 / Max: 6.22Min: 4.84 / Avg: 4.85 / Max: 4.87Min: 3.11 / Avg: 3.12 / Max: 3.12

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215SE +/- 0.004, N = 3SE +/- 0.002, N = 3SE +/- 0.010, N = 3SE +/- 0.006, N = 3SE +/- 0.013, N = 3SE +/- 0.020, N = 3SE +/- 0.007, N = 3SE +/- 0.002, N = 3SE +/- 0.008, N = 3SE +/- 0.006, N = 3SE +/- 0.009, N = 3SE +/- 0.006, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.002, N = 33.4611.8089.6718.8118.8987.5967.5555.9505.4155.4774.4943.1292.9322.2891.456
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 3.46 / Avg: 3.46 / Max: 3.47Min: 1.8 / Avg: 1.81 / Max: 1.81Min: 9.66 / Avg: 9.67 / Max: 9.69Min: 8.8 / Avg: 8.81 / Max: 8.82Min: 8.88 / Avg: 8.9 / Max: 8.93Min: 7.57 / Avg: 7.6 / Max: 7.64Min: 7.54 / Avg: 7.56 / Max: 7.57Min: 5.95 / Avg: 5.95 / Max: 5.96Min: 5.4 / Avg: 5.42 / Max: 5.43Min: 5.47 / Avg: 5.48 / Max: 5.49Min: 4.48 / Avg: 4.49 / Max: 4.51Min: 3.12 / Avg: 3.13 / Max: 3.14Min: 2.93 / Avg: 2.93 / Max: 2.93Min: 2.29 / Avg: 2.29 / Max: 2.29Min: 1.45 / Avg: 1.46 / Max: 1.46

BRL-CAD

BRL-CAD 7.28.0 is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgVGR Performance Metric, More Is BetterBRL-CAD 7.30.8VGR Performance MetricEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P150K300K450K600K750K2325071206227021666600796526025417755405584120763786603877923176612190552081501553471060731. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: XFrog Forest - Renderer: Path TracerEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 32.341.276.525.926.005.155.104.043.703.753.072.151.981.550.99MIN: 2.29 / MAX: 2.38MIN: 1.26 / MAX: 1.28MIN: 6.45 / MAX: 6.58MIN: 5.85 / MAX: 5.99MIN: 5.92 / MAX: 6.06MIN: 5.08 / MAX: 5.21MIN: 5.05 / MAX: 5.15MIN: 3.98 / MAX: 4.07MIN: 3.64 / MAX: 3.73MIN: 3.72 / MAX: 3.77MIN: 3.02 / MAX: 3.11MIN: 2.13 / MAX: 2.16MIN: 1.96 / MAX: 1.99MIN: 1.54 / MAX: 1.57MAX: 1
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: XFrog Forest - Renderer: Path TracerEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 2.33 / Avg: 2.34 / Max: 2.35Min: 1.27 / Avg: 1.27 / Max: 1.27Min: 6.49 / Avg: 6.52 / Max: 6.54Min: 5.92 / Avg: 5.92 / Max: 5.92Min: 5.99 / Avg: 6 / Max: 6.02Min: 5.13 / Avg: 5.15 / Max: 5.15Min: 5.1 / Avg: 5.1 / Max: 5.1Min: 4.03 / Avg: 4.04 / Max: 4.05Min: 3.7 / Avg: 3.7 / Max: 3.7Min: 3.75 / Avg: 3.75 / Max: 3.75Min: 3.06 / Avg: 3.07 / Max: 3.08Min: 2.15 / Avg: 2.15 / Max: 2.15Min: 1.98 / Avg: 1.98 / Max: 1.98Min: 1.55 / Avg: 1.55 / Max: 1.55Min: 0.99 / Avg: 0.99 / Max: 0.99

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Context SwitchingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5M10M15M20M25MSE +/- 30731.73, N = 3SE +/- 6120.44, N = 3SE +/- 206001.50, N = 7SE +/- 235328.27, N = 3SE +/- 163606.09, N = 3SE +/- 79112.63, N = 3SE +/- 29075.41, N = 3SE +/- 65922.96, N = 3SE +/- 55451.20, N = 3SE +/- 19374.10, N = 3SE +/- 28222.94, N = 3SE +/- 35625.01, N = 3SE +/- 32954.83, N = 3SE +/- 15626.15, N = 3SE +/- 9437.39, N = 38225794.404265815.0322318900.9920917619.7620981193.8817595501.8117727598.8413858394.6112627261.9312896969.6410443329.127003110.186612368.675095560.163438510.351. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Context SwitchingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4M8M12M16M20MMin: 8193174.36 / Avg: 8225794.4 / Max: 8287218.36Min: 4253702.33 / Avg: 4265815.03 / Max: 4273401.47Min: 21674928.96 / Avg: 22318900.99 / Max: 22997780.77Min: 20467620.04 / Avg: 20917619.76 / Max: 21262048.98Min: 20708784.06 / Avg: 20981193.88 / Max: 21274387.73Min: 17508090.68 / Avg: 17595501.81 / Max: 17753426.01Min: 17694612.55 / Avg: 17727598.84 / Max: 17785565.68Min: 13729117.41 / Avg: 13858394.61 / Max: 13945462.29Min: 12516504.31 / Avg: 12627261.93 / Max: 12687546.78Min: 12861868.91 / Avg: 12896969.64 / Max: 12928733.42Min: 10405966.38 / Avg: 10443329.12 / Max: 10498652.3Min: 6942183.09 / Avg: 7003110.18 / Max: 7065563.44Min: 6549932.25 / Avg: 6612368.67 / Max: 6661871.66Min: 5064455.01 / Avg: 5095560.16 / Max: 5113736.06Min: 3425266.43 / Avg: 3438510.35 / Max: 3456778.841. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: XFrog Forest - Renderer: SciVisEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.04, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 34.452.3912.2011.0711.249.589.527.586.907.015.764.023.722.931.89MIN: 4.31 / MAX: 4.52MIN: 2.37 / MAX: 2.42MIN: 11.24 / MAX: 12.35MIN: 10.75 / MAX: 11.24MIN: 10.99 / MAX: 11.36MIN: 9.17 / MAX: 9.71MIN: 9.35 / MAX: 9.62MIN: 7.46 / MAX: 7.63MIN: 6.8 / MAX: 6.99MIN: 6.94 / MAX: 7.09MIN: 5.68 / MAX: 5.81MIN: 3.98 / MAX: 4.05MIN: 3.69 / MAX: 3.76MIN: 2.9 / MAX: 2.97MIN: 1.84 / MAX: 1.91
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: XFrog Forest - Renderer: SciVisEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P48121620Min: 4.44 / Avg: 4.45 / Max: 4.46Min: 2.39 / Avg: 2.39 / Max: 2.39Min: 12.2 / Avg: 12.2 / Max: 12.2Min: 10.99 / Avg: 11.07 / Max: 11.11Min: 11.24 / Avg: 11.24 / Max: 11.24Min: 9.52 / Avg: 9.58 / Max: 9.62Min: 9.52 / Avg: 9.52 / Max: 9.52Min: 7.58 / Avg: 7.58 / Max: 7.58Min: 6.9 / Avg: 6.9 / Max: 6.9Min: 6.99 / Avg: 7.01 / Max: 7.04Min: 5.75 / Avg: 5.76 / Max: 5.78Min: 4.02 / Avg: 4.02 / Max: 4.02Min: 3.72 / Avg: 3.72 / Max: 3.72Min: 2.93 / Avg: 2.93 / Max: 2.93Min: 1.89 / Avg: 1.89 / Max: 1.89

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: NASA Streamlines - Renderer: Path TracerEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P48121620SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.04, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 36.833.6518.1816.6716.9514.4914.2911.3610.4210.608.776.075.654.392.82MIN: 6.67 / MAX: 6.94MIN: 3.58 / MAX: 3.69MIN: 16.39 / MAX: 18.87MIN: 16.39 / MAX: 16.95MIN: 16.39 / MAX: 17.24MIN: 14.08 / MAX: 14.71MIN: 13.89 / MAX: 14.49MIN: 11.11 / MAX: 11.49MIN: 10.2 / MAX: 10.53MIN: 10.42 / MAX: 10.75MIN: 8.62 / MAX: 8.85MIN: 5.99 / MAX: 6.13MIN: 5.59 / MAX: 5.75MIN: 4.33 / MAX: 4.44MIN: 2.79 / MAX: 2.86
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: NASA Streamlines - Renderer: Path TracerEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P510152025Min: 6.8 / Avg: 6.83 / Max: 6.85Min: 3.64 / Avg: 3.65 / Max: 3.65Min: 18.18 / Avg: 18.18 / Max: 18.18Min: 16.67 / Avg: 16.67 / Max: 16.67Min: 16.95 / Avg: 16.95 / Max: 16.95Min: 14.49 / Avg: 14.49 / Max: 14.49Min: 14.29 / Avg: 14.29 / Max: 14.29Min: 11.36 / Avg: 11.36 / Max: 11.36Min: 10.42 / Avg: 10.42 / Max: 10.42Min: 10.53 / Avg: 10.6 / Max: 10.64Min: 8.77 / Avg: 8.77 / Max: 8.77Min: 6.06 / Avg: 6.07 / Max: 6.1Min: 5.65 / Avg: 5.65 / Max: 5.65Min: 4.39 / Avg: 4.39 / Max: 4.39Min: 2.82 / Avg: 2.82 / Max: 2.83

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and Convolutional Resamping Benchmark (tConvolve) as well as some previous ASKAP benchmarks being included as well for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - DegriddingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5K10K15K20K25KSE +/- 118.03, N = 3SE +/- 74.25, N = 4SE +/- 145.73, N = 3SE +/- 112.07, N = 3SE +/- 121.22, N = 3SE +/- 197.00, N = 3SE +/- 61.33, N = 3SE +/- 162.55, N = 3SE +/- 215.40, N = 3SE +/- 137.10, N = 3SE +/- 63.07, N = 3SE +/- 68.60, N = 3SE +/- 27.68, N = 3SE +/- 48.79, N = 9SE +/- 28.76, N = 313692.306907.5421495.2020185.5020993.1021472.3016990.1011976.9018311.7012067.3012173.7010427.206587.576100.183350.221. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - DegriddingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4K8K12K16K20KMin: 13456.2 / Avg: 13692.27 / Max: 13810.3Min: 6728.09 / Avg: 6907.54 / Max: 7091.77Min: 21203.7 / Avg: 21495.17 / Max: 21640.9Min: 19992 / Avg: 20185.5 / Max: 20380.2Min: 20783.8 / Avg: 20993.07 / Max: 21203.7Min: 21275.3 / Avg: 21472.3 / Max: 21866.3Min: 16928.8 / Avg: 16990.13 / Max: 17112.8Min: 11662 / Avg: 11976.87 / Max: 12204.4Min: 18096.3 / Avg: 18311.7 / Max: 18742.5Min: 11793.1 / Avg: 12067.3 / Max: 12204.4Min: 12110.6 / Avg: 12173.67 / Max: 12299.8Min: 10290 / Avg: 10427.2 / Max: 10495.8Min: 6559.89 / Avg: 6587.57 / Max: 6642.93Min: 5788.14 / Avg: 6100.18 / Max: 6247.52Min: 3321.46 / Avg: 3350.22 / Max: 3407.741. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99b6Total TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150SE +/- 0.05, N = 3SE +/- 0.14, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.07, N = 3SE +/- 0.03, N = 3SE +/- 0.27, N = 3SE +/- 0.04, N = 3SE +/- 0.26, N = 346.5992.1617.5419.1219.0822.5622.8128.5931.5730.9037.7354.8458.4775.04112.391. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99b6Total TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100Min: 46.49 / Avg: 46.59 / Max: 46.65Min: 91.95 / Avg: 92.16 / Max: 92.42Min: 17.52 / Avg: 17.54 / Max: 17.56Min: 19.11 / Avg: 19.12 / Max: 19.14Min: 19.06 / Avg: 19.08 / Max: 19.12Min: 22.52 / Avg: 22.56 / Max: 22.63Min: 22.76 / Avg: 22.81 / Max: 22.84Min: 28.5 / Avg: 28.59 / Max: 28.69Min: 31.53 / Avg: 31.57 / Max: 31.59Min: 30.86 / Avg: 30.9 / Max: 30.92Min: 37.62 / Avg: 37.73 / Max: 37.87Min: 54.8 / Avg: 54.84 / Max: 54.9Min: 58.2 / Avg: 58.47 / Max: 59Min: 74.96 / Avg: 75.04 / Max: 75.08Min: 112.07 / Avg: 112.39 / Max: 112.91. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 1.9.0-jumbo-1Test: MD5EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1000K2000K3000K4000K5000KSE +/- 1763.83, N = 3SE +/- 525.27, N = 3SE +/- 1527.53, N = 3SE +/- 3282.95, N = 3SE +/- 881.92, N = 3SE +/- 881.92, N = 3SE +/- 2905.93, N = 3SE +/- 1527.53, N = 3SE +/- 2185.81, N = 3SE +/- 1452.97, N = 3SE +/- 2603.42, N = 3SE +/- 2000.00, N = 3SE +/- 881.92, N = 3SE +/- 798.79, N = 317316678747004568000418300041953333557667352866727986672534000259633321273331460333135400010723337171751. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2
OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 1.9.0-jumbo-1Test: MD5EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P800K1600K2400K3200K4000KMin: 1729000 / Avg: 1731666.67 / Max: 1735000Min: 873984 / Avg: 874700.33 / Max: 875724Min: 4566000 / Avg: 4568000 / Max: 4571000Min: 4189000 / Avg: 4195333.33 / Max: 4200000Min: 3556000 / Avg: 3557666.67 / Max: 3559000Min: 3527000 / Avg: 3528666.67 / Max: 3530000Min: 2794000 / Avg: 2798666.67 / Max: 2804000Min: 2531000 / Avg: 2534000 / Max: 2536000Min: 2592000 / Avg: 2596333.33 / Max: 2599000Min: 2125000 / Avg: 2127333.33 / Max: 2130000Min: 1456000 / Avg: 1460333.33 / Max: 1465000Min: 1352000 / Avg: 1354000 / Max: 1358000Min: 1071000 / Avg: 1072333.33 / Max: 1074000Min: 716134 / Avg: 717175 / Max: 7187451. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.01324, N = 5SE +/- 0.00869, N = 5SE +/- 0.00213, N = 5SE +/- 0.00630, N = 5SE +/- 0.01221, N = 5SE +/- 0.01041, N = 5SE +/- 0.00795, N = 5SE +/- 0.00208, N = 5SE +/- 0.00475, N = 5SE +/- 0.00256, N = 5SE +/- 0.01609, N = 5SE +/- 0.00471, N = 5SE +/- 0.00189, N = 5SE +/- 0.00853, N = 5SE +/- 0.01715, N = 52.778266.682091.148281.159431.125021.220112.286112.958301.251542.962834.133823.763954.472874.975127.15030MIN: 2.68MIN: 6.52MIN: 2.2MIN: 2.88MIN: 1.2MIN: 2.89MIN: 3.91MIN: 3.68MIN: 4.3MIN: 4.75MIN: 6.451. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 2.75 / Avg: 2.78 / Max: 2.83Min: 6.66 / Avg: 6.68 / Max: 6.71Min: 1.14 / Avg: 1.15 / Max: 1.16Min: 1.14 / Avg: 1.16 / Max: 1.17Min: 1.1 / Avg: 1.13 / Max: 1.17Min: 1.19 / Avg: 1.22 / Max: 1.25Min: 2.26 / Avg: 2.29 / Max: 2.3Min: 2.95 / Avg: 2.96 / Max: 2.97Min: 1.24 / Avg: 1.25 / Max: 1.26Min: 2.96 / Avg: 2.96 / Max: 2.97Min: 4.08 / Avg: 4.13 / Max: 4.18Min: 3.75 / Avg: 3.76 / Max: 3.78Min: 4.47 / Avg: 4.47 / Max: 4.48Min: 4.94 / Avg: 4.98 / Max: 4.99Min: 7.11 / Avg: 7.15 / Max: 7.191. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgvsamples, More Is BetterChaos Group V-RAY 5Mode: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7282EPYC 7272EPYC 7232P11K22K33K44K55KSE +/- 165.84, N = 3SE +/- 117.08, N = 4SE +/- 614.94, N = 3SE +/- 469.86, N = 3SE +/- 392.90, N = 8SE +/- 148.55, N = 3SE +/- 284.86, N = 11SE +/- 305.20, N = 6SE +/- 144.16, N = 3SE +/- 298.29, N = 3SE +/- 120.35, N = 3SE +/- 140.41, N = 4SE +/- 50.21, N = 31952110044493644529244658392153784330632284012844815246121987864
OpenBenchmarking.orgvsamples, More Is BetterChaos Group V-RAY 5Mode: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7282EPYC 7272EPYC 7232P9K18K27K36K45KMin: 19190 / Avg: 19521 / Max: 19705Min: 9719 / Avg: 10044.25 / Max: 10235Min: 48563 / Avg: 49364.33 / Max: 50573Min: 44695 / Avg: 45292 / Max: 46219Min: 43124 / Avg: 44658.38 / Max: 46027Min: 39060 / Avg: 39215 / Max: 39512Min: 36221 / Avg: 37842.73 / Max: 38879Min: 29652 / Avg: 30632 / Max: 31469Min: 28141 / Avg: 28400.67 / Max: 28639Min: 28095 / Avg: 28448 / Max: 29041Min: 15016 / Avg: 15246.33 / Max: 15422Min: 11788 / Avg: 12198.25 / Max: 12425Min: 7769 / Avg: 7863.67 / Max: 7940

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ExhaustiveEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300SE +/- 0.16, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.07, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.12, N = 3109.17214.4441.8045.8045.8454.0454.3467.9575.2773.3889.07129.26137.78176.04262.141. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ExhaustiveEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50100150200250Min: 108.86 / Avg: 109.17 / Max: 109.35Min: 214.41 / Avg: 214.44 / Max: 214.47Min: 41.78 / Avg: 41.8 / Max: 41.81Min: 45.79 / Avg: 45.8 / Max: 45.82Min: 45.83 / Avg: 45.84 / Max: 45.85Min: 53.96 / Avg: 54.04 / Max: 54.19Min: 54.32 / Avg: 54.34 / Max: 54.35Min: 67.93 / Avg: 67.95 / Max: 67.96Min: 75.26 / Avg: 75.27 / Max: 75.27Min: 73.37 / Avg: 73.38 / Max: 73.4Min: 89.06 / Avg: 89.07 / Max: 89.08Min: 129.21 / Avg: 129.26 / Max: 129.36Min: 137.74 / Avg: 137.78 / Max: 137.82Min: 176.03 / Avg: 176.04 / Max: 176.05Min: 262.01 / Avg: 262.14 / Max: 262.391. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30M60M90M120M150MSE +/- 491333.28, N = 3SE +/- 83002.90, N = 3SE +/- 113439.47, N = 3SE +/- 688813.01, N = 3SE +/- 1202983.30, N = 3SE +/- 355704.27, N = 3SE +/- 373542.05, N = 3SE +/- 394553.31, N = 3SE +/- 939330.26, N = 3SE +/- 375595.74, N = 3SE +/- 395352.82, N = 3SE +/- 230890.74, N = 3SE +/- 215295.19, N = 3SE +/- 91373.33, N = 3SE +/- 57670.67, N = 346342665246757531310418861228494611262900771039327851053045788271626274422699780846846258177542112219415838443179181321027795
OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20M40M60M80M100MMin: 45648904 / Avg: 46342664.67 / Max: 47292245Min: 24510321 / Avg: 24675752.67 / Max: 24770415Min: 130816472 / Avg: 131041886 / Max: 131176885Min: 122057435 / Avg: 122849461.33 / Max: 124221646Min: 124900585 / Avg: 126290076.67 / Max: 128685849Min: 103309473 / Avg: 103932784.67 / Max: 104541406Min: 104841596 / Avg: 105304577.67 / Max: 106043845Min: 82311763 / Avg: 82716261.67 / Max: 83505284Min: 72577275 / Avg: 74422699 / Max: 75650094Min: 77335639 / Avg: 78084684 / Max: 78508351Min: 61945670 / Avg: 62581775.33 / Max: 63306570Min: 41683109 / Avg: 42112219 / Max: 42474524Min: 41298645 / Avg: 41583844.33 / Max: 42005822Min: 31610049 / Avg: 31791813.33 / Max: 31899083Min: 20912491 / Avg: 21027794.67 / Max: 21087999

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Matrix MathEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P40K80K120K160K200KSE +/- 417.25, N = 3SE +/- 382.53, N = 3SE +/- 602.57, N = 3SE +/- 558.83, N = 3SE +/- 1795.92, N = 3SE +/- 217.28, N = 3SE +/- 457.19, N = 3SE +/- 157.32, N = 3SE +/- 270.71, N = 3SE +/- 1288.21, N = 3SE +/- 377.08, N = 3SE +/- 123.36, N = 3SE +/- 294.16, N = 3SE +/- 183.06, N = 3SE +/- 186.09, N = 376923.3938378.08197426.57179226.24179496.78127853.09152001.95122762.96110766.79112205.0192207.0763930.7259431.0346780.9431690.601. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Matrix MathEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30K60K90K120K150KMin: 76194.59 / Avg: 76923.39 / Max: 77639.82Min: 37633.18 / Avg: 38378.08 / Max: 38901.65Min: 196236.3 / Avg: 197426.57 / Max: 198185.13Min: 178344.47 / Avg: 179226.24 / Max: 180261.88Min: 175929.7 / Avg: 179496.78 / Max: 181644.9Min: 127425.29 / Avg: 127853.09 / Max: 128133.12Min: 151506.09 / Avg: 152001.95 / Max: 152915.2Min: 122497.64 / Avg: 122762.96 / Max: 123042.09Min: 110323.78 / Avg: 110766.79 / Max: 111257.86Min: 109640.44 / Avg: 112205.01 / Max: 113701Min: 91633.53 / Avg: 92207.07 / Max: 92917.94Min: 63684.02 / Avg: 63930.72 / Max: 64057.16Min: 59024.95 / Avg: 59431.03 / Max: 60002.73Min: 46417.91 / Avg: 46780.94 / Max: 47003.53Min: 31367.04 / Avg: 31690.6 / Max: 32011.651. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: Magnetic Reconnection - Renderer: SciVisEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1020304050SE +/- 0.06, N = 3SE +/- 0.08, N = 6SE +/- 0.00, N = 7SE +/- 0.00, N = 6SE +/- 0.00, N = 6SE +/- 0.00, N = 5SE +/- 0.17, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 3SE +/- 0.00, N = 313.458.1643.4840.0040.0034.4834.4827.7825.0026.0520.8314.0813.8910.426.99MIN: 12.82 / MAX: 13.89MIN: 6.85 / MAX: 8.4MIN: 35.71 / MAX: 45.45MIN: 34.48 / MAX: 41.67MIN: 38.46 / MAX: 41.67MIN: 33.33 / MAX: 35.71MIN: 33.33 / MAX: 35.71MIN: 27.03MIN: 23.81 / MAX: 25.64MIN: 25.64 / MAX: 26.32MIN: 20.41 / MAX: 21.28MIN: 12.99 / MAX: 14.29MIN: 13.33 / MAX: 14.08MIN: 10.1 / MAX: 10.64MIN: 5.95 / MAX: 7.04
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: Magnetic Reconnection - Renderer: SciVisEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P918273645Min: 13.33 / Avg: 13.45 / Max: 13.51Min: 7.75 / Avg: 8.16 / Max: 8.26Min: 43.48 / Avg: 43.48 / Max: 43.48Min: 34.48 / Avg: 34.48 / Max: 34.48Min: 34.48 / Avg: 34.48 / Max: 34.48Min: 27.78 / Avg: 27.78 / Max: 27.78Min: 25.64 / Avg: 26.05 / Max: 26.32Min: 20.83 / Avg: 20.83 / Max: 20.83Min: 14.08 / Avg: 14.08 / Max: 14.08Min: 13.89 / Avg: 13.89 / Max: 13.89Min: 10.42 / Avg: 10.42 / Max: 10.42Min: 6.99 / Avg: 6.99 / Max: 6.99

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Pabellon Barcelona - Compute: CPU-OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P140280420560700SE +/- 0.98, N = 3SE +/- 0.51, N = 3SE +/- 0.24, N = 3SE +/- 0.15, N = 3SE +/- 0.39, N = 3SE +/- 0.14, N = 3SE +/- 0.92, N = 3SE +/- 0.07, N = 3SE +/- 0.33, N = 3SE +/- 0.14, N = 3SE +/- 1.69, N = 3SE +/- 0.50, N = 3SE +/- 1.19, N = 3SE +/- 0.64, N = 3SE +/- 0.72, N = 3265.13508.26107.79119.17118.26137.83139.21167.99189.09182.76219.75317.27338.52426.59664.29
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Pabellon Barcelona - Compute: CPU-OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P120240360480600Min: 263.71 / Avg: 265.13 / Max: 267.02Min: 507.47 / Avg: 508.26 / Max: 509.22Min: 107.34 / Avg: 107.79 / Max: 108.17Min: 118.86 / Avg: 119.17 / Max: 119.35Min: 117.87 / Avg: 118.26 / Max: 119.05Min: 137.55 / Avg: 137.83 / Max: 138.03Min: 138.06 / Avg: 139.21 / Max: 141.03Min: 167.9 / Avg: 167.99 / Max: 168.12Min: 188.42 / Avg: 189.09 / Max: 189.45Min: 182.48 / Avg: 182.76 / Max: 182.95Min: 217.98 / Avg: 219.75 / Max: 223.14Min: 316.57 / Avg: 317.27 / Max: 318.25Min: 336.65 / Avg: 338.52 / Max: 340.74Min: 425.64 / Avg: 426.59 / Max: 427.81Min: 662.88 / Avg: 664.29 / Max: 665.27

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 AtomsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.61321.22641.83962.45283.066SE +/- 0.00173, N = 3SE +/- 0.00063, N = 3SE +/- 0.00033, N = 3SE +/- 0.00053, N = 3SE +/- 0.00014, N = 3SE +/- 0.00034, N = 3SE +/- 0.00037, N = 3SE +/- 0.00071, N = 3SE +/- 0.00074, N = 3SE +/- 0.00041, N = 3SE +/- 0.00125, N = 3SE +/- 0.00048, N = 3SE +/- 0.00125, N = 3SE +/- 0.00045, N = 3SE +/- 0.00080, N = 31.143752.229490.446270.492640.489080.570480.574840.715490.790790.774390.937051.350581.451451.838952.72553
OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 AtomsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 1.14 / Avg: 1.14 / Max: 1.15Min: 2.23 / Avg: 2.23 / Max: 2.23Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 0.49 / Avg: 0.49 / Max: 0.49Min: 0.49 / Avg: 0.49 / Max: 0.49Min: 0.57 / Avg: 0.57 / Max: 0.57Min: 0.57 / Avg: 0.57 / Max: 0.58Min: 0.71 / Avg: 0.72 / Max: 0.72Min: 0.79 / Avg: 0.79 / Max: 0.79Min: 0.77 / Avg: 0.77 / Max: 0.78Min: 0.94 / Avg: 0.94 / Max: 0.94Min: 1.35 / Avg: 1.35 / Max: 1.35Min: 1.45 / Avg: 1.45 / Max: 1.45Min: 1.84 / Avg: 1.84 / Max: 1.84Min: 2.72 / Avg: 2.73 / Max: 2.73

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Classroom - Compute: CPU-OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P120240360480600SE +/- 0.74, N = 3SE +/- 0.25, N = 3SE +/- 0.23, N = 3SE +/- 0.41, N = 3SE +/- 0.25, N = 3SE +/- 0.77, N = 3SE +/- 0.69, N = 3SE +/- 0.41, N = 3SE +/- 1.09, N = 3SE +/- 0.20, N = 3SE +/- 0.04, N = 3SE +/- 0.64, N = 3SE +/- 0.83, N = 3SE +/- 0.31, N = 3SE +/- 0.21, N = 3237.79454.4991.67103.55101.68121.21122.03148.50168.40162.29196.70287.81304.34387.23558.87
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Classroom - Compute: CPU-OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P100200300400500Min: 236.43 / Avg: 237.79 / Max: 238.96Min: 454.13 / Avg: 454.49 / Max: 454.97Min: 91.4 / Avg: 91.67 / Max: 92.12Min: 103.02 / Avg: 103.55 / Max: 104.35Min: 101.18 / Avg: 101.68 / Max: 101.94Min: 120.06 / Avg: 121.21 / Max: 122.68Min: 120.66 / Avg: 122.03 / Max: 122.86Min: 147.97 / Avg: 148.5 / Max: 149.3Min: 167.27 / Avg: 168.4 / Max: 170.57Min: 161.98 / Avg: 162.29 / Max: 162.65Min: 196.63 / Avg: 196.7 / Max: 196.77Min: 286.64 / Avg: 287.81 / Max: 288.83Min: 303.05 / Avg: 304.34 / Max: 305.88Min: 386.68 / Avg: 387.23 / Max: 387.74Min: 558.51 / Avg: 558.87 / Max: 559.22

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 16.02Compress Speed TestEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60K120K180K240K300KSE +/- 558.99, N = 3SE +/- 288.04, N = 3SE +/- 1266.96, N = 3SE +/- 226.28, N = 3SE +/- 419.38, N = 3SE +/- 370.12, N = 3SE +/- 453.51, N = 3SE +/- 190.42, N = 3SE +/- 894.75, N = 3SE +/- 249.88, N = 3SE +/- 315.33, N = 3SE +/- 192.03, N = 3SE +/- 313.91, N = 3SE +/- 108.98, N = 3SE +/- 84.48, N = 310906956955279656264908270140234062229747176148171242171362138575951379058869655459411. (CXX) g++ options: -pipe -lpthread
OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 16.02Compress Speed TestEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50K100K150K200K250KMin: 108136 / Avg: 109069.33 / Max: 110069Min: 56532 / Avg: 56954.67 / Max: 57505Min: 278191 / Avg: 279656 / Max: 282179Min: 264563 / Avg: 264907.67 / Max: 265334Min: 269353 / Avg: 270139.67 / Max: 270785Min: 233325 / Avg: 234061.67 / Max: 234493Min: 229189 / Avg: 229746.67 / Max: 230645Min: 175784 / Avg: 176148 / Max: 176427Min: 169527 / Avg: 171242 / Max: 172542Min: 170978 / Avg: 171362 / Max: 171831Min: 138164 / Avg: 138575.33 / Max: 139195Min: 94939 / Avg: 95137 / Max: 95521Min: 90180 / Avg: 90587.67 / Max: 91205Min: 69500 / Avg: 69654.67 / Max: 69865Min: 45801 / Avg: 45941.33 / Max: 460931. (CXX) g++ options: -pipe -lpthread

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Random ReadEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50M100M150M200M250MSE +/- 33050.19, N = 3SE +/- 173513.15, N = 3SE +/- 2178579.83, N = 3SE +/- 2628540.04, N = 3SE +/- 1573991.80, N = 3SE +/- 1627088.40, N = 3SE +/- 1503518.94, N = 3SE +/- 979987.31, N = 3SE +/- 577476.24, N = 3SE +/- 298644.62, N = 3SE +/- 701112.41, N = 3SE +/- 731623.27, N = 6SE +/- 860580.20, N = 3SE +/- 532212.35, N = 3SE +/- 468827.18, N = 39388313048331735238239736218469824214717760185712982185095008147488280133038550139338056114391434769720907481656056928967392061561. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Random ReadEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P40M80M120M160M200MMin: 93839853 / Avg: 93883129.67 / Max: 93948038Min: 48063107 / Avg: 48331734.67 / Max: 48656311Min: 235588085 / Avg: 238239735.67 / Max: 242559754Min: 213921113 / Avg: 218469824.33 / Max: 223026632Min: 211571907 / Avg: 214717760.33 / Max: 216390965Min: 182486843 / Avg: 185712982.33 / Max: 187695198Min: 182409655 / Avg: 185095007.67 / Max: 187609592Min: 146389489 / Avg: 147488280.33 / Max: 149443243Min: 131941123 / Avg: 133038549.67 / Max: 133898996Min: 138995330 / Avg: 139338056 / Max: 139933058Min: 112997116 / Avg: 114391434.33 / Max: 115217369Min: 75222240 / Avg: 76972090.17 / Max: 79656987Min: 73275140 / Avg: 74816560.33 / Max: 76250454Min: 55956147 / Avg: 56928967.33 / Max: 57789497Min: 38308495 / Avg: 39206155.67 / Max: 398896151. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: San Miguel - Renderer: SciVisEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1428425670SE +/- 0.17, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 322.5612.5062.5055.5658.8250.0050.0040.0035.7137.0430.3020.8319.6115.6310.31MIN: 21.28 / MAX: 24.39MIN: 12.2 / MAX: 13.33MIN: 55.56 / MAX: 66.67MIN: 50 / MAX: 62.5MIN: 52.63 / MAX: 62.5MIN: 47.62 / MAX: 52.63MIN: 47.62 / MAX: 52.63MIN: 37.04 / MAX: 41.67MIN: 33.33 / MAX: 38.46MIN: 34.48 / MAX: 40MIN: 29.41 / MAX: 32.26MIN: 20.41 / MAX: 22.73MIN: 19.23 / MAX: 21.28MIN: 15.15 / MAX: 16.67MIN: 10.1 / MAX: 10.99
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: San Miguel - Renderer: SciVisEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1224364860Min: 22.22 / Avg: 22.56 / Max: 22.73Min: 12.5 / Avg: 12.5 / Max: 12.5Min: 62.5 / Avg: 62.5 / Max: 62.5Min: 55.56 / Avg: 55.56 / Max: 55.56Min: 58.82 / Avg: 58.82 / Max: 58.82Min: 35.71 / Avg: 35.71 / Max: 35.71Min: 37.04 / Avg: 37.04 / Max: 37.04Min: 30.3 / Avg: 30.3 / Max: 30.3Min: 20.83 / Avg: 20.83 / Max: 20.83Min: 19.61 / Avg: 19.61 / Max: 19.61Min: 15.63 / Avg: 15.63 / Max: 15.63Min: 10.31 / Avg: 10.31 / Max: 10.31

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Barbershop - Compute: CPU-OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2004006008001000SE +/- 0.69, N = 3SE +/- 0.11, N = 3SE +/- 0.22, N = 3SE +/- 0.04, N = 3SE +/- 0.20, N = 3SE +/- 0.23, N = 3SE +/- 0.31, N = 3SE +/- 0.07, N = 3SE +/- 0.14, N = 3SE +/- 0.46, N = 3SE +/- 0.35, N = 3SE +/- 0.25, N = 3SE +/- 0.65, N = 3SE +/- 1.20, N = 3SE +/- 0.62, N = 3356.90683.81143.94154.42154.23179.18180.94223.08243.58237.65291.52411.56432.92554.64866.85
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Barbershop - Compute: CPU-OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P150300450600750Min: 356.09 / Avg: 356.9 / Max: 358.27Min: 683.61 / Avg: 683.81 / Max: 683.97Min: 143.49 / Avg: 143.94 / Max: 144.19Min: 154.37 / Avg: 154.42 / Max: 154.49Min: 153.83 / Avg: 154.23 / Max: 154.49Min: 178.74 / Avg: 179.18 / Max: 179.54Min: 180.32 / Avg: 180.94 / Max: 181.27Min: 222.95 / Avg: 223.08 / Max: 223.18Min: 243.32 / Avg: 243.58 / Max: 243.79Min: 237.18 / Avg: 237.65 / Max: 238.58Min: 291.13 / Avg: 291.52 / Max: 292.22Min: 411.08 / Avg: 411.56 / Max: 411.94Min: 432.21 / Avg: 432.92 / Max: 434.21Min: 552.98 / Avg: 554.64 / Max: 556.98Min: 865.62 / Avg: 866.85 / Max: 867.52

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgKsamples, More Is BetterChaos Group V-RAY 4.10.07Mode: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P14K28K42K56K70KSE +/- 71.92, N = 3SE +/- 31.21, N = 3SE +/- 500.73, N = 3SE +/- 487.40, N = 3SE +/- 467.21, N = 3SE +/- 179.29, N = 3SE +/- 328.54, N = 3SE +/- 391.02, N = 6SE +/- 397.54, N = 5SE +/- 264.62, N = 15SE +/- 87.62, N = 3SE +/- 114.11, N = 3SE +/- 122.98, N = 3SE +/- 80.88, N = 3SE +/- 44.44, N = 3271321420466639627546229554445535703890436223370853318423486220221755011136
OpenBenchmarking.orgKsamples, More Is BetterChaos Group V-RAY 4.10.07Mode: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P12K24K36K48K60KMin: 27010 / Avg: 27132 / Max: 27259Min: 14151 / Avg: 14203.67 / Max: 14259Min: 65746 / Avg: 66639.33 / Max: 67478Min: 62183 / Avg: 62754.33 / Max: 63724Min: 61378 / Avg: 62294.67 / Max: 62910Min: 54235 / Avg: 54445.33 / Max: 54802Min: 53078 / Avg: 53569.67 / Max: 54193Min: 37112 / Avg: 38903.83 / Max: 39603Min: 35373 / Avg: 36223.2 / Max: 37595Min: 35376 / Avg: 37084.67 / Max: 38925Min: 33018 / Avg: 33183.67 / Max: 33316Min: 23305 / Avg: 23486.33 / Max: 23697Min: 21875 / Avg: 22021.67 / Max: 22266Min: 17397 / Avg: 17550 / Max: 17672Min: 11048 / Avg: 11136.33 / Max: 11189

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgHydro Cycle Time - Seconds, Fewer Is BetterPennant 1.0.1Test: sedovbigEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1530456075SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.06, N = 4SE +/- 0.03, N = 4SE +/- 0.03, N = 4SE +/- 0.04, N = 4SE +/- 0.02, N = 4SE +/- 0.08, N = 3SE +/- 0.02, N = 3SE +/- 0.10, N = 3SE +/- 0.05, N = 3SE +/- 0.07, N = 3SE +/- 0.06, N = 3SE +/- 0.11, N = 3SE +/- 0.14, N = 325.8852.7911.1311.9411.8013.8715.3522.2318.0623.0325.8934.0341.6949.1166.421. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi
OpenBenchmarking.orgHydro Cycle Time - Seconds, Fewer Is BetterPennant 1.0.1Test: sedovbigEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1326395265Min: 25.85 / Avg: 25.88 / Max: 25.92Min: 52.75 / Avg: 52.79 / Max: 52.81Min: 10.98 / Avg: 11.13 / Max: 11.24Min: 11.88 / Avg: 11.94 / Max: 12Min: 11.75 / Avg: 11.8 / Max: 11.87Min: 13.77 / Avg: 13.87 / Max: 13.95Min: 15.3 / Avg: 15.35 / Max: 15.39Min: 22.07 / Avg: 22.23 / Max: 22.32Min: 18.03 / Avg: 18.06 / Max: 18.09Min: 22.87 / Avg: 23.03 / Max: 23.23Min: 25.8 / Avg: 25.89 / Max: 25.96Min: 33.93 / Avg: 34.03 / Max: 34.16Min: 41.58 / Avg: 41.69 / Max: 41.79Min: 48.99 / Avg: 49.11 / Max: 49.32Min: 66.13 / Avg: 66.42 / Max: 66.581. (CXX) g++ options: -fopenmp -pthread -lmpi_cxx -lmpi

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and Convolutional Resamping Benchmark (tConvolve) as well as some previous ASKAP benchmarks being included as well for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - GriddingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5K10K15K20K25KSE +/- 0.00, N = 3SE +/- 66.13, N = 4SE +/- 108.20, N = 2SE +/- 197.85, N = 2SE +/- 111.87, N = 3SE +/- 0.00, N = 3SE +/- 136.29, N = 3SE +/- 233.09, N = 3SE +/- 101.60, N = 3SE +/- 82.20, N = 3SE +/- 0.00, N = 3SE +/- 69.42, N = 3SE +/- 54.85, N = 9SE +/- 50.92, N = 316399.708265.9921311.9020382.2020991.7023040.7017300.8012699.5020585.3012698.2013892.5012204.407358.197130.893879.101. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - GriddingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4K8K12K16K20KMin: 16399.7 / Avg: 16399.7 / Max: 16399.7Min: 8199.86 / Avg: 8265.99 / Max: 8464.38Min: 21203.7 / Avg: 21311.9 / Max: 21420.1Min: 20184.3 / Avg: 20382.15 / Max: 20580Min: 22817 / Avg: 23040.73 / Max: 23152.6Min: 17300.8 / Avg: 17300.8 / Max: 17300.8Min: 12495 / Avg: 12699.47 / Max: 12957.8Min: 20184.3 / Avg: 20585.33 / Max: 20991.7Min: 12495 / Avg: 12698.2 / Max: 12799.8Min: 13810.3 / Avg: 13892.5 / Max: 14056.9Min: 12204.4 / Avg: 12204.4 / Max: 12204.4Min: 7288.77 / Avg: 7358.19 / Max: 7497.02Min: 6786.09 / Avg: 7130.89 / Max: 7288.77Min: 3802.84 / Avg: 3879.1 / Max: 3975.691. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LavaMDEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300SE +/- 0.09, N = 3SE +/- 0.03, N = 3SE +/- 0.06, N = 3SE +/- 0.02, N = 3SE +/- 0.19, N = 3SE +/- 0.08, N = 3SE +/- 0.06, N = 3SE +/- 0.08, N = 3SE +/- 0.12, N = 3SE +/- 0.11, N = 3SE +/- 0.13, N = 3SE +/- 0.16, N = 3SE +/- 0.20, N = 3SE +/- 0.51, N = 3SE +/- 0.05, N = 3125.53243.8450.3454.7154.6563.4764.2179.1187.5685.01103.00147.38157.17200.35296.691. (CXX) g++ options: -O2 -lOpenCL
OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LavaMDEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50100150200250Min: 125.35 / Avg: 125.53 / Max: 125.64Min: 243.81 / Avg: 243.84 / Max: 243.89Min: 50.25 / Avg: 50.34 / Max: 50.45Min: 54.68 / Avg: 54.71 / Max: 54.75Min: 54.27 / Avg: 54.65 / Max: 54.89Min: 63.31 / Avg: 63.47 / Max: 63.56Min: 64.09 / Avg: 64.21 / Max: 64.3Min: 79.03 / Avg: 79.11 / Max: 79.27Min: 87.32 / Avg: 87.56 / Max: 87.75Min: 84.8 / Avg: 85 / Max: 85.18Min: 102.75 / Avg: 103 / Max: 103.2Min: 147.07 / Avg: 147.38 / Max: 147.56Min: 156.97 / Avg: 157.17 / Max: 157.56Min: 199.81 / Avg: 200.35 / Max: 201.37Min: 296.63 / Avg: 296.69 / Max: 296.791. (CXX) g++ options: -O2 -lOpenCL

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P200K400K600K800K1000KSE +/- 248.26, N = 3SE +/- 681.51, N = 3SE +/- 6275.15, N = 3SE +/- 5067.03, N = 3SE +/- 6923.68, N = 3SE +/- 1360.28, N = 3SE +/- 3710.21, N = 3SE +/- 1178.79, N = 3SE +/- 853.89, N = 3SE +/- 1108.32, N = 3SE +/- 1318.12, N = 3SE +/- 891.46, N = 3SE +/- 804.68, N = 3SE +/- 696.35, N = 3SE +/- 518.37, N = 33970382133459976528999929397507756877914566557745875486141835216533684493432292756771698321. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P200K400K600K800K1000KMin: 396660.01 / Avg: 397038.18 / Max: 397505.91Min: 212208.72 / Avg: 213344.58 / Max: 214564.99Min: 985316.28 / Avg: 997652.34 / Max: 1005820.11Min: 889916.01 / Avg: 899992.3 / Max: 905966.24Min: 926183.25 / Avg: 939750.48 / Max: 948934.04Min: 773949.79 / Avg: 775687.15 / Max: 778368.93Min: 784040.65 / Avg: 791455.71 / Max: 795407.24Min: 654021.41 / Avg: 655774.08 / Max: 658015.96Min: 585986.61 / Avg: 587547.73 / Max: 588927.93Min: 613074.34 / Avg: 614182.67 / Max: 616399.31Min: 519362.06 / Avg: 521653.42 / Max: 523928.08Min: 367104.95 / Avg: 368449.41 / Max: 370135.74Min: 341698.14 / Avg: 343228.63 / Max: 344424.83Min: 274390.74 / Avg: 275676.66 / Max: 276782.78Min: 168803.67 / Avg: 169832.3 / Max: 170458.781. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average LatencyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.33140.66280.99421.32561.657SE +/- 0.001, N = 3SE +/- 0.004, N = 3SE +/- 0.002, N = 3SE +/- 0.002, N = 3SE +/- 0.002, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.002, N = 3SE +/- 0.002, N = 3SE +/- 0.002, N = 3SE +/- 0.005, N = 30.6301.1730.2510.2790.2670.3230.3160.3820.4260.4070.4800.6790.7290.9081.4731. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average LatencyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 0.63 / Avg: 0.63 / Max: 0.63Min: 1.17 / Avg: 1.17 / Max: 1.18Min: 0.25 / Avg: 0.25 / Max: 0.26Min: 0.28 / Avg: 0.28 / Max: 0.28Min: 0.26 / Avg: 0.27 / Max: 0.27Min: 0.32 / Avg: 0.32 / Max: 0.32Min: 0.32 / Avg: 0.32 / Max: 0.32Min: 0.38 / Avg: 0.38 / Max: 0.38Min: 0.43 / Avg: 0.43 / Max: 0.43Min: 0.41 / Avg: 0.41 / Max: 0.41Min: 0.48 / Avg: 0.48 / Max: 0.48Min: 0.68 / Avg: 0.68 / Max: 0.68Min: 0.73 / Avg: 0.73 / Max: 0.73Min: 0.9 / Avg: 0.91 / Max: 0.91Min: 1.47 / Avg: 1.47 / Max: 1.481. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Read While WritingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2M4M6M8M10MSE +/- 8186.58, N = 3SE +/- 10805.74, N = 13SE +/- 61282.17, N = 12SE +/- 47749.78, N = 3SE +/- 27617.12, N = 3SE +/- 69440.76, N = 6SE +/- 44027.23, N = 3SE +/- 55384.00, N = 3SE +/- 32730.71, N = 3SE +/- 24292.51, N = 3SE +/- 43537.48, N = 3SE +/- 25144.15, N = 3SE +/- 26792.49, N = 3SE +/- 24217.23, N = 3SE +/- 1710.84, N = 33012387160367385974528232474817538371316007183579556443352143025486066439769530862373023239235792814706211. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Read While WritingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1.5M3M4.5M6M7.5MMin: 3002918 / Avg: 3012386.67 / Max: 3028689Min: 1550246 / Avg: 1603672.85 / Max: 1683898Min: 8404648 / Avg: 8597452.08 / Max: 9185015Min: 8181136 / Avg: 8232473.67 / Max: 8327881Min: 8142180 / Avg: 8175382.67 / Max: 8230211Min: 7048399 / Avg: 7131599.5 / Max: 7477000Min: 7101279 / Avg: 7183579 / Max: 7251844Min: 5463565 / Avg: 5564432.67 / Max: 5654508Min: 5150993 / Avg: 5214301.67 / Max: 5260375Min: 5439082 / Avg: 5486065.67 / Max: 5520271Min: 4337144 / Avg: 4397695 / Max: 4482162Min: 3036500 / Avg: 3086236.67 / Max: 3117538Min: 2989438 / Avg: 3023238.67 / Max: 3076148Min: 2309505 / Avg: 2357928 / Max: 2383052Min: 1467509 / Avg: 1470620.67 / Max: 14734091. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidenlity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: NASA Streamlines - Renderer: SciVisEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100SE +/- 0.18, N = 5SE +/- 0.00, N = 4SE +/- 0.00, N = 7SE +/- 0.00, N = 7SE +/- 0.00, N = 7SE +/- 0.00, N = 7SE +/- 0.00, N = 7SE +/- 0.49, N = 6SE +/- 0.00, N = 6SE +/- 0.00, N = 6SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 4SE +/- 0.00, N = 329.5917.2483.3376.9276.9266.6766.6755.0747.6250.0041.6729.4127.7821.7414.29MIN: 27.78 / MAX: 31.25MIN: 16.67 / MAX: 17.86MIN: 55.56 / MAX: 90.91MIN: 71.43 / MAX: 83.33MIN: 62.5 / MAX: 83.33MIN: 62.5 / MAX: 71.43MIN: 58.82 / MAX: 71.43MIN: 50 / MAX: 55.56MIN: 45.45 / MAX: 50MIN: 45.45 / MAX: 52.63MIN: 38.46 / MAX: 43.48MIN: 27.78MIN: 26.32MIN: 20.83 / MAX: 22.22MIN: 13.89 / MAX: 14.49
OpenBenchmarking.orgFPS, More Is BetterOSPray 1.8.5Demo: NASA Streamlines - Renderer: SciVisEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1632486480Min: 29.41 / Avg: 29.59 / Max: 30.3Min: 17.24 / Avg: 17.24 / Max: 17.24Min: 83.33 / Avg: 83.33 / Max: 83.33Min: 76.92 / Avg: 76.92 / Max: 76.92Min: 76.92 / Avg: 76.92 / Max: 76.92Min: 66.67 / Avg: 66.67 / Max: 66.67Min: 66.67 / Avg: 66.67 / Max: 66.67Min: 52.63 / Avg: 55.07 / Max: 55.56Min: 47.62 / Avg: 47.62 / Max: 47.62Min: 41.67 / Avg: 41.67 / Max: 41.67Min: 29.41 / Avg: 29.41 / Max: 29.41Min: 27.78 / Avg: 27.78 / Max: 27.78Min: 21.74 / Avg: 21.74 / Max: 21.74Min: 14.29 / Avg: 14.29 / Max: 14.29

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: Rainbow Colors and PrismEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.08, N = 3SE +/- 0.07, N = 3SE +/- 0.04, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.06, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 33.571.848.647.827.926.816.755.544.925.044.172.912.742.141.50MIN: 3.48 / MAX: 3.63MIN: 1.81 / MAX: 1.92MIN: 8.32 / MAX: 8.76MIN: 7.53 / MAX: 7.98MIN: 7.72 / MAX: 8.01MIN: 6.61 / MAX: 6.89MIN: 6.64 / MAX: 6.79MIN: 5.5 / MAX: 5.57MIN: 4.81 / MAX: 5.05MIN: 4.94 / MAX: 5.12MIN: 4.11 / MAX: 4.26MIN: 2.88 / MAX: 2.96MIN: 2.69 / MAX: 2.78MIN: 2.08 / MAX: 2.17MIN: 1.48 / MAX: 1.54
OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: Rainbow Colors and PrismEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 3.53 / Avg: 3.57 / Max: 3.6Min: 1.83 / Avg: 1.84 / Max: 1.87Min: 8.49 / Avg: 8.64 / Max: 8.72Min: 7.71 / Avg: 7.82 / Max: 7.94Min: 7.83 / Avg: 7.92 / Max: 7.98Min: 6.71 / Avg: 6.81 / Max: 6.88Min: 6.73 / Avg: 6.75 / Max: 6.78Min: 5.52 / Avg: 5.54 / Max: 5.55Min: 4.81 / Avg: 4.92 / Max: 5.01Min: 4.94 / Avg: 5.04 / Max: 5.12Min: 4.11 / Avg: 4.17 / Max: 4.22Min: 2.89 / Avg: 2.91 / Max: 2.95Min: 2.73 / Avg: 2.74 / Max: 2.76Min: 2.12 / Avg: 2.14 / Max: 2.15Min: 1.5 / Avg: 1.5 / Max: 1.51

OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: DLSCEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.04, N = 3SE +/- 0.05, N = 3SE +/- 0.04, N = 3SE +/- 0.05, N = 3SE +/- 0.06, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 33.271.687.777.047.216.196.135.034.544.653.832.682.531.981.38MIN: 3.18 / MAX: 3.41MIN: 1.61 / MAX: 1.72MIN: 7.58 / MAX: 8.26MIN: 6.88 / MAX: 7.39MIN: 7.03 / MAX: 7.62MIN: 6.04 / MAX: 6.75MIN: 5.97 / MAX: 6.57MIN: 4.8 / MAX: 5.35MIN: 4.38 / MAX: 4.8MIN: 4.49 / MAX: 5.07MIN: 3.74 / MAX: 4.01MIN: 2.58 / MAX: 2.77MIN: 2.46 / MAX: 2.62MIN: 1.91 / MAX: 2.03MIN: 1.34 / MAX: 1.42
OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: DLSCEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 3.26 / Avg: 3.27 / Max: 3.29Min: 1.66 / Avg: 1.68 / Max: 1.69Min: 7.68 / Avg: 7.77 / Max: 7.83Min: 6.98 / Avg: 7.04 / Max: 7.07Min: 7.13 / Avg: 7.21 / Max: 7.27Min: 6.14 / Avg: 6.19 / Max: 6.26Min: 6.07 / Avg: 6.13 / Max: 6.22Min: 4.96 / Avg: 5.03 / Max: 5.09Min: 4.47 / Avg: 4.54 / Max: 4.64Min: 4.58 / Avg: 4.65 / Max: 4.76Min: 3.81 / Avg: 3.83 / Max: 3.84Min: 2.66 / Avg: 2.68 / Max: 2.71Min: 2.52 / Avg: 2.53 / Max: 2.55Min: 1.97 / Avg: 1.98 / Max: 1.99Min: 1.37 / Avg: 1.38 / Max: 1.39

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ThoroughEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P816243240SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 4SE +/- 0.00, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 314.0227.165.896.356.387.407.429.089.999.7311.7116.6817.7322.4033.151. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ThoroughEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P714212835Min: 14 / Avg: 14.02 / Max: 14.03Min: 27.15 / Avg: 27.16 / Max: 27.17Min: 5.87 / Avg: 5.89 / Max: 5.9Min: 6.35 / Avg: 6.35 / Max: 6.36Min: 6.36 / Avg: 6.38 / Max: 6.4Min: 7.39 / Avg: 7.4 / Max: 7.41Min: 7.41 / Avg: 7.42 / Max: 7.42Min: 9.08 / Avg: 9.08 / Max: 9.08Min: 9.99 / Avg: 9.99 / Max: 9.99Min: 9.73 / Avg: 9.73 / Max: 9.74Min: 11.71 / Avg: 11.71 / Max: 11.72Min: 16.67 / Avg: 16.68 / Max: 16.69Min: 17.72 / Avg: 17.73 / Max: 17.74Min: 22.39 / Avg: 22.4 / Max: 22.42Min: 33.13 / Avg: 33.15 / Max: 33.181. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P120240360480600SE +/- 1.86, N = 3SE +/- 1.00, N = 3SE +/- 0.88, N = 3SE +/- 0.67, N = 3SE +/- 0.58, N = 3SE +/- 0.58, N = 3SE +/- 1.15, N = 322211653549250444644135932833727919218614296MIN: 1 / MAX: 779MIN: 1 / MAX: 413MIN: 1 / MAX: 1809MIN: 1 / MAX: 1688MIN: 1 / MAX: 1715MIN: 1 / MAX: 1551MIN: 1 / MAX: 1539MIN: 1 / MAX: 1250MIN: 1 / MAX: 1145MIN: 1 / MAX: 1172MIN: 1 / MAX: 983MIN: 1 / MAX: 672MIN: 1 / MAX: 658MIN: 1 / MAX: 507MIN: 1 / MAX: 343
OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P90180270360450Min: 488 / Avg: 491.67 / Max: 494Min: 502 / Avg: 504 / Max: 505Min: 444 / Avg: 445.67 / Max: 447Min: 440 / Avg: 441.33 / Max: 442Min: 358 / Avg: 359 / Max: 360Min: 327 / Avg: 328 / Max: 329Min: 277 / Avg: 279 / Max: 281

rays1bench

This is a test of rays1bench, a simple path-tracer / ray-tracing that supports SSE and AVX instructions, multi-threading, and other features. This test profile is measuring the performance of the "large scene" in rays1bench. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmrays/s, More Is Betterrays1bench 2020-01-09Large SceneEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300SE +/- 0.10, N = 5SE +/- 0.01, N = 4SE +/- 0.16, N = 7SE +/- 0.20, N = 7SE +/- 0.21, N = 7SE +/- 0.11, N = 7SE +/- 0.08, N = 7SE +/- 0.03, N = 6SE +/- 0.06, N = 6SE +/- 0.06, N = 6SE +/- 0.04, N = 6SE +/- 0.08, N = 5SE +/- 0.06, N = 4SE +/- 0.04, N = 4SE +/- 0.08, N = 3109.9159.60269.74243.57243.25218.73217.70182.59163.00167.75134.1990.3784.5468.5148.61
OpenBenchmarking.orgmrays/s, More Is Betterrays1bench 2020-01-09Large SceneEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50100150200250Min: 109.61 / Avg: 109.91 / Max: 110.19Min: 59.57 / Avg: 59.6 / Max: 59.62Min: 269 / Avg: 269.74 / Max: 270.12Min: 242.82 / Avg: 243.57 / Max: 244.37Min: 242.31 / Avg: 243.25 / Max: 243.89Min: 218.36 / Avg: 218.73 / Max: 219.13Min: 217.39 / Avg: 217.7 / Max: 217.92Min: 182.51 / Avg: 182.59 / Max: 182.66Min: 162.8 / Avg: 163 / Max: 163.17Min: 167.52 / Avg: 167.75 / Max: 167.95Min: 134.04 / Avg: 134.19 / Max: 134.3Min: 90.09 / Avg: 90.37 / Max: 90.53Min: 84.42 / Avg: 84.54 / Max: 84.72Min: 68.41 / Avg: 68.51 / Max: 68.59Min: 48.46 / Avg: 48.61 / Max: 48.73

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: TBBEPYC 7F32EPYC 7742EPYC 7642EPYC 7532EPYC 72829K18K27K36K45KSE +/- 233.59, N = 3SE +/- 56.97, N = 6SE +/- 87.69, N = 5SE +/- 60.14, N = 4SE +/- 162.21, N = 3416067517972313556254111. (CXX) g++ options: -O3 -lpthread
OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: TBBEPYC 7F32EPYC 7742EPYC 7642EPYC 7532EPYC 72827K14K21K28K35KMin: 41361 / Avg: 41606 / Max: 42073Min: 7367 / Avg: 7516.83 / Max: 7745Min: 9473 / Avg: 9722.8 / Max: 9928Min: 13434 / Avg: 13555.5 / Max: 13696Min: 25214 / Avg: 25411.33 / Max: 257331. (CXX) g++ options: -O3 -lpthread

OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: C++ ThreadsEPYC 7F32EPYC 7742EPYC 7642EPYC 7532EPYC 72829K18K27K36K45KSE +/- 8.41, N = 3SE +/- 10.89, N = 6SE +/- 10.01, N = 5SE +/- 29.85, N = 4SE +/- 18.66, N = 3415737573965213619254261. (CXX) g++ options: -O3 -lpthread
OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: C++ ThreadsEPYC 7F32EPYC 7742EPYC 7642EPYC 7532EPYC 72827K14K21K28K35KMin: 41561 / Avg: 41572.67 / Max: 41589Min: 7545 / Avg: 7573 / Max: 7614Min: 9633 / Avg: 9651.6 / Max: 9690Min: 13567 / Avg: 13618.5 / Max: 13689Min: 25402 / Avg: 25426.33 / Max: 254631. (CXX) g++ options: -O3 -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: BMW27 - Compute: CPU-OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4080120160200SE +/- 0.27, N = 3SE +/- 0.25, N = 3SE +/- 0.09, N = 3SE +/- 0.12, N = 3SE +/- 0.27, N = 3SE +/- 0.19, N = 3SE +/- 0.42, N = 3SE +/- 0.06, N = 3SE +/- 0.21, N = 3SE +/- 0.37, N = 3SE +/- 0.08, N = 3SE +/- 0.95, N = 3SE +/- 0.34, N = 3SE +/- 0.34, N = 3SE +/- 1.01, N = 383.45156.3136.6040.0740.3345.7846.5955.3261.2560.7569.84102.29108.34136.20198.45
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: BMW27 - Compute: CPU-OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4080120160200Min: 83.1 / Avg: 83.45 / Max: 83.98Min: 155.93 / Avg: 156.31 / Max: 156.78Min: 36.43 / Avg: 36.6 / Max: 36.71Min: 39.84 / Avg: 40.07 / Max: 40.23Min: 39.99 / Avg: 40.33 / Max: 40.86Min: 45.39 / Avg: 45.78 / Max: 45.99Min: 45.95 / Avg: 46.59 / Max: 47.39Min: 55.22 / Avg: 55.32 / Max: 55.43Min: 60.93 / Avg: 61.25 / Max: 61.64Min: 60 / Avg: 60.75 / Max: 61.17Min: 69.76 / Avg: 69.84 / Max: 69.99Min: 100.63 / Avg: 102.29 / Max: 103.93Min: 107.74 / Avg: 108.34 / Max: 108.92Min: 135.6 / Avg: 136.2 / Max: 136.76Min: 196.87 / Avg: 198.45 / Max: 200.33

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 32.981.646.876.056.605.675.534.704.194.363.512.512.381.811.28
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 2.97 / Avg: 2.98 / Max: 2.99Min: 1.64 / Avg: 1.64 / Max: 1.65Min: 6.86 / Avg: 6.87 / Max: 6.88Min: 6.03 / Avg: 6.05 / Max: 6.06Min: 6.58 / Avg: 6.6 / Max: 6.63Min: 5.66 / Avg: 5.67 / Max: 5.68Min: 5.52 / Avg: 5.53 / Max: 5.54Min: 4.69 / Avg: 4.7 / Max: 4.72Min: 4.18 / Avg: 4.19 / Max: 4.2Min: 4.32 / Avg: 4.36 / Max: 4.4Min: 3.5 / Avg: 3.51 / Max: 3.53Min: 2.5 / Avg: 2.51 / Max: 2.52Min: 2.37 / Avg: 2.38 / Max: 2.39Min: 1.8 / Avg: 1.81 / Max: 1.82Min: 1.27 / Avg: 1.28 / Max: 1.29

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P200K400K600K800K1000KSE +/- 501.41, N = 3SE +/- 196.73, N = 3SE +/- 1487.88, N = 3SE +/- 3714.01, N = 3SE +/- 2726.39, N = 3SE +/- 492.11, N = 3SE +/- 2696.01, N = 3SE +/- 684.57, N = 3SE +/- 1146.88, N = 3SE +/- 1605.99, N = 3SE +/- 1145.86, N = 3SE +/- 406.00, N = 3SE +/- 721.66, N = 3SE +/- 527.65, N = 3SE +/- 258.28, N = 33811842156179813828974539155238354578325096549145783406106674991383577813445982785101842511. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P200K400K600K800K1000KMin: 380600.92 / Avg: 381183.76 / Max: 382181.9Min: 215250.14 / Avg: 215616.98 / Max: 215923.6Min: 979697.89 / Avg: 981381.7 / Max: 984348.44Min: 891671.21 / Avg: 897453.24 / Max: 904382.54Min: 911128.88 / Avg: 915522.69 / Max: 920516.11Min: 834632.97 / Avg: 835456.65 / Max: 836335.05Min: 827135.25 / Avg: 832508.89 / Max: 835580.98Min: 654192.39 / Avg: 654914.4 / Max: 656282.85Min: 576053.18 / Avg: 578339.82 / Max: 579639.5Min: 608958.8 / Avg: 610666.61 / Max: 613876.39Min: 497224.75 / Avg: 499138.43 / Max: 501187.22Min: 357317.88 / Avg: 357781.31 / Max: 358590.46Min: 343314.32 / Avg: 344597.63 / Max: 345811.31Min: 277469.8 / Avg: 278509.56 / Max: 279185.65Min: 183809.92 / Avg: 184251.3 / Max: 184704.381. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average LatencyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.12220.24440.36660.48880.611SE +/- 0.000, N = 3SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 30.2630.4640.1020.1120.1090.1200.1200.1530.1730.1640.2010.2800.2900.3590.5431. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average LatencyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 0.26 / Avg: 0.26 / Max: 0.26Min: 0.46 / Avg: 0.46 / Max: 0.47Min: 0.1 / Avg: 0.1 / Max: 0.1Min: 0.11 / Avg: 0.11 / Max: 0.11Min: 0.11 / Avg: 0.11 / Max: 0.11Min: 0.12 / Avg: 0.12 / Max: 0.12Min: 0.12 / Avg: 0.12 / Max: 0.12Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.17 / Avg: 0.17 / Max: 0.17Min: 0.16 / Avg: 0.16 / Max: 0.16Min: 0.2 / Avg: 0.2 / Max: 0.2Min: 0.28 / Avg: 0.28 / Max: 0.28Min: 0.29 / Avg: 0.29 / Max: 0.29Min: 0.36 / Avg: 0.36 / Max: 0.36Min: 0.54 / Avg: 0.54 / Max: 0.541. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 32.991.666.836.026.625.675.534.744.204.373.532.512.371.811.29
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 2.97 / Avg: 2.99 / Max: 3.01Min: 1.65 / Avg: 1.66 / Max: 1.67Min: 6.77 / Avg: 6.83 / Max: 6.91Min: 6.01 / Avg: 6.02 / Max: 6.03Min: 6.6 / Avg: 6.62 / Max: 6.64Min: 5.66 / Avg: 5.67 / Max: 5.68Min: 5.51 / Avg: 5.53 / Max: 5.56Min: 4.72 / Avg: 4.74 / Max: 4.75Min: 4.19 / Avg: 4.2 / Max: 4.21Min: 4.33 / Avg: 4.37 / Max: 4.4Min: 3.52 / Avg: 3.53 / Max: 3.56Min: 2.5 / Avg: 2.51 / Max: 2.52Min: 2.34 / Avg: 2.37 / Max: 2.38Min: 1.8 / Avg: 1.81 / Max: 1.82Min: 1.29 / Avg: 1.29 / Max: 1.3

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: C++ TasksEPYC 7F32EPYC 7742EPYC 7642EPYC 7532EPYC 72829K18K27K36K45KSE +/- 11.72, N = 3SE +/- 46.30, N = 6SE +/- 24.48, N = 5SE +/- 28.15, N = 4SE +/- 97.58, N = 3416797880983913798256831. (CXX) g++ options: -O3 -lpthread
OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: C++ TasksEPYC 7F32EPYC 7742EPYC 7642EPYC 7532EPYC 72827K14K21K28K35KMin: 41656 / Avg: 41679.33 / Max: 41693Min: 7742 / Avg: 7879.83 / Max: 8016Min: 9772 / Avg: 9839 / Max: 9907Min: 13745 / Avg: 13798 / Max: 13860Min: 25513 / Avg: 25683 / Max: 258511. (CXX) g++ options: -O3 -lpthread

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPOV-Ray 3.7.0.7Trace TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1326395265SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.04, N = 4SE +/- 0.02, N = 4SE +/- 0.03, N = 4SE +/- 0.04, N = 4SE +/- 0.04, N = 4SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.04, N = 3SE +/- 0.24, N = 324.1845.4710.6811.5011.5013.2713.2715.9417.6717.1420.4628.7030.5138.4155.781. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lSM -lICE -lX11 -lIlmImf -lImath -lHalf -lIex -lIexMath -lIlmThread -lpthread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system
OpenBenchmarking.orgSeconds, Fewer Is BetterPOV-Ray 3.7.0.7Trace TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1122334455Min: 24.15 / Avg: 24.18 / Max: 24.23Min: 45.44 / Avg: 45.47 / Max: 45.5Min: 10.6 / Avg: 10.68 / Max: 10.78Min: 11.45 / Avg: 11.5 / Max: 11.55Min: 11.4 / Avg: 11.5 / Max: 11.56Min: 13.19 / Avg: 13.27 / Max: 13.35Min: 13.17 / Avg: 13.27 / Max: 13.35Min: 15.92 / Avg: 15.94 / Max: 15.96Min: 17.65 / Avg: 17.67 / Max: 17.7Min: 17.12 / Avg: 17.14 / Max: 17.17Min: 20.41 / Avg: 20.46 / Max: 20.56Min: 28.69 / Avg: 28.7 / Max: 28.72Min: 30.42 / Avg: 30.51 / Max: 30.57Min: 38.36 / Avg: 38.41 / Max: 38.49Min: 55.44 / Avg: 55.78 / Max: 56.241. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lSM -lICE -lX11 -lIlmImf -lImath -lHalf -lIex -lIexMath -lIlmThread -lpthread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Fishy Cat - Compute: CPU-OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300SE +/- 0.20, N = 3SE +/- 0.04, N = 3SE +/- 0.14, N = 3SE +/- 0.11, N = 3SE +/- 0.03, N = 3SE +/- 0.14, N = 3SE +/- 0.22, N = 3SE +/- 0.12, N = 3SE +/- 0.30, N = 3SE +/- 0.04, N = 3SE +/- 0.46, N = 3SE +/- 0.08, N = 3SE +/- 0.24, N = 3SE +/- 0.81, N = 3SE +/- 0.21, N = 3108.40207.1251.4055.4055.8261.9362.7272.2778.9478.0191.81129.81139.46177.65267.24
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Fishy Cat - Compute: CPU-OnlyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50100150200250Min: 108.03 / Avg: 108.4 / Max: 108.72Min: 207.06 / Avg: 207.12 / Max: 207.19Min: 51.15 / Avg: 51.4 / Max: 51.63Min: 55.2 / Avg: 55.4 / Max: 55.58Min: 55.77 / Avg: 55.82 / Max: 55.85Min: 61.67 / Avg: 61.93 / Max: 62.15Min: 62.45 / Avg: 62.72 / Max: 63.15Min: 72.03 / Avg: 72.27 / Max: 72.43Min: 78.44 / Avg: 78.94 / Max: 79.47Min: 77.95 / Avg: 78.01 / Max: 78.08Min: 90.99 / Avg: 91.81 / Max: 92.58Min: 129.68 / Avg: 129.81 / Max: 129.95Min: 139.16 / Avg: 139.46 / Max: 139.93Min: 176.74 / Avg: 177.65 / Max: 179.26Min: 267.03 / Avg: 267.24 / Max: 267.66

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: OpenMPEPYC 7F32EPYC 7742EPYC 7642EPYC 7532EPYC 72829K18K27K36K45KSE +/- 1.20, N = 3SE +/- 8.17, N = 6SE +/- 11.42, N = 5SE +/- 17.05, N = 4SE +/- 4.48, N = 3414227991989213926253881. (CXX) g++ options: -O3 -lpthread
OpenBenchmarking.orgms, Fewer Is BettertoyBrot Fractal Generator 2020-11-18Implementation: OpenMPEPYC 7F32EPYC 7742EPYC 7642EPYC 7532EPYC 72827K14K21K28K35KMin: 41420 / Avg: 41421.67 / Max: 41424Min: 7962 / Avg: 7990.83 / Max: 8010Min: 9863 / Avg: 9891.6 / Max: 9930Min: 13904 / Avg: 13925.75 / Max: 13976Min: 25379 / Avg: 25387.67 / Max: 253941. (CXX) g++ options: -O3 -lpthread

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP32 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 34.022.168.927.888.987.797.226.135.685.724.733.323.232.471.74
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP32 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 4.02 / Avg: 4.02 / Max: 4.03Min: 2.14 / Avg: 2.16 / Max: 2.18Min: 8.9 / Avg: 8.92 / Max: 8.93Min: 7.84 / Avg: 7.88 / Max: 7.9Min: 8.97 / Avg: 8.98 / Max: 9.01Min: 7.78 / Avg: 7.79 / Max: 7.81Min: 7.19 / Avg: 7.22 / Max: 7.24Min: 6.03 / Avg: 6.13 / Max: 6.19Min: 5.67 / Avg: 5.68 / Max: 5.69Min: 5.7 / Avg: 5.72 / Max: 5.74Min: 4.68 / Avg: 4.73 / Max: 4.79Min: 3.31 / Avg: 3.32 / Max: 3.32Min: 3.2 / Avg: 3.23 / Max: 3.25Min: 2.44 / Avg: 2.47 / Max: 2.5Min: 1.71 / Avg: 1.74 / Max: 1.76

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 34.042.168.897.888.977.817.206.175.685.734.763.313.162.471.75
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 4.02 / Avg: 4.04 / Max: 4.07Min: 2.14 / Avg: 2.16 / Max: 2.2Min: 8.86 / Avg: 8.89 / Max: 8.93Min: 7.84 / Avg: 7.88 / Max: 7.9Min: 8.97 / Avg: 8.97 / Max: 8.97Min: 7.8 / Avg: 7.81 / Max: 7.83Min: 7.19 / Avg: 7.2 / Max: 7.22Min: 6.13 / Avg: 6.17 / Max: 6.19Min: 5.67 / Avg: 5.68 / Max: 5.68Min: 5.73 / Avg: 5.73 / Max: 5.73Min: 4.74 / Avg: 4.76 / Max: 4.8Min: 3.3 / Avg: 3.31 / Max: 3.32Min: 3.14 / Avg: 3.16 / Max: 3.18Min: 2.45 / Avg: 2.47 / Max: 2.48Min: 1.72 / Avg: 1.75 / Max: 1.76

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and benchmarked with the clover_bm.in input file (Problem 5). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterCloverLeafLagrangian-Eulerian HydrodynamicsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1632486480SE +/- 0.06, N = 3SE +/- 0.09, N = 3SE +/- 0.12, N = 15SE +/- 0.05, N = 4SE +/- 0.07, N = 4SE +/- 0.08, N = 4SE +/- 0.02, N = 4SE +/- 0.03, N = 3SE +/- 0.07, N = 4SE +/- 0.04, N = 3SE +/- 0.06, N = 3SE +/- 0.09, N = 3SE +/- 0.04, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 322.8336.8014.4014.7014.0313.6615.2818.6115.2919.0720.3525.2145.4648.2969.911. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
OpenBenchmarking.orgSeconds, Fewer Is BetterCloverLeafLagrangian-Eulerian HydrodynamicsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1428425670Min: 22.76 / Avg: 22.83 / Max: 22.94Min: 36.61 / Avg: 36.8 / Max: 36.9Min: 14.05 / Avg: 14.4 / Max: 15.79Min: 14.59 / Avg: 14.7 / Max: 14.82Min: 13.91 / Avg: 14.03 / Max: 14.23Min: 13.55 / Avg: 13.66 / Max: 13.9Min: 15.23 / Avg: 15.28 / Max: 15.32Min: 18.57 / Avg: 18.61 / Max: 18.67Min: 15.1 / Avg: 15.29 / Max: 15.4Min: 19 / Avg: 19.07 / Max: 19.13Min: 20.26 / Avg: 20.35 / Max: 20.46Min: 25.12 / Avg: 25.21 / Max: 25.39Min: 45.38 / Avg: 45.46 / Max: 45.52Min: 48.21 / Avg: 48.29 / Max: 48.38Min: 69.87 / Avg: 69.91 / Max: 69.961. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and Convolutional Resamping Benchmark (tConvolve) as well as some previous ASKAP benchmarks being included as well for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve OpenMP - DegriddingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P15003000450060007500SE +/- 3.57, N = 4SE +/- 30.11, N = 2SE +/- 22.42, N = 15SE +/- 47.34, N = 4SE +/- 42.67, N = 4SE +/- 40.59, N = 4SE +/- 36.86, N = 4SE +/- 0.00, N = 4SE +/- 38.66, N = 4SE +/- 0.00, N = 4SE +/- 35.55, N = 4SE +/- 37.73, N = 3SE +/- 10.27, N = 4SE +/- 10.02, N = 4SE +/- 6.46, N = 41406.934004.086679.717148.776699.076615.816302.575788.176455.405788.175726.605509.303317.933257.042616.811. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve OpenMP - DegriddingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P12002400360048006000Min: 1401.35 / Avg: 1406.93 / Max: 1416.26Min: 3973.97 / Avg: 4004.08 / Max: 4034.18Min: 6494.05 / Avg: 6679.71 / Max: 6827.08Min: 7006.74 / Avg: 7148.77 / Max: 7196.11Min: 6656.4 / Avg: 6699.07 / Max: 6827.08Min: 6494.05 / Avg: 6615.81 / Max: 6656.4Min: 6192 / Avg: 6302.57 / Max: 6339.43Min: 5788.17 / Avg: 5788.17 / Max: 5788.17Min: 6339.43 / Avg: 6455.4 / Max: 6494.05Min: 5788.17 / Avg: 5788.17 / Max: 5788.17Min: 5665.02 / Avg: 5726.6 / Max: 5788.17Min: 5433.8 / Avg: 5509.27 / Max: 5547Min: 3287.11 / Avg: 3317.93 / Max: 3328.2Min: 3247.02 / Avg: 3257.04 / Max: 3287.11Min: 2610.35 / Avg: 2616.81 / Max: 2636.21. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 30MEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150SE +/- 0.05, N = 3SE +/- 0.11, N = 3SE +/- 0.06, N = 3SE +/- 0.18, N = 3SE +/- 0.10, N = 3SE +/- 0.07, N = 3SE +/- 0.09, N = 3SE +/- 0.09, N = 3SE +/- 0.09, N = 3SE +/- 0.06, N = 3SE +/- 0.08, N = 3SE +/- 0.17, N = 3SE +/- 0.05, N = 3SE +/- 0.08, N = 3SE +/- 0.30, N = 339.8960.5022.3623.3522.8323.4325.1729.6226.5330.6233.8140.8756.5767.23113.161. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 30MEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100Min: 39.79 / Avg: 39.89 / Max: 39.94Min: 60.32 / Avg: 60.5 / Max: 60.7Min: 22.24 / Avg: 22.36 / Max: 22.44Min: 23.06 / Avg: 23.35 / Max: 23.68Min: 22.63 / Avg: 22.83 / Max: 22.97Min: 23.3 / Avg: 23.43 / Max: 23.53Min: 25.06 / Avg: 25.17 / Max: 25.35Min: 29.5 / Avg: 29.62 / Max: 29.8Min: 26.35 / Avg: 26.53 / Max: 26.65Min: 30.52 / Avg: 30.62 / Max: 30.71Min: 33.73 / Avg: 33.81 / Max: 33.97Min: 40.67 / Avg: 40.87 / Max: 41.21Min: 56.49 / Avg: 56.57 / Max: 56.65Min: 67.08 / Avg: 67.23 / Max: 67.35Min: 112.77 / Avg: 113.16 / Max: 113.751. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet QuantEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P40K80K120K160K200KSE +/- 61.58, N = 3SE +/- 13.05, N = 3SE +/- 110.87, N = 3SE +/- 118.11, N = 3SE +/- 61.64, N = 3SE +/- 392.20, N = 3SE +/- 518.49, N = 3SE +/- 72.07, N = 3SE +/- 69.37, N = 3SE +/- 44.27, N = 3SE +/- 40.77, N = 3SE +/- 15.21, N = 3SE +/- 47.75, N = 3SE +/- 28.48, N = 3SE +/- 20.79, N = 370154.5136023.033434.636468.833024.045818.850401.845118.149078.547911.762904.182490.486277.6120119.0165426.0
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet QuantEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30K60K90K120K150KMin: 70069.3 / Avg: 70154.47 / Max: 70274.1Min: 135998 / Avg: 136023 / Max: 136042Min: 33282.2 / Avg: 33434.6 / Max: 33650.3Min: 36258.5 / Avg: 36468.83 / Max: 36667.1Min: 32924.1 / Avg: 33024 / Max: 33136.5Min: 45327.1 / Avg: 45818.77 / Max: 46593.9Min: 49377.6 / Avg: 50401.83 / Max: 51054.3Min: 45013.3 / Avg: 45118.1 / Max: 45256.2Min: 48987 / Avg: 49078.53 / Max: 49214.6Min: 47856.7 / Avg: 47911.7 / Max: 47999.3Min: 62851.8 / Avg: 62904.07 / Max: 62984.4Min: 82465 / Avg: 82490.43 / Max: 82517.6Min: 86198 / Avg: 86277.57 / Max: 86363.1Min: 120062 / Avg: 120118.67 / Max: 120152Min: 165394 / Avg: 165426 / Max: 165465

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet FloatEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30K60K90K120K150KSE +/- 13.80, N = 3SE +/- 599.00, N = 3SE +/- 34.57, N = 3SE +/- 280.28, N = 3SE +/- 187.50, N = 3SE +/- 322.42, N = 3SE +/- 194.69, N = 3SE +/- 86.41, N = 3SE +/- 28.71, N = 3SE +/- 76.82, N = 3SE +/- 193.63, N = 3SE +/- 21.22, N = 3SE +/- 63.38, N = 3SE +/- 107.22, N = 3SE +/- 22.85, N = 368637.6133236.032117.935037.432085.545436.648507.443977.948084.747083.461419.980320.584723.5117060.0160713.0
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet FloatEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30K60K90K120K150KMin: 68610.2 / Avg: 68637.6 / Max: 68654.2Min: 132636 / Avg: 133236 / Max: 134434Min: 32050.1 / Avg: 32117.93 / Max: 32163.4Min: 34603.7 / Avg: 35037.37 / Max: 35561.8Min: 31772.4 / Avg: 32085.53 / Max: 32420.8Min: 44937.4 / Avg: 45436.6 / Max: 46039.7Min: 48205.1 / Avg: 48507.43 / Max: 48871.1Min: 43815.4 / Avg: 43977.87 / Max: 44110.1Min: 48044.8 / Avg: 48084.67 / Max: 48140.4Min: 46946.3 / Avg: 47083.37 / Max: 47212Min: 61061.3 / Avg: 61419.9 / Max: 61725.8Min: 80284 / Avg: 80320.5 / Max: 80357.5Min: 84636.8 / Avg: 84723.47 / Max: 84846.9Min: 116941 / Avg: 117060 / Max: 117274Min: 160685 / Avg: 160712.67 / Max: 160758

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V4EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P700K1400K2100K2800K3500KSE +/- 192.21, N = 3SE +/- 227.03, N = 3SE +/- 5733.03, N = 3SE +/- 6069.08, N = 3SE +/- 3136.75, N = 3SE +/- 1781.71, N = 3SE +/- 907.34, N = 3SE +/- 425.74, N = 3SE +/- 143.80, N = 3SE +/- 487.90, N = 3SE +/- 1233.46, N = 3SE +/- 272.21, N = 3SE +/- 565.25, N = 3SE +/- 436.36, N = 3SE +/- 1129.29, N = 3149918328722277236997642697008339386919788289432291034367100634713664101742890182015324937473472940
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V4EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P600K1200K1800K2400K3000KMin: 1498800 / Avg: 1499183.33 / Max: 1499400Min: 2871920 / Avg: 2872226.67 / Max: 2872670Min: 713298 / Avg: 723698.67 / Max: 733079Min: 752152 / Avg: 764269 / Max: 770948Min: 695393 / Avg: 700833.33 / Max: 706259Min: 935608 / Avg: 938691 / Max: 941780Min: 977917 / Avg: 978828.33 / Max: 980643Min: 942378 / Avg: 943229 / Max: 943679Min: 1034080 / Avg: 1034366.67 / Max: 1034530Min: 1005560 / Avg: 1006346.67 / Max: 1007240Min: 1365020 / Avg: 1366410 / Max: 1368870Min: 1742350 / Avg: 1742890 / Max: 1743220Min: 1819400 / Avg: 1820153.33 / Max: 1821260Min: 2492910 / Avg: 2493746.67 / Max: 2494380Min: 3471470 / Avg: 3472940 / Max: 3475160

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2020.3Water BenchmarkEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1.08862.17723.26584.35445.443SE +/- 0.005, N = 3SE +/- 0.003, N = 3SE +/- 0.023, N = 3SE +/- 0.003, N = 3SE +/- 0.008, N = 3SE +/- 0.013, N = 3SE +/- 0.004, N = 3SE +/- 0.002, N = 3SE +/- 0.008, N = 3SE +/- 0.011, N = 3SE +/- 0.001, N = 3SE +/- 0.004, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.003, N = 31.9951.3454.8384.3734.5414.1173.8633.3233.2673.1282.7412.0141.6771.4090.9851. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm
OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2020.3Water BenchmarkEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 1.99 / Avg: 2 / Max: 2.01Min: 1.34 / Avg: 1.35 / Max: 1.35Min: 4.81 / Avg: 4.84 / Max: 4.88Min: 4.37 / Avg: 4.37 / Max: 4.38Min: 4.53 / Avg: 4.54 / Max: 4.55Min: 4.09 / Avg: 4.12 / Max: 4.13Min: 3.86 / Avg: 3.86 / Max: 3.87Min: 3.32 / Avg: 3.32 / Max: 3.33Min: 3.25 / Avg: 3.27 / Max: 3.28Min: 3.11 / Avg: 3.13 / Max: 3.14Min: 2.74 / Avg: 2.74 / Max: 2.74Min: 2.01 / Avg: 2.01 / Max: 2.02Min: 1.68 / Avg: 1.68 / Max: 1.68Min: 1.41 / Avg: 1.41 / Max: 1.41Min: 0.98 / Avg: 0.99 / Max: 0.991. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception ResNet V2EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P700K1400K2100K2800K3500KSE +/- 1646.01, N = 3SE +/- 314.32, N = 3SE +/- 4362.30, N = 3SE +/- 6234.46, N = 7SE +/- 4181.60, N = 3SE +/- 837.49, N = 3SE +/- 1818.31, N = 3SE +/- 149.07, N = 3SE +/- 386.91, N = 3SE +/- 466.83, N = 3SE +/- 1259.14, N = 3SE +/- 366.38, N = 3SE +/- 1027.95, N = 3SE +/- 518.11, N = 3SE +/- 326.24, N = 31346153259297064315070260765527983073089351784060492186589788412067901567070163916722429503136920
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception ResNet V2EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P500K1000K1500K2000K2500KMin: 1343550 / Avg: 1346153.33 / Max: 1349200Min: 2592570 / Avg: 2592970 / Max: 2593590Min: 636650 / Avg: 643150 / Max: 651440Min: 690630 / Avg: 702607 / Max: 737249Min: 649807 / Avg: 655278.67 / Max: 663492Min: 829366 / Avg: 830730.33 / Max: 832254Min: 889933 / Avg: 893516.67 / Max: 895844Min: 840370 / Avg: 840604 / Max: 840881Min: 921416 / Avg: 921864.67 / Max: 922635Min: 897006 / Avg: 897884 / Max: 898598Min: 1205130 / Avg: 1206790 / Max: 1209260Min: 1566650 / Avg: 1567070 / Max: 1567800Min: 1637410 / Avg: 1639166.67 / Max: 1640970Min: 2241960 / Avg: 2242950 / Max: 2243710Min: 3136350 / Avg: 3136920 / Max: 3137480

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.00628, N = 9SE +/- 0.00176, N = 9SE +/- 0.00429, N = 9SE +/- 0.00637, N = 9SE +/- 0.00834, N = 9SE +/- 0.00625, N = 9SE +/- 0.00157, N = 9SE +/- 0.00215, N = 9SE +/- 0.00104, N = 9SE +/- 0.00338, N = 9SE +/- 0.00168, N = 9SE +/- 0.00304, N = 9SE +/- 0.00324, N = 9SE +/- 0.00336, N = 9SE +/- 0.00235, N = 93.028975.582601.396601.494461.431241.539151.554941.896241.988091.948452.383063.346633.452904.527586.79445MIN: 2.92MIN: 5.52MIN: 1.27MIN: 1.36MIN: 1.26MIN: 1.37MIN: 1.42MIN: 1.77MIN: 1.83MIN: 1.82MIN: 2.33MIN: 3.25MIN: 3.34MIN: 4.47MIN: 6.751. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 2.99 / Avg: 3.03 / Max: 3.05Min: 5.57 / Avg: 5.58 / Max: 5.59Min: 1.38 / Avg: 1.4 / Max: 1.42Min: 1.48 / Avg: 1.49 / Max: 1.53Min: 1.41 / Avg: 1.43 / Max: 1.48Min: 1.52 / Avg: 1.54 / Max: 1.57Min: 1.55 / Avg: 1.55 / Max: 1.56Min: 1.89 / Avg: 1.9 / Max: 1.91Min: 1.98 / Avg: 1.99 / Max: 1.99Min: 1.94 / Avg: 1.95 / Max: 1.97Min: 2.38 / Avg: 2.38 / Max: 2.39Min: 3.33 / Avg: 3.35 / Max: 3.36Min: 3.44 / Avg: 3.45 / Max: 3.47Min: 4.51 / Avg: 4.53 / Max: 4.54Min: 6.79 / Avg: 6.79 / Max: 6.811. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: 20k AtomsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P612182430SE +/- 0.053, N = 3SE +/- 0.004, N = 3SE +/- 0.042, N = 3SE +/- 0.024, N = 3SE +/- 0.056, N = 3SE +/- 0.029, N = 3SE +/- 0.112, N = 3SE +/- 0.062, N = 3SE +/- 0.016, N = 3SE +/- 0.059, N = 3SE +/- 0.010, N = 3SE +/- 0.009, N = 3SE +/- 0.021, N = 3SE +/- 0.023, N = 3SE +/- 0.007, N = 311.7576.70526.28124.81825.20622.44222.04618.15617.52517.61414.90710.6029.8897.7375.4061. (CXX) g++ options: -O3 -pthread -lm
OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: 20k AtomsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P612182430Min: 11.66 / Avg: 11.76 / Max: 11.85Min: 6.7 / Avg: 6.7 / Max: 6.71Min: 26.22 / Avg: 26.28 / Max: 26.36Min: 24.78 / Avg: 24.82 / Max: 24.86Min: 25.11 / Avg: 25.21 / Max: 25.31Min: 22.4 / Avg: 22.44 / Max: 22.5Min: 21.83 / Avg: 22.05 / Max: 22.19Min: 18.03 / Avg: 18.16 / Max: 18.22Min: 17.5 / Avg: 17.53 / Max: 17.55Min: 17.51 / Avg: 17.61 / Max: 17.71Min: 14.89 / Avg: 14.91 / Max: 14.93Min: 10.58 / Avg: 10.6 / Max: 10.62Min: 9.85 / Avg: 9.89 / Max: 9.92Min: 7.71 / Avg: 7.74 / Max: 7.78Min: 5.39 / Avg: 5.41 / Max: 5.411. (CXX) g++ options: -O3 -pthread -lm

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.02104, N = 3SE +/- 0.00611, N = 3SE +/- 0.00691, N = 3SE +/- 0.00645, N = 3SE +/- 0.00798, N = 3SE +/- 0.00608, N = 3SE +/- 0.00201, N = 3SE +/- 0.00531, N = 3SE +/- 0.00309, N = 3SE +/- 0.00464, N = 3SE +/- 0.02587, N = 3SE +/- 0.00946, N = 3SE +/- 0.02527, N = 3SE +/- 0.00535, N = 3SE +/- 0.02294, N = 35.496057.237631.790651.888421.798741.930421.997072.069832.205432.151945.922046.034976.063335.253318.67154MIN: 5.32MIN: 7.01MIN: 1.71MIN: 1.8MIN: 1.69MIN: 1.82MIN: 1.91MIN: 2MIN: 2.05MIN: 2.02MIN: 5.79MIN: 5.92MIN: 5.87MIN: 5.07MIN: 8.411. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 5.46 / Avg: 5.5 / Max: 5.54Min: 7.23 / Avg: 7.24 / Max: 7.25Min: 1.78 / Avg: 1.79 / Max: 1.8Min: 1.88 / Avg: 1.89 / Max: 1.9Min: 1.78 / Avg: 1.8 / Max: 1.81Min: 1.92 / Avg: 1.93 / Max: 1.94Min: 1.99 / Avg: 2 / Max: 2Min: 2.06 / Avg: 2.07 / Max: 2.08Min: 2.2 / Avg: 2.21 / Max: 2.21Min: 2.15 / Avg: 2.15 / Max: 2.16Min: 5.89 / Avg: 5.92 / Max: 5.97Min: 6.02 / Avg: 6.03 / Max: 6.04Min: 6.02 / Avg: 6.06 / Max: 6.1Min: 5.25 / Avg: 5.25 / Max: 5.26Min: 8.63 / Avg: 8.67 / Max: 8.711. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.80631.61262.41893.22524.0315SE +/- 0.001361, N = 4SE +/- 0.000640, N = 4SE +/- 0.005312, N = 4SE +/- 0.002127, N = 4SE +/- 0.000537, N = 4SE +/- 0.001659, N = 4SE +/- 0.000788, N = 4SE +/- 0.005244, N = 4SE +/- 0.001948, N = 4SE +/- 0.003967, N = 4SE +/- 0.009261, N = 15SE +/- 0.001219, N = 4SE +/- 0.001269, N = 4SE +/- 0.001298, N = 4SE +/- 0.001594, N = 41.5650303.0129200.7505190.8006430.7808030.8631460.8680241.0158201.1048601.0739901.2959901.8417801.9130502.4937203.583630MIN: 1.54MIN: 2.73MIN: 0.71MIN: 0.76MIN: 0.7MIN: 0.77MIN: 0.81MIN: 0.98MIN: 0.99MIN: 0.98MIN: 1.25MIN: 1.82MIN: 1.87MIN: 2.45MIN: 3.541. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 1.56 / Avg: 1.57 / Max: 1.57Min: 3.01 / Avg: 3.01 / Max: 3.01Min: 0.74 / Avg: 0.75 / Max: 0.77Min: 0.79 / Avg: 0.8 / Max: 0.8Min: 0.78 / Avg: 0.78 / Max: 0.78Min: 0.86 / Avg: 0.86 / Max: 0.87Min: 0.87 / Avg: 0.87 / Max: 0.87Min: 1 / Avg: 1.02 / Max: 1.03Min: 1.1 / Avg: 1.1 / Max: 1.11Min: 1.07 / Avg: 1.07 / Max: 1.08Min: 1.27 / Avg: 1.3 / Max: 1.37Min: 1.84 / Avg: 1.84 / Max: 1.85Min: 1.91 / Avg: 1.91 / Max: 1.92Min: 2.49 / Avg: 2.49 / Max: 2.5Min: 3.58 / Avg: 3.58 / Max: 3.591. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 1.2.0Scene: MemorialEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P714212835SE +/- 0.02, N = 4SE +/- 0.01, N = 3SE +/- 0.02, N = 6SE +/- 0.03, N = 6SE +/- 0.04, N = 6SE +/- 0.05, N = 6SE +/- 0.02, N = 6SE +/- 0.01, N = 5SE +/- 0.01, N = 5SE +/- 0.01, N = 5SE +/- 0.01, N = 4SE +/- 0.12, N = 6SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 314.247.9430.2527.6729.4126.6826.1321.5520.1420.2517.1712.1811.349.036.55
OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 1.2.0Scene: MemorialEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P714212835Min: 14.2 / Avg: 14.24 / Max: 14.27Min: 7.92 / Avg: 7.94 / Max: 7.96Min: 30.17 / Avg: 30.25 / Max: 30.32Min: 27.55 / Avg: 27.67 / Max: 27.73Min: 29.29 / Avg: 29.41 / Max: 29.54Min: 26.53 / Avg: 26.68 / Max: 26.79Min: 26.05 / Avg: 26.13 / Max: 26.17Min: 21.52 / Avg: 21.55 / Max: 21.6Min: 20.11 / Avg: 20.14 / Max: 20.18Min: 20.23 / Avg: 20.25 / Max: 20.28Min: 17.16 / Avg: 17.17 / Max: 17.19Min: 11.56 / Avg: 12.18 / Max: 12.32Min: 11.34 / Avg: 11.34 / Max: 11.36Min: 9.02 / Avg: 9.03 / Max: 9.03Min: 6.54 / Avg: 6.55 / Max: 6.55

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTungsten Renderer 0.2.2Scene: HairEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P816243240SE +/- 0.03705, N = 4SE +/- 0.01111, N = 3SE +/- 0.03180, N = 6SE +/- 0.02278, N = 6SE +/- 0.02077, N = 6SE +/- 0.03455, N = 5SE +/- 0.04134, N = 5SE +/- 0.02895, N = 5SE +/- 0.03618, N = 5SE +/- 0.00983, N = 5SE +/- 0.02622, N = 4SE +/- 0.02745, N = 3SE +/- 0.03890, N = 3SE +/- 0.03918, N = 3SE +/- 0.03115, N = 314.1327026.516107.092977.507067.550938.568288.719539.9284910.8131010.5740012.2623016.6175017.6542021.9958032.538801. (CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterTungsten Renderer 0.2.2Scene: HairEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P714212835Min: 14.08 / Avg: 14.13 / Max: 14.23Min: 26.5 / Avg: 26.52 / Max: 26.54Min: 6.99 / Avg: 7.09 / Max: 7.21Min: 7.46 / Avg: 7.51 / Max: 7.59Min: 7.5 / Avg: 7.55 / Max: 7.63Min: 8.43 / Avg: 8.57 / Max: 8.61Min: 8.63 / Avg: 8.72 / Max: 8.86Min: 9.86 / Avg: 9.93 / Max: 10.01Min: 10.71 / Avg: 10.81 / Max: 10.91Min: 10.56 / Avg: 10.57 / Max: 10.61Min: 12.21 / Avg: 12.26 / Max: 12.33Min: 16.56 / Avg: 16.62 / Max: 16.65Min: 17.6 / Avg: 17.65 / Max: 17.73Min: 21.93 / Avg: 22 / Max: 22.07Min: 32.48 / Avg: 32.54 / Max: 32.591. (CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lpthread -ldl

ebizzy

This is a test of ebizzy, a program to generate workloads resembling web server workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRecords/s, More Is Betterebizzy 0.3EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P600K1200K1800K2400K3000KSE +/- 7117.75, N = 3SE +/- 6904.71, N = 7SE +/- 24194.67, N = 15SE +/- 25794.01, N = 15SE +/- 34594.95, N = 12SE +/- 37169.94, N = 3SE +/- 24033.50, N = 3SE +/- 8453.72, N = 3SE +/- 15669.57, N = 3SE +/- 10663.27, N = 3SE +/- 15686.47, N = 3SE +/- 8057.83, N = 3SE +/- 8956.45, N = 3SE +/- 906.56, N = 3SE +/- 4933.88, N = 31475280776880285378327017672762647271938824565112136850197732519478361721854120846610219908839656232721. (CC) gcc options: -pthread -lpthread -O3 -march=native
OpenBenchmarking.orgRecords/s, More Is Betterebizzy 0.3EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P500K1000K1500K2000K2500KMin: 1461618 / Avg: 1475280 / Max: 1485575Min: 747575 / Avg: 776880.43 / Max: 796470Min: 2672217 / Avg: 2853782.87 / Max: 2972568Min: 2510210 / Avg: 2701767.07 / Max: 2845248Min: 2508914 / Avg: 2762646.92 / Max: 2926207Min: 2645052 / Avg: 2719388 / Max: 2757214Min: 2420178 / Avg: 2456510.67 / Max: 2501931Min: 2127912 / Avg: 2136850 / Max: 2153748Min: 1951917 / Avg: 1977324.67 / Max: 2005917Min: 1926759 / Avg: 1947835.67 / Max: 1961193Min: 1691610 / Avg: 1721854 / Max: 1744199Min: 1192406 / Avg: 1208466.33 / Max: 1217652Min: 1009030 / Avg: 1021990 / Max: 1039179Min: 882270 / Avg: 883965 / Max: 885370Min: 616705 / Avg: 623272 / Max: 6329341. (CC) gcc options: -pthread -lpthread -O3 -march=native

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterApache Cassandra 3.11.4Test: WritesEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50K100K150K200K250KSE +/- 1803.14, N = 3SE +/- 224.51, N = 3SE +/- 910.59, N = 3SE +/- 1230.73, N = 3SE +/- 2776.27, N = 3SE +/- 1690.31, N = 3SE +/- 1126.54, N = 3SE +/- 2207.46, N = 15SE +/- 2221.55, N = 3SE +/- 2790.65, N = 3SE +/- 2693.44, N = 3SE +/- 1793.27, N = 3SE +/- 936.51, N = 3SE +/- 388.59, N = 3SE +/- 244.34, N = 3144123581182151962275642193802155762308712365242113612337302007051352601364429315551734
OpenBenchmarking.orgOp/s, More Is BetterApache Cassandra 3.11.4Test: WritesEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P40K80K120K160K200KMin: 141196 / Avg: 144123.33 / Max: 147411Min: 57842 / Avg: 58118.33 / Max: 58563Min: 213537 / Avg: 215196.33 / Max: 216676Min: 225103 / Avg: 227564.33 / Max: 228816Min: 214454 / Avg: 219380 / Max: 224062Min: 212562 / Avg: 215576 / Max: 218409Min: 229505 / Avg: 230871.33 / Max: 233106Min: 227162 / Avg: 236524.33 / Max: 255532Min: 208632 / Avg: 211361 / Max: 215762Min: 228668 / Avg: 233730 / Max: 238297Min: 197409 / Avg: 200705 / Max: 206043Min: 131775 / Avg: 135259.67 / Max: 137737Min: 134572 / Avg: 136442 / Max: 137469Min: 92380 / Avg: 93154.67 / Max: 93596Min: 51248 / Avg: 51734.33 / Max: 52019

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and Convolutional Resamping Benchmark (tConvolve) as well as some previous ASKAP benchmarks being included as well for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve OpenMP - GriddingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KSE +/- 0.00, N = 4SE +/- 35.30, N = 4SE +/- 211.93, N = 15SE +/- 0.00, N = 4SE +/- 0.00, N = 4SE +/- 81.97, N = 4SE +/- 77.48, N = 4SE +/- 67.99, N = 4SE +/- 63.04, N = 4SE +/- 79.54, N = 4SE +/- 72.46, N = 4SE +/- 57.68, N = 5SE +/- 0.00, N = 4SE +/- 17.38, N = 4SE +/- 7.61, N = 42113.144035.119177.929509.149509.149427.178454.706658.488257.476617.906379.885503.594294.454004.082840.131. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve OpenMP - GriddingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P17003400510068008500Min: 2113.14 / Avg: 2113.14 / Max: 2113.14Min: 3973.97 / Avg: 4035.11 / Max: 4096.25Min: 7006.74 / Avg: 9177.92 / Max: 9861.33Min: 9509.14 / Avg: 9509.14 / Max: 9509.14Min: 9509.14 / Avg: 9509.14 / Max: 9509.14Min: 9181.24 / Avg: 9427.17 / Max: 9509.14Min: 8320.5 / Avg: 8454.7 / Max: 8588.9Min: 6494.05 / Avg: 6658.48 / Max: 6827.08Min: 8068.36 / Avg: 8257.47 / Max: 8320.5Min: 6494.05 / Avg: 6617.9 / Max: 6827.08Min: 6192 / Avg: 6379.88 / Max: 6494.05Min: 5325.12 / Avg: 5503.59 / Max: 5665.02Min: 4294.45 / Avg: 4294.45 / Max: 4294.45Min: 3973.97 / Avg: 4004.08 / Max: 4034.18Min: 2832.51 / Avg: 2840.13 / Max: 2862.971. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: SqueezeNetEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50K100K150K200K250KSE +/- 132.22, N = 3SE +/- 44.74, N = 3SE +/- 8.41, N = 3SE +/- 39.50, N = 3SE +/- 174.48, N = 3SE +/- 111.10, N = 3SE +/- 74.34, N = 3SE +/- 77.46, N = 3SE +/- 39.58, N = 3SE +/- 126.18, N = 3SE +/- 6.99, N = 3SE +/- 25.85, N = 3SE +/- 38.26, N = 3SE +/- 82.39, N = 3SE +/- 202.36, N = 3107391.0201258.057509.961679.656195.565104.468430.470622.977264.775080.093404.1123947.0129293.0173807.0242480.0
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: SqueezeNetEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P40K80K120K160K200KMin: 107127 / Avg: 107390.67 / Max: 107540Min: 201180 / Avg: 201257.67 / Max: 201335Min: 57494.2 / Avg: 57509.87 / Max: 57523Min: 61602.1 / Avg: 61679.57 / Max: 61731.7Min: 55903.9 / Avg: 56195.5 / Max: 56507.3Min: 64904.3 / Avg: 65104.43 / Max: 65288.1Min: 68340.5 / Avg: 68430.4 / Max: 68577.9Min: 70510.4 / Avg: 70622.9 / Max: 70771.4Min: 77193.3 / Avg: 77264.7 / Max: 77330Min: 74862.2 / Avg: 75080 / Max: 75299.3Min: 93391.7 / Avg: 93404.1 / Max: 93415.9Min: 123895 / Avg: 123946.67 / Max: 123974Min: 129222 / Avg: 129293.33 / Max: 129353Min: 173701 / Avg: 173806.67 / Max: 173969Min: 242075 / Avg: 242479.67 / Max: 242688

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Disney MaterialEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300118.48225.0961.6867.7467.3370.4973.3381.8389.0187.51105.69149.34151.86195.08265.54

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Socket ActivityEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4K8K12K16K20KSE +/- 17.97, N = 3SE +/- 13.57, N = 3SE +/- 21.66, N = 3SE +/- 57.10, N = 3SE +/- 64.33, N = 3SE +/- 88.09, N = 3SE +/- 24.65, N = 3SE +/- 32.93, N = 3SE +/- 125.44, N = 9SE +/- 30.09, N = 3SE +/- 16.45, N = 3SE +/- 6.53, N = 3SE +/- 16.90, N = 3SE +/- 8.24, N = 3SE +/- 24.34, N = 38415.195638.5320955.9319577.0319990.7518451.9118192.3217266.2115568.2416396.5713570.839499.339134.727332.824965.961. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Socket ActivityEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4K8K12K16K20KMin: 8394.92 / Avg: 8415.19 / Max: 8451.03Min: 5624.62 / Avg: 5638.53 / Max: 5665.66Min: 20921.85 / Avg: 20955.93 / Max: 20996.13Min: 19482.9 / Avg: 19577.03 / Max: 19680.11Min: 19911.11 / Avg: 19990.75 / Max: 20118.07Min: 18293.63 / Avg: 18451.91 / Max: 18598.07Min: 18145.03 / Avg: 18192.32 / Max: 18228.03Min: 17206.64 / Avg: 17266.21 / Max: 17320.33Min: 14571.02 / Avg: 15568.24 / Max: 15743.07Min: 16363.28 / Avg: 16396.57 / Max: 16456.63Min: 13539.78 / Avg: 13570.83 / Max: 13595.76Min: 9488.78 / Avg: 9499.33 / Max: 9511.28Min: 9104 / Avg: 9134.72 / Max: 9162.29Min: 7316.76 / Avg: 7332.82 / Max: 7344.03Min: 4920.56 / Avg: 4965.96 / Max: 5003.891. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.45410.90821.36231.81642.2705SE +/- 0.001140, N = 4SE +/- 0.001580, N = 4SE +/- 0.002953, N = 4SE +/- 0.002887, N = 4SE +/- 0.005359, N = 5SE +/- 0.003528, N = 4SE +/- 0.004962, N = 15SE +/- 0.000738, N = 4SE +/- 0.005528, N = 6SE +/- 0.000902, N = 4SE +/- 0.000660, N = 4SE +/- 0.000863, N = 4SE +/- 0.000258, N = 4SE +/- 0.000447, N = 4SE +/- 0.005714, N = 40.6604191.1601700.5369160.5874430.5170230.4842190.5339610.5005210.5534440.5376360.5923080.8022720.8757491.0934402.018380MIN: 0.62MIN: 1.13MIN: 0.49MIN: 0.54MIN: 0.47MIN: 0.46MIN: 0.49MIN: 0.48MIN: 0.49MIN: 0.51MIN: 0.56MIN: 0.72MIN: 0.74MIN: 0.96MIN: 1.951. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 0.66 / Avg: 0.66 / Max: 0.66Min: 1.16 / Avg: 1.16 / Max: 1.16Min: 0.53 / Avg: 0.54 / Max: 0.54Min: 0.58 / Avg: 0.59 / Max: 0.6Min: 0.5 / Avg: 0.52 / Max: 0.53Min: 0.48 / Avg: 0.48 / Max: 0.49Min: 0.51 / Avg: 0.53 / Max: 0.57Min: 0.5 / Avg: 0.5 / Max: 0.5Min: 0.54 / Avg: 0.55 / Max: 0.58Min: 0.54 / Avg: 0.54 / Max: 0.54Min: 0.59 / Avg: 0.59 / Max: 0.59Min: 0.8 / Avg: 0.8 / Max: 0.8Min: 0.88 / Avg: 0.88 / Max: 0.88Min: 1.09 / Avg: 1.09 / Max: 1.09Min: 2 / Avg: 2.02 / Max: 2.031. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNWChem 7.0.2Input: C240 BuckyballEPYC 7F52EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 72722K4K6K8K10K5684.62123.12247.02220.72648.52716.73653.93653.63678.84709.56676.47056.08844.11. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lcomex -lm -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin ProteinEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P510152025SE +/- 0.028, N = 11SE +/- 0.003, N = 9SE +/- 0.158, N = 15SE +/- 0.257, N = 15SE +/- 0.301, N = 15SE +/- 0.291, N = 15SE +/- 0.286, N = 15SE +/- 0.252, N = 15SE +/- 0.178, N = 15SE +/- 0.286, N = 15SE +/- 0.089, N = 15SE +/- 0.024, N = 10SE +/- 0.026, N = 10SE +/- 0.008, N = 9SE +/- 0.002, N = 811.5216.47021.32019.99721.76319.32818.50616.56416.26515.68514.03810.3749.6857.5825.2371. (CXX) g++ options: -O3 -pthread -lm
OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin ProteinEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P510152025Min: 11.4 / Avg: 11.52 / Max: 11.65Min: 6.45 / Avg: 6.47 / Max: 6.49Min: 20.58 / Avg: 21.32 / Max: 22.8Min: 19.07 / Avg: 20 / Max: 22.21Min: 20.67 / Avg: 21.76 / Max: 24.47Min: 18.26 / Avg: 19.33 / Max: 21.71Min: 17.37 / Avg: 18.51 / Max: 20.7Min: 14.32 / Avg: 16.56 / Max: 18.32Min: 14.97 / Avg: 16.27 / Max: 17.08Min: 13.34 / Avg: 15.69 / Max: 17.35Min: 13.39 / Avg: 14.04 / Max: 14.46Min: 10.19 / Avg: 10.37 / Max: 10.46Min: 9.59 / Avg: 9.69 / Max: 9.82Min: 7.55 / Avg: 7.58 / Max: 7.61Min: 5.23 / Avg: 5.24 / Max: 5.251. (CXX) g++ options: -O3 -pthread -lm

Basis Universal

Basis Universal is a GPU texture codoec. This test times how long it takes to convert sRGB PNGs into Basis Univeral assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 3EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1632486480SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 333.8060.4017.9818.8718.6620.6920.8023.9725.8825.2829.1639.7541.2451.7773.431. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 3EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1428425670Min: 33.79 / Avg: 33.8 / Max: 33.81Min: 60.39 / Avg: 60.4 / Max: 60.41Min: 17.94 / Avg: 17.98 / Max: 18.03Min: 18.86 / Avg: 18.87 / Max: 18.88Min: 18.65 / Avg: 18.66 / Max: 18.66Min: 20.68 / Avg: 20.69 / Max: 20.7Min: 20.77 / Avg: 20.8 / Max: 20.82Min: 23.96 / Avg: 23.97 / Max: 23.98Min: 25.87 / Avg: 25.88 / Max: 25.91Min: 25.27 / Avg: 25.28 / Max: 25.31Min: 29.15 / Avg: 29.16 / Max: 29.17Min: 39.75 / Avg: 39.75 / Max: 39.76Min: 41.2 / Avg: 41.24 / Max: 41.3Min: 51.75 / Avg: 51.77 / Max: 51.78Min: 73.43 / Avg: 73.43 / Max: 73.441. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 19EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150SE +/- 0.79, N = 3SE +/- 0.03, N = 3SE +/- 0.38, N = 3SE +/- 1.22, N = 3SE +/- 0.55, N = 3SE +/- 0.12, N = 3SE +/- 0.15, N = 3SE +/- 0.15, N = 3SE +/- 0.00, N = 3SE +/- 0.09, N = 3SE +/- 0.09, N = 3SE +/- 0.03, N = 3SE +/- 0.12, N = 3SE +/- 0.06, N = 3SE +/- 0.06, N = 376.147.0148.8147.5149.6130.1125.7114.4122.7114.396.973.866.455.237.31. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 19EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150Min: 74.9 / Avg: 76.13 / Max: 77.6Min: 47 / Avg: 47.03 / Max: 47.1Min: 148.2 / Avg: 148.83 / Max: 149.5Min: 145.1 / Avg: 147.53 / Max: 148.8Min: 148.7 / Avg: 149.6 / Max: 150.6Min: 129.9 / Avg: 130.1 / Max: 130.3Min: 125.4 / Avg: 125.67 / Max: 125.9Min: 114.2 / Avg: 114.4 / Max: 114.7Min: 122.7 / Avg: 122.7 / Max: 122.7Min: 114.2 / Avg: 114.33 / Max: 114.5Min: 96.8 / Avg: 96.93 / Max: 97.1Min: 73.8 / Avg: 73.83 / Max: 73.9Min: 66.2 / Avg: 66.4 / Max: 66.6Min: 55.1 / Avg: 55.2 / Max: 55.3Min: 37.2 / Avg: 37.3 / Max: 37.41. (CC) gcc options: -O3 -pthread -lz -llzma

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated vian neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.26Backend: EigenEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P6001200180024003000SE +/- 17.43, N = 5SE +/- 16.37, N = 9SE +/- 28.78, N = 9SE +/- 26.03, N = 3SE +/- 29.49, N = 9SE +/- 37.70, N = 9SE +/- 31.31, N = 9SE +/- 9.84, N = 3SE +/- 20.90, N = 3SE +/- 14.82, N = 9SE +/- 5.90, N = 3SE +/- 12.27, N = 9SE +/- 11.49, N = 9SE +/- 11.79, N = 9SE +/- 8.88, N = 316991104265626862408231119271617176915391439123310519256811. (CXX) g++ options: -flto -pthread
OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.26Backend: EigenEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5001000150020002500Min: 1639 / Avg: 1699.2 / Max: 1747Min: 1024 / Avg: 1103.67 / Max: 1176Min: 2523 / Avg: 2656.22 / Max: 2818Min: 2634 / Avg: 2685.67 / Max: 2717Min: 2310 / Avg: 2408 / Max: 2551Min: 2112 / Avg: 2311.33 / Max: 2527Min: 1773 / Avg: 1926.89 / Max: 2050Min: 1597 / Avg: 1616.67 / Max: 1627Min: 1729 / Avg: 1769.33 / Max: 1799Min: 1454 / Avg: 1538.78 / Max: 1593Min: 1432 / Avg: 1439.33 / Max: 1451Min: 1194 / Avg: 1232.78 / Max: 1292Min: 1008 / Avg: 1051.22 / Max: 1107Min: 870 / Avg: 924.56 / Max: 993Min: 663 / Avg: 680.67 / Max: 6911. (CXX) g++ options: -flto -pthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 8 - Input: 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100SE +/- 0.21, N = 5SE +/- 0.03, N = 4SE +/- 0.41, N = 5SE +/- 0.27, N = 5SE +/- 0.32, N = 5SE +/- 0.16, N = 5SE +/- 0.20, N = 5SE +/- 0.18, N = 6SE +/- 0.17, N = 6SE +/- 0.25, N = 6SE +/- 0.03, N = 6SE +/- 0.17, N = 5SE +/- 0.09, N = 5SE +/- 0.11, N = 4SE +/- 0.07, N = 442.8426.1385.0079.8283.4063.2162.4059.9456.5357.3254.8737.0435.3830.3821.761. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 8 - Input: 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1632486480Min: 42.28 / Avg: 42.84 / Max: 43.39Min: 26.07 / Avg: 26.13 / Max: 26.22Min: 84.01 / Avg: 85 / Max: 86.21Min: 79.13 / Avg: 79.82 / Max: 80.5Min: 82.2 / Avg: 83.4 / Max: 84.06Min: 62.71 / Avg: 63.21 / Max: 63.66Min: 61.9 / Avg: 62.4 / Max: 62.94Min: 59.31 / Avg: 59.94 / Max: 60.53Min: 56.08 / Avg: 56.53 / Max: 57.07Min: 56.32 / Avg: 57.32 / Max: 57.97Min: 54.76 / Avg: 54.87 / Max: 54.97Min: 36.48 / Avg: 37.04 / Max: 37.5Min: 35.07 / Avg: 35.38 / Max: 35.57Min: 30.14 / Avg: 30.38 / Max: 30.6Min: 21.59 / Avg: 21.76 / Max: 21.911. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.4Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100SE +/- 0.43, N = 3SE +/- 0.46, N = 3SE +/- 0.25, N = 7SE +/- 0.26, N = 7SE +/- 0.28, N = 6SE +/- 0.30, N = 6SE +/- 0.30, N = 6SE +/- 0.41, N = 4SE +/- 0.42, N = 4SE +/- 0.42, N = 4SE +/- 0.47, N = 4SE +/- 0.65, N = 4SE +/- 0.75, N = 3SE +/- 0.59, N = 3SE +/- 0.42, N = 346.9179.6626.7228.1327.9830.0830.2134.5536.8536.3941.5053.4057.9169.61101.09
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.4Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100Min: 46.21 / Avg: 46.91 / Max: 47.7Min: 79.06 / Avg: 79.66 / Max: 80.57Min: 26.43 / Avg: 26.72 / Max: 28.22Min: 27.79 / Avg: 28.13 / Max: 29.66Min: 27.59 / Avg: 27.98 / Max: 29.37Min: 29.67 / Avg: 30.08 / Max: 31.58Min: 29.87 / Avg: 30.21 / Max: 31.71Min: 34.07 / Avg: 34.55 / Max: 35.78Min: 36.38 / Avg: 36.85 / Max: 38.1Min: 35.93 / Avg: 36.39 / Max: 37.65Min: 40.71 / Avg: 41.5 / Max: 42.85Min: 52.39 / Avg: 53.4 / Max: 55.3Min: 56.64 / Avg: 57.91 / Max: 59.25Min: 68.52 / Avg: 69.61 / Max: 70.53Min: 100.61 / Avg: 101.09 / Max: 101.92

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MPlayer Compilation 1.4Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1020304050SE +/- 0.07, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 4SE +/- 0.04, N = 4SE +/- 0.03, N = 4SE +/- 0.04, N = 4SE +/- 0.03, N = 4SE +/- 0.07, N = 4SE +/- 0.02, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 320.5735.2512.3312.9912.9613.7113.8215.6916.3816.4618.5924.0626.2731.1445.59
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MPlayer Compilation 1.4Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P918273645Min: 20.43 / Avg: 20.57 / Max: 20.67Min: 35.21 / Avg: 35.25 / Max: 35.28Min: 12.28 / Avg: 12.33 / Max: 12.38Min: 12.91 / Avg: 12.99 / Max: 13.09Min: 12.87 / Avg: 12.96 / Max: 13.01Min: 13.64 / Avg: 13.71 / Max: 13.81Min: 13.75 / Avg: 13.82 / Max: 13.87Min: 15.6 / Avg: 15.69 / Max: 15.9Min: 16.35 / Avg: 16.38 / Max: 16.42Min: 16.42 / Avg: 16.46 / Max: 16.52Min: 18.5 / Avg: 18.59 / Max: 18.68Min: 24.05 / Avg: 24.06 / Max: 24.07Min: 26.23 / Avg: 26.27 / Max: 26.29Min: 31.1 / Avg: 31.14 / Max: 31.21Min: 45.54 / Avg: 45.59 / Max: 45.62

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2021Input: water_GMX50_bareEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 7282EPYC 72721.09622.19243.28864.38485.481SE +/- 0.006, N = 3SE +/- 0.001, N = 3SE +/- 0.002, N = 3SE +/- 0.001, N = 3SE +/- 0.005, N = 3SE +/- 0.006, N = 3SE +/- 0.004, N = 3SE +/- 0.007, N = 3SE +/- 0.002, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 32.3131.3484.8724.3714.5334.1063.3173.2563.1401.6751.4071. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2021Input: water_GMX50_bareEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 7282EPYC 7272246810Min: 2.31 / Avg: 2.31 / Max: 2.33Min: 1.35 / Avg: 1.35 / Max: 1.35Min: 4.87 / Avg: 4.87 / Max: 4.88Min: 4.37 / Avg: 4.37 / Max: 4.37Min: 4.52 / Avg: 4.53 / Max: 4.54Min: 4.1 / Avg: 4.11 / Max: 4.12Min: 3.31 / Avg: 3.32 / Max: 3.33Min: 3.24 / Avg: 3.26 / Max: 3.27Min: 3.14 / Avg: 3.14 / Max: 3.14Min: 1.67 / Avg: 1.68 / Max: 1.68Min: 1.41 / Avg: 1.41 / Max: 1.411. (CXX) g++ options: -O3 -pthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated vian neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.26Backend: BLASEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P6001200180024003000SE +/- 31.35, N = 9SE +/- 22.57, N = 9SE +/- 29.59, N = 9SE +/- 14.38, N = 3SE +/- 41.44, N = 9SE +/- 26.91, N = 3SE +/- 27.18, N = 3SE +/- 20.09, N = 3SE +/- 22.27, N = 9SE +/- 23.71, N = 9SE +/- 20.26, N = 3SE +/- 13.76, N = 9SE +/- 3.71, N = 3SE +/- 7.54, N = 3SE +/- 4.36, N = 317581052262326992376220319691666173515591521125310429467471. (CXX) g++ options: -flto -pthread
OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.26Backend: BLASEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5001000150020002500Min: 1652 / Avg: 1758 / Max: 1943Min: 958 / Avg: 1052.44 / Max: 1142Min: 2470 / Avg: 2622.56 / Max: 2756Min: 2683 / Avg: 2699.33 / Max: 2728Min: 2146 / Avg: 2376 / Max: 2558Min: 2155 / Avg: 2203.33 / Max: 2248Min: 1916 / Avg: 1969 / Max: 2006Min: 1626 / Avg: 1665.67 / Max: 1691Min: 1624 / Avg: 1734.78 / Max: 1815Min: 1445 / Avg: 1559.33 / Max: 1649Min: 1492 / Avg: 1521 / Max: 1560Min: 1190 / Avg: 1253.33 / Max: 1313Min: 1035 / Avg: 1042.33 / Max: 1047Min: 931 / Avg: 945.67 / Max: 956Min: 739 / Avg: 747 / Max: 7541. (CXX) g++ options: -flto -pthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9001800270036004500SE +/- 6.65, N = 3SE +/- 0.89, N = 3SE +/- 18.38, N = 9SE +/- 3.86, N = 3SE +/- 11.35, N = 3SE +/- 2.03, N = 3SE +/- 1.92, N = 3SE +/- 2.43, N = 3SE +/- 5.23, N = 3SE +/- 2.18, N = 3SE +/- 1.08, N = 3SE +/- 0.38, N = 3SE +/- 2.25, N = 3SE +/- 3.17, N = 3SE +/- 1.97, N = 32013.243345.672263.322296.532230.181210.041352.622721.872230.352743.711674.662410.232834.093057.084347.66MIN: 1992.69MIN: 3327MIN: 2218.68MIN: 2270.01MIN: 2194.78MIN: 1190.59MIN: 1333.53MIN: 2708.11MIN: 2212.66MIN: 2727.14MIN: 1660.31MIN: 2395.31MIN: 2799.34MIN: 3044.36MIN: 4308.621. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P8001600240032004000Min: 2003.33 / Avg: 2013.24 / Max: 2025.88Min: 3344.19 / Avg: 3345.67 / Max: 3347.26Min: 2231.56 / Avg: 2263.32 / Max: 2407.42Min: 2288.81 / Avg: 2296.53 / Max: 2300.61Min: 2212.51 / Avg: 2230.18 / Max: 2251.36Min: 1206.71 / Avg: 1210.04 / Max: 1213.71Min: 1350.2 / Avg: 1352.62 / Max: 1356.42Min: 2718.51 / Avg: 2721.87 / Max: 2726.6Min: 2221.2 / Avg: 2230.35 / Max: 2239.31Min: 2741.09 / Avg: 2743.71 / Max: 2748.05Min: 1672.87 / Avg: 1674.66 / Max: 1676.6Min: 2409.5 / Avg: 2410.23 / Max: 2410.77Min: 2829.71 / Avg: 2834.09 / Max: 2837.17Min: 3051.79 / Avg: 3057.08 / Max: 3062.75Min: 4343.96 / Avg: 4347.66 / Max: 4350.691. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9001800270036004500SE +/- 0.63, N = 3SE +/- 2.55, N = 3SE +/- 6.65, N = 3SE +/- 9.75, N = 3SE +/- 0.34, N = 3SE +/- 3.66, N = 3SE +/- 1.28, N = 3SE +/- 2.51, N = 3SE +/- 5.96, N = 3SE +/- 4.79, N = 3SE +/- 0.94, N = 3SE +/- 3.71, N = 3SE +/- 1.77, N = 3SE +/- 0.81, N = 3SE +/- 4.35, N = 31997.393348.062251.632314.362203.161211.541350.132716.542221.372740.521673.312407.932835.353059.194348.88MIN: 1989.1MIN: 3331.21MIN: 2223.61MIN: 2276.78MIN: 2183.71MIN: 1187.15MIN: 1333.75MIN: 2705.25MIN: 2198.48MIN: 2722.51MIN: 1660.31MIN: 2392.45MIN: 2800.15MIN: 3051.11MIN: 4311.941. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P8001600240032004000Min: 1996.27 / Avg: 1997.39 / Max: 1998.44Min: 3342.99 / Avg: 3348.06 / Max: 3351.09Min: 2238.42 / Avg: 2251.63 / Max: 2259.51Min: 2296.67 / Avg: 2314.36 / Max: 2330.29Min: 2202.51 / Avg: 2203.16 / Max: 2203.65Min: 1207.18 / Avg: 1211.54 / Max: 1218.81Min: 1347.63 / Avg: 1350.13 / Max: 1351.85Min: 2713.68 / Avg: 2716.54 / Max: 2721.54Min: 2213.13 / Avg: 2221.37 / Max: 2232.96Min: 2731.74 / Avg: 2740.52 / Max: 2748.21Min: 1671.47 / Avg: 1673.31 / Max: 1674.53Min: 2401.58 / Avg: 2407.93 / Max: 2414.43Min: 2831.82 / Avg: 2835.35 / Max: 2837.39Min: 3057.83 / Avg: 3059.19 / Max: 3060.62Min: 4341.28 / Avg: 4348.88 / Max: 4356.331. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9001800270036004500SE +/- 2.05, N = 3SE +/- 0.86, N = 3SE +/- 22.03, N = 6SE +/- 5.75, N = 3SE +/- 8.27, N = 3SE +/- 0.57, N = 3SE +/- 1.32, N = 3SE +/- 1.42, N = 3SE +/- 2.63, N = 3SE +/- 5.60, N = 3SE +/- 0.54, N = 3SE +/- 2.99, N = 3SE +/- 1.43, N = 3SE +/- 1.59, N = 3SE +/- 5.10, N = 32013.753345.102283.912306.332221.331213.581350.552718.682214.792738.421672.242412.792836.683056.054345.12MIN: 1995.38MIN: 3322.56MIN: 2224.95MIN: 2274.33MIN: 2189.19MIN: 1190.99MIN: 1331.05MIN: 2705.41MIN: 2191.84MIN: 2722.76MIN: 1658.48MIN: 2397.68MIN: 2798.53MIN: 3041.35MIN: 4298.141. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P8001600240032004000Min: 2010.58 / Avg: 2013.75 / Max: 2017.58Min: 3343.83 / Avg: 3345.1 / Max: 3346.73Min: 2242.08 / Avg: 2283.91 / Max: 2381.6Min: 2294.83 / Avg: 2306.33 / Max: 2312.48Min: 2206.05 / Avg: 2221.33 / Max: 2234.44Min: 1212.83 / Avg: 1213.58 / Max: 1214.69Min: 1347.94 / Avg: 1350.55 / Max: 1352.2Min: 2716.43 / Avg: 2718.68 / Max: 2721.31Min: 2209.9 / Avg: 2214.79 / Max: 2218.93Min: 2731.04 / Avg: 2738.42 / Max: 2749.41Min: 1671.4 / Avg: 1672.24 / Max: 1673.25Min: 2408.58 / Avg: 2412.79 / Max: 2418.57Min: 2834.78 / Avg: 2836.68 / Max: 2839.49Min: 3053.73 / Avg: 3056.05 / Max: 3059.09Min: 4339.98 / Avg: 4345.12 / Max: 4355.321. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Random Fill SyncEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P70K140K210K280K350KSE +/- 436.15, N = 3SE +/- 175.05, N = 3SE +/- 23.47, N = 3SE +/- 166.32, N = 3SE +/- 735.95, N = 3SE +/- 768.33, N = 3SE +/- 749.32, N = 3SE +/- 344.84, N = 3SE +/- 622.13, N = 3SE +/- 219.22, N = 3SE +/- 607.97, N = 3SE +/- 203.74, N = 3SE +/- 223.01, N = 3SE +/- 204.78, N = 3172585100843331439287115333412331791331300258759253689259353216427163248163535131583943711. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread
OpenBenchmarking.orgOp/s, More Is BetterFacebook RocksDB 6.3.6Test: Random Fill SyncEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60K120K180K240K300KMin: 171773 / Avg: 172585 / Max: 173267Min: 100631 / Avg: 100842.67 / Max: 101190Min: 331409 / Avg: 331438.67 / Max: 331485Min: 286867 / Avg: 287115 / Max: 287431Min: 332027 / Avg: 333412 / Max: 334536Min: 330767 / Avg: 331790.67 / Max: 333295Min: 329808 / Avg: 331300 / Max: 332168Min: 258263 / Avg: 258759 / Max: 259422Min: 252478 / Avg: 253689 / Max: 254542Min: 259071 / Avg: 259353.33 / Max: 259785Min: 215402 / Avg: 216427 / Max: 217506Min: 163163 / Avg: 163535 / Max: 163865Min: 131216 / Avg: 131583 / Max: 131986Min: 93963 / Avg: 94371 / Max: 946061. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterFFTE 7.0N=256, 3D Complex FFT RoutineEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30K60K90K120K150KSE +/- 308.97, N = 9SE +/- 37.94, N = 10SE +/- 1018.84, N = 15SE +/- 1033.87, N = 15SE +/- 1366.04, N = 15SE +/- 641.51, N = 9SE +/- 375.36, N = 8SE +/- 143.74, N = 9SE +/- 154.09, N = 8SE +/- 160.95, N = 8SE +/- 134.09, N = 10SE +/- 80.44, N = 9SE +/- 71.59, N = 9SE +/- 22.37, N = 11SE +/- 139.55, N = 971767.9654800.99148358.57145868.29155375.98149967.74135838.12144583.87135786.94136276.50111563.8484316.2376796.0461838.5244242.981. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp
OpenBenchmarking.orgMFLOPS, More Is BetterFFTE 7.0N=256, 3D Complex FFT RoutineEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30K60K90K120K150KMin: 70448.73 / Avg: 71767.96 / Max: 73371.09Min: 54543.54 / Avg: 54800.99 / Max: 54955.54Min: 145615.22 / Avg: 148358.57 / Max: 161715.14Min: 143378.84 / Avg: 145868.29 / Max: 159924Min: 151616.85 / Avg: 155375.98 / Max: 173605.93Min: 148789.93 / Avg: 149967.74 / Max: 154980.74Min: 133822.4 / Avg: 135838.12 / Max: 137193.41Min: 143957.47 / Avg: 144583.87 / Max: 145193.47Min: 135207.86 / Avg: 135786.94 / Max: 136435.92Min: 135695.57 / Avg: 136276.5 / Max: 136949.63Min: 110544.87 / Avg: 111563.84 / Max: 112037.3Min: 83921.86 / Avg: 84316.23 / Max: 84643.19Min: 76508.11 / Avg: 76796.04 / Max: 77062.73Min: 61730.3 / Avg: 61838.52 / Max: 61950.09Min: 43253.52 / Avg: 44242.98 / Max: 44735.981. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: MediumEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P48121620SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 310.665.7316.5014.9416.4415.9415.5014.9413.6513.9711.229.158.616.494.721. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: MediumEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P48121620Min: 10.64 / Avg: 10.66 / Max: 10.69Min: 5.71 / Avg: 5.73 / Max: 5.74Min: 16.44 / Avg: 16.5 / Max: 16.57Min: 14.93 / Avg: 14.94 / Max: 14.97Min: 16.42 / Avg: 16.44 / Max: 16.48Min: 15.92 / Avg: 15.94 / Max: 15.97Min: 15.49 / Avg: 15.5 / Max: 15.51Min: 14.93 / Avg: 14.94 / Max: 14.95Min: 13.61 / Avg: 13.65 / Max: 13.67Min: 13.95 / Avg: 13.97 / Max: 13.98Min: 11.18 / Avg: 11.22 / Max: 11.24Min: 9.13 / Avg: 9.15 / Max: 9.17Min: 8.58 / Avg: 8.61 / Max: 8.63Min: 6.48 / Avg: 6.49 / Max: 6.49Min: 4.72 / Avg: 4.72 / Max: 4.731. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGPAW 20.1Input: Carbon NanotubeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300SE +/- 0.87, N = 3SE +/- 0.20, N = 3SE +/- 0.19, N = 3SE +/- 0.15, N = 3SE +/- 0.05, N = 3SE +/- 0.26, N = 3SE +/- 0.11, N = 3SE +/- 0.27, N = 3SE +/- 0.02, N = 3SE +/- 0.20, N = 3SE +/- 0.46, N = 3SE +/- 0.19, N = 3SE +/- 0.09, N = 3SE +/- 0.37, N = 3SE +/- 0.67, N = 3165.41200.4178.0181.0678.3979.9886.74103.3093.24105.67116.41139.15177.91205.89271.161. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi
OpenBenchmarking.orgSeconds, Fewer Is BetterGPAW 20.1Input: Carbon NanotubeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50100150200250Min: 163.66 / Avg: 165.41 / Max: 166.3Min: 200.15 / Avg: 200.41 / Max: 200.81Min: 77.68 / Avg: 78 / Max: 78.33Min: 80.78 / Avg: 81.06 / Max: 81.29Min: 78.29 / Avg: 78.39 / Max: 78.44Min: 79.56 / Avg: 79.98 / Max: 80.47Min: 86.54 / Avg: 86.74 / Max: 86.91Min: 102.87 / Avg: 103.3 / Max: 103.79Min: 93.21 / Avg: 93.24 / Max: 93.28Min: 105.35 / Avg: 105.67 / Max: 106.04Min: 115.76 / Avg: 116.41 / Max: 117.3Min: 138.85 / Avg: 139.15 / Max: 139.52Min: 177.8 / Avg: 177.9 / Max: 178.09Min: 205.17 / Avg: 205.89 / Max: 206.37Min: 270.41 / Avg: 271.16 / Max: 272.491. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.00511, N = 9SE +/- 0.00118, N = 9SE +/- 0.02450, N = 15SE +/- 0.03302, N = 15SE +/- 0.02912, N = 15SE +/- 0.01930, N = 15SE +/- 0.00950, N = 9SE +/- 0.00582, N = 9SE +/- 0.00881, N = 9SE +/- 0.00414, N = 9SE +/- 0.00636, N = 9SE +/- 0.00474, N = 9SE +/- 0.00482, N = 9SE +/- 0.00724, N = 9SE +/- 0.00735, N = 93.927376.463982.607642.820152.523632.403802.553353.441183.601303.688333.346334.726085.083286.291358.27184MIN: 3.81MIN: 6.38MIN: 2.34MIN: 2.49MIN: 2.25MIN: 2.23MIN: 2.41MIN: 3.03MIN: 3.22MIN: 3.26MIN: 3.15MIN: 4.52MIN: 4.73MIN: 6.07MIN: 81. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 3.89 / Avg: 3.93 / Max: 3.94Min: 6.46 / Avg: 6.46 / Max: 6.47Min: 2.5 / Avg: 2.61 / Max: 2.79Min: 2.71 / Avg: 2.82 / Max: 3.12Min: 2.43 / Avg: 2.52 / Max: 2.85Min: 2.29 / Avg: 2.4 / Max: 2.52Min: 2.52 / Avg: 2.55 / Max: 2.62Min: 3.42 / Avg: 3.44 / Max: 3.47Min: 3.57 / Avg: 3.6 / Max: 3.64Min: 3.67 / Avg: 3.69 / Max: 3.7Min: 3.33 / Avg: 3.35 / Max: 3.39Min: 4.71 / Avg: 4.73 / Max: 4.74Min: 5.06 / Avg: 5.08 / Max: 5.11Min: 6.26 / Avg: 6.29 / Max: 6.32Min: 8.23 / Avg: 8.27 / Max: 8.291. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: VGG19 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P714212835SE +/- 0.12, N = 3SE +/- 0.06, N = 3SE +/- 0.10, N = 3SE +/- 0.05, N = 3SE +/- 0.26, N = 3SE +/- 0.12, N = 3SE +/- 0.02, N = 3SE +/- 0.14, N = 3SE +/- 0.04, N = 3SE +/- 0.21, N = 3SE +/- 0.12, N = 3SE +/- 0.06, N = 3SE +/- 0.06, N = 3SE +/- 0.12, N = 3SE +/- 0.06, N = 320.0012.0629.4926.4931.8429.3727.5027.6725.3526.5123.3417.3916.6113.679.30
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: VGG19 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P714212835Min: 19.79 / Avg: 20 / Max: 20.19Min: 11.98 / Avg: 12.06 / Max: 12.18Min: 29.33 / Avg: 29.49 / Max: 29.66Min: 26.42 / Avg: 26.49 / Max: 26.59Min: 31.48 / Avg: 31.84 / Max: 32.35Min: 29.23 / Avg: 29.37 / Max: 29.61Min: 27.47 / Avg: 27.5 / Max: 27.53Min: 27.53 / Avg: 27.67 / Max: 27.96Min: 25.31 / Avg: 25.35 / Max: 25.42Min: 26.11 / Avg: 26.51 / Max: 26.82Min: 23.11 / Avg: 23.34 / Max: 23.52Min: 17.26 / Avg: 17.39 / Max: 17.46Min: 16.52 / Avg: 16.61 / Max: 16.72Min: 13.44 / Avg: 13.67 / Max: 13.83Min: 9.23 / Avg: 9.3 / Max: 9.41

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 10.0Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P160320480640800SE +/- 2.67, N = 3SE +/- 0.89, N = 3SE +/- 0.11, N = 3SE +/- 1.73, N = 3SE +/- 0.89, N = 3SE +/- 1.29, N = 3SE +/- 1.44, N = 3SE +/- 1.12, N = 3SE +/- 1.56, N = 3SE +/- 2.29, N = 3SE +/- 3.16, N = 9SE +/- 5.04, N = 4SE +/- 3.68, N = 3SE +/- 0.47, N = 3SE +/- 2.05, N = 3343.39581.30224.90234.21233.51243.92245.40275.35280.78289.36312.70404.33440.00529.62763.56
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 10.0Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P130260390520650Min: 340.62 / Avg: 343.39 / Max: 348.73Min: 579.68 / Avg: 581.3 / Max: 582.76Min: 224.67 / Avg: 224.9 / Max: 225.02Min: 230.9 / Avg: 234.2 / Max: 236.76Min: 232.12 / Avg: 233.51 / Max: 235.17Min: 241.53 / Avg: 243.91 / Max: 245.95Min: 242.68 / Avg: 245.4 / Max: 247.61Min: 273.43 / Avg: 275.35 / Max: 277.3Min: 278.05 / Avg: 280.78 / Max: 283.44Min: 286.12 / Avg: 289.36 / Max: 293.79Min: 297.71 / Avg: 312.7 / Max: 321.97Min: 391.86 / Avg: 404.33 / Max: 414.08Min: 432.97 / Avg: 440 / Max: 445.42Min: 528.7 / Avg: 529.62 / Max: 530.26Min: 759.57 / Avg: 763.56 / Max: 766.35

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: EmilyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P110220330440550226.80381.37146.84154.44153.46153.98154.73161.64172.91170.48194.53260.98275.81333.25487.37

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read WriteEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P15K30K45K60K75KSE +/- 337.73, N = 15SE +/- 63.04, N = 3SE +/- 283.18, N = 3SE +/- 486.09, N = 3SE +/- 229.52, N = 3SE +/- 539.78, N = 3SE +/- 709.54, N = 3SE +/- 676.58, N = 3SE +/- 174.06, N = 3SE +/- 406.89, N = 15SE +/- 387.48, N = 3SE +/- 447.90, N = 12SE +/- 309.38, N = 15SE +/- 91.21, N = 3SE +/- 18.98, N = 33715224137706905569960949588895694552325476155179447662384583858631896213341. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read WriteEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P12K24K36K48K60KMin: 35001.56 / Avg: 37152.01 / Max: 39986.56Min: 24025.48 / Avg: 24137.33 / Max: 24243.66Min: 70359.3 / Avg: 70689.79 / Max: 71253.35Min: 54882.69 / Avg: 55699.33 / Max: 56564.47Min: 60636.54 / Avg: 60949.34 / Max: 61396.7Min: 57853.09 / Avg: 58888.92 / Max: 59670.2Min: 55594.88 / Avg: 56944.6 / Max: 57999Min: 51433.97 / Avg: 52325.08 / Max: 53652.51Min: 47424.2 / Avg: 47614.55 / Max: 47962.14Min: 49652.94 / Avg: 51793.61 / Max: 54520.15Min: 47265.69 / Avg: 47661.51 / Max: 48436.42Min: 36956.38 / Avg: 38457.92 / Max: 41415.74Min: 36728.54 / Avg: 38586.13 / Max: 40215.78Min: 31714.82 / Avg: 31895.88 / Max: 32005.63Min: 21301.63 / Avg: 21333.63 / Max: 21367.311. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average LatencyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1.05532.11063.16594.22125.2765SE +/- 0.024, N = 15SE +/- 0.011, N = 3SE +/- 0.005, N = 3SE +/- 0.016, N = 3SE +/- 0.006, N = 3SE +/- 0.016, N = 3SE +/- 0.022, N = 3SE +/- 0.024, N = 3SE +/- 0.008, N = 3SE +/- 0.015, N = 15SE +/- 0.017, N = 3SE +/- 0.029, N = 12SE +/- 0.021, N = 15SE +/- 0.009, N = 3SE +/- 0.004, N = 32.6974.1451.4171.7981.6431.7011.7591.9152.1031.9352.1012.6062.5963.1374.6901. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average LatencyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 2.5 / Avg: 2.7 / Max: 2.86Min: 4.13 / Avg: 4.14 / Max: 4.16Min: 1.41 / Avg: 1.42 / Max: 1.42Min: 1.77 / Avg: 1.8 / Max: 1.82Min: 1.63 / Avg: 1.64 / Max: 1.65Min: 1.68 / Avg: 1.7 / Max: 1.73Min: 1.73 / Avg: 1.76 / Max: 1.8Min: 1.87 / Avg: 1.91 / Max: 1.95Min: 2.09 / Avg: 2.1 / Max: 2.11Min: 1.84 / Avg: 1.94 / Max: 2.02Min: 2.07 / Avg: 2.1 / Max: 2.12Min: 2.42 / Avg: 2.61 / Max: 2.71Min: 2.49 / Avg: 2.6 / Max: 2.73Min: 3.13 / Avg: 3.14 / Max: 3.16Min: 4.68 / Avg: 4.69 / Max: 4.71. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP CFD SolverEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P612182430SE +/- 0.022, N = 4SE +/- 0.078, N = 3SE +/- 0.061, N = 6SE +/- 0.066, N = 8SE +/- 0.051, N = 15SE +/- 0.073, N = 5SE +/- 0.083, N = 6SE +/- 0.005, N = 5SE +/- 0.057, N = 5SE +/- 0.030, N = 5SE +/- 0.007, N = 4SE +/- 0.022, N = 4SE +/- 0.009, N = 4SE +/- 0.065, N = 3SE +/- 0.030, N = 313.53022.2187.6837.8197.7478.4348.4059.8069.8589.89211.57715.44514.79218.13425.3741. (CXX) g++ options: -O2 -lOpenCL
OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP CFD SolverEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P612182430Min: 13.49 / Avg: 13.53 / Max: 13.59Min: 22.12 / Avg: 22.22 / Max: 22.37Min: 7.4 / Avg: 7.68 / Max: 7.86Min: 7.45 / Avg: 7.82 / Max: 8.05Min: 7.37 / Avg: 7.75 / Max: 8.02Min: 8.16 / Avg: 8.43 / Max: 8.59Min: 8.14 / Avg: 8.41 / Max: 8.71Min: 9.79 / Avg: 9.81 / Max: 9.82Min: 9.76 / Avg: 9.86 / Max: 10.08Min: 9.81 / Avg: 9.89 / Max: 9.96Min: 11.57 / Avg: 11.58 / Max: 11.6Min: 15.41 / Avg: 15.44 / Max: 15.51Min: 14.76 / Avg: 14.79 / Max: 14.8Min: 18.02 / Avg: 18.13 / Max: 18.24Min: 25.33 / Avg: 25.37 / Max: 25.431. (CXX) g++ options: -O2 -lOpenCL

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: VGG16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P918273645SE +/- 0.09, N = 3SE +/- 0.07, N = 3SE +/- 0.16, N = 3SE +/- 0.28, N = 3SE +/- 0.46, N = 4SE +/- 0.45, N = 3SE +/- 0.16, N = 3SE +/- 0.37, N = 3SE +/- 0.11, N = 3SE +/- 0.19, N = 3SE +/- 0.12, N = 3SE +/- 0.10, N = 3SE +/- 0.05, N = 3SE +/- 0.22, N = 3SE +/- 0.15, N = 324.2914.6734.9931.8937.3835.2532.6833.3330.3632.0527.8121.2420.3416.7911.41
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: VGG16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P816243240Min: 24.18 / Avg: 24.29 / Max: 24.46Min: 14.6 / Avg: 14.67 / Max: 14.82Min: 34.74 / Avg: 34.99 / Max: 35.29Min: 31.34 / Avg: 31.89 / Max: 32.28Min: 36.33 / Avg: 37.38 / Max: 38.57Min: 34.41 / Avg: 35.25 / Max: 35.93Min: 32.38 / Avg: 32.68 / Max: 32.94Min: 32.65 / Avg: 33.33 / Max: 33.94Min: 30.21 / Avg: 30.36 / Max: 30.58Min: 31.69 / Avg: 32.05 / Max: 32.3Min: 27.57 / Avg: 27.81 / Max: 27.96Min: 21.13 / Avg: 21.24 / Max: 21.44Min: 20.24 / Avg: 20.34 / Max: 20.43Min: 16.5 / Avg: 16.79 / Max: 17.22Min: 11.13 / Avg: 11.41 / Max: 11.63

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5001000150020002500SE +/- 1.61, N = 3SE +/- 0.51, N = 3SE +/- 1.15, N = 3SE +/- 3.54, N = 3SE +/- 1.17, N = 3SE +/- 1.28, N = 3SE +/- 2.70, N = 3SE +/- 4.31, N = 3SE +/- 1.51, N = 3SE +/- 5.72, N = 3SE +/- 0.91, N = 3SE +/- 0.19, N = 3SE +/- 3.12, N = 3SE +/- 2.69, N = 3SE +/- 2.26, N = 3997.851745.54811.00874.44793.20732.81787.43890.48813.12902.26972.931229.691882.642022.452397.56MIN: 987.15MIN: 1733.89MIN: 788.26MIN: 847.07MIN: 776.66MIN: 712.9MIN: 769.99MIN: 867.03MIN: 799.88MIN: 882.8MIN: 960.02MIN: 1221.64MIN: 1853.21MIN: 2012.67MIN: 2367.091. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P400800120016002000Min: 995.25 / Avg: 997.85 / Max: 1000.79Min: 1744.59 / Avg: 1745.54 / Max: 1746.31Min: 808.75 / Avg: 811 / Max: 812.5Min: 868.68 / Avg: 874.44 / Max: 880.89Min: 790.85 / Avg: 793.2 / Max: 794.46Min: 731.06 / Avg: 732.81 / Max: 735.31Min: 782.55 / Avg: 787.43 / Max: 791.88Min: 882.05 / Avg: 890.48 / Max: 896.25Min: 810.1 / Avg: 813.12 / Max: 814.7Min: 890.83 / Avg: 902.26 / Max: 908.5Min: 971.59 / Avg: 972.93 / Max: 974.66Min: 1229.46 / Avg: 1229.69 / Max: 1230.06Min: 1876.41 / Avg: 1882.64 / Max: 1885.98Min: 2018.73 / Avg: 2022.45 / Max: 2027.68Min: 2394.02 / Avg: 2397.56 / Max: 2401.751. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5001000150020002500SE +/- 3.97, N = 3SE +/- 0.95, N = 3SE +/- 5.32, N = 3SE +/- 2.81, N = 3SE +/- 1.88, N = 3SE +/- 3.28, N = 3SE +/- 2.09, N = 3SE +/- 1.35, N = 3SE +/- 1.32, N = 3SE +/- 2.58, N = 3SE +/- 1.57, N = 3SE +/- 0.54, N = 3SE +/- 5.33, N = 3SE +/- 4.61, N = 3SE +/- 3.61, N = 3998.671742.79808.62878.20793.23734.40784.82886.88812.99919.61974.121227.561886.302032.882397.22MIN: 987.09MIN: 1728.82MIN: 785.04MIN: 851.72MIN: 777.03MIN: 716.82MIN: 771.05MIN: 865.5MIN: 799.51MIN: 906.32MIN: 962.55MIN: 1218.44MIN: 1851.55MIN: 2016.92MIN: 2370.411. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P400800120016002000Min: 991.29 / Avg: 998.67 / Max: 1004.91Min: 1741.68 / Avg: 1742.79 / Max: 1744.69Min: 801.38 / Avg: 808.62 / Max: 818.99Min: 873.24 / Avg: 878.2 / Max: 882.97Min: 791.29 / Avg: 793.23 / Max: 796.99Min: 728.04 / Avg: 734.4 / Max: 738.94Min: 780.67 / Avg: 784.82 / Max: 787.4Min: 884.18 / Avg: 886.88 / Max: 888.33Min: 810.37 / Avg: 812.99 / Max: 814.64Min: 916.98 / Avg: 919.61 / Max: 924.77Min: 971.07 / Avg: 974.12 / Max: 976.32Min: 1226.72 / Avg: 1227.56 / Max: 1228.57Min: 1876.6 / Avg: 1886.3 / Max: 1894.99Min: 2023.66 / Avg: 2032.88 / Max: 2037.71Min: 2392.08 / Avg: 2397.22 / Max: 2404.171. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5001000150020002500SE +/- 0.99, N = 3SE +/- 2.16, N = 3SE +/- 5.74, N = 3SE +/- 3.99, N = 3SE +/- 0.96, N = 3SE +/- 2.51, N = 3SE +/- 1.66, N = 3SE +/- 1.82, N = 3SE +/- 2.11, N = 3SE +/- 5.94, N = 3SE +/- 0.65, N = 3SE +/- 1.03, N = 3SE +/- 4.32, N = 3SE +/- 1.28, N = 3SE +/- 4.09, N = 3999.141744.75810.18875.71792.55735.82782.65888.54814.03917.61972.771228.501884.702029.282398.98MIN: 991.16MIN: 1731.94MIN: 783.06MIN: 846.67MIN: 773.96MIN: 715.63MIN: 769.36MIN: 872.96MIN: 801.22MIN: 892.04MIN: 962.66MIN: 1219.05MIN: 1853.9MIN: 2020.38MIN: 2367.551. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P400800120016002000Min: 997.98 / Avg: 999.14 / Max: 1001.1Min: 1742.54 / Avg: 1744.75 / Max: 1749.07Min: 800.07 / Avg: 810.18 / Max: 819.95Min: 869.12 / Avg: 875.71 / Max: 882.9Min: 791.43 / Avg: 792.55 / Max: 794.46Min: 731.42 / Avg: 735.82 / Max: 740.12Min: 779.51 / Avg: 782.65 / Max: 785.17Min: 885.37 / Avg: 888.54 / Max: 891.69Min: 810.97 / Avg: 814.03 / Max: 818.07Min: 907.1 / Avg: 917.61 / Max: 927.67Min: 971.9 / Avg: 972.77 / Max: 974.05Min: 1226.65 / Avg: 1228.5 / Max: 1230.2Min: 1878.26 / Avg: 1884.7 / Max: 1892.91Min: 2026.92 / Avg: 2029.28 / Max: 2031.33Min: 2391.8 / Avg: 2398.98 / Max: 2405.961. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1.06622.13243.19864.26485.331SE +/- 0.01053, N = 4SE +/- 0.00382, N = 4SE +/- 0.01153, N = 10SE +/- 0.00178, N = 4SE +/- 0.01406, N = 4SE +/- 0.00906, N = 4SE +/- 0.00439, N = 4SE +/- 0.00341, N = 4SE +/- 0.00891, N = 4SE +/- 0.00185, N = 4SE +/- 0.00090, N = 4SE +/- 0.00144, N = 4SE +/- 0.00488, N = 4SE +/- 0.00316, N = 4SE +/- 0.00331, N = 41.980353.560201.515281.703601.489811.467751.599381.472221.615021.593001.829512.410823.446683.809554.73866MIN: 1.86MIN: 3.44MIN: 1.35MIN: 1.48MIN: 1.33MIN: 1.39MIN: 1.51MIN: 1.41MIN: 1.5MIN: 1.51MIN: 1.74MIN: 2.2MIN: 2.92MIN: 3.44MIN: 4.551. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 1.96 / Avg: 1.98 / Max: 2.01Min: 3.55 / Avg: 3.56 / Max: 3.57Min: 1.46 / Avg: 1.52 / Max: 1.57Min: 1.7 / Avg: 1.7 / Max: 1.71Min: 1.45 / Avg: 1.49 / Max: 1.52Min: 1.45 / Avg: 1.47 / Max: 1.49Min: 1.59 / Avg: 1.6 / Max: 1.61Min: 1.46 / Avg: 1.47 / Max: 1.48Min: 1.6 / Avg: 1.62 / Max: 1.64Min: 1.59 / Avg: 1.59 / Max: 1.6Min: 1.83 / Avg: 1.83 / Max: 1.83Min: 2.41 / Avg: 2.41 / Max: 2.41Min: 3.43 / Avg: 3.45 / Max: 3.45Min: 3.8 / Avg: 3.81 / Max: 3.81Min: 4.73 / Avg: 4.74 / Max: 4.741. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read WriteEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P12K24K36K48K60KSE +/- 92.29, N = 3SE +/- 32.87, N = 3SE +/- 858.45, N = 15SE +/- 365.68, N = 3SE +/- 734.82, N = 3SE +/- 308.59, N = 3SE +/- 725.42, N = 3SE +/- 320.52, N = 13SE +/- 489.37, N = 5SE +/- 468.05, N = 3SE +/- 510.25, N = 4SE +/- 306.40, N = 6SE +/- 44.96, N = 3SE +/- 98.97, N = 3SE +/- 5.29, N = 32819320541571025218154318503815144446655443434567141273303133211525170177091. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read WriteEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P10K20K30K40K50KMin: 28026.73 / Avg: 28193.29 / Max: 28345.44Min: 20498.68 / Avg: 20541.36 / Max: 20606Min: 53269.54 / Avg: 57102 / Max: 63379.05Min: 51458.66 / Avg: 52181.25 / Max: 52640.41Min: 53010.6 / Avg: 54318.48 / Max: 55552.89Min: 49982.78 / Avg: 50381.39 / Max: 50988.76Min: 50134.59 / Avg: 51444.21 / Max: 52639.74Min: 46038.91 / Avg: 46655.09 / Max: 50394.39Min: 42733.56 / Avg: 44342.61 / Max: 45416.75Min: 45084.17 / Avg: 45670.73 / Max: 46595.8Min: 40219.85 / Avg: 41272.74 / Max: 42149.61Min: 29631.74 / Avg: 30313.35 / Max: 31648.97Min: 32033.65 / Avg: 32115.45 / Max: 32188.7Min: 25002.83 / Avg: 25170.22 / Max: 25345.42Min: 17702.56 / Avg: 17709.27 / Max: 17719.711. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average LatencyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P48121620SE +/- 0.029, N = 3SE +/- 0.020, N = 3SE +/- 0.064, N = 15SE +/- 0.034, N = 3SE +/- 0.063, N = 3SE +/- 0.031, N = 3SE +/- 0.069, N = 3SE +/- 0.034, N = 13SE +/- 0.063, N = 5SE +/- 0.056, N = 3SE +/- 0.075, N = 4SE +/- 0.082, N = 6SE +/- 0.011, N = 3SE +/- 0.040, N = 3SE +/- 0.004, N = 38.87612.1764.4034.8044.6174.9724.8725.3695.6485.4816.0678.2587.7919.93914.1231. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average LatencyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P48121620Min: 8.83 / Avg: 8.88 / Max: 8.93Min: 12.14 / Avg: 12.18 / Max: 12.2Min: 3.96 / Avg: 4.4 / Max: 4.7Min: 4.76 / Avg: 4.8 / Max: 4.87Min: 4.51 / Avg: 4.62 / Max: 4.73Min: 4.91 / Avg: 4.97 / Max: 5.01Min: 4.76 / Avg: 4.87 / Max: 5Min: 4.97 / Avg: 5.37 / Max: 5.44Min: 5.51 / Avg: 5.65 / Max: 5.86Min: 5.37 / Avg: 5.48 / Max: 5.55Min: 5.94 / Avg: 6.07 / Max: 6.22Min: 7.91 / Avg: 8.26 / Max: 8.44Min: 7.77 / Avg: 7.79 / Max: 7.81Min: 9.87 / Avg: 9.94 / Max: 10.01Min: 14.12 / Avg: 14.12 / Max: 14.131. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LeukocyteEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150SE +/- 0.52, N = 3SE +/- 0.11, N = 3SE +/- 0.51, N = 15SE +/- 0.61, N = 14SE +/- 0.47, N = 3SE +/- 0.47, N = 15SE +/- 0.82, N = 15SE +/- 0.28, N = 3SE +/- 0.42, N = 3SE +/- 0.26, N = 3SE +/- 0.25, N = 3SE +/- 1.14, N = 3SE +/- 0.76, N = 10SE +/- 0.53, N = 3SE +/- 0.41, N = 390.52130.9847.1748.7247.3746.5347.4357.6859.4457.9859.6598.1999.72107.97148.201. (CXX) g++ options: -O2 -lOpenCL
OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LeukocyteEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150Min: 89.7 / Avg: 90.52 / Max: 91.49Min: 130.76 / Avg: 130.98 / Max: 131.14Min: 44.36 / Avg: 47.17 / Max: 50.76Min: 46.16 / Avg: 48.72 / Max: 52.95Min: 46.63 / Avg: 47.37 / Max: 48.23Min: 44.25 / Avg: 46.53 / Max: 49.64Min: 44.96 / Avg: 47.42 / Max: 54.58Min: 57.12 / Avg: 57.68 / Max: 57.99Min: 58.61 / Avg: 59.44 / Max: 59.95Min: 57.58 / Avg: 57.98 / Max: 58.48Min: 59.34 / Avg: 59.65 / Max: 60.14Min: 96.7 / Avg: 98.19 / Max: 100.44Min: 96.18 / Avg: 99.72 / Max: 103.91Min: 106.91 / Avg: 107.97 / Max: 108.52Min: 147.55 / Avg: 148.2 / Max: 148.961. (CXX) g++ options: -O2 -lOpenCL

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P100200300400500SE +/- 0.36, N = 10SE +/- 0.26, N = 8SE +/- 2.40, N = 10SE +/- 3.60, N = 15SE +/- 0.96, N = 10SE +/- 0.61, N = 10SE +/- 0.97, N = 10SE +/- 0.74, N = 11SE +/- 1.03, N = 11SE +/- 0.76, N = 11SE +/- 0.86, N = 11SE +/- 0.29, N = 10SE +/- 0.35, N = 10SE +/- 0.45, N = 9SE +/- 0.15, N = 8263.13167.08437.25409.36448.08458.24445.92459.14426.52437.88413.09316.49295.80239.47144.321. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P80160240320400Min: 260.87 / Avg: 263.13 / Max: 264.67Min: 165.84 / Avg: 167.08 / Max: 168.07Min: 424.03 / Avg: 437.25 / Max: 446.43Min: 379.51 / Avg: 409.36 / Max: 426.74Min: 442.15 / Avg: 448.08 / Max: 452.83Min: 456.27 / Avg: 458.24 / Max: 462.61Min: 441.83 / Avg: 445.92 / Max: 451.13Min: 454.55 / Avg: 459.14 / Max: 462.25Min: 419.29 / Avg: 426.52 / Max: 430.73Min: 434.47 / Avg: 437.88 / Max: 443.13Min: 407.89 / Avg: 413.09 / Max: 416.67Min: 315.62 / Avg: 316.49 / Max: 317.97Min: 293.54 / Avg: 295.8 / Max: 297.47Min: 237.06 / Avg: 239.47 / Max: 241.16Min: 143.85 / Avg: 144.32 / Max: 145.11. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215SE +/- 0.03514, N = 7SE +/- 0.04127, N = 7SE +/- 0.01095, N = 7SE +/- 0.00551, N = 7SE +/- 0.01107, N = 7SE +/- 0.00965, N = 7SE +/- 0.00723, N = 7SE +/- 0.00813, N = 7SE +/- 0.01167, N = 7SE +/- 0.01750, N = 7SE +/- 0.00525, N = 7SE +/- 0.00742, N = 7SE +/- 0.03193, N = 7SE +/- 0.00984, N = 7SE +/- 0.04608, N = 76.915619.151243.531163.558923.504123.543163.840183.976993.612683.970434.251115.454006.762427.6050510.95820MIN: 6.67MIN: 8.56MIN: 3.44MIN: 3.47MIN: 3.39MIN: 3.43MIN: 3.75MIN: 3.87MIN: 3.49MIN: 3.86MIN: 4.06MIN: 5.12MIN: 6.41MIN: 7.15MIN: 10.241. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 6.79 / Avg: 6.92 / Max: 7.05Min: 9.06 / Avg: 9.15 / Max: 9.39Min: 3.5 / Avg: 3.53 / Max: 3.58Min: 3.54 / Avg: 3.56 / Max: 3.58Min: 3.47 / Avg: 3.5 / Max: 3.54Min: 3.52 / Avg: 3.54 / Max: 3.59Min: 3.81 / Avg: 3.84 / Max: 3.86Min: 3.94 / Avg: 3.98 / Max: 4Min: 3.55 / Avg: 3.61 / Max: 3.64Min: 3.92 / Avg: 3.97 / Max: 4.03Min: 4.23 / Avg: 4.25 / Max: 4.27Min: 5.41 / Avg: 5.45 / Max: 5.47Min: 6.64 / Avg: 6.76 / Max: 6.85Min: 7.56 / Avg: 7.61 / Max: 7.64Min: 10.75 / Avg: 10.96 / Max: 11.121. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: LU.CEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20K40K60K80K100KSE +/- 82.34, N = 3SE +/- 31.62, N = 3SE +/- 491.67, N = 3SE +/- 34.41, N = 3SE +/- 298.03, N = 3SE +/- 281.17, N = 3SE +/- 82.31, N = 3SE +/- 207.08, N = 3SE +/- 26.26, N = 3SE +/- 106.91, N = 3SE +/- 81.46, N = 3SE +/- 3.68, N = 3SE +/- 22.56, N = 3SE +/- 24.66, N = 3SE +/- 12.95, N = 344891.6943172.51105428.67102548.95103985.9199557.2395079.4179644.8886547.2878846.9674421.6161041.2749617.9243321.3833816.481. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: LU.CEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20K40K60K80K100KMin: 44754.21 / Avg: 44891.69 / Max: 45038.95Min: 43122.35 / Avg: 43172.51 / Max: 43230.93Min: 104525.99 / Avg: 105428.67 / Max: 106217.81Min: 102490.22 / Avg: 102548.95 / Max: 102609.37Min: 103567 / Avg: 103985.91 / Max: 104562.59Min: 98994.91 / Avg: 99557.23 / Max: 99842.55Min: 94922.44 / Avg: 95079.41 / Max: 95200.87Min: 79399.38 / Avg: 79644.88 / Max: 80056.49Min: 86498.25 / Avg: 86547.28 / Max: 86588.09Min: 78710.27 / Avg: 78846.96 / Max: 79057.7Min: 74315 / Avg: 74421.61 / Max: 74581.6Min: 61034.85 / Avg: 61041.27 / Max: 61047.59Min: 49572.87 / Avg: 49617.92 / Max: 49642.69Min: 43285.83 / Avg: 43321.38 / Max: 43368.77Min: 33792.21 / Avg: 33816.48 / Max: 33836.431. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

Basis Universal

Basis Universal is a GPU texture codoec. This test times how long it takes to convert sRGB PNGs into Basis Univeral assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 2EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P918273645SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 4SE +/- 0.01, N = 4SE +/- 0.00, N = 4SE +/- 0.01, N = 4SE +/- 0.01, N = 4SE +/- 0.00, N = 4SE +/- 0.01, N = 3SE +/- 0.02, N = 4SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 319.8632.6312.8413.3013.2114.1914.2415.6416.7016.3118.1723.3424.2429.2439.671. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 2EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P816243240Min: 19.86 / Avg: 19.86 / Max: 19.87Min: 32.63 / Avg: 32.63 / Max: 32.64Min: 12.79 / Avg: 12.83 / Max: 12.88Min: 13.28 / Avg: 13.3 / Max: 13.33Min: 13.2 / Avg: 13.21 / Max: 13.22Min: 14.17 / Avg: 14.19 / Max: 14.21Min: 14.23 / Avg: 14.24 / Max: 14.25Min: 15.63 / Avg: 15.64 / Max: 15.64Min: 16.69 / Avg: 16.7 / Max: 16.71Min: 16.28 / Avg: 16.31 / Max: 16.38Min: 18.16 / Avg: 18.17 / Max: 18.17Min: 23.32 / Avg: 23.34 / Max: 23.38Min: 24.19 / Avg: 24.24 / Max: 24.27Min: 29.23 / Avg: 29.24 / Max: 29.27Min: 39.66 / Avg: 39.67 / Max: 39.671. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.00961, N = 3SE +/- 0.06606, N = 3SE +/- 0.02021, N = 3SE +/- 0.00813, N = 3SE +/- 0.01627, N = 15SE +/- 0.02012, N = 3SE +/- 0.01668, N = 15SE +/- 0.00152, N = 3SE +/- 0.01295, N = 3SE +/- 0.01343, N = 3SE +/- 0.00832, N = 3SE +/- 0.01740, N = 3SE +/- 0.01071, N = 3SE +/- 0.00923, N = 3SE +/- 0.02020, N = 32.757435.032362.068842.335192.022311.987262.157942.048662.222422.197272.489773.283773.501754.440776.12261MIN: 2.63MIN: 4.86MIN: 1.89MIN: 2.08MIN: 1.84MIN: 1.87MIN: 1.99MIN: 1.99MIN: 2.06MIN: 2.12MIN: 2.4MIN: 3.08MIN: 3.1MIN: 4.2MIN: 5.931. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 2.74 / Avg: 2.76 / Max: 2.77Min: 4.94 / Avg: 5.03 / Max: 5.16Min: 2.04 / Avg: 2.07 / Max: 2.11Min: 2.32 / Avg: 2.34 / Max: 2.35Min: 1.93 / Avg: 2.02 / Max: 2.15Min: 1.95 / Avg: 1.99 / Max: 2.02Min: 2.07 / Avg: 2.16 / Max: 2.28Min: 2.05 / Avg: 2.05 / Max: 2.05Min: 2.21 / Avg: 2.22 / Max: 2.25Min: 2.18 / Avg: 2.2 / Max: 2.22Min: 2.48 / Avg: 2.49 / Max: 2.51Min: 3.25 / Avg: 3.28 / Max: 3.3Min: 3.49 / Avg: 3.5 / Max: 3.52Min: 4.43 / Avg: 4.44 / Max: 4.46Min: 6.1 / Avg: 6.12 / Max: 6.161. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 4KEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P100200300400500SE +/- 0.46, N = 3SE +/- 0.27, N = 3SE +/- 0.33, N = 3SE +/- 0.34, N = 3SE +/- 0.74, N = 3SE +/- 0.17, N = 3SE +/- 0.66, N = 3SE +/- 0.75, N = 3SE +/- 0.29, N = 3SE +/- 0.16, N = 3SE +/- 0.61, N = 3SE +/- 0.23, N = 3SE +/- 0.22, N = 3SE +/- 0.16, N = 3SE +/- 0.14, N = 3244.17168.17454.41437.31457.27416.96406.93367.19349.99361.72328.54245.60239.84203.40150.43MIN: 200.55 / MAX: 270.42MIN: 157.91 / MAX: 191.26MIN: 219.4 / MAX: 490.36MIN: 217.67 / MAX: 473.18MIN: 225.33 / MAX: 494.73MIN: 251.44 / MAX: 465MIN: 257.69 / MAX: 456.39MIN: 288.65 / MAX: 424.55MIN: 266.02 / MAX: 399.72MIN: 287.95 / MAX: 416.23MIN: 249.05 / MAX: 376.65MIN: 210.68 / MAX: 281.84MIN: 211 / MAX: 275.93MIN: 187.66 / MAX: 232.21MIN: 141.09 / MAX: 170.161. (CC) gcc options: -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 4KEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P80160240320400Min: 243.51 / Avg: 244.17 / Max: 245.05Min: 167.67 / Avg: 168.17 / Max: 168.6Min: 453.74 / Avg: 454.41 / Max: 454.75Min: 436.66 / Avg: 437.31 / Max: 437.79Min: 455.84 / Avg: 457.27 / Max: 458.3Min: 416.77 / Avg: 416.96 / Max: 417.29Min: 405.7 / Avg: 406.93 / Max: 407.97Min: 365.83 / Avg: 367.19 / Max: 368.43Min: 349.42 / Avg: 349.99 / Max: 350.39Min: 361.4 / Avg: 361.72 / Max: 361.91Min: 327.42 / Avg: 328.54 / Max: 329.53Min: 245.22 / Avg: 245.6 / Max: 246Min: 239.57 / Avg: 239.84 / Max: 240.27Min: 203.1 / Avg: 203.4 / Max: 203.66Min: 150.28 / Avg: 150.43 / Max: 150.71. (CC) gcc options: -pthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very FastEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P918273645SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.05, N = 4SE +/- 0.04, N = 3SE +/- 0.01, N = 4SE +/- 0.03, N = 4SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 326.3115.5638.6536.0739.1537.3935.8135.0632.7233.2526.6122.9421.7017.8312.981. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very FastEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P816243240Min: 26.29 / Avg: 26.31 / Max: 26.33Min: 15.54 / Avg: 15.56 / Max: 15.61Min: 38.55 / Avg: 38.65 / Max: 38.8Min: 36 / Avg: 36.07 / Max: 36.11Min: 39.12 / Avg: 39.15 / Max: 39.19Min: 37.31 / Avg: 37.39 / Max: 37.44Min: 35.77 / Avg: 35.81 / Max: 35.84Min: 35.04 / Avg: 35.06 / Max: 35.09Min: 32.69 / Avg: 32.72 / Max: 32.73Min: 33.24 / Avg: 33.25 / Max: 33.26Min: 26.56 / Avg: 26.61 / Max: 26.7Min: 22.92 / Avg: 22.94 / Max: 22.97Min: 21.66 / Avg: 21.7 / Max: 21.75Min: 17.82 / Avg: 17.83 / Max: 17.84Min: 12.97 / Avg: 12.98 / Max: 12.991. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1530456075SE +/- 0.04, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.06, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.07, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.06, N = 3SE +/- 0.06, N = 3SE +/- 0.10, N = 3SE +/- 0.06, N = 333.9854.1523.0324.0323.9825.2125.5027.8629.2329.1232.0739.7442.8649.9268.50
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1326395265Min: 33.92 / Avg: 33.98 / Max: 34.07Min: 54.13 / Avg: 54.15 / Max: 54.16Min: 23.01 / Avg: 23.03 / Max: 23.06Min: 23.95 / Avg: 24.03 / Max: 24.14Min: 23.93 / Avg: 23.98 / Max: 24.01Min: 25.18 / Avg: 25.21 / Max: 25.25Min: 25.47 / Avg: 25.5 / Max: 25.56Min: 27.84 / Avg: 27.86 / Max: 27.89Min: 29.13 / Avg: 29.23 / Max: 29.37Min: 29.07 / Avg: 29.12 / Max: 29.19Min: 32.01 / Avg: 32.07 / Max: 32.1Min: 39.64 / Avg: 39.74 / Max: 39.84Min: 42.78 / Avg: 42.86 / Max: 42.98Min: 49.75 / Avg: 49.92 / Max: 50.11Min: 68.42 / Avg: 68.5 / Max: 68.63

YafaRay

YafaRay is an open-source physically based montecarlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterYafaRay 3.4.1Total Time For Sample SceneEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4080120160200SE +/- 1.73, N = 3SE +/- 0.41, N = 3SE +/- 0.66, N = 5SE +/- 0.72, N = 15SE +/- 0.74, N = 3SE +/- 0.60, N = 15SE +/- 0.43, N = 14SE +/- 0.43, N = 3SE +/- 0.21, N = 3SE +/- 0.38, N = 3SE +/- 1.05, N = 3SE +/- 1.37, N = 3SE +/- 1.06, N = 3SE +/- 0.80, N = 3SE +/- 0.97, N = 3127.45172.3563.2767.2566.4867.6664.8864.9285.5670.7080.50110.94102.53131.03186.181. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterYafaRay 3.4.1Total Time For Sample SceneEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150Min: 124 / Avg: 127.45 / Max: 129.28Min: 171.56 / Avg: 172.35 / Max: 172.91Min: 61.44 / Avg: 63.27 / Max: 64.99Min: 61.8 / Avg: 67.25 / Max: 71.05Min: 65.27 / Avg: 66.48 / Max: 67.84Min: 64.7 / Avg: 67.66 / Max: 71.97Min: 61.7 / Avg: 64.88 / Max: 67.09Min: 64.15 / Avg: 64.92 / Max: 65.62Min: 85.16 / Avg: 85.56 / Max: 85.86Min: 69.95 / Avg: 70.7 / Max: 71.18Min: 79.1 / Avg: 80.5 / Max: 82.54Min: 108.85 / Avg: 110.94 / Max: 113.51Min: 100.59 / Avg: 102.53 / Max: 104.24Min: 129.53 / Avg: 131.03 / Max: 132.24Min: 184.7 / Avg: 186.18 / Max: 188.011. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

miniFE

MiniFE Finite Element is an application for unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgCG Mflops, More Is BetterminiFE 2.2Problem Size: SmallEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4K8K12K16K20KSE +/- 66.01, N = 14SE +/- 8.04, N = 3SE +/- 8.90, N = 3SE +/- 34.52, N = 3SE +/- 5.01, N = 3SE +/- 29.81, N = 3SE +/- 68.31, N = 3SE +/- 5.12, N = 3SE +/- 8.80, N = 3SE +/- 22.39, N = 3SE +/- 5.98, N = 3SE +/- 24.99, N = 3SE +/- 1.40, N = 3SE +/- 16.20, N = 3SE +/- 10.40, N = 36787.0617351.2019245.9019155.4019256.7019436.3018424.1016649.7019645.4016649.5016743.6017082.4010003.6010066.6010156.601. (CXX) g++ options: -O3 -fopenmp -pthread -lmpi_cxx -lmpi
OpenBenchmarking.orgCG Mflops, More Is BetterminiFE 2.2Problem Size: SmallEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3K6K9K12K15KMin: 5976.74 / Avg: 6787.06 / Max: 6965.65Min: 17335.9 / Avg: 17351.17 / Max: 17363.2Min: 19230.9 / Avg: 19245.87 / Max: 19261.7Min: 19088.2 / Avg: 19155.37 / Max: 19202.8Min: 19250.6 / Avg: 19256.67 / Max: 19266.6Min: 19380.5 / Avg: 19436.3 / Max: 19482.4Min: 18336.1 / Avg: 18424.1 / Max: 18558.6Min: 16643.8 / Avg: 16649.7 / Max: 16659.9Min: 19632.2 / Avg: 19645.43 / Max: 19662.1Min: 16604.7 / Avg: 16649.47 / Max: 16672.4Min: 16735 / Avg: 16743.6 / Max: 16755.1Min: 17033.7 / Avg: 17082.43 / Max: 17116.4Min: 10000.9 / Avg: 10003.63 / Max: 10005.5Min: 10034.4 / Avg: 10066.63 / Max: 10085.6Min: 10136.2 / Avg: 10156.6 / Max: 10170.31. (CXX) g++ options: -O3 -fopenmp -pthread -lmpi_cxx -lmpi

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4080120160200SE +/- 0.32, N = 3SE +/- 0.21, N = 3SE +/- 0.18, N = 3SE +/- 0.31, N = 3SE +/- 0.17, N = 3SE +/- 0.07, N = 3SE +/- 0.17, N = 3SE +/- 0.10, N = 3SE +/- 0.10, N = 3SE +/- 0.17, N = 3SE +/- 0.12, N = 3SE +/- 0.07, N = 3SE +/- 0.26, N = 3SE +/- 0.14, N = 3SE +/- 0.30, N = 384.21136.6260.3762.5662.2761.8162.4866.5769.6269.4276.1696.77103.74122.41172.13
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150Min: 83.58 / Avg: 84.21 / Max: 84.54Min: 136.37 / Avg: 136.62 / Max: 137.03Min: 60.14 / Avg: 60.37 / Max: 60.74Min: 62.22 / Avg: 62.56 / Max: 63.19Min: 61.99 / Avg: 62.27 / Max: 62.58Min: 61.69 / Avg: 61.81 / Max: 61.9Min: 62.24 / Avg: 62.48 / Max: 62.81Min: 66.45 / Avg: 66.57 / Max: 66.76Min: 69.45 / Avg: 69.62 / Max: 69.79Min: 69.16 / Avg: 69.42 / Max: 69.74Min: 75.93 / Avg: 76.16 / Max: 76.36Min: 96.64 / Avg: 96.77 / Max: 96.88Min: 103.26 / Avg: 103.74 / Max: 104.13Min: 122.19 / Avg: 122.41 / Max: 122.67Min: 171.71 / Avg: 172.13 / Max: 172.7

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30060090012001500SE +/- 1.44, N = 3SE +/- 0.28, N = 3SE +/- 3.10, N = 3SE +/- 1.60, N = 3SE +/- 3.75, N = 3SE +/- 1.10, N = 3SE +/- 3.10, N = 3SE +/- 0.47, N = 3SE +/- 1.15, N = 3SE +/- 1.87, N = 3SE +/- 2.68, N = 3SE +/- 0.73, N = 3SE +/- 0.46, N = 3SE +/- 0.90, N = 3SE +/- 0.82, N = 3651.36453.361166.421050.991193.661070.091044.25937.44880.57932.75847.40634.51644.23555.11419.19MIN: 480.87 / MAX: 706.63MIN: 400.76 / MAX: 488.59MIN: 498.33 / MAX: 1299.42MIN: 478.51 / MAX: 1166.67MIN: 500.62 / MAX: 1329.43MIN: 574.58 / MAX: 1186.8MIN: 554.63 / MAX: 1157.48MIN: 659.51 / MAX: 1052.41MIN: 567.79 / MAX: 985.57MIN: 659.65 / MAX: 1047.61MIN: 566.75 / MAX: 939.62MIN: 482.67 / MAX: 693.38MIN: 503.48 / MAX: 704.1MIN: 463.4 / MAX: 605.59MIN: 370.7 / MAX: 456.841. (CC) gcc options: -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Summer Nature 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2004006008001000Min: 648.52 / Avg: 651.36 / Max: 653.25Min: 452.94 / Avg: 453.36 / Max: 453.9Min: 1162.99 / Avg: 1166.42 / Max: 1172.62Min: 1048.15 / Avg: 1050.99 / Max: 1053.7Min: 1186.16 / Avg: 1193.66 / Max: 1197.47Min: 1068.71 / Avg: 1070.09 / Max: 1072.27Min: 1038.12 / Avg: 1044.25 / Max: 1048.12Min: 936.52 / Avg: 937.44 / Max: 938.02Min: 879.23 / Avg: 880.57 / Max: 882.86Min: 929.58 / Avg: 932.75 / Max: 936.06Min: 842.07 / Avg: 847.4 / Max: 850.64Min: 633.06 / Avg: 634.51 / Max: 635.27Min: 643.31 / Avg: 644.23 / Max: 644.7Min: 553.68 / Avg: 555.11 / Max: 556.76Min: 417.79 / Avg: 419.19 / Max: 420.621. (CC) gcc options: -pthread

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4KEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P612182430SE +/- 0.12, N = 3SE +/- 0.13, N = 4SE +/- 0.01, N = 3SE +/- 0.08, N = 3SE +/- 0.21, N = 3SE +/- 0.08, N = 3SE +/- 0.06, N = 3SE +/- 0.07, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.08, N = 3SE +/- 0.09, N = 3SE +/- 0.04, N = 3SE +/- 0.07, N = 3SE +/- 0.09, N = 320.0010.2025.5725.0425.6525.2325.3625.2123.7125.1323.8420.4420.0317.099.201. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4KEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P612182430Min: 19.78 / Avg: 20 / Max: 20.2Min: 9.85 / Avg: 10.2 / Max: 10.45Min: 25.56 / Avg: 25.57 / Max: 25.58Min: 24.94 / Avg: 25.04 / Max: 25.19Min: 25.41 / Avg: 25.65 / Max: 26.06Min: 25.14 / Avg: 25.23 / Max: 25.39Min: 25.3 / Avg: 25.36 / Max: 25.48Min: 25.09 / Avg: 25.21 / Max: 25.34Min: 23.64 / Avg: 23.71 / Max: 23.81Min: 25.08 / Avg: 25.13 / Max: 25.16Min: 23.72 / Avg: 23.84 / Max: 23.98Min: 20.34 / Avg: 20.44 / Max: 20.61Min: 19.97 / Avg: 20.03 / Max: 20.1Min: 16.96 / Avg: 17.09 / Max: 17.21Min: 9.07 / Avg: 9.2 / Max: 9.371. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 4 - Input: 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.035, N = 4SE +/- 0.023, N = 3SE +/- 0.026, N = 3SE +/- 0.003, N = 3SE +/- 0.060, N = 3SE +/- 0.039, N = 4SE +/- 0.012, N = 3SE +/- 0.032, N = 4SE +/- 0.048, N = 4SE +/- 0.032, N = 3SE +/- 0.043, N = 3SE +/- 0.017, N = 3SE +/- 0.006, N = 3SE +/- 0.008, N = 3SE +/- 0.008, N = 35.3333.1846.9856.7526.9666.6786.4946.0915.9585.8665.5634.5064.2253.5932.5361. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 4 - Input: 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 5.23 / Avg: 5.33 / Max: 5.39Min: 3.14 / Avg: 3.18 / Max: 3.21Min: 6.95 / Avg: 6.99 / Max: 7.04Min: 6.75 / Avg: 6.75 / Max: 6.76Min: 6.88 / Avg: 6.97 / Max: 7.08Min: 6.57 / Avg: 6.68 / Max: 6.75Min: 6.47 / Avg: 6.49 / Max: 6.51Min: 6.04 / Avg: 6.09 / Max: 6.18Min: 5.88 / Avg: 5.96 / Max: 6.09Min: 5.8 / Avg: 5.87 / Max: 5.91Min: 5.48 / Avg: 5.56 / Max: 5.61Min: 4.47 / Avg: 4.51 / Max: 4.53Min: 4.22 / Avg: 4.23 / Max: 4.24Min: 3.58 / Avg: 3.59 / Max: 3.61Min: 2.52 / Avg: 2.54 / Max: 2.551. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx264 2019-12-17H.264 Video EncodingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50100150200250SE +/- 0.76, N = 9SE +/- 0.33, N = 7SE +/- 2.10, N = 15SE +/- 1.78, N = 15SE +/- 1.83, N = 15SE +/- 1.24, N = 15SE +/- 1.18, N = 15SE +/- 0.83, N = 9SE +/- 0.75, N = 9SE +/- 1.00, N = 9SE +/- 0.75, N = 9SE +/- 0.59, N = 8SE +/- 0.71, N = 8SE +/- 0.60, N = 7SE +/- 0.37, N = 6173.5095.50211.39198.58210.69203.74198.00188.45181.15182.42174.63150.45142.37117.1478.211. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize
OpenBenchmarking.orgFrames Per Second, More Is Betterx264 2019-12-17H.264 Video EncodingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4080120160200Min: 167.66 / Avg: 173.5 / Max: 175.1Min: 94.18 / Avg: 95.5 / Max: 96.44Min: 186.29 / Avg: 211.39 / Max: 220.81Min: 179.11 / Avg: 198.58 / Max: 206.01Min: 187.36 / Avg: 210.69 / Max: 218.31Min: 187.17 / Avg: 203.74 / Max: 207.43Min: 182.68 / Avg: 198 / Max: 201.65Min: 183.08 / Avg: 188.45 / Max: 190.79Min: 175.43 / Avg: 181.15 / Max: 183.15Min: 175.43 / Avg: 182.42 / Max: 186.12Min: 169.05 / Avg: 174.63 / Max: 176.72Min: 146.52 / Avg: 150.45 / Max: 151.9Min: 137.77 / Avg: 142.37 / Max: 143.62Min: 113.91 / Avg: 117.14 / Max: 119.03Min: 76.63 / Avg: 78.21 / Max: 79.161. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet MobileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50K100K150K200K250KSE +/- 244.85, N = 3SE +/- 54.60, N = 3SE +/- 898.32, N = 15SE +/- 718.53, N = 15SE +/- 833.99, N = 6SE +/- 34.51, N = 3SE +/- 275.99, N = 3SE +/- 115.93, N = 3SE +/- 115.78, N = 3SE +/- 797.17, N = 15SE +/- 414.30, N = 3SE +/- 652.17, N = 3SE +/- 183.23, N = 3SE +/- 219.52, N = 3SE +/- 42.10, N = 3126743.0177928.091245.895873.788093.786140.692153.279313.684263.585462.8105046.0144445.0151235.0177540.0212253.0
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet MobileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P40K80K120K160K200KMin: 126253 / Avg: 126742.67 / Max: 126993Min: 177860 / Avg: 177928 / Max: 178036Min: 86292.5 / Avg: 91245.77 / Max: 101420Min: 91993.6 / Avg: 95873.69 / Max: 100262Min: 85520.1 / Avg: 88093.72 / Max: 91636.7Min: 86071.8 / Avg: 86140.57 / Max: 86180.1Min: 91707.3 / Avg: 92153.17 / Max: 92657.9Min: 79189.4 / Avg: 79313.63 / Max: 79545.3Min: 84034.6 / Avg: 84263.47 / Max: 84408.4Min: 82675.3 / Avg: 85462.75 / Max: 91289.3Min: 104457 / Avg: 105045.67 / Max: 105845Min: 143270 / Avg: 144444.67 / Max: 145523Min: 150900 / Avg: 151235.33 / Max: 151531Min: 177299 / Avg: 177539.67 / Max: 177978Min: 212188 / Avg: 212253.33 / Max: 212332

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra FastEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1326395265SE +/- 0.02, N = 4SE +/- 0.04, N = 3SE +/- 0.47, N = 15SE +/- 0.54, N = 5SE +/- 0.44, N = 5SE +/- 0.16, N = 5SE +/- 0.48, N = 5SE +/- 0.05, N = 5SE +/- 0.03, N = 5SE +/- 0.05, N = 5SE +/- 0.05, N = 4SE +/- 0.03, N = 4SE +/- 0.03, N = 4SE +/- 0.05, N = 3SE +/- 0.02, N = 347.1727.1657.4053.5258.6459.5955.9956.9854.0754.9248.3341.5239.6531.3923.111. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra FastEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1224364860Min: 47.11 / Avg: 47.17 / Max: 47.21Min: 27.11 / Avg: 27.16 / Max: 27.23Min: 54.58 / Avg: 57.4 / Max: 61.01Min: 52.03 / Avg: 53.52 / Max: 54.9Min: 57.87 / Avg: 58.64 / Max: 60.27Min: 59.13 / Avg: 59.59 / Max: 59.95Min: 55.17 / Avg: 55.99 / Max: 57.85Min: 56.85 / Avg: 56.98 / Max: 57.12Min: 54 / Avg: 54.07 / Max: 54.14Min: 54.78 / Avg: 54.92 / Max: 55.03Min: 48.24 / Avg: 48.33 / Max: 48.45Min: 41.47 / Avg: 41.52 / Max: 41.57Min: 39.58 / Avg: 39.65 / Max: 39.69Min: 31.34 / Avg: 31.39 / Max: 31.49Min: 23.08 / Avg: 23.11 / Max: 23.141. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: FT.CEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7542EPYC 7532EPYC 7502PEPYC 7302PEPYC 7282EPYC 7232P12K24K36K48K60KSE +/- 44.41, N = 3SE +/- 10.84, N = 4SE +/- 93.63, N = 6SE +/- 40.27, N = 5SE +/- 50.60, N = 5SE +/- 17.91, N = 5SE +/- 17.78, N = 5SE +/- 36.50, N = 5SE +/- 19.15, N = 4SE +/- 23.10, N = 4SE +/- 23.87, N = 322117.1727284.6956055.8755136.5055668.6347098.2249235.9246206.3337175.2029526.9721822.061. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: FT.CEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7542EPYC 7532EPYC 7502PEPYC 7302PEPYC 7282EPYC 7232P10K20K30K40K50KMin: 22040.65 / Avg: 22117.17 / Max: 22194.48Min: 27272.21 / Avg: 27284.69 / Max: 27317.11Min: 55606.59 / Avg: 56055.87 / Max: 56253.95Min: 55002.69 / Avg: 55136.5 / Max: 55211.78Min: 55491.76 / Avg: 55668.63 / Max: 55799.49Min: 47048.85 / Avg: 47098.22 / Max: 47153.11Min: 49168.13 / Avg: 49235.92 / Max: 49272.59Min: 46062.44 / Avg: 46206.33 / Max: 46260.89Min: 37140.1 / Avg: 37175.2 / Max: 37216.01Min: 29462.38 / Avg: 29526.97 / Max: 29571Min: 21777.28 / Avg: 21822.06 / Max: 21858.761. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

Sysbench

This is a benchmark of Sysbench with CPU and memory sub-tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 2018-07-28Test: MemoryEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2M4M6M8M10MSE +/- 17904.29, N = 5SE +/- 717.93, N = 5SE +/- 18226.83, N = 5SE +/- 11992.02, N = 5SE +/- 35042.66, N = 5SE +/- 25894.00, N = 5SE +/- 14871.56, N = 5SE +/- 8561.16, N = 5SE +/- 61191.25, N = 15SE +/- 5458.69, N = 5SE +/- 6709.34, N = 5SE +/- 780.60, N = 5SE +/- 11575.58, N = 5SE +/- 1918.40, N = 5SE +/- 12382.78, N = 53087955.634656182.266334165.636302136.656374595.125536026.076380400.356618850.244720934.266612746.505614019.494502591.457880986.597232205.305942474.371. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm
OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 2018-07-28Test: MemoryEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1.4M2.8M4.2M5.6M7MMin: 3067897.69 / Avg: 3087955.63 / Max: 3159400.31Min: 4654731.01 / Avg: 4656182.26 / Max: 4658759.57Min: 6274890.2 / Avg: 6334165.63 / Max: 6379485.7Min: 6269344.44 / Avg: 6302136.65 / Max: 6328806.52Min: 6319346.34 / Avg: 6374595.12 / Max: 6505001.22Min: 5462715.81 / Avg: 5536026.07 / Max: 5604522.17Min: 6349678.61 / Avg: 6380400.35 / Max: 6428305.03Min: 6607713.8 / Avg: 6618850.24 / Max: 6652865.09Min: 4407088.34 / Avg: 4720934.26 / Max: 5255661.79Min: 6604337.31 / Avg: 6612746.5 / Max: 6634214.25Min: 5602311.72 / Avg: 5614019.49 / Max: 5638809.43Min: 4500022.71 / Avg: 4502591.45 / Max: 4504708.71Min: 7856776.88 / Avg: 7880986.59 / Max: 7920212.37Min: 7225906.31 / Avg: 7232205.3 / Max: 7236184.18Min: 5895950.69 / Avg: 5942474.37 / Max: 5962905.571. (CC) gcc options: -pthread -O3 -funroll-loops -ggdb3 -march=amdfam10 -rdynamic -ldl -laio -lm

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient and is a new scientific benchmark from Sandia National Lans focused for super-computer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGFLOP/s, More Is BetterHigh Performance Conjugate Gradient 3.1EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P48121620SE +/- 0.05299, N = 11SE +/- 0.06375, N = 3SE +/- 0.00350, N = 3SE +/- 0.03096, N = 3SE +/- 0.00378, N = 3SE +/- 0.00479, N = 3SE +/- 0.00667, N = 3SE +/- 0.00275, N = 3SE +/- 0.00249, N = 3SE +/- 0.00324, N = 3SE +/- 0.01008, N = 3SE +/- 0.07483, N = 3SE +/- 0.00263, N = 3SE +/- 0.00448, N = 3SE +/- 0.00542, N = 37.0892512.9560017.3763017.3075017.3863017.6850016.6494015.2794017.9677015.2920015.5629015.632909.051549.097438.646611. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi
OpenBenchmarking.orgGFLOP/s, More Is BetterHigh Performance Conjugate Gradient 3.1EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P510152025Min: 6.64 / Avg: 7.09 / Max: 7.27Min: 12.86 / Avg: 12.96 / Max: 13.08Min: 17.37 / Avg: 17.38 / Max: 17.38Min: 17.25 / Avg: 17.31 / Max: 17.34Min: 17.38 / Avg: 17.39 / Max: 17.39Min: 17.68 / Avg: 17.69 / Max: 17.69Min: 16.64 / Avg: 16.65 / Max: 16.66Min: 15.28 / Avg: 15.28 / Max: 15.28Min: 17.96 / Avg: 17.97 / Max: 17.97Min: 15.29 / Avg: 15.29 / Max: 15.3Min: 15.55 / Avg: 15.56 / Max: 15.58Min: 15.51 / Avg: 15.63 / Max: 15.77Min: 9.05 / Avg: 9.05 / Max: 9.05Min: 9.09 / Avg: 9.1 / Max: 9.1Min: 8.64 / Avg: 8.65 / Max: 8.661. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi

OpenFOAM

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 60MEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P120240360480600SE +/- 1.71, N = 3SE +/- 0.36, N = 3SE +/- 0.14, N = 3SE +/- 0.04, N = 3SE +/- 0.24, N = 3SE +/- 0.31, N = 3SE +/- 0.13, N = 3SE +/- 0.59, N = 3SE +/- 0.57, N = 3SE +/- 0.44, N = 3SE +/- 0.21, N = 3SE +/- 0.15, N = 3SE +/- 0.84, N = 3SE +/- 1.13, N = 3SE +/- 0.29, N = 3454.84362.74232.40233.34232.98233.73260.78319.87236.99320.20314.06313.64520.45530.10573.041. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 8Input: Motorbike 60MEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P100200300400500Min: 452.49 / Avg: 454.84 / Max: 458.16Min: 362.09 / Avg: 362.74 / Max: 363.35Min: 232.15 / Avg: 232.4 / Max: 232.62Min: 233.29 / Avg: 233.34 / Max: 233.42Min: 232.55 / Avg: 232.98 / Max: 233.38Min: 233.28 / Avg: 233.73 / Max: 234.33Min: 260.58 / Avg: 260.78 / Max: 261.02Min: 319.1 / Avg: 319.87 / Max: 321.04Min: 235.98 / Avg: 236.99 / Max: 237.94Min: 319.73 / Avg: 320.2 / Max: 321.08Min: 313.84 / Avg: 314.06 / Max: 314.48Min: 313.39 / Avg: 313.64 / Max: 313.91Min: 518.85 / Avg: 520.45 / Max: 521.68Min: 528.05 / Avg: 530.1 / Max: 531.94Min: 572.49 / Avg: 573.04 / Max: 573.471. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -ldecompose -lgenericPatchFields -lmetisDecomp -lscotchDecomp -llagrangian -lregionModels -lOpenFOAM -ldl -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Chimera 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2004006008001000SE +/- 1.86, N = 3SE +/- 0.38, N = 3SE +/- 2.57, N = 3SE +/- 3.47, N = 3SE +/- 2.82, N = 3SE +/- 0.45, N = 3SE +/- 2.26, N = 3SE +/- 0.52, N = 3SE +/- 2.20, N = 3SE +/- 0.95, N = 3SE +/- 1.22, N = 3SE +/- 0.72, N = 3SE +/- 0.66, N = 3SE +/- 1.45, N = 3SE +/- 0.34, N = 3671.07496.111113.87983.611158.081116.831095.40939.90892.42937.73839.06698.08712.26624.79473.82MIN: 524.48 / MAX: 829.64MIN: 390.79 / MAX: 704.31MIN: 632.86 / MAX: 1429.43MIN: 616.84 / MAX: 1260.07MIN: 644.06 / MAX: 1489.95MIN: 672.49 / MAX: 1434.66MIN: 671.87 / MAX: 1406.03MIN: 689.36 / MAX: 1207.51MIN: 641.23 / MAX: 1144.96MIN: 687.68 / MAX: 1200.87MIN: 646.27 / MAX: 1067.76MIN: 539.13 / MAX: 873.36MIN: 546.18 / MAX: 898.36MIN: 483.27 / MAX: 784.91MIN: 365.77 / MAX: 674.591. (CC) gcc options: -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Chimera 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2004006008001000Min: 667.49 / Avg: 671.07 / Max: 673.75Min: 495.72 / Avg: 496.11 / Max: 496.87Min: 1108.89 / Avg: 1113.87 / Max: 1117.47Min: 977.12 / Avg: 983.61 / Max: 988.98Min: 1153.84 / Avg: 1158.08 / Max: 1163.42Min: 1116.14 / Avg: 1116.83 / Max: 1117.68Min: 1091.59 / Avg: 1095.4 / Max: 1099.4Min: 938.86 / Avg: 939.9 / Max: 940.47Min: 888.17 / Avg: 892.42 / Max: 895.56Min: 935.96 / Avg: 937.73 / Max: 939.19Min: 837.5 / Avg: 839.06 / Max: 841.47Min: 697.03 / Avg: 698.08 / Max: 699.47Min: 711 / Avg: 712.26 / Max: 713.24Min: 622.12 / Avg: 624.79 / Max: 627.09Min: 473.15 / Avg: 473.82 / Max: 474.251. (CC) gcc options: -pthread

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: MG.CEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7542EPYC 7532EPYC 7502PEPYC 7302PEPYC 7282EPYC 7232P11K22K33K44K55KSE +/- 80.46, N = 5SE +/- 20.16, N = 8SE +/- 292.47, N = 8SE +/- 224.56, N = 8SE +/- 122.43, N = 8SE +/- 86.13, N = 8SE +/- 218.93, N = 8SE +/- 76.06, N = 8SE +/- 17.34, N = 8SE +/- 23.57, N = 6SE +/- 37.32, N = 721714.6545336.8951593.6251795.6152245.2344205.8052022.7644082.9547349.2929776.8130311.941. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: MG.CEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7542EPYC 7532EPYC 7502PEPYC 7302PEPYC 7282EPYC 7232P9K18K27K36K45KMin: 21536.77 / Avg: 21714.65 / Max: 22018.31Min: 45233.69 / Avg: 45336.89 / Max: 45424.87Min: 50186.95 / Avg: 51593.62 / Max: 52586.54Min: 50825.97 / Avg: 51795.61 / Max: 52696.05Min: 51776.39 / Avg: 52245.23 / Max: 52860.72Min: 43844.26 / Avg: 44205.8 / Max: 44502.94Min: 50688.93 / Avg: 52022.76 / Max: 52586.47Min: 43772.91 / Avg: 44082.95 / Max: 44328.66Min: 47278.3 / Avg: 47349.29 / Max: 47425.88Min: 29684.13 / Avg: 29776.81 / Max: 29837.83Min: 30136.45 / Avg: 30311.94 / Max: 30404.241. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

Incompact3D

Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equation and as many as you need scalar transport equations. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterIncompact3D 2020-09-17Input: CylinderEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P100200300400500SE +/- 0.68, N = 3SE +/- 0.12, N = 3SE +/- 0.47, N = 3SE +/- 0.50, N = 3SE +/- 0.75, N = 3SE +/- 0.73, N = 3SE +/- 1.22, N = 3SE +/- 0.39, N = 3SE +/- 0.57, N = 3SE +/- 0.52, N = 3SE +/- 0.20, N = 3SE +/- 0.25, N = 3SE +/- 0.47, N = 3SE +/- 0.37, N = 3SE +/- 1.04, N = 3230.94375.70213.09224.62206.95195.34206.58187.60193.92189.66200.21262.63270.72325.84450.851. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.orgSeconds, Fewer Is BetterIncompact3D 2020-09-17Input: CylinderEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P80160240320400Min: 229.71 / Avg: 230.94 / Max: 232.08Min: 375.47 / Avg: 375.7 / Max: 375.88Min: 212.32 / Avg: 213.09 / Max: 213.94Min: 224.11 / Avg: 224.62 / Max: 225.62Min: 206.04 / Avg: 206.95 / Max: 208.44Min: 194.23 / Avg: 195.34 / Max: 196.73Min: 204.79 / Avg: 206.58 / Max: 208.92Min: 186.85 / Avg: 187.6 / Max: 188.17Min: 192.99 / Avg: 193.92 / Max: 194.97Min: 188.81 / Avg: 189.66 / Max: 190.61Min: 199.84 / Avg: 200.21 / Max: 200.55Min: 262.32 / Avg: 262.63 / Max: 263.13Min: 270.13 / Avg: 270.72 / Max: 271.65Min: 325.11 / Avg: 325.84 / Max: 326.32Min: 449.53 / Avg: 450.85 / Max: 452.911. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: CG.CEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7542EPYC 7532EPYC 7502PEPYC 7302PEPYC 7282EPYC 7232P4K8K12K16K20KSE +/- 43.06, N = 3SE +/- 18.04, N = 4SE +/- 12.30, N = 5SE +/- 15.75, N = 5SE +/- 27.38, N = 5SE +/- 30.70, N = 5SE +/- 25.61, N = 5SE +/- 69.28, N = 5SE +/- 15.81, N = 5SE +/- 18.07, N = 4SE +/- 22.89, N = 47779.4912350.8715249.7215079.0815230.2314977.2918019.0114759.4015688.579697.839844.951. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: CG.CEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7542EPYC 7532EPYC 7502PEPYC 7302PEPYC 7282EPYC 7232P3K6K9K12K15KMin: 7733.94 / Avg: 7779.49 / Max: 7865.57Min: 12309.98 / Avg: 12350.87 / Max: 12395.51Min: 15214.37 / Avg: 15249.72 / Max: 15282.02Min: 15027.23 / Avg: 15079.08 / Max: 15122.17Min: 15146.06 / Avg: 15230.23 / Max: 15316.01Min: 14873.66 / Avg: 14977.29 / Max: 15062.42Min: 17953.88 / Avg: 18019.01 / Max: 18084.24Min: 14524.99 / Avg: 14759.4 / Max: 14889.39Min: 15654.67 / Avg: 15688.57 / Max: 15744.56Min: 9644.91 / Avg: 9697.83 / Max: 9725.86Min: 9819.42 / Avg: 9844.95 / Max: 9913.541. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

Timed ImageMagick Compilation

This test times how long it takes to build ImageMagick. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed ImageMagick Compilation 6.9.0Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P918273645SE +/- 0.15, N = 3SE +/- 0.10, N = 3SE +/- 0.05, N = 3SE +/- 0.08, N = 3SE +/- 0.10, N = 3SE +/- 0.09, N = 3SE +/- 0.09, N = 3SE +/- 0.05, N = 3SE +/- 0.06, N = 3SE +/- 0.02, N = 3SE +/- 0.07, N = 3SE +/- 0.04, N = 3SE +/- 0.16, N = 3SE +/- 0.02, N = 3SE +/- 0.17, N = 321.6328.6016.7617.1217.1917.6718.0618.9419.2719.4120.4323.5525.4628.2237.72
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed ImageMagick Compilation 6.9.0Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P816243240Min: 21.4 / Avg: 21.63 / Max: 21.91Min: 28.39 / Avg: 28.6 / Max: 28.73Min: 16.66 / Avg: 16.76 / Max: 16.81Min: 16.99 / Avg: 17.11 / Max: 17.27Min: 16.99 / Avg: 17.19 / Max: 17.31Min: 17.49 / Avg: 17.67 / Max: 17.79Min: 17.88 / Avg: 18.06 / Max: 18.19Min: 18.83 / Avg: 18.93 / Max: 19.02Min: 19.16 / Avg: 19.27 / Max: 19.35Min: 19.38 / Avg: 19.4 / Max: 19.45Min: 20.3 / Avg: 20.43 / Max: 20.52Min: 23.47 / Avg: 23.55 / Max: 23.6Min: 25.14 / Avg: 25.46 / Max: 25.64Min: 28.17 / Avg: 28.22 / Max: 28.26Min: 37.4 / Avg: 37.72 / Max: 38.01

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP StreamclusterEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P510152025SE +/- 0.049, N = 4SE +/- 0.017, N = 3SE +/- 0.008, N = 5SE +/- 0.017, N = 5SE +/- 0.030, N = 5SE +/- 0.074, N = 15SE +/- 0.119, N = 4SE +/- 0.022, N = 4SE +/- 0.016, N = 5SE +/- 0.007, N = 4SE +/- 0.042, N = 4SE +/- 0.034, N = 3SE +/- 0.159, N = 8SE +/- 0.079, N = 3SE +/- 0.040, N = 415.04919.7078.7918.9288.9059.18512.02714.3169.91314.32014.76117.72918.18916.89515.5401. (CXX) g++ options: -O2 -lOpenCL
OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP StreamclusterEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P510152025Min: 14.94 / Avg: 15.05 / Max: 15.15Min: 19.68 / Avg: 19.71 / Max: 19.74Min: 8.78 / Avg: 8.79 / Max: 8.82Min: 8.9 / Avg: 8.93 / Max: 8.99Min: 8.85 / Avg: 8.91 / Max: 9.02Min: 9.06 / Avg: 9.19 / Max: 10.22Min: 11.69 / Avg: 12.03 / Max: 12.26Min: 14.25 / Avg: 14.32 / Max: 14.36Min: 9.88 / Avg: 9.91 / Max: 9.97Min: 14.31 / Avg: 14.32 / Max: 14.34Min: 14.65 / Avg: 14.76 / Max: 14.83Min: 17.66 / Avg: 17.73 / Max: 17.77Min: 17.99 / Avg: 18.19 / Max: 19.3Min: 16.76 / Avg: 16.9 / Max: 17.03Min: 15.47 / Avg: 15.54 / Max: 15.651. (CXX) g++ options: -O2 -lOpenCL

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code and features Cargo-like features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.13Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150SE +/- 0.30, N = 3SE +/- 0.15, N = 3SE +/- 0.15, N = 3SE +/- 0.22, N = 3SE +/- 0.09, N = 3SE +/- 0.07, N = 3SE +/- 0.16, N = 3SE +/- 0.32, N = 3SE +/- 0.16, N = 3SE +/- 0.07, N = 3SE +/- 0.10, N = 3SE +/- 0.14, N = 3SE +/- 0.07, N = 3SE +/- 0.23, N = 3SE +/- 0.29, N = 375.85113.2666.2567.9768.2969.4369.8671.4674.4874.2076.6587.6694.35105.71145.44
OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.13Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150Min: 75.26 / Avg: 75.85 / Max: 76.19Min: 113.02 / Avg: 113.26 / Max: 113.53Min: 66.01 / Avg: 66.25 / Max: 66.52Min: 67.65 / Avg: 67.97 / Max: 68.39Min: 68.11 / Avg: 68.29 / Max: 68.39Min: 69.34 / Avg: 69.43 / Max: 69.57Min: 69.58 / Avg: 69.86 / Max: 70.15Min: 70.83 / Avg: 71.46 / Max: 71.85Min: 74.18 / Avg: 74.48 / Max: 74.74Min: 74.09 / Avg: 74.2 / Max: 74.34Min: 76.5 / Avg: 76.65 / Max: 76.84Min: 87.37 / Avg: 87.66 / Max: 87.82Min: 94.22 / Avg: 94.35 / Max: 94.43Min: 105.37 / Avg: 105.71 / Max: 106.15Min: 145.08 / Avg: 145.44 / Max: 146.02

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9001800270036004500SE +/- 3.08, N = 3SE +/- 19.31, N = 3SE +/- 2.49, N = 3SE +/- 12.73, N = 3SE +/- 0.31, N = 3SE +/- 2.50, N = 3SE +/- 2.44, N = 3SE +/- 3.65, N = 3SE +/- 1.33, N = 3SE +/- 1.04, N = 3SE +/- 12.54, N = 3SE +/- 1.52, N = 3SE +/- 7.78, N = 3SE +/- 9.26, N = 3SE +/- 14.48, N = 31976.701836.513589.694029.983562.633059.053325.992579.792815.802794.252507.582414.742527.862434.172298.63
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P7001400210028003500Min: 1971.22 / Avg: 1976.7 / Max: 1981.88Min: 1797.95 / Avg: 1836.51 / Max: 1857.52Min: 3585.11 / Avg: 3589.69 / Max: 3593.65Min: 4015.57 / Avg: 4029.98 / Max: 4055.36Min: 3562.19 / Avg: 3562.63 / Max: 3563.22Min: 3056.48 / Avg: 3059.05 / Max: 3064.04Min: 3322.37 / Avg: 3325.99 / Max: 3330.62Min: 2575.39 / Avg: 2579.79 / Max: 2587.03Min: 2813.23 / Avg: 2815.8 / Max: 2817.7Min: 2792.4 / Avg: 2794.25 / Max: 2795.99Min: 2482.49 / Avg: 2507.58 / Max: 2520.14Min: 2412.09 / Avg: 2414.74 / Max: 2417.36Min: 2515.93 / Avg: 2527.86 / Max: 2542.47Min: 2418.87 / Avg: 2434.17 / Max: 2450.86Min: 2271.79 / Avg: 2298.63 / Max: 2321.46

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgz/s, More Is BetterLULESH 2.0.3EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3K6K9K12K15KSE +/- 40.17, N = 6SE +/- 12.34, N = 6SE +/- 138.93, N = 7SE +/- 146.93, N = 12SE +/- 142.13, N = 3SE +/- 58.13, N = 4SE +/- 60.62, N = 4SE +/- 157.59, N = 4SE +/- 43.36, N = 4SE +/- 131.70, N = 5SE +/- 9.05, N = 6SE +/- 8.39, N = 6SE +/- 15.52, N = 6SE +/- 38.61, N = 6SE +/- 11.79, N = 66839.178720.8814778.6414612.2114603.3314015.6613435.3013099.6213938.2512840.648024.377954.156954.806871.936757.991. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi
OpenBenchmarking.orgz/s, More Is BetterLULESH 2.0.3EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3K6K9K12K15KMin: 6733.72 / Avg: 6839.17 / Max: 6954.28Min: 8666.52 / Avg: 8720.88 / Max: 8753.04Min: 13961.68 / Avg: 14778.64 / Max: 15005.98Min: 13024.64 / Avg: 14612.21 / Max: 14895.69Min: 14340.08 / Avg: 14603.33 / Max: 14827.83Min: 13885.31 / Avg: 14015.66 / Max: 14137.76Min: 13262.26 / Avg: 13435.3 / Max: 13545.43Min: 12632.05 / Avg: 13099.62 / Max: 13300.62Min: 13879.03 / Avg: 13938.25 / Max: 14067.14Min: 12361.8 / Avg: 12840.64 / Max: 13103.71Min: 7986.3 / Avg: 8024.37 / Max: 8053.74Min: 7941.34 / Avg: 7954.15 / Max: 7995.56Min: 6907.71 / Avg: 6954.8 / Max: 7005.82Min: 6730.47 / Avg: 6871.93 / Max: 6957.51Min: 6718.89 / Avg: 6757.99 / Max: 6804.971. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP32 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9001800270036004500SE +/- 1.86, N = 3SE +/- 14.55, N = 3SE +/- 1.62, N = 3SE +/- 8.92, N = 3SE +/- 0.45, N = 3SE +/- 1.92, N = 3SE +/- 3.41, N = 3SE +/- 2.29, N = 3SE +/- 2.24, N = 3SE +/- 0.81, N = 3SE +/- 3.50, N = 3SE +/- 2.67, N = 3SE +/- 13.37, N = 3SE +/- 34.12, N = 3SE +/- 17.95, N = 31980.561844.313584.774030.533558.093069.383327.502572.362814.152791.542522.612413.452446.022406.222314.66
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP32 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P7001400210028003500Min: 1978.38 / Avg: 1980.56 / Max: 1984.26Min: 1815.75 / Avg: 1844.31 / Max: 1863.4Min: 3581.64 / Avg: 3584.77 / Max: 3587.04Min: 4020.18 / Avg: 4030.53 / Max: 4048.29Min: 3557.24 / Avg: 3558.09 / Max: 3558.76Min: 3065.61 / Avg: 3069.38 / Max: 3071.93Min: 3322.21 / Avg: 3327.5 / Max: 3333.86Min: 2567.84 / Avg: 2572.36 / Max: 2575.2Min: 2809.91 / Avg: 2814.15 / Max: 2817.55Min: 2789.93 / Avg: 2791.54 / Max: 2792.44Min: 2516.2 / Avg: 2522.61 / Max: 2528.27Min: 2410.34 / Avg: 2413.45 / Max: 2418.77Min: 2426.14 / Avg: 2446.02 / Max: 2471.46Min: 2355.5 / Avg: 2406.22 / Max: 2471.13Min: 2283.58 / Avg: 2314.66 / Max: 2345.75

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P11002200330044005500SE +/- 1.31, N = 3SE +/- 4.62, N = 3SE +/- 18.38, N = 3SE +/- 6.77, N = 3SE +/- 6.15, N = 3SE +/- 4.94, N = 3SE +/- 5.33, N = 3SE +/- 2.78, N = 3SE +/- 10.25, N = 3SE +/- 1.53, N = 3SE +/- 14.72, N = 3SE +/- 5.57, N = 3SE +/- 2.79, N = 3SE +/- 9.03, N = 3SE +/- 3.19, N = 32643.392399.194536.005170.994731.724132.134197.603276.523757.353578.093310.913145.893344.393264.493071.91
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9001800270036004500Min: 2641.61 / Avg: 2643.39 / Max: 2645.95Min: 2393.6 / Avg: 2399.19 / Max: 2408.36Min: 4505.91 / Avg: 4536 / Max: 4569.34Min: 5162.09 / Avg: 5170.99 / Max: 5184.28Min: 4723.88 / Avg: 4731.72 / Max: 4743.85Min: 4122.78 / Avg: 4132.13 / Max: 4139.56Min: 4187.64 / Avg: 4197.6 / Max: 4205.86Min: 3273.17 / Avg: 3276.52 / Max: 3282.04Min: 3741.86 / Avg: 3757.35 / Max: 3776.73Min: 3576.1 / Avg: 3578.09 / Max: 3581.09Min: 3286.85 / Avg: 3310.91 / Max: 3337.63Min: 3138.42 / Avg: 3145.89 / Max: 3156.77Min: 3340.81 / Avg: 3344.39 / Max: 3349.89Min: 3253.32 / Avg: 3264.49 / Max: 3282.36Min: 3065.98 / Avg: 3071.91 / Max: 3076.9

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: IS.DEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7542EPYC 7532EPYC 7502PEPYC 7302PEPYC 7282EPYC 7232P400800120016002000SE +/- 9.24, N = 3SE +/- 5.70, N = 3SE +/- 3.31, N = 3SE +/- 4.04, N = 3SE +/- 11.89, N = 3SE +/- 5.74, N = 3SE +/- 1.95, N = 3SE +/- 3.36, N = 3SE +/- 2.26, N = 3SE +/- 1.26, N = 3SE +/- 2.89, N = 3934.001277.152007.421971.502006.811885.981992.761884.311647.781422.071083.361. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: IS.DEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7542EPYC 7532EPYC 7502PEPYC 7302PEPYC 7282EPYC 7232P30060090012001500Min: 920.59 / Avg: 934 / Max: 951.73Min: 1269.25 / Avg: 1277.15 / Max: 1288.21Min: 2003.75 / Avg: 2007.42 / Max: 2014.03Min: 1965.15 / Avg: 1971.5 / Max: 1979Min: 1983.12 / Avg: 2006.81 / Max: 2020.44Min: 1876.61 / Avg: 1885.98 / Max: 1896.4Min: 1988.95 / Avg: 1992.76 / Max: 1995.4Min: 1880.37 / Avg: 1884.31 / Max: 1890.99Min: 1643.54 / Avg: 1647.78 / Max: 1651.27Min: 1420.09 / Avg: 1422.07 / Max: 1424.4Min: 1077.67 / Avg: 1083.36 / Max: 1087.081. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi2. Open MPI 4.0.3

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P11002200330044005500SE +/- 4.58, N = 3SE +/- 1.90, N = 3SE +/- 8.32, N = 3SE +/- 6.37, N = 3SE +/- 4.63, N = 3SE +/- 14.85, N = 3SE +/- 8.65, N = 3SE +/- 5.09, N = 3SE +/- 3.83, N = 3SE +/- 1.62, N = 3SE +/- 4.18, N = 3SE +/- 5.33, N = 3SE +/- 1.17, N = 3SE +/- 8.60, N = 3SE +/- 10.38, N = 32649.352409.034525.325153.994732.734138.694203.813271.263754.853582.563315.633155.853342.133265.933090.58
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9001800270036004500Min: 2640.2 / Avg: 2649.35 / Max: 2654.02Min: 2405.41 / Avg: 2409.03 / Max: 2411.84Min: 4510.25 / Avg: 4525.32 / Max: 4538.98Min: 5142.04 / Avg: 5153.99 / Max: 5163.81Min: 4726.94 / Avg: 4732.73 / Max: 4741.89Min: 4119.13 / Avg: 4138.69 / Max: 4167.82Min: 4186.54 / Avg: 4203.81 / Max: 4213.33Min: 3264.96 / Avg: 3271.26 / Max: 3281.33Min: 3748.56 / Avg: 3754.85 / Max: 3761.77Min: 3579.4 / Avg: 3582.56 / Max: 3584.73Min: 3309.61 / Avg: 3315.63 / Max: 3323.66Min: 3145.96 / Avg: 3155.85 / Max: 3164.23Min: 3339.82 / Avg: 3342.13 / Max: 3343.55Min: 3248.92 / Avg: 3265.93 / Max: 3276.67Min: 3072.59 / Avg: 3090.58 / Max: 3108.55

Parboil

The Parboil Benchmarks from the IMPACT Research Group at University of Illinois are a set of throughput computing applications for looking at computing architecture and compilers. Parboil test-cases support OpenMP, OpenCL, and CUDA multi-processing environments. However, at this time the test profile is just making use of the OpenMP and OpenCL test workloads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterParboil 2.5Test: OpenMP LBMEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1020304050SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.17, N = 11SE +/- 0.07, N = 3SE +/- 0.26, N = 5SE +/- 0.03, N = 3SE +/- 0.26, N = 3SE +/- 0.05, N = 3SE +/- 0.11, N = 3SE +/- 0.02, N = 3SE +/- 0.07, N = 3SE +/- 0.08, N = 3SE +/- 0.02, N = 3SE +/- 0.07, N = 3SE +/- 0.14, N = 335.4227.2523.3323.3223.8221.9426.4727.6422.9427.6029.0224.9246.0742.9545.761. (CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp
OpenBenchmarking.orgSeconds, Fewer Is BetterParboil 2.5Test: OpenMP LBMEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P918273645Min: 35.36 / Avg: 35.42 / Max: 35.47Min: 27.2 / Avg: 27.25 / Max: 27.28Min: 22.5 / Avg: 23.33 / Max: 24.76Min: 23.22 / Avg: 23.32 / Max: 23.46Min: 23.2 / Avg: 23.82 / Max: 24.72Min: 21.88 / Avg: 21.94 / Max: 21.99Min: 26.02 / Avg: 26.47 / Max: 26.92Min: 27.53 / Avg: 27.64 / Max: 27.7Min: 22.73 / Avg: 22.94 / Max: 23.1Min: 27.55 / Avg: 27.6 / Max: 27.63Min: 28.89 / Avg: 29.02 / Max: 29.09Min: 24.77 / Avg: 24.92 / Max: 25.05Min: 46.05 / Avg: 46.07 / Max: 46.1Min: 42.88 / Avg: 42.95 / Max: 43.08Min: 45.56 / Avg: 45.76 / Max: 46.031. (CXX) g++ options: -lm -lpthread -lgomp -O3 -ffast-math -fopenmp

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficienct lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 75, Compression Effort 7EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300SE +/- 0.46, N = 3SE +/- 0.90, N = 3SE +/- 0.13, N = 3SE +/- 0.47, N = 3SE +/- 0.04, N = 3SE +/- 0.03, N = 3SE +/- 0.13, N = 3SE +/- 0.04, N = 3SE +/- 0.09, N = 3SE +/- 0.05, N = 3SE +/- 0.09, N = 3SE +/- 1.62, N = 3SE +/- 1.08, N = 3SE +/- 0.65, N = 3SE +/- 0.87, N = 3140.61227.13138.09139.36139.56138.08140.18135.97142.20137.44138.04164.81173.72199.54277.671. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 75, Compression Effort 7EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50100150200250Min: 140.07 / Avg: 140.61 / Max: 141.52Min: 225.67 / Avg: 227.13 / Max: 228.77Min: 137.89 / Avg: 138.09 / Max: 138.34Min: 138.86 / Avg: 139.36 / Max: 140.29Min: 139.49 / Avg: 139.56 / Max: 139.6Min: 138.03 / Avg: 138.08 / Max: 138.13Min: 140 / Avg: 140.18 / Max: 140.43Min: 135.9 / Avg: 135.97 / Max: 136.04Min: 142.07 / Avg: 142.2 / Max: 142.38Min: 137.39 / Avg: 137.44 / Max: 137.53Min: 137.91 / Avg: 138.04 / Max: 138.21Min: 161.8 / Avg: 164.81 / Max: 167.37Min: 172.62 / Avg: 173.72 / Max: 175.87Min: 198.32 / Avg: 199.54 / Max: 200.54Min: 275.95 / Avg: 277.67 / Max: 278.781. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: mobilenet-v1-1.0EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1.30012.60023.90035.20046.5005SE +/- 0.401, N = 3SE +/- 0.002, N = 14SE +/- 0.009, N = 3SE +/- 0.015, N = 3SE +/- 0.006, N = 3SE +/- 0.024, N = 3SE +/- 0.004, N = 15SE +/- 0.005, N = 15SE +/- 0.008, N = 3SE +/- 0.010, N = 3SE +/- 0.012, N = 3SE +/- 0.005, N = 15SE +/- 0.060, N = 3SE +/- 0.015, N = 4SE +/- 0.007, N = 115.7783.6253.1273.1873.0733.0563.0222.8363.0032.9273.2133.8564.0815.3223.917MIN: 4.65 / MAX: 7.24MIN: 3.57 / MAX: 17.34MIN: 3.07 / MAX: 3.45MIN: 3.13 / MAX: 3.28MIN: 3.03 / MAX: 3.22MIN: 2.98 / MAX: 3.31MIN: 2.96 / MAX: 3.4MIN: 2.77 / MAX: 4.94MIN: 2.95 / MAX: 3.25MIN: 2.87 / MAX: 5.27MIN: 3.15 / MAX: 4.81MIN: 3.74 / MAX: 20.71MIN: 3.8 / MAX: 20.13MIN: 5.22 / MAX: 13.18MIN: 3.79 / MAX: 19.861. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: mobilenet-v1-1.0EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 5 / Avg: 5.78 / Max: 6.34Min: 3.61 / Avg: 3.62 / Max: 3.64Min: 3.11 / Avg: 3.13 / Max: 3.14Min: 3.17 / Avg: 3.19 / Max: 3.22Min: 3.06 / Avg: 3.07 / Max: 3.09Min: 3.02 / Avg: 3.06 / Max: 3.1Min: 3.01 / Avg: 3.02 / Max: 3.05Min: 2.81 / Avg: 2.84 / Max: 2.88Min: 2.99 / Avg: 3 / Max: 3.02Min: 2.92 / Avg: 2.93 / Max: 2.95Min: 3.19 / Avg: 3.21 / Max: 3.23Min: 3.82 / Avg: 3.86 / Max: 3.88Min: 3.98 / Avg: 4.08 / Max: 4.18Min: 5.3 / Avg: 5.32 / Max: 5.36Min: 3.86 / Avg: 3.92 / Max: 3.951. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficienct lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 95, Compression Effort 7EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P110220330440550SE +/- 1.43, N = 3SE +/- 0.26, N = 3SE +/- 0.05, N = 3SE +/- 0.12, N = 3SE +/- 0.17, N = 3SE +/- 0.24, N = 3SE +/- 0.12, N = 3SE +/- 0.19, N = 3SE +/- 0.16, N = 3SE +/- 0.13, N = 3SE +/- 0.27, N = 3SE +/- 0.98, N = 3SE +/- 2.55, N = 3SE +/- 2.81, N = 3SE +/- 0.90, N = 3254.76416.27254.50257.01256.56254.88258.37250.41260.46253.10254.24301.52309.98361.48507.741. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 95, Compression Effort 7EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P90180270360450Min: 252.29 / Avg: 254.76 / Max: 257.23Min: 415.89 / Avg: 416.27 / Max: 416.76Min: 254.45 / Avg: 254.5 / Max: 254.61Min: 256.86 / Avg: 257 / Max: 257.24Min: 256.37 / Avg: 256.56 / Max: 256.9Min: 254.57 / Avg: 254.88 / Max: 255.36Min: 258.22 / Avg: 258.37 / Max: 258.61Min: 250.08 / Avg: 250.41 / Max: 250.74Min: 260.18 / Avg: 260.46 / Max: 260.72Min: 252.84 / Avg: 253.1 / Max: 253.26Min: 253.9 / Avg: 254.24 / Max: 254.77Min: 300.37 / Avg: 301.52 / Max: 303.47Min: 305.77 / Avg: 309.98 / Max: 314.59Min: 356.31 / Avg: 361.48 / Max: 365.98Min: 506.68 / Avg: 507.74 / Max: 509.531. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.2EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P200M400M600M800M1000MSE +/- 3149876.45, N = 4SE +/- 322962.48, N = 6SE +/- 1151151.66, N = 3SE +/- 507394.80, N = 3SE +/- 229126.62, N = 3SE +/- 857688.98, N = 3SE +/- 989060.38, N = 3SE +/- 142607.92, N = 3SE +/- 422202.49, N = 3SE +/- 998206.04, N = 3SE +/- 643499.89, N = 3SE +/- 405072.19, N = 4SE +/- 314598.13, N = 3SE +/- 252656.85, N = 4SE +/- 295157.17, N = 56431809258098445838809985678783753338837352008922488338568934007743294679095326677743048007784594337882668754557606334578553504499164601. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi
OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.2EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P160M320M480M640M800MMin: 634035200 / Avg: 643180925 / Max: 647428500Min: 808702900 / Avg: 809844583.33 / Max: 810642700Min: 878726400 / Avg: 880998566.67 / Max: 882456200Min: 877829000 / Avg: 878375333.33 / Max: 879389100Min: 883287900 / Avg: 883735200 / Max: 884045100Min: 890552000 / Avg: 892248833.33 / Max: 893315100Min: 854937100 / Avg: 856893400 / Max: 858125300Min: 774119000 / Avg: 774329466.67 / Max: 774601400Min: 908914700 / Avg: 909532666.67 / Max: 910340000Min: 772483400 / Avg: 774304800 / Max: 775923400Min: 777565100 / Avg: 778459433.33 / Max: 779708100Min: 787578900 / Avg: 788266875 / Max: 789313600Min: 455133300 / Avg: 455760633.33 / Max: 456116200Min: 457151100 / Avg: 457855350 / Max: 458299300Min: 448913300 / Avg: 449916460 / Max: 4506291001. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Chimera 1080p 10-bitEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4080120160200SE +/- 0.17, N = 3SE +/- 0.13, N = 3SE +/- 0.55, N = 3SE +/- 0.31, N = 3SE +/- 0.56, N = 3SE +/- 0.28, N = 3SE +/- 0.12, N = 3SE +/- 0.16, N = 3SE +/- 0.13, N = 3SE +/- 0.08, N = 3SE +/- 0.20, N = 3SE +/- 0.09, N = 3SE +/- 0.38, N = 3SE +/- 0.17, N = 3SE +/- 0.13, N = 3129.77109.55190.04185.31190.19179.72178.70152.78146.29152.15135.69116.68114.49107.2594.36MIN: 84.35 / MAX: 268.95MIN: 70.77 / MAX: 249.84MIN: 126.76 / MAX: 308.03MIN: 125.96 / MAX: 294.64MIN: 126.93 / MAX: 307.7MIN: 121.84 / MAX: 276.81MIN: 121.07 / MAX: 277.42MIN: 98.52 / MAX: 278.09MIN: 94.75 / MAX: 256.37MIN: 97.46 / MAX: 272.72MIN: 85.88 / MAX: 272.31MIN: 74.3 / MAX: 261.93MIN: 72.02 / MAX: 259.87MIN: 67.93 / MAX: 243.83MIN: 60.14 / MAX: 225.561. (CC) gcc options: -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Chimera 1080p 10-bitEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150Min: 129.43 / Avg: 129.77 / Max: 129.97Min: 109.41 / Avg: 109.55 / Max: 109.8Min: 188.97 / Avg: 190.04 / Max: 190.8Min: 184.71 / Avg: 185.31 / Max: 185.73Min: 189.1 / Avg: 190.19 / Max: 190.97Min: 179.44 / Avg: 179.72 / Max: 180.27Min: 178.47 / Avg: 178.7 / Max: 178.84Min: 152.47 / Avg: 152.78 / Max: 153.01Min: 146.03 / Avg: 146.29 / Max: 146.46Min: 152.02 / Avg: 152.15 / Max: 152.28Min: 135.36 / Avg: 135.69 / Max: 136.06Min: 116.54 / Avg: 116.68 / Max: 116.86Min: 113.88 / Avg: 114.49 / Max: 115.2Min: 106.92 / Avg: 107.25 / Max: 107.46Min: 94.18 / Avg: 94.36 / Max: 94.611. (CC) gcc options: -pthread

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGFLOP/s, More Is BetterACES DGEMM 1.0Sustained Floating-Point RateEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P48121620SE +/- 0.028662, N = 3SE +/- 0.039059, N = 3SE +/- 0.190487, N = 4SE +/- 0.163720, N = 4SE +/- 0.099835, N = 4SE +/- 0.030087, N = 3SE +/- 0.083534, N = 3SE +/- 0.091101, N = 3SE +/- 0.072162, N = 3SE +/- 0.066319, N = 3SE +/- 0.025438, N = 3SE +/- 0.017716, N = 3SE +/- 0.037637, N = 3SE +/- 0.008716, N = 3SE +/- 0.010754, N = 34.8976533.01159516.29885215.61481216.88116813.94246512.7350318.8633369.8288719.3269097.2082354.9637154.4524193.4640031.5837041. (CC) gcc options: -O3 -march=native -fopenmp
OpenBenchmarking.orgGFLOP/s, More Is BetterACES DGEMM 1.0Sustained Floating-Point RateEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P48121620Min: 4.85 / Avg: 4.9 / Max: 4.95Min: 2.95 / Avg: 3.01 / Max: 3.08Min: 15.78 / Avg: 16.3 / Max: 16.69Min: 15.39 / Avg: 15.61 / Max: 16.1Min: 16.59 / Avg: 16.88 / Max: 17.02Min: 13.91 / Avg: 13.94 / Max: 14Min: 12.62 / Avg: 12.74 / Max: 12.9Min: 8.68 / Avg: 8.86 / Max: 8.96Min: 9.75 / Avg: 9.83 / Max: 9.97Min: 9.25 / Avg: 9.33 / Max: 9.46Min: 7.16 / Avg: 7.21 / Max: 7.24Min: 4.93 / Avg: 4.96 / Max: 4.99Min: 4.39 / Avg: 4.45 / Max: 4.52Min: 3.45 / Avg: 3.46 / Max: 3.48Min: 1.57 / Avg: 1.58 / Max: 1.611. (CC) gcc options: -O3 -march=native -fopenmp

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficienct lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 100, Compression Effort 5EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P48121620SE +/- 0.012, N = 6SE +/- 0.031, N = 4SE +/- 0.015, N = 6SE +/- 0.012, N = 6SE +/- 0.009, N = 6SE +/- 0.006, N = 6SE +/- 0.013, N = 6SE +/- 0.015, N = 6SE +/- 0.010, N = 6SE +/- 0.027, N = 6SE +/- 0.009, N = 6SE +/- 0.020, N = 5SE +/- 0.012, N = 5SE +/- 0.018, N = 5SE +/- 0.032, N = 47.97312.7888.0618.1258.0528.0748.1627.9498.0557.9727.9749.3549.65411.13115.5471. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 100, Compression Effort 5EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P48121620Min: 7.94 / Avg: 7.97 / Max: 8.02Min: 12.74 / Avg: 12.79 / Max: 12.88Min: 8.01 / Avg: 8.06 / Max: 8.11Min: 8.1 / Avg: 8.13 / Max: 8.17Min: 8.04 / Avg: 8.05 / Max: 8.09Min: 8.05 / Avg: 8.07 / Max: 8.09Min: 8.12 / Avg: 8.16 / Max: 8.21Min: 7.92 / Avg: 7.95 / Max: 8.02Min: 8.01 / Avg: 8.06 / Max: 8.08Min: 7.91 / Avg: 7.97 / Max: 8.08Min: 7.94 / Avg: 7.97 / Max: 8Min: 9.31 / Avg: 9.35 / Max: 9.43Min: 9.63 / Avg: 9.65 / Max: 9.69Min: 11.09 / Avg: 11.13 / Max: 11.19Min: 15.46 / Avg: 15.55 / Max: 15.621. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -rdynamic -lpthread -ljpeg -lwebp -lwebpdemux

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device Inference ScoreEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5001000150020002500145613012069196521222125204921072043202219261660153813721112

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Earthgecko SkylineEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150SE +/- 0.59, N = 3SE +/- 0.92, N = 7SE +/- 0.27, N = 3SE +/- 0.05, N = 3SE +/- 0.45, N = 3SE +/- 0.18, N = 3SE +/- 1.10, N = 3SE +/- 0.80, N = 3SE +/- 0.65, N = 3SE +/- 0.60, N = 3SE +/- 0.58, N = 3SE +/- 0.45, N = 3SE +/- 1.07, N = 3SE +/- 0.64, N = 3SE +/- 1.44, N = 375.2398.5283.4084.8384.7985.0386.0683.8386.7085.7385.7688.3296.85101.59141.25
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Earthgecko SkylineEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150Min: 74.45 / Avg: 75.23 / Max: 76.39Min: 95.01 / Avg: 98.52 / Max: 100.63Min: 83 / Avg: 83.4 / Max: 83.9Min: 84.74 / Avg: 84.83 / Max: 84.9Min: 84.12 / Avg: 84.79 / Max: 85.64Min: 84.8 / Avg: 85.03 / Max: 85.39Min: 83.86 / Avg: 86.06 / Max: 87.25Min: 82.24 / Avg: 83.83 / Max: 84.72Min: 85.78 / Avg: 86.7 / Max: 87.96Min: 85.04 / Avg: 85.73 / Max: 86.93Min: 84.6 / Avg: 85.76 / Max: 86.43Min: 87.53 / Avg: 88.32 / Max: 89.11Min: 94.95 / Avg: 96.85 / Max: 98.65Min: 100.3 / Avg: 101.59 / Max: 102.33Min: 138.42 / Avg: 141.25 / Max: 143.09

OCRMyPDF

OCRMyPDF is an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with the text now selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOCRMyPDF 9.6.0+dfsgProcessing 60 Page PDF DocumentEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P816243240SE +/- 0.05, N = 3SE +/- 0.11, N = 3SE +/- 0.11, N = 3SE +/- 0.20, N = 3SE +/- 0.05, N = 3SE +/- 0.18, N = 3SE +/- 0.07, N = 3SE +/- 0.11, N = 3SE +/- 0.01, N = 3SE +/- 0.11, N = 3SE +/- 0.11, N = 3SE +/- 0.10, N = 3SE +/- 0.06, N = 3SE +/- 0.05, N = 3SE +/- 0.06, N = 319.8627.2018.1418.7318.7919.0918.8618.6619.9419.3820.8723.3924.2326.9033.02
OpenBenchmarking.orgSeconds, Fewer Is BetterOCRMyPDF 9.6.0+dfsgProcessing 60 Page PDF DocumentEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P714212835Min: 19.76 / Avg: 19.86 / Max: 19.94Min: 26.98 / Avg: 27.2 / Max: 27.33Min: 17.95 / Avg: 18.14 / Max: 18.35Min: 18.39 / Avg: 18.73 / Max: 19.1Min: 18.7 / Avg: 18.79 / Max: 18.88Min: 18.76 / Avg: 19.09 / Max: 19.39Min: 18.73 / Avg: 18.85 / Max: 18.96Min: 18.45 / Avg: 18.66 / Max: 18.82Min: 19.92 / Avg: 19.94 / Max: 19.97Min: 19.15 / Avg: 19.38 / Max: 19.51Min: 20.74 / Avg: 20.87 / Max: 21.08Min: 23.21 / Avg: 23.39 / Max: 23.54Min: 24.11 / Avg: 24.23 / Max: 24.33Min: 26.81 / Avg: 26.9 / Max: 26.96Min: 32.94 / Avg: 33.02 / Max: 33.14

Stream-Dynamic

This is an open-source AMD modified copy of the Stream memory benchmark catered towards running the RAM benchmark on systems with the AMD Optimizing C/C++ Compiler (AOCC) among other by-default optimizations aiming for an easy and standardized deployment. This test profile though will attempt to fall-back to GCC / Clang for systems lacking AOCC, otherwise there is the existing "stream" test profile. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- TriadEPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 728220K40K60K80K100KSE +/- 22.26, N = 3SE +/- 46.49, N = 3SE +/- 9.40, N = 3SE +/- 22.34, N = 3SE +/- 9.56, N = 3104095.68103873.4490450.72104119.1657372.731. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- TriadEPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 728220K40K60K80K100KMin: 104053.45 / Avg: 104095.68 / Max: 104128.99Min: 103785.36 / Avg: 103873.44 / Max: 103943.29Min: 90432.15 / Avg: 90450.72 / Max: 90462.55Min: 104077.01 / Avg: 104119.16 / Max: 104153.07Min: 57357.79 / Avg: 57372.73 / Max: 57390.531. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- AddEPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 728220K40K60K80K100KSE +/- 34.53, N = 3SE +/- 40.00, N = 3SE +/- 6.47, N = 3SE +/- 8.82, N = 3SE +/- 10.53, N = 3103818.19103556.1390238.63103789.9957323.381. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- AddEPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 728220K40K60K80K100KMin: 103749.97 / Avg: 103818.19 / Max: 103861.63Min: 103485.19 / Avg: 103556.13 / Max: 103623.61Min: 90231.4 / Avg: 90238.63 / Max: 90251.53Min: 103778.66 / Avg: 103789.99 / Max: 103807.37Min: 57311.27 / Avg: 57323.38 / Max: 57344.351. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Windowed GaussianEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215SE +/- 0.016, N = 6SE +/- 0.034, N = 5SE +/- 0.030, N = 6SE +/- 0.029, N = 6SE +/- 0.038, N = 6SE +/- 0.036, N = 6SE +/- 0.025, N = 6SE +/- 0.016, N = 6SE +/- 0.044, N = 6SE +/- 0.020, N = 6SE +/- 0.041, N = 6SE +/- 0.018, N = 6SE +/- 0.033, N = 6SE +/- 0.043, N = 5SE +/- 0.047, N = 46.5869.7906.8246.9306.9717.0637.0336.9007.2206.9897.1217.6458.1058.79811.912
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Windowed GaussianEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 6.53 / Avg: 6.59 / Max: 6.62Min: 9.66 / Avg: 9.79 / Max: 9.84Min: 6.76 / Avg: 6.82 / Max: 6.94Min: 6.86 / Avg: 6.93 / Max: 7.04Min: 6.85 / Avg: 6.97 / Max: 7.12Min: 6.98 / Avg: 7.06 / Max: 7.22Min: 6.95 / Avg: 7.03 / Max: 7.12Min: 6.84 / Avg: 6.9 / Max: 6.95Min: 7.06 / Avg: 7.22 / Max: 7.34Min: 6.93 / Avg: 6.99 / Max: 7.06Min: 6.99 / Avg: 7.12 / Max: 7.23Min: 7.59 / Avg: 7.65 / Max: 7.7Min: 8 / Avg: 8.1 / Max: 8.23Min: 8.7 / Avg: 8.8 / Max: 8.93Min: 11.83 / Avg: 11.91 / Max: 12.03

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgClocks, Fewer Is Betterctx_clockContext Switch TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50100150200250175185135120120138132174144150168180196203217

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed PHP Compilation 7.4.2Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100SE +/- 0.21, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.06, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.08, N = 3SE +/- 0.06, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 346.5459.5641.8742.5643.0643.8243.8444.5546.1045.4447.4152.7555.2260.2075.33
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed PHP Compilation 7.4.2Time To CompileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1428425670Min: 46.12 / Avg: 46.54 / Max: 46.75Min: 59.55 / Avg: 59.56 / Max: 59.58Min: 41.81 / Avg: 41.87 / Max: 41.92Min: 42.5 / Avg: 42.56 / Max: 42.67Min: 43 / Avg: 43.06 / Max: 43.15Min: 43.8 / Avg: 43.82 / Max: 43.84Min: 43.81 / Avg: 43.84 / Max: 43.91Min: 44.52 / Avg: 44.55 / Max: 44.58Min: 46.08 / Avg: 46.1 / Max: 46.13Min: 45.42 / Avg: 45.44 / Max: 45.46Min: 47.38 / Avg: 47.41 / Max: 47.44Min: 52.62 / Avg: 52.75 / Max: 52.9Min: 55.1 / Avg: 55.22 / Max: 55.29Min: 60.16 / Avg: 60.2 / Max: 60.27Min: 75.27 / Avg: 75.33 / Max: 75.44

Stream

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: CopyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20K40K60K80K100KSE +/- 330.72, N = 5SE +/- 52.59, N = 5SE +/- 14.28, N = 5SE +/- 31.65, N = 5SE +/- 13.27, N = 5SE +/- 47.63, N = 5SE +/- 15.41, N = 5SE +/- 5.66, N = 5SE +/- 10.31, N = 5SE +/- 14.85, N = 5SE +/- 5.59, N = 5SE +/- 8.02, N = 5SE +/- 2.37, N = 5SE +/- 7.06, N = 5SE +/- 9.94, N = 566901.182399.790141.790511.890296.591703.288717.279674.190663.279342.979677.280140.151093.151714.452390.01. (CC) gcc options: -O3 -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: CopyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P16K32K48K64K80KMin: 65648.8 / Avg: 66901.12 / Max: 67581.9Min: 82189.9 / Avg: 82399.72 / Max: 82465.6Min: 90094.7 / Avg: 90141.7 / Max: 90180.7Min: 90390.8 / Avg: 90511.82 / Max: 90564Min: 90262 / Avg: 90296.46 / Max: 90328.8Min: 91548.7 / Avg: 91703.18 / Max: 91816.8Min: 88676.8 / Avg: 88717.18 / Max: 88755.4Min: 79657.3 / Avg: 79674.1 / Max: 79689.4Min: 90631.3 / Avg: 90663.16 / Max: 90687.7Min: 79306.1 / Avg: 79342.9 / Max: 79388.7Min: 79657.3 / Avg: 79677.16 / Max: 79688.5Min: 80119.5 / Avg: 80140.14 / Max: 80160.6Min: 51087.4 / Avg: 51093.14 / Max: 51100.2Min: 51688.2 / Avg: 51714.44 / Max: 51726.5Min: 52373.2 / Avg: 52390.04 / Max: 524281. (CC) gcc options: -O3 -march=native -fopenmp

OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: TriadEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20K40K60K80K100KSE +/- 194.23, N = 5SE +/- 59.32, N = 5SE +/- 20.07, N = 5SE +/- 26.33, N = 5SE +/- 21.10, N = 5SE +/- 33.62, N = 5SE +/- 11.91, N = 5SE +/- 36.22, N = 5SE +/- 16.47, N = 5SE +/- 19.43, N = 5SE +/- 7.48, N = 5SE +/- 5.88, N = 5SE +/- 8.84, N = 5SE +/- 2.27, N = 5SE +/- 65.26, N = 572315.689567.298248.098034.898343.297940.096497.187057.099248.287239.887308.488105.755596.356066.656788.61. (CC) gcc options: -O3 -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: TriadEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20K40K60K80K100KMin: 71942.4 / Avg: 72315.56 / Max: 73068.3Min: 89468.9 / Avg: 89567.18 / Max: 89800.2Min: 98207.1 / Avg: 98247.98 / Max: 98320.3Min: 97939.6 / Avg: 98034.8 / Max: 98096.1Min: 98280 / Avg: 98343.22 / Max: 98393.4Min: 97851.1 / Avg: 97940.04 / Max: 98039.8Min: 96474.4 / Avg: 96497.14 / Max: 96541Min: 86972.9 / Avg: 87057.04 / Max: 87161.9Min: 99194.2 / Avg: 99248.22 / Max: 99293.1Min: 87168 / Avg: 87239.76 / Max: 87282.1Min: 87291.9 / Avg: 87308.42 / Max: 87333.6Min: 88086.3 / Avg: 88105.74 / Max: 88121.8Min: 55564.4 / Avg: 55596.28 / Max: 55617.5Min: 56061.4 / Avg: 56066.58 / Max: 56074.5Min: 56528 / Avg: 56788.62 / Max: 56863.91. (CC) gcc options: -O3 -march=native -fopenmp

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: MobileNetV2_224EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810SE +/- 0.391, N = 3SE +/- 0.017, N = 14SE +/- 0.020, N = 3SE +/- 0.016, N = 3SE +/- 0.022, N = 3SE +/- 0.016, N = 3SE +/- 0.008, N = 15SE +/- 0.009, N = 15SE +/- 0.022, N = 3SE +/- 0.028, N = 3SE +/- 0.026, N = 3SE +/- 0.046, N = 15SE +/- 0.033, N = 3SE +/- 0.155, N = 4SE +/- 0.030, N = 118.1714.5784.9195.0674.9204.8254.8524.7134.8214.8205.2705.7725.8555.9304.944MIN: 7.28 / MAX: 12.17MIN: 4.43 / MAX: 10.46MIN: 4.81 / MAX: 5.23MIN: 4.95 / MAX: 5.21MIN: 4.78 / MAX: 5.1MIN: 4.71 / MAX: 5.16MIN: 4.69 / MAX: 5.21MIN: 4.51 / MAX: 6.85MIN: 4.72 / MAX: 5.31MIN: 4.67 / MAX: 6.92MIN: 5.12 / MAX: 6.52MIN: 5.46 / MAX: 20.47MIN: 5.58 / MAX: 23.16MIN: 5.53 / MAX: 20.6MIN: 4.74 / MAX: 7.041. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: MobileNetV2_224EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 7.72 / Avg: 8.17 / Max: 8.95Min: 4.51 / Avg: 4.58 / Max: 4.67Min: 4.88 / Avg: 4.92 / Max: 4.95Min: 5.05 / Avg: 5.07 / Max: 5.1Min: 4.88 / Avg: 4.92 / Max: 4.96Min: 4.79 / Avg: 4.83 / Max: 4.84Min: 4.78 / Avg: 4.85 / Max: 4.89Min: 4.6 / Avg: 4.71 / Max: 4.75Min: 4.8 / Avg: 4.82 / Max: 4.87Min: 4.78 / Avg: 4.82 / Max: 4.87Min: 5.23 / Avg: 5.27 / Max: 5.32Min: 5.57 / Avg: 5.77 / Max: 6.19Min: 5.81 / Avg: 5.86 / Max: 5.92Min: 5.66 / Avg: 5.93 / Max: 6.37Min: 4.81 / Avg: 4.94 / Max: 5.121. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7282EPYC 7272EPYC 7232P50100150200250SE +/- 2.04, N = 12SE +/- 1.27, N = 12SE +/- 1.41, N = 3SE +/- 1.19, N = 3SE +/- 1.10, N = 3SE +/- 4.09, N = 12SE +/- 1.35, N = 3SE +/- 1.31, N = 12SE +/- 1.09, N = 12SE +/- 1.66, N = 3SE +/- 1.05, N = 3SE +/- 1.75, N = 3SE +/- 2.36, N = 3122.44119.44132.94132.62136.87160.82131.53130.09132.04128.92135.68137.59212.891. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7282EPYC 7272EPYC 7232P4080120160200Min: 113.19 / Avg: 122.44 / Max: 133.84Min: 112.44 / Avg: 119.44 / Max: 126.77Min: 130.13 / Avg: 132.94 / Max: 134.51Min: 130.9 / Avg: 132.62 / Max: 134.89Min: 134.99 / Avg: 136.87 / Max: 138.8Min: 137.76 / Avg: 160.82 / Max: 185.31Min: 128.85 / Avg: 131.53 / Max: 133.13Min: 122.93 / Avg: 130.09 / Max: 137.37Min: 125.23 / Avg: 132.04 / Max: 137.09Min: 126.46 / Avg: 128.92 / Max: 132.08Min: 133.59 / Avg: 135.68 / Max: 136.92Min: 134.17 / Avg: 137.59 / Max: 139.97Min: 209.86 / Avg: 212.89 / Max: 217.531. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

Stream

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: AddEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20K40K60K80K100KSE +/- 39.00, N = 5SE +/- 18.23, N = 5SE +/- 23.05, N = 5SE +/- 25.81, N = 5SE +/- 29.12, N = 5SE +/- 23.20, N = 5SE +/- 16.17, N = 5SE +/- 16.89, N = 5SE +/- 7.96, N = 5SE +/- 13.36, N = 5SE +/- 9.19, N = 5SE +/- 17.97, N = 5SE +/- 6.11, N = 5SE +/- 9.00, N = 5SE +/- 61.95, N = 572752.289255.497124.397206.397180.997944.295807.086737.797912.586677.286805.687568.855437.255911.656795.11. (CC) gcc options: -O3 -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: AddEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20K40K60K80K100KMin: 72639.1 / Avg: 72752.2 / Max: 72844.7Min: 89196.2 / Avg: 89255.38 / Max: 89299.1Min: 97079.1 / Avg: 97124.28 / Max: 97213.2Min: 97146.6 / Avg: 97206.28 / Max: 97299.6Min: 97111 / Avg: 97180.94 / Max: 97268.6Min: 97867.2 / Avg: 97944.18 / Max: 97991.1Min: 95762.2 / Avg: 95807.04 / Max: 95857.9Min: 86695.7 / Avg: 86737.72 / Max: 86780.2Min: 97891 / Avg: 97912.52 / Max: 97938.6Min: 86645.7 / Avg: 86677.24 / Max: 86711.4Min: 86780.2 / Avg: 86805.64 / Max: 86837.1Min: 87527.2 / Avg: 87568.78 / Max: 87617.1Min: 55417 / Avg: 55437.24 / Max: 55454.2Min: 55883.1 / Avg: 55911.58 / Max: 55937.4Min: 56547.6 / Avg: 56795.1 / Max: 56866.81. (CC) gcc options: -O3 -march=native -fopenmp

Stream-Dynamic

This is an open-source AMD modified copy of the Stream memory benchmark catered towards running the RAM benchmark on systems with the AMD Optimizing C/C++ Compiler (AOCC) among other by-default optimizations aiming for an easy and standardized deployment. This test profile though will attempt to fall-back to GCC / Clang for systems lacking AOCC, otherwise there is the existing "stream" test profile. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- CopyEPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 728220K40K60K80K100KSE +/- 32.55, N = 3SE +/- 35.22, N = 3SE +/- 16.48, N = 3SE +/- 15.65, N = 3SE +/- 10.71, N = 393965.3893594.6182081.4394075.6753581.271. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- CopyEPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 728216K32K48K64K80KMin: 93901.29 / Avg: 93965.38 / Max: 94007.33Min: 93528.24 / Avg: 93594.61 / Max: 93648.23Min: 82062.83 / Avg: 82081.43 / Max: 82114.3Min: 94054.25 / Avg: 94075.67 / Max: 94106.13Min: 53561.74 / Avg: 53581.27 / Max: 53598.651. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

Stream

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: ScaleEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20K40K60K80K100KSE +/- 31.64, N = 5SE +/- 30.51, N = 5SE +/- 19.79, N = 5SE +/- 27.01, N = 5SE +/- 21.55, N = 5SE +/- 55.03, N = 5SE +/- 32.31, N = 5SE +/- 14.00, N = 5SE +/- 18.59, N = 5SE +/- 12.22, N = 5SE +/- 7.47, N = 5SE +/- 10.74, N = 5SE +/- 3.08, N = 5SE +/- 4.61, N = 5SE +/- 61.32, N = 567011.981753.087905.687714.687848.187579.986805.678399.989487.678638.679147.379703.551044.251978.252630.11. (CC) gcc options: -O3 -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream 2013-01-17Type: ScaleEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P16K32K48K64K80KMin: 66926.2 / Avg: 67011.88 / Max: 67120.3Min: 81720.5 / Avg: 81753 / Max: 81875Min: 87840 / Avg: 87905.6 / Max: 87960.9Min: 87618.6 / Avg: 87714.62 / Max: 87786Min: 87786 / Avg: 87848.06 / Max: 87907.9Min: 87440.5 / Avg: 87579.9 / Max: 87776.8Min: 86725.2 / Avg: 86805.6 / Max: 86880.2Min: 78366.2 / Avg: 78399.86 / Max: 78439.4Min: 89445.1 / Avg: 89487.56 / Max: 89550.1Min: 78592.9 / Avg: 78638.56 / Max: 78666.6Min: 79125.7 / Avg: 79147.34 / Max: 79168.6Min: 79669.6 / Avg: 79703.46 / Max: 79733Min: 51038.4 / Avg: 51044.22 / Max: 51055.9Min: 51965.2 / Avg: 51978.22 / Max: 51992.1Min: 52385 / Avg: 52630.06 / Max: 52697.21. (CC) gcc options: -O3 -march=native -fopenmp

Stream-Dynamic

This is an open-source AMD modified copy of the Stream memory benchmark catered towards running the RAM benchmark on systems with the AMD Optimizing C/C++ Compiler (AOCC) among other by-default optimizations aiming for an easy and standardized deployment. This test profile though will attempt to fall-back to GCC / Clang for systems lacking AOCC, otherwise there is the existing "stream" test profile. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- ScaleEPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 728220K40K60K80K100KSE +/- 28.79, N = 3SE +/- 32.33, N = 3SE +/- 12.91, N = 3SE +/- 25.57, N = 3SE +/- 11.66, N = 392337.6192049.4881529.0692368.4153610.351. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp
OpenBenchmarking.orgMB/s, More Is BetterStream-Dynamic 1.0- ScaleEPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 728216K32K48K64K80KMin: 92280.83 / Avg: 92337.61 / Max: 92374.33Min: 91997.69 / Avg: 92049.48 / Max: 92108.89Min: 81504.71 / Avg: 81529.06 / Max: 81548.69Min: 92337.02 / Avg: 92368.41 / Max: 92419.07Min: 53588 / Avg: 53610.35 / Max: 53627.311. (CXX) g++ options: -Ofast -mcmodel=large -mavx2 -ffp-contract=fast -march=native -fopenmp

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 3EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KSE +/- 20.53, N = 3SE +/- 4.75, N = 3SE +/- 6.83, N = 3SE +/- 33.88, N = 3SE +/- 8.45, N = 3SE +/- 34.97, N = 3SE +/- 25.58, N = 3SE +/- 4.93, N = 3SE +/- 14.29, N = 3SE +/- 18.74, N = 3SE +/- 21.24, N = 3SE +/- 34.67, N = 3SE +/- 3.19, N = 3SE +/- 6.17, N = 3SE +/- 5.69, N = 37899.76573.98270.48248.38287.78499.28172.07885.08476.37903.68033.07699.46706.26498.45123.21. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 3EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P15003000450060007500Min: 7860.7 / Avg: 7899.7 / Max: 7930.3Min: 6567.3 / Avg: 6573.87 / Max: 6583.1Min: 8256.7 / Avg: 8270.37 / Max: 8277.3Min: 8204.5 / Avg: 8248.33 / Max: 8315Min: 8277.2 / Avg: 8287.67 / Max: 8304.4Min: 8460.8 / Avg: 8499.17 / Max: 8569Min: 8129 / Avg: 8172.03 / Max: 8217.5Min: 7875.6 / Avg: 7885 / Max: 7892.3Min: 8449.2 / Avg: 8476.3 / Max: 8497.7Min: 7870.9 / Avg: 7903.63 / Max: 7935.8Min: 7990.6 / Avg: 8033.03 / Max: 8056Min: 7651 / Avg: 7699.4 / Max: 7766.6Min: 6700.4 / Avg: 6706.17 / Max: 6711.4Min: 6486.1 / Avg: 6498.4 / Max: 6505.4Min: 5117.2 / Avg: 5123.23 / Max: 5134.61. (CC) gcc options: -O3 -pthread -lz -llzma

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device AI ScoreEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P8001600240032004000261524743248308933413528339736413518351034723119292226792206

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Relative EntropyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P612182430SE +/- 0.08, N = 4SE +/- 0.09, N = 3SE +/- 0.05, N = 4SE +/- 0.03, N = 4SE +/- 0.10, N = 4SE +/- 0.13, N = 4SE +/- 0.08, N = 4SE +/- 0.06, N = 4SE +/- 0.08, N = 4SE +/- 0.15, N = 4SE +/- 0.05, N = 4SE +/- 0.17, N = 3SE +/- 0.10, N = 3SE +/- 0.17, N = 3SE +/- 0.17, N = 314.6018.8815.5715.9415.9715.8915.9715.7216.3315.8516.1217.1818.1019.1123.56
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Relative EntropyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P612182430Min: 14.38 / Avg: 14.6 / Max: 14.74Min: 18.71 / Avg: 18.88 / Max: 19.01Min: 15.48 / Avg: 15.57 / Max: 15.7Min: 15.88 / Avg: 15.94 / Max: 16.04Min: 15.81 / Avg: 15.97 / Max: 16.23Min: 15.65 / Avg: 15.89 / Max: 16.17Min: 15.82 / Avg: 15.97 / Max: 16.13Min: 15.6 / Avg: 15.72 / Max: 15.88Min: 16.13 / Avg: 16.33 / Max: 16.51Min: 15.61 / Avg: 15.85 / Max: 16.23Min: 16.01 / Avg: 16.12 / Max: 16.23Min: 17 / Avg: 17.18 / Max: 17.52Min: 17.96 / Avg: 18.1 / Max: 18.29Min: 18.77 / Avg: 19.11 / Max: 19.31Min: 23.35 / Avg: 23.56 / Max: 23.91

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterXZ Compression 5.2.4Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P816243240SE +/- 0.11, N = 3SE +/- 0.07, N = 3SE +/- 0.08, N = 3SE +/- 0.12, N = 3SE +/- 0.05, N = 3SE +/- 0.08, N = 3SE +/- 0.05, N = 3SE +/- 0.07, N = 3SE +/- 0.06, N = 3SE +/- 0.09, N = 3SE +/- 0.07, N = 3SE +/- 0.19, N = 9SE +/- 0.08, N = 3SE +/- 0.07, N = 3SE +/- 0.02, N = 321.4329.6121.9822.0522.0522.0122.5422.3022.0922.4222.4023.1524.9025.6634.511. (CC) gcc options: -pthread -fvisibility=hidden -O2
OpenBenchmarking.orgSeconds, Fewer Is BetterXZ Compression 5.2.4Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P714212835Min: 21.32 / Avg: 21.43 / Max: 21.65Min: 29.47 / Avg: 29.61 / Max: 29.69Min: 21.88 / Avg: 21.98 / Max: 22.15Min: 21.86 / Avg: 22.05 / Max: 22.28Min: 22 / Avg: 22.05 / Max: 22.15Min: 21.89 / Avg: 22.01 / Max: 22.17Min: 22.49 / Avg: 22.54 / Max: 22.64Min: 22.23 / Avg: 22.3 / Max: 22.43Min: 22.03 / Avg: 22.09 / Max: 22.22Min: 22.33 / Avg: 22.42 / Max: 22.59Min: 22.31 / Avg: 22.4 / Max: 22.53Min: 22.45 / Avg: 23.15 / Max: 23.78Min: 24.75 / Avg: 24.9 / Max: 25.01Min: 25.58 / Avg: 25.66 / Max: 25.79Min: 34.46 / Avg: 34.51 / Max: 34.531. (CC) gcc options: -pthread -fvisibility=hidden -O2

Timed MrBayes Analysis

This test performs a bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MrBayes Analysis 3.2.7Primate Phylogeny AnalysisEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150SE +/- 0.24, N = 3SE +/- 0.18, N = 3SE +/- 1.35, N = 3SE +/- 0.25, N = 3SE +/- 1.02, N = 3SE +/- 0.17, N = 3SE +/- 1.17, N = 4SE +/- 0.07, N = 3SE +/- 0.34, N = 3SE +/- 0.21, N = 3SE +/- 0.29, N = 3SE +/- 0.21, N = 3SE +/- 0.16, N = 3SE +/- 0.05, N = 3SE +/- 0.07, N = 372.4372.89108.22115.27116.15102.45104.5286.7296.8194.2686.0085.9389.2087.6088.671. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MrBayes Analysis 3.2.7Primate Phylogeny AnalysisEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100Min: 72.17 / Avg: 72.43 / Max: 72.9Min: 72.56 / Avg: 72.89 / Max: 73.16Min: 106.69 / Avg: 108.22 / Max: 110.92Min: 114.96 / Avg: 115.27 / Max: 115.77Min: 115.11 / Avg: 116.15 / Max: 118.18Min: 102.2 / Avg: 102.45 / Max: 102.77Min: 103.18 / Avg: 104.52 / Max: 108.01Min: 86.61 / Avg: 86.72 / Max: 86.86Min: 96.44 / Avg: 96.81 / Max: 97.49Min: 93.87 / Avg: 94.26 / Max: 94.59Min: 85.64 / Avg: 86 / Max: 86.58Min: 85.7 / Avg: 85.93 / Max: 86.35Min: 88.88 / Avg: 89.2 / Max: 89.39Min: 87.51 / Avg: 87.6 / Max: 87.67Min: 88.6 / Avg: 88.67 / Max: 88.811. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP32 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.27450.5490.82351.0981.3725SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 4SE +/- 0.00, N = 3SE +/- 0.01, N = 40.790.781.081.221.071.020.990.770.950.840.880.840.790.870.96
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP32 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 0.78 / Avg: 0.79 / Max: 0.79Min: 0.78 / Avg: 0.78 / Max: 0.79Min: 1.07 / Avg: 1.08 / Max: 1.08Min: 1.21 / Avg: 1.22 / Max: 1.22Min: 1.06 / Avg: 1.07 / Max: 1.07Min: 1.01 / Avg: 1.02 / Max: 1.02Min: 0.98 / Avg: 0.99 / Max: 0.99Min: 0.77 / Avg: 0.77 / Max: 0.78Min: 0.95 / Avg: 0.95 / Max: 0.95Min: 0.83 / Avg: 0.84 / Max: 0.84Min: 0.87 / Avg: 0.88 / Max: 0.91Min: 0.83 / Avg: 0.84 / Max: 0.84Min: 0.77 / Avg: 0.79 / Max: 0.81Min: 0.87 / Avg: 0.87 / Max: 0.87Min: 0.94 / Avg: 0.96 / Max: 0.99

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.27230.54460.81691.08921.3615SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 150.810.791.071.211.061.020.990.770.950.840.920.840.780.890.98
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 0.8 / Avg: 0.81 / Max: 0.82Min: 0.78 / Avg: 0.79 / Max: 0.81Min: 1.07 / Avg: 1.07 / Max: 1.08Min: 1.21 / Avg: 1.21 / Max: 1.21Min: 1.06 / Avg: 1.06 / Max: 1.07Min: 1.01 / Avg: 1.02 / Max: 1.02Min: 0.99 / Avg: 0.99 / Max: 0.99Min: 0.77 / Avg: 0.77 / Max: 0.78Min: 0.95 / Avg: 0.95 / Max: 0.95Min: 0.83 / Avg: 0.84 / Max: 0.84Min: 0.91 / Avg: 0.92 / Max: 0.93Min: 0.83 / Avg: 0.84 / Max: 0.84Min: 0.77 / Avg: 0.78 / Max: 0.79Min: 0.87 / Avg: 0.89 / Max: 0.91Min: 0.91 / Avg: 0.98 / Max: 1.03

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: resnet-v2-50EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P918273645SE +/- 0.19, N = 3SE +/- 0.01, N = 14SE +/- 0.07, N = 3SE +/- 0.07, N = 3SE +/- 0.10, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 15SE +/- 0.05, N = 15SE +/- 0.05, N = 3SE +/- 0.12, N = 3SE +/- 0.97, N = 3SE +/- 0.09, N = 15SE +/- 0.12, N = 3SE +/- 0.11, N = 4SE +/- 0.12, N = 1133.5027.4525.4626.9225.2224.7825.4424.3025.2725.0629.0231.5433.5938.1031.44MIN: 32.85 / MAX: 46.41MIN: 27.01 / MAX: 58.07MIN: 25.03 / MAX: 27.67MIN: 26.49 / MAX: 27.67MIN: 24.73 / MAX: 27.24MIN: 24.46 / MAX: 25.37MIN: 24.73 / MAX: 28.58MIN: 23.72 / MAX: 27.28MIN: 24.96 / MAX: 28.42MIN: 24.51 / MAX: 27.82MIN: 27.23 / MAX: 33.12MIN: 30.05 / MAX: 48.95MIN: 32.54 / MAX: 48.77MIN: 36.85 / MAX: 55.75MIN: 30.44 / MAX: 49.821. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: resnet-v2-50EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P816243240Min: 33.28 / Avg: 33.5 / Max: 33.89Min: 27.35 / Avg: 27.44 / Max: 27.53Min: 25.38 / Avg: 25.45 / Max: 25.59Min: 26.83 / Avg: 26.92 / Max: 27.06Min: 25.03 / Avg: 25.22 / Max: 25.35Min: 24.73 / Avg: 24.78 / Max: 24.82Min: 25.04 / Avg: 25.44 / Max: 25.64Min: 23.9 / Avg: 24.3 / Max: 24.57Min: 25.2 / Avg: 25.27 / Max: 25.36Min: 24.81 / Avg: 25.06 / Max: 25.18Min: 27.93 / Avg: 29.02 / Max: 30.96Min: 30.84 / Avg: 31.54 / Max: 32Min: 33.39 / Avg: 33.59 / Max: 33.8Min: 37.9 / Avg: 38.1 / Max: 38.42Min: 30.9 / Avg: 31.44 / Max: 32.51. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTungsten Renderer 0.2.2Scene: Water CausticEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P816243240SE +/- 0.07, N = 3SE +/- 0.22, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.03, N = 3SE +/- 0.25, N = 3SE +/- 0.41, N = 324.4228.1021.6421.9722.1322.5522.6522.6423.5523.1123.8726.0226.7929.9533.921. (CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterTungsten Renderer 0.2.2Scene: Water CausticEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P714212835Min: 24.31 / Avg: 24.42 / Max: 24.54Min: 27.88 / Avg: 28.1 / Max: 28.53Min: 21.63 / Avg: 21.64 / Max: 21.65Min: 21.94 / Avg: 21.97 / Max: 21.99Min: 22.09 / Avg: 22.13 / Max: 22.15Min: 22.48 / Avg: 22.55 / Max: 22.63Min: 22.58 / Avg: 22.65 / Max: 22.7Min: 22.58 / Avg: 22.64 / Max: 22.73Min: 23.54 / Avg: 23.55 / Max: 23.58Min: 23.07 / Avg: 23.11 / Max: 23.16Min: 23.81 / Avg: 23.87 / Max: 23.93Min: 25.95 / Avg: 26.02 / Max: 26.08Min: 26.76 / Avg: 26.79 / Max: 26.84Min: 29.5 / Avg: 29.95 / Max: 30.34Min: 33.49 / Avg: 33.92 / Max: 34.731. (CXX) g++ options: -std=c++0x -march=znver1 -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -mfma -mbmi2 -mno-avx -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -lpthread -ldl

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Bayesian ChangepointEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1020304050SE +/- 0.20, N = 3SE +/- 0.22, N = 3SE +/- 0.40, N = 3SE +/- 0.26, N = 3SE +/- 0.35, N = 3SE +/- 0.24, N = 3SE +/- 0.14, N = 3SE +/- 0.20, N = 3SE +/- 0.11, N = 3SE +/- 0.27, N = 3SE +/- 0.24, N = 3SE +/- 0.10, N = 3SE +/- 0.48, N = 3SE +/- 0.13, N = 3SE +/- 0.11, N = 329.4535.6732.6233.2733.4133.6033.9632.8733.4332.6534.3134.8436.3538.0745.21
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Bayesian ChangepointEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P918273645Min: 29.13 / Avg: 29.45 / Max: 29.83Min: 35.42 / Avg: 35.67 / Max: 36.1Min: 32.22 / Avg: 32.62 / Max: 33.42Min: 32.84 / Avg: 33.27 / Max: 33.73Min: 32.9 / Avg: 33.41 / Max: 34.08Min: 33.36 / Avg: 33.6 / Max: 34.09Min: 33.82 / Avg: 33.96 / Max: 34.23Min: 32.54 / Avg: 32.87 / Max: 33.22Min: 33.26 / Avg: 33.43 / Max: 33.62Min: 32.15 / Avg: 32.64 / Max: 33.08Min: 33.83 / Avg: 34.31 / Max: 34.61Min: 34.65 / Avg: 34.84 / Max: 35.01Min: 35.71 / Avg: 36.35 / Max: 37.28Min: 37.91 / Avg: 38.07 / Max: 38.32Min: 45 / Avg: 45.21 / Max: 45.37

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to PNG output file, the pts/jpexl test is for encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding 0.3.1CPU Threads: AllEPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 728250100150200250SE +/- 0.29, N = 3SE +/- 0.06, N = 3SE +/- 0.19, N = 3SE +/- 0.42, N = 3SE +/- 0.90, N = 3SE +/- 0.41, N = 3SE +/- 0.06, N = 3207.63183.95145.07136.31188.06171.73197.76
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding 0.3.1CPU Threads: AllEPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 72824080120160200Min: 207.06 / Avg: 207.63 / Max: 208.05Min: 183.83 / Avg: 183.95 / Max: 184.04Min: 144.68 / Avg: 145.07 / Max: 145.26Min: 135.58 / Avg: 136.31 / Max: 137.03Min: 186.86 / Avg: 188.06 / Max: 189.82Min: 171 / Avg: 171.73 / Max: 172.4Min: 197.65 / Avg: 197.76 / Max: 197.84

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: shufflenet-v2-10 - Device: OpenMP CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KSE +/- 112.30, N = 4SE +/- 45.76, N = 3SE +/- 66.14, N = 6SE +/- 53.19, N = 12SE +/- 46.88, N = 3SE +/- 73.86, N = 12SE +/- 70.17, N = 12SE +/- 29.60, N = 3SE +/- 105.59, N = 12SE +/- 63.96, N = 12SE +/- 18.32, N = 3SE +/- 1.83, N = 3SE +/- 45.05, N = 3SE +/- 2.92, N = 3SE +/- 15.37, N = 39348982365276478668672217373869880708421881089758611872188361. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: shufflenet-v2-10 - Device: OpenMP CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KMin: 9037.5 / Avg: 9348.13 / Max: 9565.5Min: 9743 / Avg: 9822.67 / Max: 9901.5Min: 6262 / Avg: 6526.58 / Max: 6727.5Min: 6037.5 / Avg: 6477.5 / Max: 6804Min: 6618 / Avg: 6686.17 / Max: 6776Min: 6810.5 / Avg: 7220.58 / Max: 7710.5Min: 6932 / Avg: 7373.04 / Max: 7736Min: 8644 / Avg: 8698 / Max: 8746Min: 7492 / Avg: 8070 / Max: 8524Min: 7942 / Avg: 8421.42 / Max: 8675.5Min: 8780.5 / Avg: 8809.83 / Max: 8843.5Min: 8973 / Avg: 8974.83 / Max: 8978.5Min: 8522 / Avg: 8611.17 / Max: 8667Min: 8717.5 / Avg: 8720.67 / Max: 8726.5Min: 8814.5 / Avg: 8836.33 / Max: 88661. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

BlogBench

BlogBench is designed to replicate the load of a real-world busy file server by stressing the file-system with multiple threads of random reads, writes, and rewrites. The behavior is mimicked of that of a blog by creating blogs with content and pictures, modifying blog posts, adding comments to these blogs, and then reading the content of the blogs. All of these blogs generated are created locally with fake content and pictures. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFinal Score, More Is BetterBlogBench 1.1Test: ReadEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P400K800K1200K1600K2000KSE +/- 13461.20, N = 3SE +/- 14389.61, N = 3SE +/- 16576.40, N = 3SE +/- 10548.76, N = 3SE +/- 9874.65, N = 3SE +/- 14031.09, N = 3SE +/- 4883.16, N = 3SE +/- 7106.01, N = 3SE +/- 13194.50, N = 3SE +/- 7760.40, N = 3SE +/- 7805.03, N = 3SE +/- 7235.14, N = 3SE +/- 12175.94, N = 3SE +/- 20232.46, N = 3SE +/- 16960.13, N = 31676516177203713731571372955139747515833501771624192336816360921950080195961518984282043037202318019444961. (CC) gcc options: -O2 -pthread
OpenBenchmarking.orgFinal Score, More Is BetterBlogBench 1.1Test: ReadEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P400K800K1200K1600K2000KMin: 1650220 / Avg: 1676515.67 / Max: 1694665Min: 1743950 / Avg: 1772037 / Max: 1791514Min: 1355831 / Avg: 1373156.67 / Max: 1406298Min: 1360224 / Avg: 1372955 / Max: 1393890Min: 1384215 / Avg: 1397475 / Max: 1416780Min: 1565457 / Avg: 1583350 / Max: 1611018Min: 1764285 / Avg: 1771624.33 / Max: 1780874Min: 1909275 / Avg: 1923368 / Max: 1932004Min: 1610844 / Avg: 1636092.33 / Max: 1655363Min: 1935120 / Avg: 1950080.33 / Max: 1961140Min: 1944836 / Avg: 1959614.67 / Max: 1971357Min: 1890301 / Avg: 1898428 / Max: 1912860Min: 2022967 / Avg: 2043037.33 / Max: 2065016Min: 1992185 / Avg: 2023179.67 / Max: 2061206Min: 1916141 / Avg: 1944496.33 / Max: 19747961. (CC) gcc options: -O2 -pthread

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Points2ImageEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P6K12K18K24K30KSE +/- 205.94, N = 13SE +/- 296.05, N = 4SE +/- 248.77, N = 3SE +/- 189.69, N = 6SE +/- 203.19, N = 6SE +/- 208.12, N = 5SE +/- 149.83, N = 11SE +/- 300.22, N = 3SE +/- 233.41, N = 5SE +/- 303.68, N = 3SE +/- 228.79, N = 5SE +/- 278.67, N = 4SE +/- 162.08, N = 13SE +/- 251.88, N = 3SE +/- 130.62, N = 324205.5427135.6318909.1019062.9219915.0020036.1920634.2021442.0020967.4821272.0421653.9122339.8522574.8023807.6723230.191. (CXX) g++ options: -O3 -std=c++11 -fopenmp
OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Points2ImageEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5K10K15K20K25KMin: 22569.97 / Avg: 24205.54 / Max: 25313.47Min: 26266.6 / Avg: 27135.63 / Max: 27598.01Min: 18483.38 / Avg: 18909.1 / Max: 19344.98Min: 18118.9 / Avg: 19062.92 / Max: 19320.87Min: 18959.86 / Avg: 19915 / Max: 20292.89Min: 19216.66 / Avg: 20036.19 / Max: 20355.61Min: 19629.33 / Avg: 20634.2 / Max: 21282.63Min: 20857.09 / Avg: 21442 / Max: 21851.95Min: 20100.44 / Avg: 20967.48 / Max: 21472.74Min: 20665.78 / Avg: 21272.04 / Max: 21606.83Min: 20776.27 / Avg: 21653.91 / Max: 22118.67Min: 21516.1 / Avg: 22339.85 / Max: 22741.4Min: 20715.3 / Avg: 22574.8 / Max: 23030.24Min: 23305.32 / Avg: 23807.67 / Max: 24091.45Min: 22989.39 / Avg: 23230.19 / Max: 23438.321. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterQuantum ESPRESSO 6.7Input: AUSURF112EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P400800120016002000SE +/- 11.50, N = 3SE +/- 0.33, N = 3SE +/- 1.20, N = 3SE +/- 1.52, N = 3SE +/- 1.64, N = 3SE +/- 22.87, N = 9SE +/- 16.58, N = 4SE +/- 3.48, N = 3SE +/- 6.34, N = 3SE +/- 5.50, N = 3SE +/- 1.29, N = 3SE +/- 5.19, N = 3SE +/- 4.40, N = 3SE +/- 1.18, N = 3SE +/- 0.83, N = 31357.301356.261168.321208.311216.501372.731329.901317.271403.871386.781342.141403.521456.821520.431656.881. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.orgSeconds, Fewer Is BetterQuantum ESPRESSO 6.7Input: AUSURF112EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30060090012001500Min: 1338.99 / Avg: 1357.3 / Max: 1378.5Min: 1355.84 / Avg: 1356.26 / Max: 1356.92Min: 1166.38 / Avg: 1168.32 / Max: 1170.52Min: 1205.82 / Avg: 1208.31 / Max: 1211.05Min: 1214.11 / Avg: 1216.5 / Max: 1219.63Min: 1251.31 / Avg: 1372.73 / Max: 1438.64Min: 1306.89 / Avg: 1329.9 / Max: 1377.55Min: 1313.6 / Avg: 1317.27 / Max: 1324.22Min: 1396.5 / Avg: 1403.87 / Max: 1416.5Min: 1377.07 / Avg: 1386.78 / Max: 1396.11Min: 1339.87 / Avg: 1342.14 / Max: 1344.32Min: 1394.1 / Avg: 1403.52 / Max: 1411.99Min: 1450.92 / Avg: 1456.82 / Max: 1465.43Min: 1518.1 / Avg: 1520.43 / Max: 1521.97Min: 1655.62 / Avg: 1656.88 / Max: 1658.441. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device Training ScoreEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30060090012001500115911731179112412191403134815341475148815461459138413071094

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and Googlenet model and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 200EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P40K80K120K160K200KSE +/- 1333.26, N = 3SE +/- 101.93, N = 3SE +/- 204.77, N = 3SE +/- 182.06, N = 3SE +/- 146.69, N = 3SE +/- 114.49, N = 3SE +/- 1549.72, N = 7SE +/- 120.93, N = 3SE +/- 106.86, N = 3SE +/- 187.42, N = 3SE +/- 92.46, N = 3SE +/- 37.21, N = 3SE +/- 76.96, N = 3SE +/- 272.29, N = 3SE +/- 139.37, N = 31513271405581237711289651276991738121718851625441725081687281546931525391557631548711515591. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 200EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30K60K90K120K150KMin: 149779 / Avg: 151326.67 / Max: 153981Min: 140379 / Avg: 140558 / Max: 140732Min: 123410 / Avg: 123771 / Max: 124119Min: 128622 / Avg: 128965.33 / Max: 129242Min: 127449 / Avg: 127699.33 / Max: 127957Min: 173597 / Avg: 173811.67 / Max: 173988Min: 162598 / Avg: 171885 / Max: 173740Min: 162302 / Avg: 162543.67 / Max: 162673Min: 172364 / Avg: 172508.33 / Max: 172717Min: 168512 / Avg: 168727.67 / Max: 169101Min: 154545 / Avg: 154693 / Max: 154863Min: 152490 / Avg: 152539 / Max: 152612Min: 155611 / Avg: 155763 / Max: 155860Min: 154328 / Avg: 154871.33 / Max: 155175Min: 151334 / Avg: 151559 / Max: 1518141. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Monte Carlo Simulations of Ionised Nebulae

Mocassin is the Monte Carlo Simulations of Ionised Nebulae. MOCASSIN is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMonte Carlo Simulations of Ionised Nebulae 2019-03-24Input: Dust 2D tau100.0EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300SE +/- 0.33, N = 3SE +/- 0.33, N = 3SE +/- 0.33, N = 31972212282422392192232152152142172282392502761. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
OpenBenchmarking.orgSeconds, Fewer Is BetterMonte Carlo Simulations of Ionised Nebulae 2019-03-24Input: Dust 2D tau100.0EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50100150200250Min: 242 / Avg: 242.33 / Max: 243Min: 216 / Avg: 216.67 / Max: 217Min: 249 / Avg: 249.67 / Max: 2501. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C2670EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7282EPYC 7272EPYC 7232P50100150200250SE +/- 2.33, N = 9SE +/- 1.05, N = 3SE +/- 2.27, N = 3SE +/- 1.88, N = 4SE +/- 1.33, N = 3SE +/- 2.10, N = 4SE +/- 1.89, N = 3SE +/- 1.90, N = 3SE +/- 1.85, N = 3SE +/- 1.93, N = 5SE +/- 1.23, N = 3SE +/- 1.59, N = 3SE +/- 2.54, N = 3151.46158.79171.84177.62172.37176.53170.22165.66175.85173.90183.22174.20211.491. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C2670EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7282EPYC 7272EPYC 7232P4080120160200Min: 142 / Avg: 151.46 / Max: 162.3Min: 157.45 / Avg: 158.79 / Max: 160.86Min: 169.11 / Avg: 171.84 / Max: 176.34Min: 172.48 / Avg: 177.62 / Max: 181.39Min: 170.3 / Avg: 172.37 / Max: 174.84Min: 172.54 / Avg: 176.53 / Max: 181.84Min: 168.07 / Avg: 170.22 / Max: 173.98Min: 162.62 / Avg: 165.66 / Max: 169.15Min: 172.42 / Avg: 175.85 / Max: 178.76Min: 168.11 / Avg: 173.89 / Max: 179.12Min: 181.83 / Avg: 183.22 / Max: 185.67Min: 171.02 / Avg: 174.2 / Max: 175.94Min: 206.5 / Avg: 211.49 / Max: 214.781. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

Mobile Neural Network

MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: SqueezeNetV1.0EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215SE +/- 0.064, N = 3SE +/- 0.062, N = 14SE +/- 0.002, N = 3SE +/- 0.014, N = 3SE +/- 0.018, N = 3SE +/- 0.003, N = 3SE +/- 0.271, N = 15SE +/- 0.252, N = 15SE +/- 0.016, N = 3SE +/- 0.048, N = 3SE +/- 0.114, N = 3SE +/- 0.061, N = 15SE +/- 0.027, N = 3SE +/- 0.115, N = 4SE +/- 0.071, N = 1110.2099.3577.6848.3757.5427.4798.1427.6257.3387.4938.9789.5049.6109.7689.650MIN: 10.06 / MAX: 11.23MIN: 8.98 / MAX: 23.95MIN: 7.56 / MAX: 8.11MIN: 8.25 / MAX: 8.55MIN: 7.38 / MAX: 8.01MIN: 7.34 / MAX: 8.04MIN: 7.43 / MAX: 13.26MIN: 7.08 / MAX: 14.04MIN: 7.23 / MAX: 7.87MIN: 7.29 / MAX: 9.78MIN: 8.7 / MAX: 10.42MIN: 9.01 / MAX: 25.56MIN: 9.28 / MAX: 25.71MIN: 9.3 / MAX: 12.44MIN: 9.16 / MAX: 25.961. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.1Model: SqueezeNetV1.0EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3691215Min: 10.14 / Avg: 10.21 / Max: 10.34Min: 9.16 / Avg: 9.36 / Max: 10Min: 7.68 / Avg: 7.68 / Max: 7.69Min: 8.35 / Avg: 8.37 / Max: 8.4Min: 7.51 / Avg: 7.54 / Max: 7.56Min: 7.47 / Avg: 7.48 / Max: 7.48Min: 7.63 / Avg: 8.14 / Max: 11.51Min: 7.17 / Avg: 7.63 / Max: 10.86Min: 7.32 / Avg: 7.34 / Max: 7.37Min: 7.44 / Avg: 7.49 / Max: 7.59Min: 8.84 / Avg: 8.98 / Max: 9.2Min: 9.26 / Avg: 9.5 / Max: 10.27Min: 9.58 / Avg: 9.61 / Max: 9.66Min: 9.49 / Avg: 9.77 / Max: 10.05Min: 9.3 / Avg: 9.65 / Max: 10.141. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRawTherapeeTotal Benchmark TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1632486480SE +/- 0.12, N = 3SE +/- 0.08, N = 3SE +/- 0.04, N = 3SE +/- 0.04, N = 3SE +/- 0.01, N = 3SE +/- 0.08, N = 3SE +/- 0.08, N = 3SE +/- 0.04, N = 3SE +/- 0.06, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.01, N = 3SE +/- 0.59, N = 3SE +/- 0.05, N = 3SE +/- 0.02, N = 353.1562.2455.6156.0355.0454.4854.4952.1354.7752.7453.9358.5161.5462.7672.471. RawTherapee, version 5.8, command line.
OpenBenchmarking.orgSeconds, Fewer Is BetterRawTherapeeTotal Benchmark TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1428425670Min: 52.9 / Avg: 53.15 / Max: 53.28Min: 62.13 / Avg: 62.24 / Max: 62.4Min: 55.54 / Avg: 55.61 / Max: 55.68Min: 55.95 / Avg: 56.03 / Max: 56.09Min: 55.03 / Avg: 55.04 / Max: 55.06Min: 54.32 / Avg: 54.48 / Max: 54.59Min: 54.34 / Avg: 54.49 / Max: 54.59Min: 52.04 / Avg: 52.12 / Max: 52.19Min: 54.71 / Avg: 54.77 / Max: 54.89Min: 52.69 / Avg: 52.74 / Max: 52.79Min: 53.88 / Avg: 53.93 / Max: 54Min: 58.49 / Avg: 58.51 / Max: 58.52Min: 60.92 / Avg: 61.54 / Max: 62.73Min: 62.66 / Avg: 62.76 / Max: 62.82Min: 72.44 / Avg: 72.47 / Max: 72.511. RawTherapee, version 5.8, command line.

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: fcn-resnet101-11 - Device: OpenMP CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100SE +/- 0.60, N = 3SE +/- 0.67, N = 3SE +/- 0.78, N = 5SE +/- 0.44, N = 3SE +/- 0.33, N = 3SE +/- 0.17, N = 3SE +/- 1.04, N = 3SE +/- 0.58, N = 3SE +/- 0.67, N = 3SE +/- 0.53, N = 12SE +/- 0.29, N = 3SE +/- 0.67, N = 3SE +/- 0.29, N = 3SE +/- 0.17, N = 3SE +/- 0.00, N = 36959767479807969796472697770591. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: fcn-resnet101-11 - Device: OpenMP CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1530456075Min: 68 / Avg: 69.17 / Max: 70Min: 58 / Avg: 59.33 / Max: 60Min: 74 / Avg: 76.2 / Max: 78.5Min: 73.5 / Avg: 74.33 / Max: 75Min: 78.5 / Avg: 79.17 / Max: 79.5Min: 80 / Avg: 80.17 / Max: 80.5Min: 77 / Avg: 79 / Max: 80.5Min: 68 / Avg: 69 / Max: 70Min: 78.5 / Avg: 79.17 / Max: 80.5Min: 61 / Avg: 64 / Max: 67.5Min: 71 / Avg: 71.5 / Max: 72Min: 68 / Avg: 69.33 / Max: 70Min: 76.5 / Avg: 77 / Max: 77.5Min: 69.5 / Avg: 69.83 / Max: 70Min: 58.5 / Avg: 58.5 / Max: 58.51. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterC-Blosc 2.0 Beta 5Compressor: blosclzEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KSE +/- 27.57, N = 3SE +/- 15.51, N = 3SE +/- 17.64, N = 3SE +/- 20.77, N = 3SE +/- 18.47, N = 3SE +/- 9.12, N = 3SE +/- 22.60, N = 3SE +/- 0.44, N = 3SE +/- 8.53, N = 3SE +/- 6.00, N = 3SE +/- 4.79, N = 3SE +/- 5.79, N = 3SE +/- 21.69, N = 3SE +/- 6.54, N = 3SE +/- 13.17, N = 310910.111206.88355.98372.78328.29301.09155.09878.010253.39808.010178.310476.89400.79442.99386.81. (CXX) g++ options: -rdynamic
OpenBenchmarking.orgMB/s, More Is BetterC-Blosc 2.0 Beta 5Compressor: blosclzEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KMin: 10869.2 / Avg: 10910.13 / Max: 10962.6Min: 11190.3 / Avg: 11206.8 / Max: 11237.8Min: 8327.6 / Avg: 8355.93 / Max: 8388.3Min: 8331.9 / Avg: 8372.7 / Max: 8399.9Min: 8294.6 / Avg: 8328.2 / Max: 8358.3Min: 9282.8 / Avg: 9301 / Max: 9311.2Min: 9132 / Avg: 9155 / Max: 9200.2Min: 9877.3 / Avg: 9878 / Max: 9878.8Min: 10242.8 / Avg: 10253.3 / Max: 10270.2Min: 9796 / Avg: 9808 / Max: 9814.1Min: 10170 / Avg: 10178.27 / Max: 10186.6Min: 10468.4 / Avg: 10476.8 / Max: 10487.9Min: 9370.8 / Avg: 9400.73 / Max: 9442.9Min: 9434.1 / Avg: 9442.93 / Max: 9455.7Min: 9360.9 / Avg: 9386.83 / Max: 9403.81. (CXX) g++ options: -rdynamic

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: bertsquad-10 - Device: OpenMP CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P110220330440550SE +/- 6.13, N = 12SE +/- 0.50, N = 3SE +/- 8.08, N = 9SE +/- 7.16, N = 12SE +/- 6.16, N = 12SE +/- 3.97, N = 3SE +/- 0.60, N = 3SE +/- 1.15, N = 3SE +/- 0.44, N = 3SE +/- 4.69, N = 12SE +/- 4.15, N = 9SE +/- 5.01, N = 3SE +/- 0.29, N = 3SE +/- 1.26, N = 3SE +/- 0.50, N = 33724803793903954314404754664595004564004034091. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: bertsquad-10 - Device: OpenMP CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P90180270360450Min: 329.5 / Avg: 371.92 / Max: 403.5Min: 479.5 / Avg: 480 / Max: 481Min: 341.5 / Avg: 379.17 / Max: 403.5Min: 335.5 / Avg: 389.54 / Max: 410.5Min: 343.5 / Avg: 395 / Max: 412.5Min: 424.5 / Avg: 430.5 / Max: 438Min: 439.5 / Avg: 440.33 / Max: 441.5Min: 472.5 / Avg: 474.5 / Max: 476.5Min: 465.5 / Avg: 466.33 / Max: 467Min: 425.5 / Avg: 459.13 / Max: 469.5Min: 468 / Avg: 500.22 / Max: 507Min: 450.5 / Avg: 456 / Max: 466Min: 399.5 / Avg: 400 / Max: 400.5Min: 401.5 / Avg: 403 / Max: 405.5Min: 408 / Avg: 409 / Max: 409.51. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and Googlenet model and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: GoogleNet - Acceleration: CPU - Iterations: 200EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P90K180K270K360K450KSE +/- 1699.46, N = 3SE +/- 95.91, N = 3SE +/- 198.26, N = 3SE +/- 730.51, N = 3SE +/- 1226.45, N = 3SE +/- 10886.13, N = 9SE +/- 18239.03, N = 9SE +/- 147.16, N = 3SE +/- 422.64, N = 3SE +/- 440.86, N = 3SE +/- 338.07, N = 3SE +/- 60.14, N = 3SE +/- 456.29, N = 3SE +/- 226.68, N = 3SE +/- 533.40, N = 33820203401683326283474843456264298753996104116814419784279673910153871733886993783673690271. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: GoogleNet - Acceleration: CPU - Iterations: 200EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P80K160K240K320K400KMin: 379249 / Avg: 382019.67 / Max: 385110Min: 339980 / Avg: 340168 / Max: 340295Min: 332266 / Avg: 332628.33 / Max: 332949Min: 346044 / Avg: 347483.67 / Max: 348419Min: 344068 / Avg: 345626.33 / Max: 348046Min: 364614 / Avg: 429875.11 / Max: 446994Min: 331340 / Avg: 399610.44 / Max: 446964Min: 411435 / Avg: 411681.33 / Max: 411944Min: 441242 / Avg: 441977.67 / Max: 442706Min: 427345 / Avg: 427966.67 / Max: 428819Min: 390425 / Avg: 391015.33 / Max: 391596Min: 387058 / Avg: 387173 / Max: 387261Min: 387821 / Avg: 388699.33 / Max: 389353Min: 378044 / Avg: 378367 / Max: 378804Min: 367965 / Avg: 369027.33 / Max: 3696431. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: super-resolution-10 - Device: OpenMP CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P11002200330044005500SE +/- 88.35, N = 12SE +/- 5.63, N = 3SE +/- 97.77, N = 9SE +/- 84.35, N = 9SE +/- 95.52, N = 12SE +/- 52.31, N = 12SE +/- 27.98, N = 3SE +/- 67.46, N = 11SE +/- 35.49, N = 11SE +/- 62.74, N = 3SE +/- 71.97, N = 12SE +/- 73.62, N = 12SE +/- 24.57, N = 3SE +/- 16.48, N = 3SE +/- 3.69, N = 35212451842294087417545064642498648644755494645224044394339411. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: super-resolution-10 - Device: OpenMP CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9001800270036004500Min: 4665 / Avg: 5211.92 / Max: 5596Min: 4508 / Avg: 4517.67 / Max: 4527.5Min: 3682.5 / Avg: 4228.56 / Max: 4633Min: 3791.5 / Avg: 4086.5 / Max: 4435Min: 3788 / Avg: 4174.71 / Max: 4608.5Min: 4219.5 / Avg: 4506.42 / Max: 4722Min: 4586 / Avg: 4641.67 / Max: 4674.5Min: 4318 / Avg: 4986.05 / Max: 5084Min: 4539 / Avg: 4864.36 / Max: 4940.5Min: 4633.5 / Avg: 4754.67 / Max: 4843.5Min: 4494.5 / Avg: 4946.25 / Max: 5164.5Min: 4171 / Avg: 4521.92 / Max: 5107Min: 3996 / Avg: 4043.83 / Max: 4077.5Min: 3922 / Avg: 3943 / Max: 3975.5Min: 3934 / Avg: 3941 / Max: 3946.51. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P300K600K900K1200K1500KSE +/- 2686.24, N = 3SE +/- 1176.92, N = 3SE +/- 546.91, N = 3SE +/- 628.66, N = 3SE +/- 762.66, N = 3SE +/- 1733.41, N = 3SE +/- 686.09, N = 3SE +/- 295.02, N = 3SE +/- 759.90, N = 3SE +/- 702.94, N = 3SE +/- 2008.07, N = 3SE +/- 1121.87, N = 3SE +/- 1061.91, N = 3SE +/- 1242.45, N = 3SE +/- 1442.60, N = 31438050.41279851.81294605.51308517.21294664.51295588.81288738.61206497.81281490.51303437.61290897.81249264.61211354.31185857.81113861.3
OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P200K400K600K800K1000KMin: 1433192.9 / Avg: 1438050.37 / Max: 1442466.9Min: 1277678.1 / Avg: 1279851.77 / Max: 1281720.8Min: 1293513.9 / Avg: 1294605.53 / Max: 1295211.3Min: 1307504.2 / Avg: 1308517.23 / Max: 1309668.7Min: 1293775.5 / Avg: 1294664.5 / Max: 1296182.4Min: 1293258.5 / Avg: 1295588.83 / Max: 1298976.9Min: 1287797.7 / Avg: 1288738.57 / Max: 1290074Min: 1205942.3 / Avg: 1206497.83 / Max: 1206947.8Min: 1279979.8 / Avg: 1281490.47 / Max: 1282389.9Min: 1302037.7 / Avg: 1303437.6 / Max: 1304249.7Min: 1287374.1 / Avg: 1290897.83 / Max: 1294328.4Min: 1247865.2 / Avg: 1249264.6 / Max: 1251483.2Min: 1209909.6 / Avg: 1211354.27 / Max: 1213424.8Min: 1183868.7 / Avg: 1185857.83 / Max: 1188142.2Min: 1110976.3 / Avg: 1113861.27 / Max: 1115334.9

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterNumpy BenchmarkEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P80160240320400SE +/- 0.20, N = 3SE +/- 0.65, N = 3SE +/- 0.66, N = 3SE +/- 0.70, N = 3SE +/- 0.41, N = 3SE +/- 0.24, N = 3SE +/- 0.35, N = 3SE +/- 0.40, N = 3SE +/- 0.48, N = 3SE +/- 0.31, N = 3SE +/- 0.30, N = 3SE +/- 1.35, N = 3SE +/- 0.76, N = 3SE +/- 0.30, N = 3SE +/- 0.53, N = 3348.66345.36308.39306.50303.21303.75303.45311.97303.33309.10307.47301.28296.96298.27270.83
OpenBenchmarking.orgScore, More Is BetterNumpy BenchmarkEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300Min: 348.4 / Avg: 348.66 / Max: 349.06Min: 344.12 / Avg: 345.36 / Max: 346.34Min: 307.21 / Avg: 308.39 / Max: 309.51Min: 305.69 / Avg: 306.5 / Max: 307.9Min: 302.77 / Avg: 303.21 / Max: 304.03Min: 303.29 / Avg: 303.75 / Max: 304.08Min: 302.78 / Avg: 303.45 / Max: 303.96Min: 311.24 / Avg: 311.97 / Max: 312.62Min: 302.7 / Avg: 303.33 / Max: 304.28Min: 308.57 / Avg: 309.1 / Max: 309.64Min: 307 / Avg: 307.47 / Max: 308.03Min: 298.67 / Avg: 301.28 / Max: 303.16Min: 295.48 / Avg: 296.96 / Max: 298.01Min: 297.91 / Avg: 298.27 / Max: 298.86Min: 269.87 / Avg: 270.83 / Max: 271.69

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterApache CouchDB 3.1.1Bulk Size: 100 - Inserts: 1000 - Rounds: 24EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100SE +/- 0.45, N = 3SE +/- 0.48, N = 3SE +/- 0.42, N = 3SE +/- 0.55, N = 3SE +/- 0.34, N = 3SE +/- 0.03, N = 3SE +/- 0.35, N = 3SE +/- 0.45, N = 3SE +/- 0.64, N = 3SE +/- 1.14, N = 3SE +/- 0.52, N = 3SE +/- 0.51, N = 3SE +/- 0.89, N = 3SE +/- 1.02, N = 4SE +/- 0.41, N = 383.2893.7997.2397.9597.3091.5192.5986.5791.3286.2386.6889.4387.9492.35107.011. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD
OpenBenchmarking.orgSeconds, Fewer Is BetterApache CouchDB 3.1.1Bulk Size: 100 - Inserts: 1000 - Rounds: 24EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100Min: 82.43 / Avg: 83.28 / Max: 83.97Min: 92.83 / Avg: 93.79 / Max: 94.31Min: 96.6 / Avg: 97.23 / Max: 98.03Min: 96.92 / Avg: 97.95 / Max: 98.78Min: 96.62 / Avg: 97.29 / Max: 97.72Min: 91.47 / Avg: 91.51 / Max: 91.55Min: 91.9 / Avg: 92.59 / Max: 93.05Min: 86.01 / Avg: 86.57 / Max: 87.47Min: 90.24 / Avg: 91.31 / Max: 92.46Min: 84.36 / Avg: 86.23 / Max: 88.29Min: 85.65 / Avg: 86.68 / Max: 87.25Min: 88.41 / Avg: 89.43 / Max: 89.96Min: 86.34 / Avg: 87.94 / Max: 89.41Min: 90.47 / Avg: 92.35 / Max: 94.9Min: 106.25 / Avg: 107.01 / Max: 107.661. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 5EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 728220406080100SE +/- 0.12, N = 3SE +/- 0.02, N = 3SE +/- 0.05, N = 3SE +/- 0.06, N = 3SE +/- 0.06, N = 3SE +/- 0.21, N = 3SE +/- 0.13, N = 3SE +/- 0.03, N = 373.8363.5177.3477.7180.0576.1879.5967.291. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 5EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 72821530456075Min: 73.58 / Avg: 73.83 / Max: 73.98Min: 63.47 / Avg: 63.51 / Max: 63.54Min: 77.24 / Avg: 77.34 / Max: 77.39Min: 77.6 / Avg: 77.71 / Max: 77.78Min: 79.94 / Avg: 80.05 / Max: 80.11Min: 75.81 / Avg: 76.18 / Max: 76.52Min: 79.43 / Avg: 79.59 / Max: 79.84Min: 67.24 / Avg: 67.29 / Max: 67.351. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: floatEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150SE +/- 0.58, N = 3SE +/- 0.58, N = 3SE +/- 0.67, N = 3SE +/- 0.58, N = 3SE +/- 0.33, N = 3114115130133134135134135134132133135139138143
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: floatEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150Min: 129 / Avg: 130 / Max: 131Min: 132 / Avg: 133 / Max: 134Min: 133 / Avg: 133.67 / Max: 135Min: 134 / Avg: 135 / Max: 136Min: 131 / Avg: 131.67 / Max: 132

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: NDT MappingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2004006008001000SE +/- 5.28, N = 3SE +/- 2.91, N = 3SE +/- 6.63, N = 3SE +/- 3.26, N = 3SE +/- 7.17, N = 3SE +/- 1.66, N = 3SE +/- 3.34, N = 3SE +/- 3.23, N = 3SE +/- 1.89, N = 3SE +/- 3.41, N = 3SE +/- 2.49, N = 3SE +/- 1.97, N = 3SE +/- 3.07, N = 3SE +/- 3.25, N = 3SE +/- 1.85, N = 3773.86967.37902.38861.35897.59903.30895.52908.46895.33898.84899.45898.21858.54864.62861.071. (CXX) g++ options: -O3 -std=c++11 -fopenmp
OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: NDT MappingEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2004006008001000Min: 763.33 / Avg: 773.86 / Max: 779.83Min: 961.58 / Avg: 967.37 / Max: 970.7Min: 889.15 / Avg: 902.38 / Max: 909.81Min: 854.83 / Avg: 861.35 / Max: 864.7Min: 885.88 / Avg: 897.59 / Max: 910.61Min: 900.01 / Avg: 903.3 / Max: 905.33Min: 888.84 / Avg: 895.52 / Max: 899.09Min: 902.91 / Avg: 908.46 / Max: 914.1Min: 892.13 / Avg: 895.33 / Max: 898.66Min: 892.02 / Avg: 898.84 / Max: 902.4Min: 895.16 / Avg: 899.45 / Max: 903.77Min: 894.42 / Avg: 898.21 / Max: 901.07Min: 852.55 / Avg: 858.54 / Max: 862.73Min: 858.29 / Avg: 864.62 / Max: 869.06Min: 857.36 / Avg: 861.07 / Max: 863.041. (CXX) g++ options: -O3 -std=c++11 -fopenmp

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: PartialTweetsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.13950.2790.41850.5580.6975SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 30.610.620.530.520.520.520.520.530.510.520.530.520.500.500.501. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: PartialTweetsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 0.61 / Avg: 0.61 / Max: 0.62Min: 0.62 / Avg: 0.62 / Max: 0.62Min: 0.52 / Avg: 0.53 / Max: 0.53Min: 0.52 / Avg: 0.52 / Max: 0.53Min: 0.51 / Avg: 0.52 / Max: 0.52Min: 0.51 / Avg: 0.52 / Max: 0.52Min: 0.51 / Avg: 0.52 / Max: 0.52Min: 0.52 / Avg: 0.53 / Max: 0.54Min: 0.51 / Avg: 0.51 / Max: 0.52Min: 0.52 / Avg: 0.52 / Max: 0.53Min: 0.52 / Avg: 0.53 / Max: 0.53Min: 0.52 / Avg: 0.52 / Max: 0.52Min: 0.49 / Avg: 0.5 / Max: 0.51Min: 0.49 / Avg: 0.5 / Max: 0.51Min: 0.49 / Avg: 0.5 / Max: 0.511. (CXX) g++ options: -O3 -pthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlibEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P510152025SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 317.517.219.920.220.520.520.519.920.419.920.220.421.321.021.1
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlibEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P510152025Min: 17.5 / Avg: 17.5 / Max: 17.5Min: 17.2 / Avg: 17.2 / Max: 17.2Min: 19.9 / Avg: 19.9 / Max: 19.9Min: 20.2 / Avg: 20.2 / Max: 20.2Min: 20.5 / Avg: 20.53 / Max: 20.6Min: 20.5 / Avg: 20.5 / Max: 20.5Min: 20.5 / Avg: 20.5 / Max: 20.5Min: 19.8 / Avg: 19.87 / Max: 19.9Min: 20.4 / Avg: 20.43 / Max: 20.5Min: 19.9 / Avg: 19.9 / Max: 19.9Min: 20.2 / Avg: 20.23 / Max: 20.3Min: 20.4 / Avg: 20.43 / Max: 20.5Min: 21.2 / Avg: 21.27 / Max: 21.3Min: 20.9 / Avg: 20.97 / Max: 21Min: 21.1 / Avg: 21.1 / Max: 21.1

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2M4M6M8M10MSE +/- 3201.01, N = 3SE +/- 9056.97, N = 3SE +/- 6345.73, N = 3SE +/- 4763.31, N = 3SE +/- 6430.53, N = 3SE +/- 10773.68, N = 3SE +/- 15133.11, N = 3SE +/- 15198.80, N = 3SE +/- 8887.25, N = 3SE +/- 2921.91, N = 3SE +/- 5954.89, N = 3SE +/- 16881.14, N = 3SE +/- 18409.69, N = 3SE +/- 4235.69, N = 3SE +/- 8448.82, N = 37940047793988369360926899948675390266526966674881698394565452116804305688076067876926547690655135164146091. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm
OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1.4M2.8M4.2M5.6M7MMin: 7934298 / Avg: 7940047 / Max: 7945361Min: 7924088 / Avg: 7939883 / Max: 7955460Min: 6928402 / Avg: 6936092.33 / Max: 6948681Min: 6892591 / Avg: 6899948 / Max: 6908868Min: 6743220 / Avg: 6753902.33 / Max: 6765446Min: 6632241 / Avg: 6652695.67 / Max: 6668790Min: 6646698 / Avg: 6674880.67 / Max: 6698529Min: 6962515 / Avg: 6983944.67 / Max: 7013330Min: 6532471 / Avg: 6545211 / Max: 6562315Min: 6798906 / Avg: 6804305.33 / Max: 6808941Min: 6872751 / Avg: 6880759.67 / Max: 6892398Min: 6763111 / Avg: 6787691.67 / Max: 6820026Min: 6524614 / Avg: 6547690 / Max: 6584075Min: 6543694 / Avg: 6551351.33 / Max: 6558318Min: 6399116 / Avg: 6414609 / Max: 64281971. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserIDEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.14180.28360.42540.56720.709SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.620.630.540.540.530.520.520.540.520.530.530.530.510.510.521. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserIDEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 0.61 / Avg: 0.62 / Max: 0.63Min: 0.62 / Avg: 0.63 / Max: 0.63Min: 0.54 / Avg: 0.54 / Max: 0.55Min: 0.53 / Avg: 0.54 / Max: 0.54Min: 0.52 / Avg: 0.53 / Max: 0.53Min: 0.52 / Avg: 0.52 / Max: 0.53Min: 0.52 / Avg: 0.52 / Max: 0.53Min: 0.54 / Avg: 0.54 / Max: 0.55Min: 0.52 / Avg: 0.52 / Max: 0.53Min: 0.53 / Avg: 0.53 / Max: 0.54Min: 0.53 / Avg: 0.53 / Max: 0.54Min: 0.53 / Avg: 0.53 / Max: 0.53Min: 0.51 / Avg: 0.51 / Max: 0.52Min: 0.51 / Avg: 0.51 / Max: 0.52Min: 0.51 / Avg: 0.52 / Max: 0.521. (CXX) g++ options: -O3 -pthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1224364860SE +/- 0.42, N = 3SE +/- 0.36, N = 15SE +/- 0.51, N = 4SE +/- 0.30, N = 14SE +/- 0.42, N = 3SE +/- 0.30, N = 3SE +/- 0.48, N = 5SE +/- 0.28, N = 3SE +/- 0.02, N = 3SE +/- 0.39, N = 3SE +/- 0.34, N = 3SE +/- 0.42, N = 3SE +/- 0.13, N = 3SE +/- 0.61, N = 3SE +/- 0.44, N = 552.0251.9545.0344.5444.8043.8644.4945.1343.6944.6845.0943.9842.2542.9443.171. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1020304050Min: 51.53 / Avg: 52.02 / Max: 52.86Min: 49.68 / Avg: 51.95 / Max: 54.03Min: 44.35 / Avg: 45.03 / Max: 46.55Min: 42.74 / Avg: 44.54 / Max: 46.27Min: 44.34 / Avg: 44.8 / Max: 45.63Min: 43.26 / Avg: 43.86 / Max: 44.2Min: 43.45 / Avg: 44.49 / Max: 45.67Min: 44.77 / Avg: 45.13 / Max: 45.69Min: 43.65 / Avg: 43.69 / Max: 43.71Min: 44.24 / Avg: 44.68 / Max: 45.45Min: 44.42 / Avg: 45.09 / Max: 45.46Min: 43.51 / Avg: 43.98 / Max: 44.82Min: 42.11 / Avg: 42.25 / Max: 42.52Min: 42.28 / Avg: 42.94 / Max: 44.16Min: 42.2 / Avg: 43.17 / Max: 44.211. (CC) gcc options: -O3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150SE +/- 0.33, N = 3110111127129133134133129131128131130135134135
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150Min: 110 / Avg: 110.33 / Max: 111

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: LargeRandomEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.08550.1710.25650.3420.4275SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.380.380.330.330.320.320.320.330.320.330.330.330.320.320.311. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: LargeRandomEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P12345Min: 0.38 / Avg: 0.38 / Max: 0.38Min: 0.38 / Avg: 0.38 / Max: 0.39Min: 0.33 / Avg: 0.33 / Max: 0.33Min: 0.33 / Avg: 0.33 / Max: 0.33Min: 0.32 / Avg: 0.32 / Max: 0.33Min: 0.32 / Avg: 0.32 / Max: 0.33Min: 0.32 / Avg: 0.32 / Max: 0.33Min: 0.33 / Avg: 0.33 / Max: 0.34Min: 0.32 / Avg: 0.32 / Max: 0.32Min: 0.33 / Avg: 0.33 / Max: 0.33Min: 0.33 / Avg: 0.33 / Max: 0.33Min: 0.32 / Avg: 0.33 / Max: 0.33Min: 0.31 / Avg: 0.32 / Max: 0.32Min: 0.31 / Avg: 0.32 / Max: 0.32Min: 0.31 / Avg: 0.31 / Max: 0.311. (CXX) g++ options: -O3 -pthread

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Integer + Elliptic Curve Public Key AlgorithmsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P10002000300040005000SE +/- 3.14, N = 3SE +/- 2.49, N = 3SE +/- 8.28, N = 3SE +/- 5.35, N = 3SE +/- 2.07, N = 3SE +/- 3.12, N = 3SE +/- 2.54, N = 3SE +/- 5.67, N = 3SE +/- 3.97, N = 3SE +/- 4.11, N = 3SE +/- 2.96, N = 3SE +/- 1.09, N = 3SE +/- 2.72, N = 3SE +/- 1.24, N = 3SE +/- 5.48, N = 34834.054835.024199.914152.604091.784072.764085.534207.914089.124139.984151.294086.083949.003965.033949.441. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe
OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Integer + Elliptic Curve Public Key AlgorithmsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P8001600240032004000Min: 4828.11 / Avg: 4834.05 / Max: 4838.82Min: 4830.71 / Avg: 4835.02 / Max: 4839.34Min: 4183.38 / Avg: 4199.91 / Max: 4208.9Min: 4142.26 / Avg: 4152.6 / Max: 4160.14Min: 4087.91 / Avg: 4091.78 / Max: 4095.01Min: 4068.43 / Avg: 4072.76 / Max: 4078.81Min: 4080.99 / Avg: 4085.53 / Max: 4089.76Min: 4199.27 / Avg: 4207.91 / Max: 4218.6Min: 4081.58 / Avg: 4089.12 / Max: 4095.03Min: 4132.23 / Avg: 4139.98 / Max: 4146.24Min: 4145.43 / Avg: 4151.29 / Max: 4154.96Min: 4084.12 / Avg: 4086.08 / Max: 4087.91Min: 3943.6 / Avg: 3949 / Max: 3952.3Min: 3963.31 / Avg: 3965.03 / Max: 3967.43Min: 3939.83 / Avg: 3949.44 / Max: 3958.811. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbodyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150SE +/- 0.33, N = 3SE +/- 0.33, N = 3116116133135137138137133137135135137142141141
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbodyEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150Min: 137 / Avg: 137.67 / Max: 138Min: 141 / Avg: 141.67 / Max: 142

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test TimesEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30060090012001500SE +/- 5.46, N = 3SE +/- 8.33, N = 3SE +/- 6.89, N = 3SE +/- 6.84, N = 3SE +/- 9.54, N = 3SE +/- 7.55, N = 3SE +/- 9.35, N = 3SE +/- 1.20, N = 3SE +/- 7.33, N = 3SE +/- 10.82, N = 3SE +/- 1.20, N = 3SE +/- 1.86, N = 3SE +/- 8.01, N = 3SE +/- 9.68, N = 39999981166116611841184117811371180117311591172122112081213
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test TimesEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2004006008001000Min: 992 / Avg: 999.33 / Max: 1010Min: 990 / Avg: 998.33 / Max: 1015Min: 1156 / Avg: 1165.67 / Max: 1179Min: 1159 / Avg: 1166.33 / Max: 1180Min: 1165 / Avg: 1184 / Max: 1195Min: 1175 / Avg: 1184 / Max: 1199Min: 1168 / Avg: 1178.33 / Max: 1197Min: 1135 / Avg: 1137.33 / Max: 1139Min: 1173 / Avg: 1180.33 / Max: 1195Min: 1152 / Avg: 1173 / Max: 1188Min: 1157 / Avg: 1159.33 / Max: 1161Min: 1168 / Avg: 1171.67 / Max: 1174Min: 1209 / Avg: 1220.67 / Max: 1236Min: 1202 / Avg: 1212.67 / Max: 1232

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Keyed AlgorithmsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P140280420560700SE +/- 0.14, N = 3SE +/- 0.29, N = 3SE +/- 0.40, N = 3SE +/- 0.06, N = 3SE +/- 0.63, N = 3SE +/- 0.31, N = 3SE +/- 0.13, N = 3SE +/- 0.77, N = 3SE +/- 0.27, N = 3SE +/- 0.31, N = 3SE +/- 0.56, N = 3SE +/- 0.13, N = 3SE +/- 0.21, N = 3SE +/- 0.07, N = 3SE +/- 0.59, N = 3628.21627.44546.25539.45529.62529.60530.80546.72531.14539.17538.09530.36513.60514.85513.651. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe
OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Keyed AlgorithmsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P110220330440550Min: 628.01 / Avg: 628.21 / Max: 628.49Min: 627.02 / Avg: 627.44 / Max: 627.99Min: 545.53 / Avg: 546.25 / Max: 546.93Min: 539.37 / Avg: 539.45 / Max: 539.56Min: 528.75 / Avg: 529.62 / Max: 530.84Min: 528.99 / Avg: 529.6 / Max: 530.03Min: 530.54 / Avg: 530.8 / Max: 530.93Min: 545.3 / Avg: 546.72 / Max: 547.95Min: 530.69 / Avg: 531.14 / Max: 531.61Min: 538.79 / Avg: 539.17 / Max: 539.79Min: 537.38 / Avg: 538.09 / Max: 539.19Min: 530.22 / Avg: 530.36 / Max: 530.61Min: 513.18 / Avg: 513.6 / Max: 513.82Min: 514.7 / Avg: 514.85 / Max: 514.94Min: 512.6 / Avg: 513.65 / Max: 514.651. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on Black-Sholes-Merton Process with Analytic European Option engine, QMC (Sobol) Monte-Carlo method (Equity Option Example), Bonds Fixed-rate bond with flat forward curve, and Repo Securities repurchase agreement. FinanceBench was originally written by the Cavazos Lab at University of Delaware. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterFinanceBench 2016-07-25Benchmark: Repo OpenMPEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P11K22K33K44K55KSE +/- 174.69, N = 3SE +/- 107.65, N = 3SE +/- 5.24, N = 3SE +/- 45.54, N = 3SE +/- 71.36, N = 3SE +/- 6.70, N = 3SE +/- 241.95, N = 3SE +/- 84.26, N = 3SE +/- 133.22, N = 3SE +/- 71.59, N = 3SE +/- 17.38, N = 3SE +/- 383.79, N = 3SE +/- 109.50, N = 3SE +/- 108.67, N = 3SE +/- 275.73, N = 343527.2843548.6349870.8251015.8751437.7951397.3651630.5050076.2751428.8950609.0450474.7051629.1052977.6052936.9853237.681. (CXX) g++ options: -O3 -march=native -fopenmp
OpenBenchmarking.orgms, Fewer Is BetterFinanceBench 2016-07-25Benchmark: Repo OpenMPEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9K18K27K36K45KMin: 43324.05 / Avg: 43527.28 / Max: 43875.02Min: 43343.38 / Avg: 43548.63 / Max: 43707.56Min: 49860.73 / Avg: 49870.82 / Max: 49878.3Min: 50939.99 / Avg: 51015.87 / Max: 51097.43Min: 51363.7 / Avg: 51437.79 / Max: 51580.48Min: 51383.97 / Avg: 51397.36 / Max: 51404.23Min: 51347.45 / Avg: 51630.5 / Max: 52111.92Min: 49985.34 / Avg: 50076.27 / Max: 50244.61Min: 51271.63 / Avg: 51428.89 / Max: 51693.79Min: 50528.84 / Avg: 50609.04 / Max: 50751.85Min: 50450.23 / Avg: 50474.7 / Max: 50508.32Min: 51241.3 / Avg: 51629.1 / Max: 52396.66Min: 52852.14 / Avg: 52977.6 / Max: 53195.79Min: 52784.59 / Avg: 52936.98 / Max: 53147.39Min: 52860.58 / Avg: 53237.68 / Max: 53774.71. (CXX) g++ options: -O3 -march=native -fopenmp

Botan

Botan is a cross-platform open-source C++ crypto library that supports most all publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: BlowfishEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P90180270360450SE +/- 0.07, N = 3SE +/- 0.15, N = 3SE +/- 0.22, N = 3SE +/- 0.16, N = 3SE +/- 0.16, N = 3SE +/- 0.20, N = 3SE +/- 0.10, N = 3SE +/- 0.10, N = 3SE +/- 0.13, N = 3SE +/- 0.14, N = 3SE +/- 0.40, N = 3SE +/- 0.15, N = 3SE +/- 0.25, N = 3SE +/- 0.16, N = 3SE +/- 0.13, N = 3429.14429.41374.20368.93362.99361.80363.50374.52363.10368.69368.38362.20351.14352.07352.291. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: BlowfishEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P80160240320400Min: 429.06 / Avg: 429.14 / Max: 429.27Min: 429.18 / Avg: 429.41 / Max: 429.7Min: 373.85 / Avg: 374.2 / Max: 374.6Min: 368.71 / Avg: 368.93 / Max: 369.25Min: 362.68 / Avg: 362.99 / Max: 363.15Min: 361.43 / Avg: 361.8 / Max: 362.1Min: 363.36 / Avg: 363.5 / Max: 363.69Min: 374.32 / Avg: 374.52 / Max: 374.67Min: 362.85 / Avg: 363.1 / Max: 363.25Min: 368.54 / Avg: 368.69 / Max: 368.97Min: 367.96 / Avg: 368.38 / Max: 369.17Min: 362.03 / Avg: 362.2 / Max: 362.49Min: 350.66 / Avg: 351.14 / Max: 351.45Min: 351.75 / Avg: 352.07 / Max: 352.26Min: 352.1 / Avg: 352.29 / Max: 352.531. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQUIPs, More Is BetterHierarchical INTegration 1.0Test: FLOATEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P70M140M210M280M350MSE +/- 33906.80, N = 3SE +/- 60046.42, N = 3SE +/- 44137.19, N = 3SE +/- 320354.97, N = 3SE +/- 136985.64, N = 3SE +/- 426281.88, N = 3SE +/- 234513.76, N = 3SE +/- 40543.01, N = 3SE +/- 12833.10, N = 3SE +/- 131222.60, N = 3SE +/- 465752.47, N = 3SE +/- 206566.86, N = 3SE +/- 38644.36, N = 3SE +/- 380619.49, N = 3SE +/- 333820.85, N = 3346505619.64346733990.00302249577.83297478250.43293169027.15292412973.62293216264.80302406827.18293437386.11297816529.28297827555.09292597759.78283588722.34284710986.66284332324.151. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.orgQUIPs, More Is BetterHierarchical INTegration 1.0Test: FLOATEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60M120M180M240M300MMin: 346439918.8 / Avg: 346505619.64 / Max: 346553015.3Min: 346619184.22 / Avg: 346733990 / Max: 346821912.43Min: 302186816.9 / Avg: 302249577.83 / Max: 302334717.7Min: 296837585.98 / Avg: 297478250.43 / Max: 297805193.98Min: 292897063.96 / Avg: 293169027.15 / Max: 293333683.01Min: 291560846.18 / Avg: 292412973.62 / Max: 292862655.91Min: 292759126.74 / Avg: 293216264.8 / Max: 293535711.28Min: 302325902.16 / Avg: 302406827.18 / Max: 302451712.79Min: 293416576.95 / Avg: 293437386.11 / Max: 293460802.19Min: 297554997.89 / Avg: 297816529.28 / Max: 297966245.2Min: 297325728.53 / Avg: 297827555.09 / Max: 298758103.32Min: 292233343.29 / Avg: 292597759.78 / Max: 292948520.88Min: 283512887.38 / Avg: 283588722.34 / Max: 283639560.88Min: 284060087.32 / Avg: 284710986.66 / Max: 285378285.89Min: 283664869.62 / Avg: 284332324.15 / Max: 284679741.151. (CC) gcc options: -O3 -march=native -lm

Botan

Botan is a cross-platform open-source C++ crypto library that supports most all publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: TwofishEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P80160240320400SE +/- 0.04, N = 3SE +/- 0.08, N = 3SE +/- 0.15, N = 3SE +/- 0.04, N = 3SE +/- 0.08, N = 3SE +/- 0.05, N = 3SE +/- 0.05, N = 3SE +/- 0.37, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.18, N = 3SE +/- 0.11, N = 3SE +/- 0.10, N = 3SE +/- 0.07, N = 3SE +/- 0.10, N = 3351.63351.99306.48302.14297.81296.91297.10306.62297.71302.25302.12297.65287.93288.62288.701. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: TwofishEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300Min: 351.56 / Avg: 351.63 / Max: 351.71Min: 351.85 / Avg: 351.99 / Max: 352.11Min: 306.27 / Avg: 306.48 / Max: 306.76Min: 302.1 / Avg: 302.14 / Max: 302.21Min: 297.64 / Avg: 297.81 / Max: 297.92Min: 296.81 / Avg: 296.91 / Max: 296.96Min: 297.02 / Avg: 297.1 / Max: 297.2Min: 305.88 / Avg: 306.62 / Max: 307Min: 297.66 / Avg: 297.7 / Max: 297.77Min: 302.19 / Avg: 302.25 / Max: 302.32Min: 301.76 / Avg: 302.12 / Max: 302.35Min: 297.5 / Avg: 297.65 / Max: 297.86Min: 287.77 / Avg: 287.93 / Max: 288.1Min: 288.49 / Avg: 288.62 / Max: 288.69Min: 288.49 / Avg: 288.7 / Max: 288.831. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Google SynthMark

SynthMark is a cross platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgVoices, More Is BetterGoogle SynthMark 20201109Test: VoiceMark_100EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P160320480640800SE +/- 0.59, N = 3SE +/- 0.17, N = 3SE +/- 0.66, N = 3SE +/- 0.57, N = 3SE +/- 0.57, N = 3SE +/- 0.76, N = 3SE +/- 0.70, N = 3SE +/- 0.70, N = 3SE +/- 0.46, N = 3SE +/- 0.11, N = 3SE +/- 0.32, N = 3SE +/- 0.13, N = 3SE +/- 0.36, N = 3SE +/- 0.12, N = 3SE +/- 0.27, N = 3739.72740.58643.82635.92624.73623.84626.57645.87627.06637.38636.43627.74605.83608.05608.611. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast
OpenBenchmarking.orgVoices, More Is BetterGoogle SynthMark 20201109Test: VoiceMark_100EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P130260390520650Min: 738.56 / Avg: 739.72 / Max: 740.43Min: 740.39 / Avg: 740.58 / Max: 740.92Min: 642.67 / Avg: 643.82 / Max: 644.97Min: 634.78 / Avg: 635.92 / Max: 636.53Min: 623.79 / Avg: 624.73 / Max: 625.77Min: 622.33 / Avg: 623.84 / Max: 624.72Min: 625.29 / Avg: 626.57 / Max: 627.72Min: 644.48 / Avg: 645.87 / Max: 646.66Min: 626.15 / Avg: 627.06 / Max: 627.62Min: 637.17 / Avg: 637.38 / Max: 637.55Min: 635.86 / Avg: 636.43 / Max: 636.98Min: 627.49 / Avg: 627.74 / Max: 627.91Min: 605.12 / Avg: 605.82 / Max: 606.33Min: 607.84 / Avg: 608.05 / Max: 608.27Min: 608.15 / Avg: 608.61 / Max: 609.071. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_templateEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1326395265SE +/- 0.23, N = 3SE +/- 0.45, N = 3SE +/- 0.19, N = 3SE +/- 0.09, N = 3SE +/- 0.19, N = 3SE +/- 0.37, N = 3SE +/- 0.52, N = 3SE +/- 0.22, N = 3SE +/- 0.26, N = 3SE +/- 0.37, N = 3SE +/- 0.03, N = 3SE +/- 0.44, N = 3SE +/- 0.27, N = 3SE +/- 0.15, N = 3SE +/- 0.40, N = 350.449.156.857.558.658.458.156.758.356.957.257.659.160.059.5
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_templateEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1224364860Min: 50.2 / Avg: 50.43 / Max: 50.9Min: 48.2 / Avg: 49.07 / Max: 49.7Min: 56.6 / Avg: 56.83 / Max: 57.2Min: 57.3 / Avg: 57.47 / Max: 57.6Min: 58.2 / Avg: 58.57 / Max: 58.8Min: 57.9 / Avg: 58.37 / Max: 59.1Min: 57.2 / Avg: 58.1 / Max: 59Min: 56.4 / Avg: 56.67 / Max: 57.1Min: 57.8 / Avg: 58.27 / Max: 58.7Min: 56.2 / Avg: 56.93 / Max: 57.4Min: 57.2 / Avg: 57.23 / Max: 57.3Min: 56.8 / Avg: 57.63 / Max: 58.3Min: 58.6 / Avg: 59.13 / Max: 59.4Min: 59.8 / Avg: 60.03 / Max: 60.3Min: 58.7 / Avg: 59.5 / Max: 59.9

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on Black-Sholes-Merton Process with Analytic European Option engine, QMC (Sobol) Monte-Carlo method (Equity Option Example), Bonds Fixed-rate bond with flat forward curve, and Repo Securities repurchase agreement. FinanceBench was originally written by the Cavazos Lab at University of Delaware. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterFinanceBench 2016-07-25Benchmark: Bonds OpenMPEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20K40K60K80K100KSE +/- 395.61, N = 3SE +/- 185.79, N = 3SE +/- 152.13, N = 3SE +/- 232.73, N = 3SE +/- 966.88, N = 3SE +/- 380.43, N = 3SE +/- 75.52, N = 3SE +/- 26.41, N = 3SE +/- 287.97, N = 3SE +/- 294.03, N = 3SE +/- 397.99, N = 3SE +/- 461.99, N = 3SE +/- 402.31, N = 3SE +/- 203.38, N = 3SE +/- 200.17, N = 376410.1176027.9887489.3888553.7890672.4590409.5990310.9086946.6190157.2888487.5788634.3189863.2692896.6292387.2592426.491. (CXX) g++ options: -O3 -march=native -fopenmp
OpenBenchmarking.orgms, Fewer Is BetterFinanceBench 2016-07-25Benchmark: Bonds OpenMPEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P16K32K48K64K80KMin: 75619.2 / Avg: 76410.11 / Max: 76824.91Min: 75656.88 / Avg: 76027.98 / Max: 76230.05Min: 87189.84 / Avg: 87489.38 / Max: 87685.39Min: 88306.55 / Avg: 88553.78 / Max: 89018.95Min: 89694.92 / Avg: 90672.45 / Max: 92606.17Min: 89648.99 / Avg: 90409.59 / Max: 90807.3Min: 90159.87 / Avg: 90310.9 / Max: 90387.48Min: 86896.07 / Avg: 86946.61 / Max: 86985.19Min: 89581.38 / Avg: 90157.28 / Max: 90451.57Min: 88176.03 / Avg: 88487.57 / Max: 89075.27Min: 88093.03 / Avg: 88634.31 / Max: 89410.36Min: 89400.2 / Avg: 89863.26 / Max: 90787.24Min: 92266.52 / Avg: 92896.62 / Max: 93645.02Min: 92154.74 / Avg: 92387.25 / Max: 92792.54Min: 92207.57 / Avg: 92426.49 / Max: 92826.231. (CXX) g++ options: -O3 -march=native -fopenmp

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1224364860SE +/- 0.01, N = 3SE +/- 0.35, N = 3SE +/- 0.31, N = 13SE +/- 0.42, N = 7SE +/- 0.31, N = 3SE +/- 0.56, N = 3SE +/- 0.65, N = 3SE +/- 0.04, N = 3SE +/- 0.37, N = 3SE +/- 0.29, N = 3SE +/- 0.55, N = 4SE +/- 0.53, N = 3SE +/- 0.15, N = 3SE +/- 0.15, N = 3SE +/- 0.43, N = 352.8252.7946.2045.6945.1944.8445.3845.9044.6945.7745.9844.5344.3843.2344.931. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1122334455Min: 52.79 / Avg: 52.82 / Max: 52.84Min: 52.36 / Avg: 52.79 / Max: 53.48Min: 45.1 / Avg: 46.2 / Max: 48.21Min: 44.72 / Avg: 45.69 / Max: 47.44Min: 44.56 / Avg: 45.19 / Max: 45.52Min: 44.27 / Avg: 44.84 / Max: 45.95Min: 44.32 / Avg: 45.38 / Max: 46.55Min: 45.84 / Avg: 45.9 / Max: 45.97Min: 44.12 / Avg: 44.69 / Max: 45.37Min: 45.44 / Avg: 45.77 / Max: 46.36Min: 45.35 / Avg: 45.98 / Max: 47.62Min: 44 / Avg: 44.53 / Max: 45.58Min: 44.15 / Avg: 44.38 / Max: 44.66Min: 42.93 / Avg: 43.23 / Max: 43.4Min: 44.07 / Avg: 44.93 / Max: 45.441. (CC) gcc options: -O3

Swet

Swet is a synthetic CPU/RAM benchmark, includes multi-processor test cases. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOperations Per Second, More Is BetterSwet 1.5.16AverageEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P150M300M450M600M750MSE +/- 2209414.80, N = 3SE +/- 2325206.90, N = 3SE +/- 4045781.60, N = 3SE +/- 2469610.76, N = 3SE +/- 5706451.14, N = 3SE +/- 3622759.74, N = 3SE +/- 807930.96, N = 3SE +/- 8506660.76, N = 3SE +/- 1400113.05, N = 3SE +/- 4052005.46, N = 3SE +/- 2164773.96, N = 3SE +/- 3828695.03, N = 3SE +/- 5177314.46, N = 3SE +/- 2920022.24, N = 3SE +/- 6957648.38, N = 36851888106863954196065671226024348856022038425920492256056443646122184255826843335973611976000655725985892285718255805811945885618167721. (CC) gcc options: -lm -lpthread -lcurses -lrt
OpenBenchmarking.orgOperations Per Second, More Is BetterSwet 1.5.16AverageEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P120M240M360M480M600MMin: 680933266 / Avg: 685188809.67 / Max: 688347262Min: 682854921 / Avg: 686395419.33 / Max: 690776878Min: 600677355 / Avg: 606567122 / Max: 614317006Min: 597611652 / Avg: 602434885.33 / Max: 605768048Min: 590802604 / Avg: 602203842.33 / Max: 608351202Min: 586617205 / Avg: 592049224.67 / Max: 598917708Min: 604227306 / Avg: 605644364.33 / Max: 607025364Min: 596416785 / Avg: 612218425.33 / Max: 625580102Min: 579892857 / Avg: 582684332.67 / Max: 584271636Min: 589487365 / Avg: 597361197.33 / Max: 602958932Min: 595836568 / Avg: 600065571.67 / Max: 602983432Min: 593041731 / Avg: 598589227.67 / Max: 605934155Min: 566389680 / Avg: 571825580.33 / Max: 582175830Min: 575619977 / Avg: 581194588 / Max: 585489333Min: 547924209 / Avg: 561816771.67 / Max: 5694516311. (CC) gcc options: -lm -lpthread -lcurses -lrt

Botan

Botan is a cross-platform open-source C++ crypto library that supports most all publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: CAST-256EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.10, N = 3SE +/- 0.01, N = 3SE +/- 0.09, N = 3SE +/- 0.11, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.28, N = 3SE +/- 0.09, N = 3SE +/- 0.06, N = 3SE +/- 0.04, N = 3139.71139.83121.71120.00118.23117.99118.32121.92118.29120.06119.99117.88114.47114.69114.591. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: CAST-256EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150Min: 139.63 / Avg: 139.71 / Max: 139.74Min: 139.8 / Avg: 139.83 / Max: 139.86Min: 121.56 / Avg: 121.71 / Max: 121.89Min: 119.99 / Avg: 119.99 / Max: 120.01Min: 118.06 / Avg: 118.23 / Max: 118.33Min: 117.81 / Avg: 117.99 / Max: 118.19Min: 118.29 / Avg: 118.32 / Max: 118.36Min: 121.89 / Avg: 121.92 / Max: 121.94Min: 118.22 / Avg: 118.29 / Max: 118.36Min: 119.97 / Avg: 120.06 / Max: 120.12Min: 119.95 / Avg: 119.99 / Max: 120.04Min: 117.32 / Avg: 117.88 / Max: 118.21Min: 114.34 / Avg: 114.47 / Max: 114.63Min: 114.63 / Avg: 114.69 / Max: 114.8Min: 114.51 / Avg: 114.59 / Max: 114.641. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: KASUMIEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.05, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.06, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 3SE +/- 0.07, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 390.4290.4678.8677.7076.5576.2276.5878.8876.5377.6977.5776.5174.0674.2174.271. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: KASUMIEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100Min: 90.37 / Avg: 90.42 / Max: 90.45Min: 90.45 / Avg: 90.46 / Max: 90.47Min: 78.82 / Avg: 78.86 / Max: 78.88Min: 77.66 / Avg: 77.7 / Max: 77.77Min: 76.54 / Avg: 76.55 / Max: 76.55Min: 76.12 / Avg: 76.22 / Max: 76.31Min: 76.57 / Avg: 76.57 / Max: 76.58Min: 78.84 / Avg: 78.88 / Max: 78.91Min: 76.42 / Avg: 76.53 / Max: 76.6Min: 77.68 / Avg: 77.69 / Max: 77.7Min: 77.5 / Avg: 77.57 / Max: 77.66Min: 76.49 / Avg: 76.51 / Max: 76.52Min: 73.98 / Avg: 74.06 / Max: 74.21Min: 74.18 / Avg: 74.21 / Max: 74.22Min: 74.23 / Avg: 74.27 / Max: 74.291. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech SynthesisEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P918273645SE +/- 0.08, N = 4SE +/- 0.07, N = 4SE +/- 0.15, N = 4SE +/- 0.12, N = 4SE +/- 0.09, N = 4SE +/- 0.06, N = 4SE +/- 0.11, N = 4SE +/- 0.04, N = 4SE +/- 0.07, N = 4SE +/- 0.08, N = 4SE +/- 0.07, N = 4SE +/- 0.12, N = 4SE +/- 0.10, N = 4SE +/- 0.08, N = 4SE +/- 0.11, N = 430.7530.8435.3035.9336.2036.2836.4035.2736.2435.7335.6936.3637.5537.4037.401. (CC) gcc options: -O2 -std=c99
OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech SynthesisEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P816243240Min: 30.52 / Avg: 30.75 / Max: 30.86Min: 30.64 / Avg: 30.84 / Max: 30.96Min: 34.92 / Avg: 35.3 / Max: 35.57Min: 35.63 / Avg: 35.93 / Max: 36.21Min: 35.97 / Avg: 36.2 / Max: 36.42Min: 36.17 / Avg: 36.28 / Max: 36.45Min: 36.11 / Avg: 36.4 / Max: 36.59Min: 35.16 / Avg: 35.27 / Max: 35.37Min: 36.12 / Avg: 36.24 / Max: 36.4Min: 35.5 / Avg: 35.73 / Max: 35.85Min: 35.55 / Avg: 35.69 / Max: 35.87Min: 36.06 / Avg: 36.36 / Max: 36.65Min: 37.36 / Avg: 37.55 / Max: 37.83Min: 37.26 / Avg: 37.4 / Max: 37.61Min: 37.13 / Avg: 37.4 / Max: 37.651. (CC) gcc options: -O2 -std=c99

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPerl BenchmarksTest: Pod2htmlEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.03520.07040.10560.14080.176SE +/- 0.00013172, N = 3SE +/- 0.00007037, N = 3SE +/- 0.00081699, N = 3SE +/- 0.00022828, N = 3SE +/- 0.00009609, N = 3SE +/- 0.00113052, N = 3SE +/- 0.00076232, N = 3SE +/- 0.00010921, N = 3SE +/- 0.00097851, N = 3SE +/- 0.00050264, N = 3SE +/- 0.00052181, N = 3SE +/- 0.00104595, N = 3SE +/- 0.00047942, N = 3SE +/- 0.00070595, N = 3SE +/- 0.00101629, N = 30.129242850.128195450.145541430.149543390.149767950.148682530.151249760.148508120.149882930.148592730.145735970.149784890.153417050.154578900.15653208
OpenBenchmarking.orgSeconds, Fewer Is BetterPerl BenchmarksTest: Pod2htmlEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P12345Min: 0.13 / Avg: 0.13 / Max: 0.13Min: 0.13 / Avg: 0.13 / Max: 0.13Min: 0.14 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.14 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.15Min: 0.15 / Avg: 0.15 / Max: 0.16Min: 0.16 / Avg: 0.16 / Max: 0.16

Botan

Botan is a cross-platform open-source C++ crypto library that supports most all publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: AES-256EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P11002200330044005500SE +/- 1.30, N = 3SE +/- 1.69, N = 3SE +/- 2.46, N = 3SE +/- 1.00, N = 3SE +/- 0.85, N = 3SE +/- 3.58, N = 3SE +/- 3.88, N = 3SE +/- 2.98, N = 3SE +/- 1.33, N = 3SE +/- 3.31, N = 3SE +/- 0.83, N = 3SE +/- 0.26, N = 3SE +/- 0.80, N = 3SE +/- 7.02, N = 3SE +/- 13.87, N = 35226.755238.764549.534499.504429.194425.924421.414561.194429.474497.774487.904430.724293.874290.854311.061. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.13.0Test: AES-256EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9001800270036004500Min: 5224.38 / Avg: 5226.75 / Max: 5228.88Min: 5235.4 / Avg: 5238.76 / Max: 5240.69Min: 4544.76 / Avg: 4549.53 / Max: 4552.93Min: 4497.86 / Avg: 4499.5 / Max: 4501.3Min: 4427.63 / Avg: 4429.19 / Max: 4430.55Min: 4421.28 / Avg: 4425.91 / Max: 4432.96Min: 4413.66 / Avg: 4421.41 / Max: 4425.33Min: 4555.36 / Avg: 4561.19 / Max: 4565.19Min: 4426.94 / Avg: 4429.47 / Max: 4431.46Min: 4492.53 / Avg: 4497.77 / Max: 4503.89Min: 4486.88 / Avg: 4487.9 / Max: 4489.54Min: 4430.21 / Avg: 4430.72 / Max: 4431.05Min: 4292.82 / Avg: 4293.87 / Max: 4295.45Min: 4276.91 / Avg: 4290.85 / Max: 4299.27Min: 4293.42 / Avg: 4311.06 / Max: 4338.431. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Etcpak

Etcpack is the self-proclaimed "fastest ETC compressor on the planet" with focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: ETC2EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4080120160200SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.11, N = 3SE +/- 0.08, N = 3SE +/- 0.00, N = 3SE +/- 0.10, N = 3SE +/- 0.03, N = 3SE +/- 0.09, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3159.60159.79138.89137.28135.24135.26135.14139.25135.01137.34137.09135.18130.88131.15131.181. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: ETC2EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P306090120150Min: 159.56 / Avg: 159.6 / Max: 159.65Min: 159.77 / Avg: 159.79 / Max: 159.8Min: 138.88 / Avg: 138.89 / Max: 138.91Min: 137.26 / Avg: 137.28 / Max: 137.3Min: 135.2 / Avg: 135.23 / Max: 135.25Min: 135.2 / Avg: 135.26 / Max: 135.29Min: 135.11 / Avg: 135.14 / Max: 135.2Min: 139.03 / Avg: 139.25 / Max: 139.39Min: 134.91 / Avg: 135.01 / Max: 135.17Min: 137.34 / Avg: 137.34 / Max: 137.35Min: 136.98 / Avg: 137.09 / Max: 137.3Min: 135.12 / Avg: 135.18 / Max: 135.24Min: 130.77 / Avg: 130.88 / Max: 131.05Min: 131.11 / Avg: 131.15 / Max: 131.19Min: 131.16 / Avg: 131.18 / Max: 131.221. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterTSCP 1.81AI Chess PerformanceEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P300K600K900K1200K1500KSE +/- 791.31, N = 12SE +/- 1531.45, N = 12SE +/- 584.52, N = 12SE +/- 658.02, N = 11SE +/- 1511.23, N = 11SE +/- 661.25, N = 11SE +/- 587.46, N = 11SE +/- 372.53, N = 12SE +/- 986.25, N = 11SE +/- 1366.07, N = 11SE +/- 1181.68, N = 11SE +/- 609.53, N = 11SE +/- 1355.09, N = 11SE +/- 863.61, N = 11SE +/- 504.67, N = 1111781401180682102805410158589978099968059974611030295999442101434210133209971339681499690669693711. (CC) gcc options: -O3 -march=native
OpenBenchmarking.orgNodes Per Second, More Is BetterTSCP 1.81AI Chess PerformanceEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P200K400K600K800K1000KMin: 1174366 / Avg: 1178140.42 / Max: 1181927Min: 1166902 / Avg: 1180681.92 / Max: 1187021Min: 1025657 / Avg: 1028053.5 / Max: 1031419Min: 1010601 / Avg: 1015858.45 / Max: 1018073Min: 987057 / Avg: 997809 / Max: 1001414Min: 994184 / Avg: 996805.09 / Max: 1001414Min: 994184 / Avg: 997460.73 / Max: 999597Min: 1029491 / Avg: 1030294.92 / Max: 1033354Min: 990607 / Avg: 999441.64 / Max: 1003238Min: 1001414 / Avg: 1014342.27 / Max: 1018073Min: 1003238 / Avg: 1013319.55 / Max: 1016195Min: 994184 / Avg: 997132.73 / Max: 999597Min: 956211 / Avg: 968148.55 / Max: 971389Min: 961218 / Avg: 969066.36 / Max: 971389Min: 966277 / Avg: 969371.36 / Max: 9713891. (CC) gcc options: -O3 -march=native

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.21EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P5001000150020002500SE +/- 10.66, N = 3SE +/- 13.33, N = 3SE +/- 11.68, N = 3SE +/- 11.47, N = 3SE +/- 10.12, N = 3SE +/- 13.12, N = 3SE +/- 13.45, N = 3SE +/- 11.83, N = 3SE +/- 10.26, N = 3SE +/- 9.97, N = 3SE +/- 12.22, N = 3SE +/- 10.13, N = 3SE +/- 8.78, N = 3SE +/- 12.91, N = 3SE +/- 10.95, N = 32300.72285.22004.41970.11955.31944.51945.32016.41922.31984.51977.31952.51887.01893.21892.01. (CXX) g++ options: -O3 -march=native -rdynamic
OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.21EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P400800120016002000Min: 2279.8 / Avg: 2300.73 / Max: 2314.7Min: 2258.5 / Avg: 2285.17 / Max: 2298.8Min: 1981.1 / Avg: 2004.43 / Max: 2017.2Min: 1947.7 / Avg: 1970.13 / Max: 1985.5Min: 1935.1 / Avg: 1955.3 / Max: 1966.5Min: 1918.3 / Avg: 1944.53 / Max: 1958.4Min: 1918.7 / Avg: 1945.33 / Max: 1961.9Min: 1993.3 / Avg: 2016.43 / Max: 2032.3Min: 1902.5 / Avg: 1922.27 / Max: 1936.9Min: 1964.7 / Avg: 1984.5 / Max: 1996.4Min: 1953.5 / Avg: 1977.33 / Max: 1993.9Min: 1932.3 / Avg: 1952.53 / Max: 1963.6Min: 1869.4 / Avg: 1886.97 / Max: 1895.9Min: 1867.5 / Avg: 1893.2 / Max: 1908.2Min: 1870.4 / Avg: 1892 / Max: 1905.91. (CXX) g++ options: -O3 -march=native -rdynamic

Perl Benchmarks

Perl benchmark suite that can be used to compare the relative speed of different versions of perl. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPerl BenchmarksTest: InterpreterEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.00020.00040.00060.00080.001SE +/- 0.00000483, N = 3SE +/- 0.00001015, N = 3SE +/- 0.00000656, N = 3SE +/- 0.00000621, N = 3SE +/- 0.00000365, N = 3SE +/- 0.00000264, N = 3SE +/- 0.00000921, N = 3SE +/- 0.00000879, N = 3SE +/- 0.00000252, N = 3SE +/- 0.00000724, N = 3SE +/- 0.00000962, N = 3SE +/- 0.00000413, N = 3SE +/- 0.00000497, N = 3SE +/- 0.00001257, N = 3SE +/- 0.00000737, N = 30.000825700.000835630.000961160.000970840.000974550.000965790.000961530.000954850.000967530.000960820.000955860.000954780.000992100.000980740.00100608
OpenBenchmarking.orgSeconds, Fewer Is BetterPerl BenchmarksTest: InterpreterEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P12345Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0Min: 0 / Avg: 0 / Max: 0

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMontage Astronomical Image Mosaic Engine 6.0Mosaic of M17, K band, 1.5 deg x 1.5 degEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100SE +/- 0.14, N = 3SE +/- 0.05, N = 3SE +/- 0.18, N = 3SE +/- 0.11, N = 3SE +/- 0.12, N = 3SE +/- 0.19, N = 3SE +/- 0.28, N = 3SE +/- 0.31, N = 3SE +/- 0.08, N = 3SE +/- 0.17, N = 3SE +/- 0.09, N = 3SE +/- 0.09, N = 3SE +/- 0.12, N = 3SE +/- 0.04, N = 3SE +/- 0.16, N = 380.9480.8593.1794.7695.5695.3995.5392.7795.4893.8693.9295.4598.2498.3198.411. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2
OpenBenchmarking.orgSeconds, Fewer Is BetterMontage Astronomical Image Mosaic Engine 6.0Mosaic of M17, K band, 1.5 deg x 1.5 degEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P20406080100Min: 80.67 / Avg: 80.94 / Max: 81.13Min: 80.76 / Avg: 80.85 / Max: 80.91Min: 92.84 / Avg: 93.17 / Max: 93.47Min: 94.65 / Avg: 94.76 / Max: 94.97Min: 95.43 / Avg: 95.56 / Max: 95.8Min: 95.12 / Avg: 95.39 / Max: 95.75Min: 95.11 / Avg: 95.53 / Max: 96.06Min: 92.29 / Avg: 92.77 / Max: 93.36Min: 95.35 / Avg: 95.48 / Max: 95.62Min: 93.53 / Avg: 93.86 / Max: 94.1Min: 93.8 / Avg: 93.92 / Max: 94.11Min: 95.27 / Avg: 95.45 / Max: 95.58Min: 98.08 / Avg: 98.24 / Max: 98.48Min: 98.25 / Avg: 98.31 / Max: 98.39Min: 98.12 / Avg: 98.41 / Max: 98.671. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

Himeno Benchmark

The Himeno benchmark is a linear solver of pressure Poisson using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterHimeno Benchmark 3.0Poisson Pressure SolverEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P9001800270036004500SE +/- 38.21, N = 3SE +/- 33.69, N = 3SE +/- 21.82, N = 3SE +/- 36.48, N = 3SE +/- 47.38, N = 4SE +/- 44.49, N = 4SE +/- 31.29, N = 9SE +/- 9.42, N = 3SE +/- 42.46, N = 3SE +/- 43.10, N = 4SE +/- 37.84, N = 3SE +/- 46.66, N = 4SE +/- 17.87, N = 3SE +/- 37.21, N = 3SE +/- 25.83, N = 34367.124336.413937.233762.493837.563816.083793.173988.943830.203853.503765.853792.813756.963634.173589.931. (CC) gcc options: -O3 -mavx2
OpenBenchmarking.orgMFLOPS, More Is BetterHimeno Benchmark 3.0Poisson Pressure SolverEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P8001600240032004000Min: 4297.85 / Avg: 4367.12 / Max: 4429.7Min: 4277.41 / Avg: 4336.41 / Max: 4394.1Min: 3909.26 / Avg: 3937.23 / Max: 3980.22Min: 3718.35 / Avg: 3762.49 / Max: 3834.86Min: 3701.86 / Avg: 3837.56 / Max: 3910.17Min: 3692 / Avg: 3816.08 / Max: 3902.55Min: 3632 / Avg: 3793.17 / Max: 3925.79Min: 3976.02 / Avg: 3988.94 / Max: 4007.28Min: 3768.21 / Avg: 3830.2 / Max: 3911.46Min: 3731.7 / Avg: 3853.5 / Max: 3929.87Min: 3719.42 / Avg: 3765.85 / Max: 3840.83Min: 3699.39 / Avg: 3792.81 / Max: 3917.69Min: 3721.38 / Avg: 3756.96 / Max: 3777.75Min: 3590.61 / Avg: 3634.17 / Max: 3708.22Min: 3541.75 / Avg: 3589.93 / Max: 3630.151. (CC) gcc options: -O3 -mavx2

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50100150200250SE +/- 0.58, N = 3SE +/- 0.33, N = 3SE +/- 0.67, N = 3SE +/- 0.33, N = 3SE +/- 0.33, N = 3SE +/- 0.67, N = 3SE +/- 0.33, N = 3174171195198202203202198202197200202208208208
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compileEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4080120160200Min: 170 / Avg: 171 / Max: 172Min: 195 / Avg: 195.33 / Max: 196Min: 201 / Avg: 201.67 / Max: 203Min: 202 / Avg: 202.67 / Max: 203Min: 197 / Avg: 197.67 / Max: 198Min: 196 / Avg: 197.33 / Max: 198Min: 208 / Avg: 208.33 / Max: 209

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P130K260K390K520K650KSE +/- 1476.72, N = 3SE +/- 7086.33, N = 3SE +/- 1456.19, N = 3SE +/- 1921.47, N = 3SE +/- 1174.39, N = 3SE +/- 215.70, N = 3SE +/- 142.64, N = 3SE +/- 741.11, N = 3SE +/- 4131.15, N = 3SE +/- 948.15, N = 3SE +/- 4035.67, N = 3SE +/- 957.34, N = 3SE +/- 973.54, N = 3SE +/- 2157.92, N = 3SE +/- 569.07, N = 3614039621672540797529609524751521572523575539060532957531740542521524189511228513356511906
OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P110K220K330K440K550KMin: 611253 / Avg: 614039 / Max: 616281Min: 613060 / Avg: 621672 / Max: 635726Min: 538241 / Avg: 540796.67 / Max: 543284Min: 525962 / Avg: 529608.67 / Max: 532482Min: 523197 / Avg: 524750.67 / Max: 527053Min: 521324 / Avg: 521572.33 / Max: 522002Min: 523393 / Avg: 523574.67 / Max: 523856Min: 538010 / Avg: 539060 / Max: 540491Min: 524902 / Avg: 532956.67 / Max: 538578Min: 530354 / Avg: 531740.33 / Max: 533554Min: 534521 / Avg: 542520.67 / Max: 547450Min: 522517 / Avg: 524188.67 / Max: 525833Min: 510027 / Avg: 511228.33 / Max: 513156Min: 510924 / Avg: 513356.33 / Max: 517660Min: 511300 / Avg: 511905.67 / Max: 513043

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Euclidean ClusterEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2004006008001000SE +/- 0.51, N = 3SE +/- 1.50, N = 3SE +/- 1.50, N = 3SE +/- 1.46, N = 3SE +/- 0.53, N = 3SE +/- 1.18, N = 3SE +/- 0.82, N = 3SE +/- 0.71, N = 3SE +/- 0.79, N = 3SE +/- 0.42, N = 3SE +/- 0.40, N = 3SE +/- 0.19, N = 3SE +/- 1.22, N = 3SE +/- 0.49, N = 3SE +/- 0.40, N = 31062.321039.60969.31954.20966.04963.00956.82983.45949.15974.40962.88943.29928.43919.01876.451. (CXX) g++ options: -O3 -std=c++11 -fopenmp
OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Euclidean ClusterEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2004006008001000Min: 1061.29 / Avg: 1062.32 / Max: 1062.84Min: 1037.63 / Avg: 1039.6 / Max: 1042.55Min: 966.48 / Avg: 969.31 / Max: 971.59Min: 951.37 / Avg: 954.2 / Max: 956.21Min: 965.08 / Avg: 966.04 / Max: 966.9Min: 961.75 / Avg: 963 / Max: 965.37Min: 955.86 / Avg: 956.82 / Max: 958.45Min: 982.72 / Avg: 983.45 / Max: 984.87Min: 947.85 / Avg: 949.15 / Max: 950.57Min: 973.64 / Avg: 974.4 / Max: 975.08Min: 962.13 / Avg: 962.88 / Max: 963.5Min: 942.97 / Avg: 943.29 / Max: 943.63Min: 926.64 / Avg: 928.43 / Max: 930.77Min: 918.04 / Avg: 919.01 / Max: 919.66Min: 876.05 / Avg: 876.45 / Max: 877.261. (CXX) g++ options: -O3 -std=c++11 -fopenmp

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P300K600K900K1200K1500KSE +/- 771.81, N = 3SE +/- 2229.96, N = 3SE +/- 2430.14, N = 3SE +/- 1936.89, N = 3SE +/- 2041.67, N = 3SE +/- 472.72, N = 3SE +/- 2507.53, N = 3SE +/- 577.19, N = 3SE +/- 2350.83, N = 3SE +/- 1998.87, N = 3SE +/- 607.09, N = 3SE +/- 842.53, N = 3SE +/- 1258.02, N = 3SE +/- 1165.03, N = 3SE +/- 1045.78, N = 31247037.61155871.81217027.71212918.21209698.01215056.11208480.81162690.91197778.91262530.41248284.11188387.21172021.11143360.51041996.8
OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P200K400K600K800K1000KMin: 1245534.1 / Avg: 1247037.63 / Max: 1248092.1Min: 1152601.1 / Avg: 1155871.8 / Max: 1160133Min: 1212618.6 / Avg: 1217027.7 / Max: 1221003.3Min: 1209097.9 / Avg: 1212918.2 / Max: 1215383.9Min: 1206399.7 / Avg: 1209698.03 / Max: 1213431.9Min: 1214294.3 / Avg: 1215056.1 / Max: 1215921.9Min: 1203466.2 / Avg: 1208480.8 / Max: 1211047.5Min: 1161606.1 / Avg: 1162690.93 / Max: 1163575.1Min: 1194820.8 / Avg: 1197778.87 / Max: 1202422.8Min: 1259246.9 / Avg: 1262530.43 / Max: 1266147.1Min: 1247624.2 / Avg: 1248284.1 / Max: 1249496.7Min: 1187003 / Avg: 1188387.17 / Max: 1189911.5Min: 1169540.4 / Avg: 1172021.13 / Max: 1173625.2Min: 1141134.6 / Avg: 1143360.47 / Max: 1145070.1Min: 1040104.9 / Avg: 1041996.77 / Max: 1043715.1

Etcpak

Etcpack is the self-proclaimed "fastest ETC compressor on the planet" with focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: DXT1EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30060090012001500SE +/- 2.05, N = 8SE +/- 0.24, N = 8SE +/- 2.22, N = 7SE +/- 1.34, N = 7SE +/- 2.20, N = 7SE +/- 1.84, N = 7SE +/- 1.97, N = 7SE +/- 1.94, N = 7SE +/- 2.27, N = 7SE +/- 1.96, N = 7SE +/- 1.99, N = 7SE +/- 1.74, N = 7SE +/- 2.23, N = 7SE +/- 1.94, N = 7SE +/- 1.98, N = 71179.021179.301035.371019.211007.221008.511008.241035.231006.921022.481018.971007.95975.22978.85976.591. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: DXT1EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2004006008001000Min: 1173.37 / Avg: 1179.02 / Max: 1186.77Min: 1178.67 / Avg: 1179.3 / Max: 1180.64Min: 1028.15 / Avg: 1035.37 / Max: 1041.31Min: 1017.42 / Avg: 1019.21 / Max: 1027.18Min: 1002.3 / Avg: 1007.22 / Max: 1013.81Min: 1001.29 / Avg: 1008.51 / Max: 1011.8Min: 1002.46 / Avg: 1008.24 / Max: 1013.3Min: 1031.59 / Avg: 1035.22 / Max: 1042.84Min: 999.92 / Avg: 1006.92 / Max: 1013.6Min: 1017.68 / Avg: 1022.48 / Max: 1028.23Min: 1012.27 / Avg: 1018.97 / Max: 1026.96Min: 1001.21 / Avg: 1007.95 / Max: 1011.76Min: 970.09 / Avg: 975.22 / Max: 983.12Min: 972.88 / Avg: 978.85 / Max: 983.32Min: 970.13 / Avg: 976.59 / Max: 981.111. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Unkeyed AlgorithmsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P70140210280350SE +/- 0.17, N = 3SE +/- 0.04, N = 3SE +/- 0.55, N = 3SE +/- 0.01, N = 3SE +/- 0.18, N = 3SE +/- 0.93, N = 3SE +/- 0.10, N = 3SE +/- 0.11, N = 3SE +/- 0.11, N = 3SE +/- 0.11, N = 3SE +/- 0.11, N = 3SE +/- 0.12, N = 3SE +/- 0.25, N = 3SE +/- 0.17, N = 3SE +/- 0.14, N = 3342.93343.61301.54298.73293.81292.60293.88302.93294.27298.71298.49294.44285.25286.18286.191. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe
OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Unkeyed AlgorithmsEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300Min: 342.6 / Avg: 342.93 / Max: 343.11Min: 343.56 / Avg: 343.61 / Max: 343.68Min: 300.44 / Avg: 301.54 / Max: 302.12Min: 298.7 / Avg: 298.73 / Max: 298.75Min: 293.61 / Avg: 293.81 / Max: 294.16Min: 290.74 / Avg: 292.6 / Max: 293.53Min: 293.7 / Avg: 293.88 / Max: 294.04Min: 302.75 / Avg: 302.93 / Max: 303.14Min: 294.13 / Avg: 294.27 / Max: 294.49Min: 298.57 / Avg: 298.71 / Max: 298.93Min: 298.33 / Avg: 298.49 / Max: 298.7Min: 294.21 / Avg: 294.44 / Max: 294.59Min: 284.78 / Avg: 285.25 / Max: 285.61Min: 286.01 / Avg: 286.18 / Max: 286.52Min: 285.95 / Avg: 286.19 / Max: 286.431. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: KostyaEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.11930.23860.35790.47720.5965SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.530.530.460.460.450.450.450.460.450.460.460.450.440.440.441. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: KostyaEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P246810Min: 0.53 / Avg: 0.53 / Max: 0.53Min: 0.53 / Avg: 0.53 / Max: 0.53Min: 0.46 / Avg: 0.46 / Max: 0.47Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 0.44 / Avg: 0.44 / Max: 0.44Min: 0.44 / Avg: 0.44 / Max: 0.44Min: 0.43 / Avg: 0.44 / Max: 0.441. (CXX) g++ options: -O3 -pthread

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 8EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 72820.17780.35560.53340.71120.889SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.790.790.700.690.710.690.690.661. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 8EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 7282246810Min: 0.79 / Avg: 0.79 / Max: 0.79Min: 0.78 / Avg: 0.79 / Max: 0.79Min: 0.7 / Avg: 0.7 / Max: 0.71Min: 0.69 / Avg: 0.69 / Max: 0.69Min: 0.7 / Avg: 0.71 / Max: 0.71Min: 0.68 / Avg: 0.69 / Max: 0.69Min: 0.69 / Avg: 0.69 / Max: 0.69Min: 0.66 / Avg: 0.66 / Max: 0.661. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 8EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 7282612182430SE +/- 0.04, N = 5SE +/- 0.02, N = 5SE +/- 0.03, N = 5SE +/- 0.03, N = 5SE +/- 0.03, N = 5SE +/- 0.02, N = 5SE +/- 0.03, N = 3SE +/- 0.02, N = 527.5927.4424.4723.9625.2324.0625.0323.051. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 8EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 7282612182430Min: 27.44 / Avg: 27.59 / Max: 27.67Min: 27.38 / Avg: 27.44 / Max: 27.47Min: 24.43 / Avg: 24.47 / Max: 24.58Min: 23.88 / Avg: 23.96 / Max: 24.04Min: 25.13 / Avg: 25.23 / Max: 25.3Min: 24.02 / Avg: 24.06 / Max: 24.15Min: 24.98 / Avg: 25.03 / Max: 25.07Min: 22.99 / Avg: 23.05 / Max: 23.111. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

JPEG XL Decoding

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to PNG output file, the pts/jpexl test is for encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding 0.3.1CPU Threads: 1EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7282918273645SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.02, N = 337.9438.4233.1532.7634.2232.6932.16
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding 0.3.1CPU Threads: 1EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7282816243240Min: 37.88 / Avg: 37.94 / Max: 37.97Min: 38.34 / Avg: 38.42 / Max: 38.47Min: 33.13 / Avg: 33.15 / Max: 33.18Min: 32.68 / Avg: 32.76 / Max: 32.82Min: 34.2 / Avg: 34.22 / Max: 34.24Min: 32.59 / Avg: 32.69 / Max: 32.77Min: 32.13 / Avg: 32.16 / Max: 32.2

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpix/sec, More Is BetterLibRaw 0.20Post-Processing BenchmarkEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P918273645SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.15, N = 3SE +/- 0.09, N = 3SE +/- 0.17, N = 3SE +/- 0.19, N = 3SE +/- 0.23, N = 3SE +/- 0.05, N = 3SE +/- 0.07, N = 3SE +/- 0.09, N = 3SE +/- 0.07, N = 3SE +/- 0.02, N = 3SE +/- 0.10, N = 3SE +/- 0.05, N = 3SE +/- 0.02, N = 338.0138.5934.7334.5034.0132.3133.6434.3833.0634.1134.0333.7133.0833.1232.481. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm
OpenBenchmarking.orgMpix/sec, More Is BetterLibRaw 0.20Post-Processing BenchmarkEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P816243240Min: 37.96 / Avg: 38.01 / Max: 38.06Min: 38.57 / Avg: 38.59 / Max: 38.62Min: 34.46 / Avg: 34.73 / Max: 34.97Min: 34.32 / Avg: 34.5 / Max: 34.63Min: 33.71 / Avg: 34.01 / Max: 34.31Min: 32.12 / Avg: 32.31 / Max: 32.69Min: 33.2 / Avg: 33.64 / Max: 34Min: 34.31 / Avg: 34.38 / Max: 34.48Min: 32.93 / Avg: 33.06 / Max: 33.13Min: 33.96 / Avg: 34.11 / Max: 34.27Min: 33.89 / Avg: 34.03 / Max: 34.12Min: 33.67 / Avg: 33.71 / Max: 33.74Min: 32.96 / Avg: 33.08 / Max: 33.27Min: 33.04 / Avg: 33.12 / Max: 33.21Min: 32.44 / Avg: 32.48 / Max: 32.511. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GETEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P400K800K1200K1600K2000KSE +/- 17849.12, N = 5SE +/- 19118.80, N = 4SE +/- 14975.61, N = 3SE +/- 18256.32, N = 3SE +/- 13883.40, N = 3SE +/- 11135.64, N = 15SE +/- 8071.25, N = 3SE +/- 13573.21, N = 15SE +/- 21214.43, N = 3SE +/- 18437.40, N = 3SE +/- 13170.58, N = 15SE +/- 13583.32, N = 6SE +/- 14367.89, N = 15SE +/- 12289.67, N = 3SE +/- 18495.33, N = 151635794.571601070.501493222.211482738.921417246.671437534.231450496.711508082.411481804.001419600.131466557.031438157.731414840.411372468.881421564.351. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GETEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P300K600K900K1200K1500KMin: 1586048 / Avg: 1635794.57 / Max: 1671133.62Min: 1561051.5 / Avg: 1601070.5 / Max: 1650709.75Min: 1469521.88 / Avg: 1493222.21 / Max: 1520932Min: 1456664.25 / Avg: 1482738.92 / Max: 1517911.38Min: 1397042.75 / Avg: 1417246.67 / Max: 1443844Min: 1363529.88 / Avg: 1437534.23 / Max: 1507163.75Min: 1439056 / Avg: 1450496.71 / Max: 1466079.5Min: 1433897.38 / Avg: 1508082.41 / Max: 1604899.38Min: 1456461.38 / Avg: 1481804 / Max: 1523945.12Min: 1387942.88 / Avg: 1419600.13 / Max: 1451804.88Min: 1388893.38 / Avg: 1466557.03 / Max: 1567157.5Min: 1387351.75 / Avg: 1438157.73 / Max: 1487252.88Min: 1349900.62 / Avg: 1414840.41 / Max: 1535877.12Min: 1350621.38 / Avg: 1372468.88 / Max: 1393145.75Min: 1340662.25 / Avg: 1421564.35 / Max: 1628409.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Tinymembench

This benchmark tests the system memory (RAM) performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterTinymembench 2018-05-28Standard MemsetEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P4K8K12K16K20KSE +/- 39.37, N = 3SE +/- 28.30, N = 3SE +/- 58.19, N = 9SE +/- 217.98, N = 3SE +/- 52.61, N = 3SE +/- 169.52, N = 3SE +/- 217.68, N = 3SE +/- 39.21, N = 3SE +/- 27.08, N = 3SE +/- 13.93, N = 3SE +/- 46.42, N = 3SE +/- 20.90, N = 3SE +/- 18.80, N = 3SE +/- 37.39, N = 3SE +/- 21.61, N = 315585.716357.216300.817329.815640.116261.516494.415872.014786.915097.114961.414820.314921.314776.414571.71. (CC) gcc options: -O2 -lm
OpenBenchmarking.orgMB/s, More Is BetterTinymembench 2018-05-28Standard MemsetEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3K6K9K12K15KMin: 15512.6 / Avg: 15585.7 / Max: 15647.6Min: 16301.7 / Avg: 16357.2 / Max: 16394.6Min: 16107.6 / Avg: 16300.84 / Max: 16573.2Min: 16900.6 / Avg: 17329.83 / Max: 17610.5Min: 15534.9 / Avg: 15640.1 / Max: 15694.5Min: 15924.5 / Avg: 16261.47 / Max: 16462.4Min: 16092.5 / Avg: 16494.43 / Max: 16840.3Min: 15794.6 / Avg: 15872.03 / Max: 15921.5Min: 14734.2 / Avg: 14786.93 / Max: 14824Min: 15074.8 / Avg: 15097.07 / Max: 15122.7Min: 14876.5 / Avg: 14961.37 / Max: 15036.4Min: 14781.5 / Avg: 14820.27 / Max: 14853.2Min: 14884.6 / Avg: 14921.27 / Max: 14946.8Min: 14704.8 / Avg: 14776.4 / Max: 14830.9Min: 14546.4 / Avg: 14571.7 / Max: 14614.71. (CC) gcc options: -O2 -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as a open-source, cross-platform, high performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: yolov4 - Device: OpenMP CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300SE +/- 1.00, N = 3SE +/- 0.88, N = 3SE +/- 1.30, N = 3SE +/- 0.93, N = 3SE +/- 1.01, N = 3SE +/- 0.50, N = 3SE +/- 0.67, N = 3SE +/- 0.17, N = 3SE +/- 0.88, N = 3SE +/- 0.17, N = 3SE +/- 0.29, N = 3SE +/- 0.33, N = 3SE +/- 0.29, N = 3SE +/- 0.60, N = 32482622462402432632642852712772802772672592431. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.6Model: yolov4 - Device: OpenMP CPUEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50100150200250Min: 260 / Avg: 262 / Max: 263Min: 244.5 / Avg: 245.83 / Max: 247.5Min: 238 / Avg: 240.33 / Max: 242.5Min: 242 / Avg: 243.17 / Max: 245Min: 261 / Avg: 262.83 / Max: 264.5Min: 263 / Avg: 263.5 / Max: 264.5Min: 284.5 / Avg: 285.17 / Max: 286.5Min: 270.5 / Avg: 270.67 / Max: 271Min: 275.5 / Avg: 276.83 / Max: 278.5Min: 279.5 / Avg: 279.67 / Max: 280Min: 276.5 / Avg: 277 / Max: 277.5Min: 266.5 / Avg: 267.17 / Max: 267.5Min: 258.5 / Avg: 259 / Max: 259.5Min: 242.5 / Avg: 243.33 / Max: 244.51. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SETEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P300K600K900K1200K1500KSE +/- 8091.69, N = 3SE +/- 13442.74, N = 3SE +/- 9391.21, N = 3SE +/- 22602.36, N = 12SE +/- 18435.82, N = 15SE +/- 16083.41, N = 14SE +/- 14106.76, N = 3SE +/- 15275.38, N = 3SE +/- 15826.22, N = 15SE +/- 14615.10, N = 4SE +/- 4313.29, N = 3SE +/- 12613.83, N = 5SE +/- 15930.17, N = 3SE +/- 17229.52, N = 15SE +/- 11537.36, N = 41336546.211315900.461185927.041228831.791200527.311183717.451170719.461189270.871186674.321180459.911147451.381173570.001143594.631156241.741126925.501. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SETEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P200K400K600K800K1000KMin: 1320510.25 / Avg: 1336546.21 / Max: 1346451.62Min: 1289503 / Avg: 1315900.46 / Max: 1333515.38Min: 1176055.5 / Avg: 1185927.04 / Max: 1204701.12Min: 1142730.12 / Avg: 1228831.79 / Max: 1378956.5Min: 1107542.38 / Avg: 1200527.31 / Max: 1349007.38Min: 1125387.38 / Avg: 1183717.45 / Max: 1363145Min: 1151024 / Avg: 1170719.46 / Max: 1198062Min: 1161717.5 / Avg: 1189270.87 / Max: 1214476.5Min: 1117600.38 / Avg: 1186674.32 / Max: 1295341Min: 1147582.38 / Avg: 1180459.91 / Max: 1209351.5Min: 1143020.75 / Avg: 1147451.38 / Max: 1156076.88Min: 1141950.88 / Avg: 1173570 / Max: 1208467Min: 1112121 / Avg: 1143594.63 / Max: 1163617.38Min: 1088613.12 / Avg: 1156241.74 / Max: 1310666.25Min: 1105957.12 / Avg: 1126925.5 / Max: 1159285.881. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 7EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 72823691215SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 310.699.7810.069.8810.099.7010.029.081. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: PNG - Encode Speed: 7EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 72823691215Min: 10.68 / Avg: 10.69 / Max: 10.7Min: 9.77 / Avg: 9.78 / Max: 9.79Min: 10.05 / Avg: 10.06 / Max: 10.09Min: 9.87 / Avg: 9.88 / Max: 9.89Min: 10.08 / Avg: 10.09 / Max: 10.09Min: 9.68 / Avg: 9.7 / Max: 9.71Min: 10.01 / Avg: 10.02 / Max: 10.03Min: 9.06 / Avg: 9.08 / Max: 9.091. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterHuginPanorama Photo Assistant + Stitching TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1326395265SE +/- 0.24, N = 3SE +/- 0.33, N = 3SE +/- 0.19, N = 3SE +/- 0.32, N = 3SE +/- 0.23, N = 3SE +/- 0.25, N = 3SE +/- 0.28, N = 3SE +/- 0.30, N = 3SE +/- 0.36, N = 3SE +/- 0.42, N = 3SE +/- 0.41, N = 3SE +/- 0.45, N = 3SE +/- 0.50, N = 3SE +/- 0.07, N = 3SE +/- 0.80, N = 350.5952.9955.6055.6855.9555.3055.4252.9955.5753.7253.3655.7756.1057.2459.31
OpenBenchmarking.orgSeconds, Fewer Is BetterHuginPanorama Photo Assistant + Stitching TimeEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1224364860Min: 50.13 / Avg: 50.59 / Max: 50.91Min: 52.38 / Avg: 52.99 / Max: 53.51Min: 55.35 / Avg: 55.6 / Max: 55.97Min: 55.04 / Avg: 55.68 / Max: 56.12Min: 55.51 / Avg: 55.95 / Max: 56.27Min: 54.8 / Avg: 55.3 / Max: 55.59Min: 54.86 / Avg: 55.42 / Max: 55.71Min: 52.43 / Avg: 52.99 / Max: 53.44Min: 55.11 / Avg: 55.57 / Max: 56.28Min: 52.98 / Avg: 53.72 / Max: 54.43Min: 52.68 / Avg: 53.36 / Max: 54.09Min: 54.93 / Avg: 55.77 / Max: 56.46Min: 55.21 / Avg: 56.1 / Max: 56.93Min: 57.12 / Avg: 57.24 / Max: 57.34Min: 57.78 / Avg: 59.31 / Max: 60.47

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOps/sec, More Is BetterKeyDB 6.0.16EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P90K180K270K360K450KSE +/- 428.23, N = 3SE +/- 183.84, N = 3SE +/- 3531.23, N = 15SE +/- 4507.61, N = 3SE +/- 4382.81, N = 3SE +/- 4280.31, N = 3SE +/- 5135.26, N = 3SE +/- 3270.77, N = 3SE +/- 2791.26, N = 3SE +/- 430.27, N = 3SE +/- 3460.73, N = 3SE +/- 639.41, N = 3SE +/- 1079.81, N = 3SE +/- 1365.59, N = 3SE +/- 533.37, N = 3433091.73424600.07380625.69371448.21376241.11386824.54394639.23422944.46404313.15413354.59415583.96405905.78420324.96418080.47399110.881. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterKeyDB 6.0.16EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P80K160K240K320K400KMin: 432304.57 / Avg: 433091.73 / Max: 433777.59Min: 424252.48 / Avg: 424600.07 / Max: 424877.7Min: 366624.37 / Avg: 380625.69 / Max: 410991.61Min: 362545.96 / Avg: 371448.21 / Max: 377131.41Min: 368183.94 / Avg: 376241.11 / Max: 383259.45Min: 379984.9 / Avg: 386824.54 / Max: 394702.8Min: 385396.59 / Avg: 394639.23 / Max: 403139.08Min: 416505.35 / Avg: 422944.46 / Max: 427162.66Min: 399536.5 / Avg: 404313.15 / Max: 409203.69Min: 412498.94 / Avg: 413354.59 / Max: 413861.79Min: 412054.17 / Avg: 415583.96 / Max: 422504.97Min: 404993.42 / Avg: 405905.78 / Max: 407138Min: 419061.55 / Avg: 420324.96 / Max: 422473.5Min: 416043.78 / Avg: 418080.47 / Max: 420674.69Min: 398044.81 / Avg: 399110.88 / Max: 399676.811. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 5EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 72821428425670SE +/- 0.18, N = 3SE +/- 0.12, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.08, N = 3SE +/- 0.17, N = 3SE +/- 0.16, N = 360.2560.5755.7454.7558.6854.9758.5953.841. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 5EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 72821224364860Min: 59.93 / Avg: 60.25 / Max: 60.57Min: 60.37 / Avg: 60.57 / Max: 60.77Min: 55.68 / Avg: 55.74 / Max: 55.79Min: 54.72 / Avg: 54.75 / Max: 54.81Min: 58.62 / Avg: 58.68 / Max: 58.75Min: 54.84 / Avg: 54.97 / Max: 55.11Min: 58.4 / Avg: 58.59 / Max: 58.93Min: 53.64 / Avg: 53.84 / Max: 54.151. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 7EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 72821326395265SE +/- 0.24, N = 4SE +/- 0.15, N = 4SE +/- 0.15, N = 3SE +/- 0.14, N = 3SE +/- 0.03, N = 3SE +/- 0.12, N = 3SE +/- 0.18, N = 3SE +/- 0.13, N = 360.1260.2555.7854.8058.5954.7358.6253.751. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL 0.3.1Input: JPEG - Encode Speed: 7EPYC 7F52EPYC 7F32EPYC 7742EPYC 7642EPYC 7542EPYC 7532EPYC 7502PEPYC 72821224364860Min: 59.67 / Avg: 60.12 / Max: 60.74Min: 59.9 / Avg: 60.25 / Max: 60.56Min: 55.61 / Avg: 55.78 / Max: 56.08Min: 54.64 / Avg: 54.8 / Max: 55.08Min: 58.56 / Avg: 58.59 / Max: 58.65Min: 54.6 / Avg: 54.73 / Max: 54.97Min: 58.33 / Avg: 58.62 / Max: 58.95Min: 53.53 / Avg: 53.75 / Max: 53.971. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie -pthread -ldl

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KSE +/- 33.96, N = 3SE +/- 12.37, N = 15SE +/- 34.36, N = 4SE +/- 25.22, N = 14SE +/- 42.91, N = 3SE +/- 69.66, N = 3SE +/- 28.28, N = 5SE +/- 71.90, N = 3SE +/- 51.28, N = 3SE +/- 31.02, N = 3SE +/- 36.49, N = 3SE +/- 35.59, N = 3SE +/- 51.98, N = 3SE +/- 37.27, N = 3SE +/- 28.32, N = 510068.510561.910249.510219.510249.310229.610156.510292.510188.610184.510256.310132.110110.910070.210057.81. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KMin: 10031 / Avg: 10068.5 / Max: 10136.3Min: 10508.8 / Avg: 10561.89 / Max: 10609.7Min: 10164.3 / Avg: 10249.5 / Max: 10330.1Min: 10124.1 / Avg: 10219.52 / Max: 10411.5Min: 10163.9 / Avg: 10249.27 / Max: 10299.5Min: 10097.1 / Avg: 10229.57 / Max: 10333.2Min: 10088.8 / Avg: 10156.48 / Max: 10226.8Min: 10179.5 / Avg: 10292.47 / Max: 10426Min: 10089.5 / Avg: 10188.6 / Max: 10261Min: 10134.8 / Avg: 10184.5 / Max: 10241.5Min: 10188.2 / Avg: 10256.27 / Max: 10313.1Min: 10093.6 / Avg: 10132.1 / Max: 10203.2Min: 10010.1 / Avg: 10110.87 / Max: 10183.4Min: 10010.8 / Avg: 10070.17 / Max: 10138.9Min: 10025.8 / Avg: 10057.76 / Max: 10170.91. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KSE +/- 89.78, N = 3SE +/- 34.53, N = 3SE +/- 16.13, N = 13SE +/- 18.64, N = 7SE +/- 44.06, N = 3SE +/- 27.58, N = 3SE +/- 54.81, N = 3SE +/- 29.33, N = 3SE +/- 43.54, N = 3SE +/- 14.46, N = 3SE +/- 36.01, N = 4SE +/- 93.14, N = 3SE +/- 66.67, N = 3SE +/- 29.05, N = 3SE +/- 44.49, N = 310099.610567.210233.110162.010170.710123.010159.710222.410118.010232.810209.310172.710112.210101.010063.01. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KMin: 9990.9 / Avg: 10099.57 / Max: 10277.7Min: 10498.8 / Avg: 10567.23 / Max: 10609.5Min: 10146.7 / Avg: 10233.12 / Max: 10323.4Min: 10103.5 / Avg: 10162.01 / Max: 10237.8Min: 10082.6 / Avg: 10170.7 / Max: 10215.9Min: 10073.8 / Avg: 10123.03 / Max: 10169.2Min: 10076.5 / Avg: 10159.67 / Max: 10263.1Min: 10167.9 / Avg: 10222.43 / Max: 10268.4Min: 10073.2 / Avg: 10118.03 / Max: 10205.1Min: 10214 / Avg: 10232.77 / Max: 10261.2Min: 10125 / Avg: 10209.25 / Max: 10270.7Min: 10073 / Avg: 10172.67 / Max: 10358.8Min: 9978.9 / Avg: 10112.23 / Max: 10180.1Min: 10050 / Avg: 10101 / Max: 10150.6Min: 10013.3 / Avg: 10063.03 / Max: 10151.81. (CC) gcc options: -O3

MBW

This is a basic/simple memory (RAM) bandwidth benchmark for memory copy operations. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterMBW 2018-09-08Test: Memory Copy - Array Size: 8192 MiBEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3K6K9K12K15KSE +/- 1.29, N = 3SE +/- 4.94, N = 3SE +/- 38.86, N = 3SE +/- 67.71, N = 3SE +/- 2.47, N = 3SE +/- 36.16, N = 3SE +/- 23.45, N = 3SE +/- 24.53, N = 3SE +/- 36.28, N = 3SE +/- 83.23, N = 3SE +/- 93.82, N = 3SE +/- 60.36, N = 3SE +/- 87.56, N = 3SE +/- 85.10, N = 3SE +/- 33.58, N = 314958.1915666.7015534.9815599.9015616.9015528.6515459.1615641.0915503.0515621.6415510.9215480.2915482.7415482.7115523.141. (CC) gcc options: -O3 -march=native
OpenBenchmarking.orgMiB/s, More Is BetterMBW 2018-09-08Test: Memory Copy - Array Size: 8192 MiBEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P3K6K9K12K15KMin: 14955.9 / Avg: 14958.19 / Max: 14960.35Min: 15657.32 / Avg: 15666.7 / Max: 15674.1Min: 15458.19 / Avg: 15534.97 / Max: 15583.72Min: 15488.47 / Avg: 15599.9 / Max: 15722.26Min: 15613.31 / Avg: 15616.9 / Max: 15621.62Min: 15471.64 / Avg: 15528.65 / Max: 15595.7Min: 15413.52 / Avg: 15459.16 / Max: 15491.31Min: 15608.52 / Avg: 15641.09 / Max: 15689.15Min: 15458.28 / Avg: 15503.05 / Max: 15574.89Min: 15473.05 / Avg: 15621.64 / Max: 15760.91Min: 15344.4 / Avg: 15510.92 / Max: 15669.08Min: 15391.63 / Avg: 15480.29 / Max: 15595.56Min: 15323.81 / Avg: 15482.73 / Max: 15625.9Min: 15312.58 / Avg: 15482.71 / Max: 15571.59Min: 15473.92 / Avg: 15523.14 / Max: 15587.311. (CC) gcc options: -O3 -march=native

Stress-NG

OpenBenchmarking.orgBogo Ops/s Per Watt, More Is BetterStress-NG 0.11.07Test: CPU CacheEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P0.15530.31060.46590.62120.77650.470.380.560.500.530.570.630.690.580.660.620.540.550.500.63

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU CacheEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1224364860SE +/- 1.23, N = 12SE +/- 0.72, N = 15SE +/- 1.24, N = 15SE +/- 0.81, N = 15SE +/- 0.36, N = 3SE +/- 0.62, N = 3SE +/- 0.96, N = 12SE +/- 0.67, N = 3SE +/- 1.06, N = 15SE +/- 1.28, N = 12SE +/- 0.69, N = 12SE +/- 0.68, N = 15SE +/- 0.37, N = 15SE +/- 0.77, N = 12SE +/- 0.49, N = 1346.3924.8548.8643.7044.3651.1349.7849.1951.3945.2341.1132.7230.1225.1732.531. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU CacheEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1020304050Min: 38.66 / Avg: 46.39 / Max: 52.29Min: 20.06 / Avg: 24.85 / Max: 30.63Min: 40.51 / Avg: 48.86 / Max: 54.53Min: 38.22 / Avg: 43.7 / Max: 48.67Min: 43.65 / Avg: 44.36 / Max: 44.75Min: 50.43 / Avg: 51.13 / Max: 52.36Min: 45.03 / Avg: 49.78 / Max: 56.32Min: 48.23 / Avg: 49.19 / Max: 50.47Min: 45.08 / Avg: 51.39 / Max: 61.54Min: 35.87 / Avg: 45.23 / Max: 51.94Min: 35.35 / Avg: 41.11 / Max: 44.78Min: 28.49 / Avg: 32.72 / Max: 38.56Min: 28.09 / Avg: 30.12 / Max: 33.69Min: 20.46 / Avg: 25.17 / Max: 29.16Min: 29.83 / Avg: 32.53 / Max: 36.161. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

SVT-VP9

OpenBenchmarking.orgFrames Per Second Per Watt, More Is BetterSVT-VP9 0.1Tuning: Visual Quality Optimized - Input: Bosphorus 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P1.15652.3133.46954.6265.78252.191.673.833.674.044.114.485.144.005.004.593.553.893.021.90

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: Visual Quality Optimized - Input: Bosphorus 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P80160240320400SE +/- 2.97, N = 15SE +/- 0.53, N = 7SE +/- 5.67, N = 15SE +/- 4.45, N = 15SE +/- 5.77, N = 15SE +/- 6.69, N = 15SE +/- 6.16, N = 15SE +/- 6.75, N = 15SE +/- 5.85, N = 15SE +/- 5.89, N = 15SE +/- 5.48, N = 15SE +/- 2.50, N = 15SE +/- 1.82, N = 15SE +/- 1.04, N = 9SE +/- 0.51, N = 7230.31124.35322.83305.10332.56346.63336.24350.63325.84334.18319.05242.24229.82184.83107.111. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: Visual Quality Optimized - Input: Bosphorus 1080pEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P60120180240300Min: 188.8 / Avg: 230.31 / Max: 234.56Min: 121.26 / Avg: 124.35 / Max: 125.39Min: 248.14 / Avg: 322.83 / Max: 336.7Min: 248.76 / Avg: 305.1 / Max: 317.63Min: 253.16 / Avg: 332.56 / Max: 350.88Min: 253.27 / Avg: 346.63 / Max: 357.36Min: 250.31 / Avg: 336.24 / Max: 345.82Min: 256.63 / Avg: 350.63 / Max: 362.54Min: 244.6 / Avg: 325.84 / Max: 336.51Min: 252.42 / Avg: 334.18 / Max: 348.23Min: 242.72 / Avg: 319.05 / Max: 326.26Min: 207.33 / Avg: 242.24 / Max: 245.7Min: 204.43 / Avg: 229.82 / Max: 232.47Min: 176.68 / Avg: 184.83 / Max: 187.03Min: 104.11 / Avg: 107.11 / Max: 107.861. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Cpuminer-Opt

OpenBenchmarking.orgkH/s Per Watt, More Is BetterCpuminer-Opt 3.15.5Algorithm: DeepcoinEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P130260390520650175.98128.88610.96562.57565.67432.56474.77438.65286.60441.39318.51224.80255.88201.02140.99

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: DeepcoinEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P10K20K30K40K50KSE +/- 333.56, N = 15SE +/- 73.09, N = 6SE +/- 1097.21, N = 15SE +/- 2134.76, N = 15SE +/- 1295.28, N = 15SE +/- 1521.20, N = 15SE +/- 964.94, N = 15SE +/- 991.42, N = 15SE +/- 38.44, N = 3SE +/- 1900.37, N = 15SE +/- 94.04, N = 3SE +/- 14.53, N = 3SE +/- 407.34, N = 15SE +/- 78.72, N = 3SE +/- 0.23, N = 315338.007644.1048794.0046113.0045051.0034363.0033589.0026493.0023013.0026546.0019363.0012697.0012793.009361.066207.891. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: DeepcoinEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P8K16K24K32K40KMin: 14690 / Avg: 15338 / Max: 19990Min: 7535.18 / Avg: 7644.1 / Max: 7997.88Min: 44980 / Avg: 48794 / Max: 63320Min: 42730 / Avg: 46112.67 / Max: 75810Min: 42590 / Avg: 45050.67 / Max: 63100Min: 32480 / Avg: 34363.33 / Max: 55610Min: 32160 / Avg: 33588.67 / Max: 46960Min: 25310 / Avg: 26492.67 / Max: 40340Min: 22940 / Avg: 23013.33 / Max: 23070Min: 23580 / Avg: 26546 / Max: 44860Min: 19250 / Avg: 19363.33 / Max: 19550Min: 12670 / Avg: 12696.67 / Max: 12720Min: 12330 / Avg: 12793.33 / Max: 18480Min: 9269.8 / Avg: 9361.06 / Max: 9517.79Min: 6207.47 / Avg: 6207.89 / Max: 6208.241. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s Per Watt, More Is BetterCpuminer-Opt 3.15.5Algorithm: GarlicoinEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30609012015033.5527.22111.72101.85105.2488.0397.5789.0663.1786.4862.9145.5852.4641.6631.40

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: GarlicoinEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KSE +/- 3.02, N = 3SE +/- 8.24, N = 3SE +/- 106.13, N = 14SE +/- 215.48, N = 14SE +/- 72.85, N = 13SE +/- 113.98, N = 14SE +/- 49.30, N = 3SE +/- 157.39, N = 15SE +/- 135.96, N = 15SE +/- 179.55, N = 12SE +/- 8.85, N = 3SE +/- 6.92, N = 3SE +/- 46.80, N = 15SE +/- 22.88, N = 15SE +/- 24.63, N = 153522.441769.4610194.009581.069507.957961.647811.996242.215725.796104.274490.862965.292991.732192.821473.311. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: GarlicoinEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2K4K6K8K10KMin: 3516.78 / Avg: 3522.44 / Max: 3527.08Min: 1758.86 / Avg: 1769.46 / Max: 1785.69Min: 10000 / Avg: 10194.29 / Max: 11570Min: 9331.1 / Avg: 9581.06 / Max: 12370Min: 9387.02 / Avg: 9507.95 / Max: 10330Min: 7778.26 / Avg: 7961.64 / Max: 9440.93Min: 7760.8 / Avg: 7811.99 / Max: 7910.57Min: 5942.72 / Avg: 6242.21 / Max: 7690.85Min: 5501.12 / Avg: 5725.79 / Max: 7108.86Min: 5581 / Avg: 6104.27 / Max: 7538.77Min: 4477.91 / Avg: 4490.86 / Max: 4507.79Min: 2956.9 / Avg: 2965.29 / Max: 2979.02Min: 2871.77 / Avg: 2991.73 / Max: 3335.77Min: 2113.61 / Avg: 2192.82 / Max: 2419.18Min: 1428.33 / Avg: 1473.31 / Max: 1776.691. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s Per Watt, More Is BetterCpuminer-Opt 3.15.5Algorithm: x25xEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P36912152.992.7210.279.949.957.808.887.255.887.537.074.464.953.883.24

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: x25xEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P30060090012001500SE +/- 1.25, N = 3SE +/- 1.31, N = 3SE +/- 19.88, N = 3SE +/- 29.63, N = 14SE +/- 32.30, N = 15SE +/- 1.46, N = 3SE +/- 11.98, N = 15SE +/- 2.23, N = 3SE +/- 0.86, N = 3SE +/- 37.08, N = 15SE +/- 39.35, N = 15SE +/- 0.74, N = 3SE +/- 1.04, N = 3SE +/- 0.69, N = 3SE +/- 1.33, N = 3524.31262.901429.251360.061398.771147.361139.38891.02807.02883.87862.81447.01429.51322.83216.581. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.15.5Algorithm: x25xEPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P2004006008001000Min: 521.85 / Avg: 524.31 / Max: 525.94Min: 260.27 / Avg: 262.9 / Max: 264.3Min: 1405.75 / Avg: 1429.25 / Max: 1468.78Min: 1313.18 / Avg: 1360.06 / Max: 1735.96Min: 1361.4 / Avg: 1398.77 / Max: 1850.16Min: 1144.44 / Avg: 1147.36 / Max: 1148.92Min: 1114.29 / Avg: 1139.38 / Max: 1274.78Min: 887.85 / Avg: 891.02 / Max: 895.31Min: 806.02 / Avg: 807.02 / Max: 808.73Min: 822.42 / Avg: 883.87 / Max: 1393.5Min: 675.58 / Avg: 862.81 / Max: 1074.46Min: 445.61 / Avg: 447.01 / Max: 448.15Min: 427.95 / Avg: 429.51 / Max: 431.48Min: 321.57 / Avg: 322.83 / Max: 323.96Min: 214.04 / Avg: 216.58 / Max: 218.551. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Kripke

OpenBenchmarking.orgThroughput FoM Per Watt, More Is BetterKripke 1.2.4EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P400K800K1200K1600K2000K514628.711163515.811441967.521423158.291604877.741663875.881673028.981635500.081673190.411575405.351932212.231638766.481357552.351508621.961349588.10

OpenBenchmarking.orgThroughput FoM, More Is BetterKripke 1.2.4EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P50M100M150M200M250MSE +/- 574642.20, N = 15SE +/- 1036502.10, N = 15SE +/- 4938353.34, N = 12SE +/- 2966178.44, N = 15SE +/- 2208966.30, N = 15SE +/- 5572341.62, N = 12SE +/- 1883204.11, N = 15SE +/- 3811837.38, N = 15SE +/- 5452949.69, N = 15SE +/- 2175939.87, N = 3SE +/- 481377.88, N = 4SE +/- 3144064.63, N = 15SE +/- 927929.45, N = 9SE +/- 1366026.89, N = 3SE +/- 1075776.09, N = 4714133941295369202120542001996839602157712272308517832113103071873970072165251331767988672098111501628983871124897221212564331008987581. (CXX) g++ options: -O3 -fopenmp
OpenBenchmarking.orgThroughput FoM, More Is BetterKripke 1.2.4EPYC 7F52EPYC 7F32EPYC 7742EPYC 7702EPYC 7662EPYC 7642EPYC 7552EPYC 7542EPYC 7532EPYC 7502PEPYC 7402PEPYC 7302PEPYC 7282EPYC 7272EPYC 7232P40M80M120M160M200MMin: 68795100 / Avg: 71413394 / Max: 74273840Min: 124322100 / Avg: 129536920 / Max: 133041300Min: 176234200 / Avg: 212054200 / Max: 228984100Min: 179253900 / Avg: 199683960 / Max: 214308000Min: 199388700 / Avg: 215771226.67 / Max: 228027900Min: 196106600 / Avg: 230851783.33 / Max: 255760700Min: 200163900 / Avg: 211310306.67 / Max: 221141000Min: 160767700 / Avg: 187397006.67 / Max: 215708300Min: 170848300 / Avg: 216525133.33 / Max: 232847700Min: 172473100 / Avg: 176798866.67 / Max: 179374000Min: 208902500 / Avg: 209811150 / Max: 211002600Min: 148262800 / Avg: 162898386.67 / Max: 178935200Min: 107412100 / Avg: 112489722.22 / Max: 117685700Min: 119380500 / Avg: 121256433.33 / Max: 123914500Min: 97675130 / Avg: 100898757.5 / Max: 1020972001. (CXX) g++ options: -O3 -fopenmp

276 Results Shown

Cpuminer-Opt
Sysbench
Stress-NG
NAS Parallel Benchmarks
Stress-NG
NAS Parallel Benchmarks
OpenVINO
OSPray
John The Ripper
Stress-NG
OpenVINO
m-queens
Pennant
Stockfish
Coremark
oneDNN
Aircrack-ng
C-Ray
IndigoBench:
  CPU - Supercar
  CPU - Bedroom
BRL-CAD
OSPray
Stress-NG
OSPray:
  XFrog Forest - SciVis
  NASA Streamlines - Path Tracer
ASKAP
Tachyon
John The Ripper
oneDNN
Chaos Group V-RAY
ASTC Encoder
asmFish
Stress-NG
OSPray
Blender
NAMD
Blender
7-Zip Compression
Facebook RocksDB
OSPray
Blender
Chaos Group V-RAY
Pennant
ASKAP
Rodinia
PostgreSQL pgbench:
  100 - 250 - Read Only
  100 - 250 - Read Only - Average Latency
Facebook RocksDB
OSPray
LuxCoreRender:
  Rainbow Colors and Prism
  DLSC
ASTC Encoder
OpenVKL
rays1bench
toyBrot Fractal Generator:
  TBB
  C++ Threads
Blender
OpenVINO
PostgreSQL pgbench:
  100 - 100 - Read Only
  100 - 100 - Read Only - Average Latency
OpenVINO
toyBrot Fractal Generator
POV-Ray
Blender
toyBrot Fractal Generator
OpenVINO:
  Face Detection 0106 FP32 - CPU
  Face Detection 0106 FP16 - CPU
CloverLeaf
ASKAP
OpenFOAM
TensorFlow Lite:
  Mobilenet Quant
  Mobilenet Float
  Inception V4
GROMACS
TensorFlow Lite
oneDNN
LAMMPS Molecular Dynamics Simulator
oneDNN:
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
Intel Open Image Denoise
Tungsten Renderer
ebizzy
Apache Cassandra
ASKAP
TensorFlow Lite
Appleseed
Stress-NG
oneDNN
NWChem
LAMMPS Molecular Dynamics Simulator
Basis Universal
Zstd Compression
LeelaChessZero
SVT-AV1
Timed Linux Kernel Compilation
Timed MPlayer Compilation
GROMACS
LeelaChessZero
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
Facebook RocksDB
FFTE
Kvazaar
GPAW
oneDNN
PlaidML
Timed LLVM Compilation
Appleseed
PostgreSQL pgbench:
  100 - 100 - Read Write
  100 - 100 - Read Write - Average Latency
Rodinia
PlaidML
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  IP Shapes 1D - f32 - CPU
PostgreSQL pgbench:
  100 - 250 - Read Write
  100 - 250 - Read Write - Average Latency
Rodinia
SVT-VP9
oneDNN
NAS Parallel Benchmarks
Basis Universal
oneDNN
dav1d
Kvazaar
Timed FFmpeg Compilation
YafaRay
miniFE
Timed Godot Game Engine Compilation
dav1d
x265
SVT-AV1
x264
TensorFlow Lite
Kvazaar
NAS Parallel Benchmarks
Sysbench
High Performance Conjugate Gradient
OpenFOAM
dav1d
NAS Parallel Benchmarks
Incompact3D
NAS Parallel Benchmarks
Timed ImageMagick Compilation
Rodinia
Build2
OpenVINO
LULESH
OpenVINO:
  Face Detection 0106 FP32 - CPU
  Person Detection 0106 FP16 - CPU
NAS Parallel Benchmarks
OpenVINO
Parboil
WebP2 Image Encode
Mobile Neural Network
WebP2 Image Encode
Algebraic Multi-Grid Benchmark
dav1d
ACES DGEMM
WebP2 Image Encode
AI Benchmark Alpha
Numenta Anomaly Benchmark
OCRMyPDF
Stream-Dynamic:
  - Triad
  - Add
Numenta Anomaly Benchmark
ctx_clock
Timed PHP Compilation
Stream:
  Copy
  Triad
Mobile Neural Network
Ngspice
Stream
Stream-Dynamic
Stream
Stream-Dynamic
Zstd Compression
AI Benchmark Alpha
Numenta Anomaly Benchmark
XZ Compression
Timed MrBayes Analysis
OpenVINO:
  Age Gender Recognition Retail 0013 FP32 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
Mobile Neural Network
Tungsten Renderer
Numenta Anomaly Benchmark
JPEG XL Decoding
ONNX Runtime
BlogBench
Darmstadt Automotive Parallel Heterogeneous Suite
Quantum ESPRESSO
AI Benchmark Alpha
Caffe
Monte Carlo Simulations of Ionised Nebulae
Ngspice
Mobile Neural Network
RawTherapee
ONNX Runtime
C-Blosc
ONNX Runtime
Caffe
ONNX Runtime
InfluxDB
Numpy Benchmark
Apache CouchDB
JPEG XL
PyPerformance
Darmstadt Automotive Parallel Heterogeneous Suite
simdjson
PyPerformance
Crafty
simdjson
LZ4 Compression
PyPerformance
simdjson
Crypto++
PyPerformance
PyBench
Crypto++
FinanceBench
Botan
Hierarchical INTegration
Botan
Google SynthMark
PyPerformance
FinanceBench
LZ4 Compression
Swet
Botan:
  CAST-256
  KASUMI
eSpeak-NG Speech Engine
Perl Benchmarks
Botan
Etcpak
TSCP
QuantLib
Perl Benchmarks
Montage Astronomical Image Mosaic Engine
Himeno Benchmark
PyPerformance
PHPBench
Darmstadt Automotive Parallel Heterogeneous Suite
InfluxDB
Etcpak
Crypto++
simdjson
JPEG XL:
  PNG - 8
  JPEG - 8
JPEG XL Decoding
LibRaw
Redis
Tinymembench
ONNX Runtime
Redis
JPEG XL
Hugin
KeyDB
JPEG XL:
  JPEG - 5
  JPEG - 7
LZ4 Compression:
  9 - Decompression Speed
  3 - Decompression Speed
MBW
Stress-NG
Stress-NG
SVT-VP9
SVT-VP9
Cpuminer-Opt
Cpuminer-Opt
Cpuminer-Opt
Cpuminer-Opt
Cpuminer-Opt
Cpuminer-Opt
Kripke
Kripke